WorldWideScience

Sample records for central processing units computers

  1. Quantum Central Processing Unit and Quantum Algorithm

    Institute of Scientific and Technical Information of China (English)

    王安民

    2002-01-01

    Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including quantum Fourier transformation, Shor's algorithm and Grover's algorithm, is obtained in a unified way.

  2. MATLAB Implementation of a Multigrid Solver for Diffusion Problems: Graphics Processing Unit vs. Central Processing Unit

    OpenAIRE

    2010-01-01

    Graphics processing units (GPUs) are immensely powerful processors, and for a variety of applications they outperform the central processing unit (CPU). Recent generations of GPUs have a more flexible architecture than older generations and a more user-friendly programming interface, which makes them better suited for general-purpose programming. A high-end GPU can give a desktop computer the same computational power as a small cluster of CPUs. Speedup of applications by using the GPU has been shown in...

  3. Exploiting Graphics Processing Units for Computational Biology and Bioinformatics

    OpenAIRE

    Payne, Joshua L.; Nicholas A. Sinnott-Armstrong; Jason H Moore

    2010-01-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and Nvidia's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational b...

  4. Computational Physics on Graphics Processing Units

    CERN Document Server

    Harju, Ari; Federici-Canova, Filippo; Hakala, Samuli; Rantalaiho, Teemu

    2012-01-01

    The use of graphics processing units for scientific computations is an emerging strategy that can significantly speed up various algorithms. In this review, we discuss advances made in the field of computational physics, focusing on classical molecular dynamics and on quantum simulations for electronic structure calculations using density functional theory, wave function techniques, and quantum field theory.

  5. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700. PMID:20658333
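
    The all-pairs distance computation described above maps naturally onto one GPU thread per pair of instances. The CUDA sketch below illustrates that pattern only and is not the authors' implementation; the kernel name, the feature-major data layout and the placeholder sizes in the host code are assumptions made for illustration.

        #include <cuda_runtime.h>
        #include <cstdio>

        // One thread per (i, j) pair of instances. The data matrix is stored
        // feature-major (data[k * n + i] = feature k of instance i), so threads
        // with consecutive j read consecutive addresses, keeping reads coalesced.
        __global__ void allPairsDistance(const float* data, float* d2, int n, int dim)
        {
            int i = blockIdx.y * blockDim.y + threadIdx.y;
            int j = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n || j >= n) return;

            float acc = 0.0f;
            for (int k = 0; k < dim; ++k) {
                float diff = data[k * n + i] - data[k * n + j];
                acc += diff * diff;
            }
            d2[i * n + j] = acc;   // squared Euclidean distance between i and j
        }

        int main()
        {
            const int n = 1024, dim = 32;          // placeholder dataset size
            float *d_data, *d_d2;
            cudaMalloc(&d_data, (size_t)n * dim * sizeof(float));
            cudaMalloc(&d_d2, (size_t)n * n * sizeof(float));
            cudaMemset(d_data, 0, (size_t)n * dim * sizeof(float));

            dim3 block(16, 16);
            dim3 grid((n + block.x - 1) / block.x, (n + block.y - 1) / block.y);
            allPairsDistance<<<grid, block>>>(d_data, d_d2, n, dim);
            cudaDeviceSynchronize();
            printf("done\n");
            cudaFree(d_data); cudaFree(d_d2);
            return 0;
        }

    The coalesced-read concern discussed in the abstract is what motivates the feature-major layout chosen here; a tiled shared-memory variant would reduce redundant global reads further.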

  6. A Universal Quantum Network Quantum Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    WANG An-Min

    2001-01-01

    A new construction scheme for a universal quantum network which is compatible with the known quantum gate-assembly schemes is proposed. Our quantum network is standard, easy to assemble, reusable, scalable and even potentially programmable. Moreover, we can construct a whole quantum network to implement the general quantum algorithm and quantum simulation procedure. In the above senses, it is a realization of the quantum central processing unit.

  7. Updating process computers at EdF's 900 MWe units

    International Nuclear Information System (INIS)

    New centralized data processing systems have been gradually installed at all EdF's 900 MWe Pressurized Water Reactor sites since August 1988. This initiative has been accompanied by a comprehensive programme of training. The new systems enable real-time monitoring of the entire installation, helping the operator to summarize information and collect and analyse data. They should improve both the availability and reliability of units, and the quality of experience feedback. (author)

  8. Security central processing unit applications in the protection of nuclear facilities

    International Nuclear Information System (INIS)

    New or upgraded electronic security systems protecting nuclear facilities or complexes will be heavily computer dependent. Proper planning for new systems and the employment of new state-of-the-art 32-bit processors for the processing of subsystem reports are key elements in effective security systems. The processing of subsystem reports represents only a small segment of system overhead. In selecting a security system to meet the current and future needs of nuclear security applications, the central processing unit (CPU) applied in the system architecture is the critical element in system performance. New 32-bit technology eliminates the need for program overlays while providing system programmers with well-documented program tools to develop effective systems to operate in all phases of nuclear security applications.

  9. A Comprehensive Review for Central Processing Unit Scheduling Algorithm

    OpenAIRE

    Ryan Richard H. Guadaña; Maria Rona Perez; Larry T. Rutaquio Jr.

    2013-01-01

    This paper describes how the CPU facilitates tasks given by a user through a scheduling algorithm. The CPU carries out each instruction of the program in sequence and performs the basic arithmetical, logical, and input/output operations of the system, while a scheduling algorithm is used by the CPU to handle every process. The authors also tackle different scheduling disciplines, and examples are provided for each algorithm in order to determine which algorithm is appropriate for various CPU goals.

  10. Process as Content in Computer Science Education: Empirical Determination of Central Processes

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2008-01-01

    Computer science education should not be based on short-term developments but on content that is observable in multiple domains of computer science, may be taught at every intellectual level, will be relevant in the longer term, and is related to everyday language and/or thinking. Recently, a catalogue of "central concepts" for computer science…

  11. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times. PMID:19636394
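
    The signal-processing chain above parallelizes work across channels on the GPU. The CUDA sketch below is a much-simplified stand-in, not the published BCI code: it only computes mean signal power per channel with one thread block per channel, whereas the paper uses a spatial filter followed by an autoregressive spectral estimate. Kernel and buffer names are assumptions, and the reduction assumes a power-of-two block size.

        #include <cuda_runtime.h>

        // One thread block per channel: the block reduces that channel's samples
        // to a mean signal power (a simplified stand-in for the per-channel
        // spectral step). Assumes blockDim.x is a power of two.
        __global__ void channelPower(const float* samples, float* power, int nSamples)
        {
            extern __shared__ float partial[];
            int ch = blockIdx.x;                              // channel index
            const float* x = samples + (size_t)ch * nSamples; // this channel's data

            float acc = 0.0f;
            for (int t = threadIdx.x; t < nSamples; t += blockDim.x)
                acc += x[t] * x[t];
            partial[threadIdx.x] = acc;
            __syncthreads();

            // Tree reduction in shared memory.
            for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
                if (threadIdx.x < stride)
                    partial[threadIdx.x] += partial[threadIdx.x + stride];
                __syncthreads();
            }
            if (threadIdx.x == 0)
                power[ch] = partial[0] / nSamples;
        }

        // Illustrative launch: one block per channel, 256 threads per block.
        // channelPower<<<nChannels, 256, 256 * sizeof(float)>>>(d_samples, d_power, nSamples);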

  12. Designing and Implementing an OVERFLOW Reader for ParaView and Comparing Performance Between Central Processing Units and Graphical Processing Units

    Science.gov (United States)

    Chawner, David M.; Gomez, Ray J.

    2010-01-01

    In the Applied Aerosciences and CFD branch at Johnson Space Center, computational simulations are run that face many challenges, two of which are the ability to customize software for specialized needs and the need to run simulations as fast as possible. There are many different tools used for running these simulations, and each one has its own pros and cons. Once these simulations are run, there needs to be software capable of visualizing the results in an appealing manner. Some of this software is open source, meaning that anyone can edit the source code to make modifications and distribute them to all other users in a future release. This is very useful, especially in this branch, where many different tools are being used. File readers can be written to load any file format into a program, easing the bridge from one tool to another. Programming such a reader requires knowledge of the file format being read as well as the equations necessary to obtain the derived values after loading. When running these CFD simulations, extremely large files are loaded and values are calculated from them. These simulations usually take a few hours to complete, even on the fastest machines. Graphics processing units (GPUs) are usually used to render graphics on computers; however, in recent years GPUs have been used for more generic applications because of the speed of these processors. Applications run on GPUs have been known to run up to forty times faster than they would on normal central processing units (CPUs). If these CFD programs are extended to run on GPUs, the amount of time they require to complete would be much less. This would allow more simulations to be run in the same amount of time and possibly permit more complex computations.

  13. Thermal/Heat Transfer Analysis Using a Graphic Processing Unit (GPU) Enabled Computing Environment Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project was to use GPU enabled computing to accelerate the analyses of heat transfer and thermal effects. Graphical processing unit (GPU)...

  14. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Computational power of graphical processing units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Subprograms (BLAS) library to compute the most time-consuming step. The 235U RPCM, computed previously using a triple-nested loop, was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi graphical processing unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
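
    The time-consuming step described above is a vendor-optimized double-precision matrix-matrix multiplication. The sketch below shows how such a product can be offloaded to cuBLAS DGEMM; it is only an illustration with placeholder dimensions, not the SAMMY code, and error checking is omitted.

        #include <cuda_runtime.h>
        #include <cublas_v2.h>
        #include <vector>
        #include <cstdio>

        // Minimal sketch: offload a large double-precision product C = A * B to a
        // vendor-optimized GEMM. Sizes here are small placeholders; the RPCM
        // matrices in the abstract are far larger.
        int main()
        {
            const int m = 512, k = 640, n = 512;            // placeholder dimensions
            std::vector<double> A(m * k, 1.0), B(k * n, 1.0), C(m * n, 0.0);

            double *dA, *dB, *dC;
            cudaMalloc(&dA, A.size() * sizeof(double));
            cudaMalloc(&dB, B.size() * sizeof(double));
            cudaMalloc(&dC, C.size() * sizeof(double));
            cudaMemcpy(dA, A.data(), A.size() * sizeof(double), cudaMemcpyHostToDevice);
            cudaMemcpy(dB, B.data(), B.size() * sizeof(double), cudaMemcpyHostToDevice);

            cublasHandle_t handle;
            cublasCreate(&handle);
            const double alpha = 1.0, beta = 0.0;
            // cuBLAS assumes column-major storage; lda/ldb/ldc are leading dimensions.
            cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                        m, n, k, &alpha, dA, m, dB, k, &beta, dC, m);

            cudaMemcpy(C.data(), dC, C.size() * sizeof(double), cudaMemcpyDeviceToHost);
            printf("C[0] = %f\n", C[0]);                    // expect k * 1.0 = 640
            cublasDestroy(handle);
            cudaFree(dA); cudaFree(dB); cudaFree(dC);
            return 0;
        }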

  15. The Massive Affordable Computing Project: Prototyping of a High Data Throughput Processing Unit

    Science.gov (United States)

    Cox, Mitchell A.; Mellado, Bruce

    2015-05-01

    Scientific experiments are becoming highly data intensive, to the point where offline processing of stored data is infeasible. High data throughput computing, or high-volume throughput computing, is required for future projects that must deal with terabytes of data per second. Conventional data centres based on typical server-grade hardware are expensive and are biased towards processing power rather than I/O bandwidth. This system imbalance can be solved with massive parallelism to increase the I/O capabilities, at the expense of excessive processing power and high energy consumption. The Massive Affordable Computing Project aims to use low-cost ARM Systems on Chip to address the issues of system balance, affordability and energy efficiency. An ARM-based Processing Unit prototype is currently being developed, with a design goal of 20 Gb/s I/O throughput and significant processing power. Novel use of PCI Express addresses the typically limited I/O capabilities of consumer ARM Systems on Chip.

  16. Real-Time Computation of Parameter Fitting and Image Reconstruction Using Graphical Processing Units

    CERN Document Server

    Locans, Uldis; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Gunther; Wang, Qiulin

    2016-01-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of muSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the ...

  17. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    International Nuclear Information System (INIS)

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obtained by BASES (FORTRAN), as well as those obtained by MadGraph. The performance of GPU was over a factor of 10 faster than CPU for all processes except those with the highest number of jets. (orig.)

  18. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    Energy Technology Data Exchange (ETDEWEB)

    Hagiwara, K. [KEK Theory Center and Sokendai, Tsukuba (Japan); Kanzaki, J. [KEK and Sokendai, Tsukuba (Japan); Li, Q. [Peking University, Department of Physics and State Key, Laboratory of Nuclear Physics and Technology, Beijing (China); Okamura, N. [International University of Health and Welfare, Department of Radiological Sciences, Ohtawara, Tochigi (Japan); Stelzer, T. [University of Illinois, Department of Physics, Urbana, IL (United States)

    2013-11-15

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obtained by BASES (FORTRAN), as well as those obtained by MadGraph. The performance of GPU was over a factor of 10 faster than CPU for all processes except those with the highest number of jets. (orig.)

  19. The Cost-Saving Effect of a Centralized Unit for Anticancer Drugs Processing at the Oncology Department of Tirana

    Directory of Open Access Journals (Sweden)

    Artan Shkoza

    2015-11-01

    Full Text Available The worldwide increase in cancer prevalence has led to a substantial rise in the cost of medical oncology. Of particular importance are the highly expensive drugs used to treat various types of cancer in developing countries like Albania. Hence, pharmacoeconomics may play an important role in reducing drug wastage and the financial burden placed on patients, families and society in general, without adversely impacting patients' health outcomes. Our aim was to calculate the cost-saving effect of a centralized unit, which allows residual amounts of unused drugs to be reused by patients whose treatments are prepared on the same working day. We calculated in a comprehensive manner the number of saved vials (flasks) for seven drugs generated from residual amounts of the same working day and converted them into a cost-saving monetary value. We did not take into account prescribed drug dosages that fitted exactly with the doses contained in a vial. Over a six-month period, there were a total of 6558 prescriptions for 1180 patients, 1524 saved vials and a total cost-saving of 134,348 (€). The saved value represents 6.2 percent of the cytostatic drugs budget for 2005. Our experience confirms the economic benefit of waste reduction and the cost-saving effect of a centralized unit for anticancer drug processing. The centralized unit also increases drug traceability from preparation to patient.

  20. Optical diagnostics of a single evaporating droplet using fast parallel computing on graphics processing units

    Science.gov (United States)

    Jakubczyk, D.; Migacz, S.; Derkachov, G.; Woźniak, M.; Archer, J.; Kolwas, K.

    2016-09-01

    We report on the first application of graphics processing unit (GPU) accelerated computing technology to improve the performance of numerical methods used for the optical characterization of evaporating microdroplets. Single microdroplets of various liquids with different volatility and molecular weight (glycerine, glycols, water, etc.), as well as mixtures of liquids and diverse suspensions, evaporate inside the electrodynamic trap under a chosen temperature and composition of atmosphere. The series of scattering patterns recorded from the evaporating microdroplets are processed by fitting complete Mie theory predictions with a gradientless lookup-table method. We showed that computations on GPUs can be effectively applied to inverse scattering problems. In particular, our technique accelerated calculations of Mie scattering theory over 800 times relative to a single-core processor in a Matlab environment and almost 100 times relative to the corresponding code in the C language. Additionally, we overcame the problem of time-consuming data post-processing when some of the parameters (particularly the refractive index) of an investigated liquid are uncertain. Our program allows us to track the parameters characterizing the evaporating droplet nearly simultaneously with the progress of evaporation.
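
    The gradientless lookup-table fit described above amounts to comparing a measured scattering pattern against many precomputed Mie patterns. The CUDA sketch below illustrates only that brute-force comparison, one table entry per thread, with the best-fit entry selected afterwards on the host; all names and the table layout are assumptions, not the authors' code.

        #include <cuda_runtime.h>

        // One thread per lookup-table entry: compute the misfit (sum of squared
        // differences) between the measured angular scattering pattern and that
        // entry's precomputed Mie pattern; the minimum is then found on the host.
        __global__ void patternMisfit(const float* table,    // nEntries x nAngles
                                      const float* measured, // nAngles
                                      float* misfit, int nEntries, int nAngles)
        {
            int e = blockIdx.x * blockDim.x + threadIdx.x;
            if (e >= nEntries) return;

            float acc = 0.0f;
            for (int a = 0; a < nAngles; ++a) {
                float d = table[(size_t)e * nAngles + a] - measured[a];
                acc += d * d;
            }
            misfit[e] = acc;
        }

        // Illustrative launch:
        // patternMisfit<<<(nEntries + 255) / 256, 256>>>(d_table, d_measured, d_misfit, nEntries, nAngles);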

  1. Fast computation of MadGraph amplitudes on graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Li, Q; Okamura, N; Stelzer, T

    2013-01-01

    Continuing our previous studies on QED and QCD processes, we use the graphics processing unit (GPU) for fast calculations of helicity amplitudes for general Standard Model (SM) processes. Additional HEGET codes to handle all SM interactions are introduced, as well as the program MG2CUDA that converts arbitrary MadGraph generated HELAS amplitudes (FORTRAN) into HEGET codes in CUDA. We test all the codes by comparing amplitudes and cross sections for multi-jet processes at the LHC associated with production of single and double weak bosons, a top-quark pair, Higgs boson plus a weak boson or a top-quark pair, and multiple Higgs bosons via weak-boson fusion, where all the heavy particles are allowed to decay into light quarks and leptons with full spin correlations. All the helicity amplitudes computed by HEGET are found to agree with those computed by HELAS within the expected numerical accuracy, and the cross sections obtained by gBASES, a GPU version of the Monte Carlo integration program, agree with those obt...

  2. STRATEGIC BUSINESS UNIT – THE CENTRAL ELEMENT OF THE BUSINESS PORTFOLIO STRATEGIC PLANNING PROCESS

    OpenAIRE

    FLORIN TUDOR IONESCU

    2011-01-01

    Over time, due to changes in the marketing environment generated by tightening competition and by technological, social and political pressures, companies have adopted a new approach by which potential businesses began to be treated as strategic business units. A strategic business unit can be considered a part of a company, a product line within a division, and sometimes a single product or brand. From a strategic perspective, the diversified companies represent a collection of busine...

  3. From Central Guidance Unit to Student Support Services Unit: The Outcome of a Consultation Process in Trinidad and Tobago

    Science.gov (United States)

    Watkins, Marley W.; Hall, Tracey E.; Worrell, Frank C.

    2014-01-01

    In this article, we report on a multiyear consultation project between a consulting team based in the United States and the Ministry of Education in Trinidad and Tobago. The project was initiated with a request for training in counseling for secondary school students but ended with the training of personnel from the Ministry of Education in…

  4. Experience with a mobile data storage device for transfer of studies from the critical care unit to a central nuclear medicine computer

    International Nuclear Information System (INIS)

    The introduction of mobile scintillation cameras has enabled the more immediate provision of nuclear medicine services in areas remote from the central nuclear medicine laboratory. Since a large number of such studies involve the use of a computer for data analysis, the concurrent problem of how to transmit those data to the computer becomes critical. A device is described that uses hard magnetic discs as the recording media and can be wheeled from the patient's bedside to the central computer for playback. Some initial design problems, primarily associated with the critical timing necessary for the collection of gated studies, were overcome, and the unit has been in service for the past two years. The major limitations are the relatively small capacity of the discs and the fact that the data are recorded in list mode. These constraints result in studies having poor statistical validity. The slow turn-around time, which results from the necessity to transport the system to the department and replay the study into the computer before analysis can begin, is also of particular concern. The use of this unit has clearly demonstrated the very important role that nuclear medicine can play in the care of the critically ill patient. The introduction of a complete acquisition and analysis unit is planned so that prompt diagnostic decisions can be made available within the intensive care unit. (author)

  5. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain–Computer Interface Feature Extraction

    OpenAIRE

    J. Adam Wilson; Williams, Justin C.

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a ...

  6. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.

  7. Distribution of lithostratigraphic units within the central block of Yucca Mountain, Nevada: A three-dimensional computer-based model, Version YMP.R2.0

    International Nuclear Information System (INIS)

    Yucca Mountain, Nevada is underlain by 14.0 to 11.6 Ma volcanic rocks tilted eastward 3 to 20 degrees and cut by faults that were primarily active between 12.7 and 11.6 Ma. A three-dimensional computer-based model of the central block of the mountain consists of seven structural subblocks composed of six formations and the interstratified-bedded tuffaceous deposits. Rocks from the 12.7 Ma Tiva Canyon Tuff, which forms most of the exposed rocks on the mountain, to the 13.1 Ma Prow Pass Tuff are modeled with 13 surfaces. Modeled units represent single formations such as the Pah Canyon Tuff, grouped units such as the combination of the Yucca Mountain Tuff with the superjacent bedded tuff, and divisions of the Topopah Spring Tuff such as the crystal-poor vitrophyre interval. The model is based on data from 75 boreholes, from which a structure contour map at the base of the Tiva Canyon Tuff and isochore maps for each unit are constructed to serve as primary input. Modeling consists of an iterative cycle that begins with the primary structure contour map, from which isochore values of the subjacent model unit are subtracted to produce the structure contour map on the base of the unit. This new structure contour map forms the input for another cycle of isochore subtraction to produce the next structure contour map. In this method of solids modeling, the model units are represented by surfaces (structure contour maps), and all surfaces are stored in the model. Surfaces can be converted to form volumes of model units with additional effort. This lithostratigraphic and structural model can be used for (1) storing data from, and planning future, site characterization activities, (2) providing preliminary geometry of units for design of the Exploratory Studies Facility and potential repository, and (3) performance assessment evaluations
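
    One cycle of the modeling method above is plain grid arithmetic: subtracting a unit's isochore (thickness) grid from the current structure contour grid yields the structure contour grid at the base of that unit. The CUDA sketch below illustrates that single step on gridded surfaces; the names and the flattened-grid representation are assumptions, not part of the cited model.

        #include <cuda_runtime.h>

        // One modeling cycle as grid arithmetic: subtract the isochore grid of the
        // subjacent unit from the current structure contour grid to obtain the
        // structure contour grid at the base of that unit.
        __global__ void subtractIsochore(const float* structureTop, const float* isochore,
                                         float* structureBase, int nCells)
        {
            int c = blockIdx.x * blockDim.x + threadIdx.x;
            if (c < nCells)
                structureBase[c] = structureTop[c] - isochore[c];
        }

        // Illustrative launch over a flattened grid of nCells map cells:
        // subtractIsochore<<<(nCells + 255) / 256, 256>>>(d_top, d_isochore, d_base, nCells);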

  8. 21 CFR 1305.24 - Central processing of orders.

    Science.gov (United States)

    2010-04-01

    ... order with all linked records on the central computer system. (b) A company that has central processing... or more registered locations and maintains a central processing computer system in which orders are... the company owns and operates....

  9. Vortex particle method in parallel computations on graphical processing units used in study of the evolution of vortex structures

    International Nuclear Information System (INIS)

    Understanding the dynamics and the mutual interaction among various types of vortical motions is a key ingredient in clarifying and controlling fluid motion. In the paper several different cases related to vortex tube interactions are presented. Due to problems with very long computation times on a single processor, the vortex-in-cell (VIC) method is implemented on the multicore architecture of a graphics processing unit (GPU). Numerical results of leapfrogging of two vortex rings for inviscid and viscous fluid are presented as test cases for the new multi-GPU implementation of the VIC method. Influence of the Reynolds number on the reconnection process is shown for two examples: antiparallel vortex tubes and orthogonally offset vortex tubes. Our aim is to show the great potential of the VIC method for solutions of three-dimensional flow problems and that the VIC method is very well suited for parallel computation. (paper)

  10. Performance evaluation for volumetric segmentation of multiple sclerosis lesions using MATLAB and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Le, Anh H.; Park, Young W.; Ma, Kevin; Jacobs, Colin; Liu, Brent J.

    2010-03-01

    Multiple Sclerosis (MS) is a progressive neurological disease affecting myelin pathways in the brain. Multiple lesions in the white matter can cause paralysis and severe motor disabilities in the affected patient. To address the inconsistency and user-dependency of manual lesion measurement in MRI, we have proposed a 3-D automated lesion quantification algorithm to enable objective and efficient lesion volume tracking. The computer-aided detection (CAD) of MS, written in MATLAB, utilizes the K-Nearest Neighbors (KNN) method to compute the probability of lesions on a per-voxel basis. Despite the highly optimized image-processing algorithm used in CAD development, MS CAD integration and evaluation in the clinical workflow is technically challenging due to the high computation rates and memory bandwidth required by the recursive nature of the algorithm. In this paper, we present the development and evaluation of a computing engine in the graphical processing unit (GPU) with MATLAB for segmentation of MS lesions. The paper investigates the utilization of a high-end GPU for parallel computing of KNN in the MATLAB environment to improve algorithm performance. The integration is accomplished using NVIDIA's CUDA development toolkit for MATLAB. The results of this study will validate the practicality and effectiveness of the prototype MS CAD in a clinical setting. The GPU method may allow MS CAD to be rapidly integrated into an electronic patient record or any disease-centric health care system.

  11. Graphics Processing Unit Implementation of the Particle Filter

    OpenAIRE

    Hendeby, Gustaf; Hol, Jeroen; Karlsson, Rickard; Gustafsson, Fredrik

    2006-01-01

    Modern graphics cards for computers, and especially their graphics processing units (GPUs), are designed for fast rendering of graphics. In order to achieve this GPUs are equipped with a parallel architecture which can be exploited for general-purpose computing on GPU (GPGPU) as a complement to the central processing unit (CPU). In this paper GPGPU techniques are used to make a parallel GPU implementation of state-of-the-art recursive Bayesian estimation using particle filters (PF). The modif...

  12. A Graphics Processing Unit Implementation of the Particle Filter

    OpenAIRE

    Hendeby, Gustaf; Hol, Jeroen; Karlsson, Rickard; Gustafsson, Fredrik

    2007-01-01

    Modern graphics cards for computers, and especially their graphics processing units (GPUs), are designed for fast rendering of graphics. In order to achieve this GPUs are equipped with a parallel architecture which can be exploited for general-purpose computing on GPU (GPGPU) as a complement to the central processing unit (CPU). In this paper GPGPU techniques are used to make a parallel GPU implementation of state-of-the-art recursive Bayesian estimation using particle filters (PF). The modif...

  13. Reduction of computing time for seismic applications based on the Helmholtz equation by Graphics Processing Units

    NARCIS (Netherlands)

    Knibbe, H.P.

    2015-01-01

    The oil and gas industry makes use of computational intensive algorithms to provide an image of the subsurface. The image is obtained by sending wave energy into the subsurface and recording the signal required for a seismic wave to reflect back to the surface from the Earth interfaces that may have

  14. Central control element expands computer capability

    Science.gov (United States)

    Easton, R. A.

    1975-01-01

    Redundant processing and multiprocessing modes can be obtained from one computer by using a logic configuration. The configuration serves as a central control element that can automatically alternate between a high-capacity multiprocessing mode and a high-reliability redundant mode using dynamic mode switching in real time.

  15. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for an integer and 1000 times for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas Kanade Optical Flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than the non-FS CPU implementation. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
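
    As a rough illustration of the full-grid-search SAD block matching evaluated above (not the authors' implementation), the CUDA sketch below assigns one thread to each candidate integer displacement of a single reference block; the minimum-SAD displacement would then be selected on the host. The names, the single-block scope and the out-of-frame sentinel value are assumptions.

        #include <cuda_runtime.h>
        #include <math.h>

        // Full-search SAD for one reference block: each thread evaluates one candidate
        // integer displacement (dx, dy) inside a [-range, +range] window and writes its
        // summed absolute difference into a (2*range+1)^2 result grid.
        __global__ void sadFullSearch(const unsigned char* ref, const unsigned char* cur,
                                      int width, int height, int bx, int by,
                                      int bsize, int range, float* sad)
        {
            int dx = (int)(blockIdx.x * blockDim.x + threadIdx.x) - range;
            int dy = (int)(blockIdx.y * blockDim.y + threadIdx.y) - range;
            if (dx > range || dy > range) return;

            int cx = bx + dx, cy = by + dy;
            int out = (dy + range) * (2 * range + 1) + (dx + range);
            if (cx < 0 || cy < 0 || cx + bsize > width || cy + bsize > height) {
                sad[out] = 1e30f;                  // candidate falls outside the frame
                return;
            }
            float acc = 0.0f;
            for (int y = 0; y < bsize; ++y)
                for (int x = 0; x < bsize; ++x)
                    acc += fabsf((float)ref[(by + y) * width + (bx + x)]
                               - (float)cur[(cy + y) * width + (cx + x)]);
            sad[out] = acc;
        }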

  16. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2011-07-01

    We describe and evaluate a fast implementation of a classical block-matching motion estimation algorithm for multiple graphical processing units (GPUs) using the compute unified device architecture computing engine. The implemented block-matching algorithm uses summed absolute difference error criterion and full grid search (FS) for finding optimal block displacement. In this evaluation, we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and noninteger search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 times for integer and 1000 times for a noninteger search grid. The additional speedup for a noninteger search grid comes from the fact that GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with a number of cards is achievable. In addition, we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized nonfull grid search CPU-based motion estimations methods, namely implementation of the Pyramidal Lucas Kanade Optical flow algorithm in OpenCV and simplified unsymmetrical multi-hexagon search in H.264/AVC standard. In these comparisons, FS GPU implementation still showed modest improvement even though the computational complexity of FS GPU implementation is substantially higher than non-FS CPU implementation. We also demonstrated that for an image sequence of 720 × 480 pixels in resolution commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames-per-second using two NVIDIA C1060 Tesla GPU cards.

  17. Improvement of MS (multiple sclerosis) CAD (computer aided diagnosis) performance using C/C++ and computing engine in the graphical processing unit (GPU)

    Science.gov (United States)

    Suh, Joohyung; Ma, Kevin; Le, Anh

    2011-03-01

    Multiple Sclerosis (MS) is a disease caused by damage to the myelin around axons of the brain and spinal cord. Currently, MR imaging is used for diagnosis, but it is highly variable and time-consuming since lesion detection and estimation of lesion volume are performed manually. For this reason, we developed a CAD (Computer Aided Diagnosis) system to assist segmentation of MS and facilitate the physician's diagnosis. The MS CAD system utilizes the K-NN (k-nearest neighbor) algorithm to detect and segment the lesion volume on a per-voxel basis. The prototype MS CAD system was developed in the MATLAB environment. Currently, the MS CAD system consumes a huge amount of time to process data. In this paper we present the development of a second version of the MS CAD system, which has been converted into C/C++ in order to take advantage of the GPU (Graphical Processing Unit), which provides parallel computation. With the realization in C/C++ and the utilization of the GPU, we expect to cut running time drastically. The paper investigates the conversion from MATLAB to C/C++ and the utilization of a high-end GPU for parallel computing of data to improve the algorithm performance of MS CAD.

  18. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid.

    Science.gov (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn

    2016-09-15

    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905

  19. A solution-processed high performance organic solar cell using a small molecule with the thieno[3,2-b]thiophene central unit.

    Science.gov (United States)

    Zhang, Qian; Wang, Yunchuang; Kan, Bin; Wan, Xiangjian; Liu, Feng; Ni, Wang; Feng, Huanran; Russell, Thomas P; Chen, Yongsheng

    2015-10-25

    A solution-processed acceptor-donor-acceptor (A-D-A) small molecule with thieno[3,2-b]thiophene as the central building block and 2-(1,1-dicyanomethylene)-rhodanine as the terminal unit, DRCN8TT, was designed and synthesized. An optimized power conversion efficiency (PCE) of 8.11% was achieved, which is much higher than that of its analogue molecule DRCN8T. The improved performance was ascribed to a morphology consisting of small, highly crystalline domains that were nearly commensurate with the exciton diffusion length. PMID:26329677

  20. The Vacuum Pyrolysis of the Central Processing Unit

    Institute of Scientific and Technical Information of China (English)

    王晓雅

    2012-01-01

    The low-temperature pyrolysis of an important type of electronic waste, the central processing unit (CPU), was investigated under vacuum conditions and compared with the results of higher-temperature pyrolysis. Results showed that at pyrolysis temperatures of 500-700 °C the CPU pyrolysed thoroughly with a high pyrolysis-oil yield, which favours recovery of the organics in the CPU, and the pins could be completely separated from the base plates. When pyrolysis was carried out at 300-400 °C, the solder mask of the CPU was pyrolysed and the pins could be separated from the base plates with a relatively intact gold-plated layer. In this case the pyrolysis-oil yield was lower, but the composition of the oil was relatively simple, making it easier to separate and purify.

  1. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Science.gov (United States)

    Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy

    2016-03-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality from the simulation was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is relatively the same.

  2. Computers for WWER-440 unit production and technology control

    International Nuclear Information System (INIS)

    The systems for technological process inspection and control are of Soviet origin and were designed in the 1970s. They should thus be reconstructed and upgraded or replaced. In the meantime, a number of minor innovations have been accomplished, such as the replacement of relays, substitution of floppy disk drives by Winchester and RAM disk drives, temperature measurement standby systems and direct control of the Hindukus system. The most important task is the creation of a communication system interconnecting the unit information systems with the central computer and the expansion of the unit information system functions. Schematics of the systems are shown

  3. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  4. Graphics processing units: more than the way to realistic video games

    OpenAIRE

    GARCIA-SUCERQUIA, JORGE; Trujillo, Carlos

    2011-01-01

    The huge market for video games has propelled the development of hardware and software focused on making the game environment more realistic. Among such developments are graphics processing units (GPUs), which are intended to relieve the central processing unit (CPU) of the host computer of the computation that creates "life" for the video games. GPUs reach this goal with the use of multiple computation cores operating on a parallel architecture; these features have made the GPUs at...

  5. Real-space density functional theory on graphical processing units: computational approach and comparison to Gaussian basis set methods

    OpenAIRE

    Andrade, Xavier; Aspuru-Guzik, Alan

    2013-01-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a su...

  6. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  7. Hyperspectral processing in graphical processing units

    Science.gov (United States)

    Winter, Michael E.; Winter, Edwin M.

    2011-06-01

    With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across the board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
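
    Of the four algorithms listed above, linear unmixing is the simplest to sketch. The CUDA kernel below illustrates unconstrained least-squares unmixing with one thread per pixel, assuming the pseudo-inverse of the endmember matrix has been precomputed on the host; it is an illustration under those assumptions, not the authors' implementation.

        #include <cuda_runtime.h>

        // Unconstrained linear unmixing: with the pseudo-inverse P of the endmember
        // matrix (nEnd x nBands) precomputed on the host, each thread maps one pixel
        // spectrum x to least-squares abundance estimates a = P * x.
        __global__ void unmixPixels(const float* P, const float* cube, // nPixels x nBands
                                    float* abundances, int nPixels, int nBands, int nEnd)
        {
            int p = blockIdx.x * blockDim.x + threadIdx.x;
            if (p >= nPixels) return;

            for (int e = 0; e < nEnd; ++e) {
                float acc = 0.0f;
                for (int b = 0; b < nBands; ++b)
                    acc += P[e * nBands + b] * cube[(size_t)p * nBands + b];
                abundances[(size_t)p * nEnd + e] = acc;
            }
        }

        // Illustrative launch:
        // unmixPixels<<<(nPixels + 255) / 256, 256>>>(d_P, d_cube, d_abund, nPixels, nBands, nEnd);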

  8. Relativistic hydrodynamics on graphics processing units

    CERN Document Server

    Sikorski, Jan; Porter-Sobieraj, Joanna; Słodkowski, Marcin; Krzyżanowski, Piotr; Książek, Natalia; Duda, Przemysław

    2016-01-01

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by lack of efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers unprecedented increase of the computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. Third order Runge-Kutta scheme was used for integration in the t...

  9. Relativistic hydrodynamics on graphics processing units

    International Nuclear Information System (INIS)

    Hydrodynamics calculations have been successfully used in studies of the bulk properties of the Quark-Gluon Plasma, particularly of elliptic flow and shear viscosity. However, there are areas (for instance event-by-event simulations for flow fluctuations and higher-order flow harmonics studies) where further advancement is hampered by lack of efficient and precise 3+1D program. This problem can be solved by using Graphics Processing Unit (GPU) computing, which offers unprecedented increase of the computing power compared to standard CPU simulations. In this work, we present an implementation of 3+1D ideal hydrodynamics simulations on the Graphics Processing Unit using Nvidia CUDA framework. MUSTA-FORCE (MUlti STAge, First ORder CEntral, with a slope limiter and MUSCL reconstruction) and WENO (Weighted Essentially Non-Oscillating) schemes are employed in the simulations, delivering second (MUSTA-FORCE), fifth and seventh (WENO) order of accuracy. Third order Runge-Kutta scheme was used for integration in the time domain. Our implementation improves the performance by about 2 orders of magnitude compared to a single threaded program. The algorithm tests of 1+1D shock tube and 3+1D simulations with ellipsoidal and Hubble-like expansion are presented

  10. Real-Space Density Functional Theory on Graphical Processing Units: Computational Approach and Comparison to Gaussian Basis Set Methods.

    Science.gov (United States)

    Andrade, Xavier; Aspuru-Guzik, Alán

    2013-10-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs. PMID:26589153

  11. Real-space density functional theory on graphical processing units: computational approach and comparison to Gaussian basis set methods

    CERN Document Server

    Andrade, Xavier

    2013-01-01

    We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code OCTOPUS, can reach a sustained performance of up to 90 GFlops for a single GPU, representing an important speed-up when compared to the CPU version of the code. Moreover, for some systems our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.

  12. Computers for symbolic processing

    Science.gov (United States)

    Wah, Benjamin W.; Lowrie, Matthew B.; Li, Guo-Jie

    1989-01-01

    A detailed survey on the motivations, design, applications, current status, and limitations of computers designed for symbolic processing is provided. Symbolic processing computations are performed at the word, relation, or meaning levels, and the knowledge used in symbolic applications may be fuzzy, uncertain, indeterminate, and ill represented. Various techniques for knowledge representation and processing are discussed from both the designers' and users' points of view. The design and choice of a suitable language for symbolic processing and the mapping of applications into a software architecture are then considered. The process of refining the application requirements into hardware and software architectures is treated, and state-of-the-art sequential and parallel computers designed for symbolic processing are discussed.

  13. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units.

    Science.gov (United States)

    Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi

    2011-11-01

    Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. PMID:21764258
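
    The data-parallel pattern used in such simulations is one thread per neuron, with membrane updates and synaptic currents computed in separate kernels. The basal ganglia model uses conductance-based neurons, but the idea can be sketched with a much simpler leaky integrate-and-fire update; all parameters and the Euler step below are illustrative assumptions, not the authors' model.

```cuda
#include <cuda_runtime.h>

// One Euler step of a leaky integrate-and-fire population, one thread per
// neuron. i_syn would be filled by a separate synaptic-current kernel.
__global__ void lif_step(float* v, const float* i_syn, int* spiked,
                         int n_neurons, float dt)
{
    const float tau_m    = 20.0f;   // membrane time constant (ms), assumed
    const float v_rest   = -70.0f;  // resting potential (mV), assumed
    const float v_thresh = -50.0f;  // spike threshold (mV), assumed
    const float v_reset  = -65.0f;  // reset potential (mV), assumed

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_neurons) return;

    float vi = v[i];
    vi += dt * (-(vi - v_rest) + i_syn[i]) / tau_m;  // leaky integration

    if (vi >= v_thresh) {       // threshold crossing -> emit spike and reset
        spiked[i] = 1;
        vi = v_reset;
    } else {
        spiked[i] = 0;
    }
    v[i] = vi;
}
```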

  14. Using compute unified device architecture-enabled graphic processing unit to accelerate fast Fourier transform-based regression Kriging interpolation on a MODIS land surface temperature image

    Science.gov (United States)

    Hu, Hongda; Shu, Hong; Hu, Zhiyong; Xu, Jianhui

    2016-04-01

    Kriging interpolation provides the best linear unbiased estimation for unobserved locations, but its heavy computation limits the manageable problem size in practice. To address this issue, an efficient interpolation procedure incorporating the fast Fourier transform (FFT) was developed. Extending this efficient approach, we propose an FFT-based parallel algorithm to accelerate regression Kriging interpolation on an NVIDIA® compute unified device architecture (CUDA)-enabled graphic processing unit (GPU). A high-performance cuFFT library in the CUDA toolkit was introduced to execute computation-intensive FFTs on the GPU, and three time-consuming processes were redesigned as kernel functions and executed on the CUDA cores. A MODIS land surface temperature 8-day image tile at a resolution of 1 km was resampled to create experimental datasets at eight different output resolutions. These datasets were used as the interpolation grids with different sizes in a comparative experiment. Experimental results show that speedup of the FFT-based regression Kriging interpolation accelerated by GPU can exceed 1000 when processing datasets with large grid sizes, as compared to the traditional Kriging interpolation running on the CPU. These results demonstrate that the combination of FFT methods and GPU-based parallel computing techniques greatly improves the computational performance without loss of precision.
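
    At the core of such an FFT-based scheme are the cuFFT transforms themselves. The snippet below only shows a forward/inverse 2D complex transform on device-resident data as a generic illustration; the grid dimensions are assumptions and the covariance-convolution step of regression Kriging is indicated only by a comment.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

// Forward and inverse 2D complex-to-complex FFT on data already resident
// on the GPU. nx, ny are the (assumed) interpolation grid dimensions.
void fft2d_roundtrip(cufftComplex* d_data, int nx, int ny)
{
    cufftHandle plan;
    cufftPlan2d(&plan, nx, ny, CUFFT_C2C);

    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    // In an FFT-based Kriging scheme, the covariance convolution would be
    // applied here as a point-wise product in the frequency domain.
    cufftExecC2C(plan, d_data, d_data, CUFFT_INVERSE);   // unnormalized

    cufftDestroy(plan);
}
```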

  15. Power plant process computer

    International Nuclear Information System (INIS)

    The concept of instrumentation and control in nuclear power plants incorporates the use of process computers for tasks which are on-line in respect to real-time requirements but not closed-loop in respect to closed-loop control. The general scope of tasks is: - alarm annunciation on CRT's - data logging - data recording for post trip reviews and plant behaviour analysis - nuclear data computation - graphic displays. Process computers are used additionally for dedicated tasks such as the aeroball measuring system, the turbine stress evaluator. Further applications are personal dose supervision and access monitoring. (orig.)

  16. Central Limit Theorem for Nonlinear Hawkes Processes

    CERN Document Server

    Zhu, Lingjiong

    2012-01-01

    The Hawkes process is a self-exciting point process with a clustering effect whose jump rate depends on its entire past history. It has wide applications in neuroscience, finance and many other fields. The linear Hawkes process has an immigration-birth representation and can be computed more or less explicitly. It has been extensively studied in the past and its limit theorems are well understood. On the contrary, the nonlinear Hawkes process lacks the immigration-birth representation and is much harder to analyze. In this paper, we obtain a functional central limit theorem for the nonlinear Hawkes process.
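
    Schematically, a functional central limit theorem of this kind can be stated as below; this is only an indicative form, with $\mu$ standing for the limiting mean intensity from the law of large numbers and $\sigma^2$ for the asymptotic variance, whose precise expressions and regularity conditions are given in the paper and not restated here.

```latex
\[
  \left( \frac{N_{nt} - \mu n t}{\sigma \sqrt{n}} \right)_{t \in [0,1]}
  \;\Longrightarrow\; \bigl( B(t) \bigr)_{t \in [0,1]}
  \qquad \text{as } n \to \infty,
\]
% where $B$ is a standard Brownian motion and the convergence is weak
% convergence on $D[0,1]$ with the Skorokhod topology.
```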

  17. Aquarius Digital Processing Unit

    Science.gov (United States)

    Forgione, Joshua; Winkert, George; Dobson, Norman

    2009-01-01

    Three documents provide information on a digital processing unit (DPU) for the planned Aquarius mission, in which a radiometer aboard a spacecraft orbiting Earth is to measure radiometric temperatures from which data on sea-surface salinity are to be deduced. The DPU is the interface between the radiometer and an instrument-command-and-data system aboard the spacecraft. The DPU cycles the radiometer through a programmable sequence of states, collects and processes all radiometric data, and collects all housekeeping data pertaining to operation of the radiometer. The documents summarize the DPU design, with emphasis on innovative aspects that include mainly the following: a) In the radiometer and the DPU, conversion from analog voltages to digital data is effected by means of asynchronous voltage-to-frequency converters in combination with a frequency-measurement scheme implemented in field-programmable gate arrays (FPGAs). b) A scheme to compensate for aging and changes in the temperature of the DPU in order to provide an overall temperature-measurement accuracy within 0.01 K includes a high-precision, inexpensive DC temperature measurement scheme and a drift-compensation scheme that was used on the Cassini radar system. c) An interface among multiple FPGAs in the DPU guarantees setup and hold times.

  18. Computed tomography of the central nervous system in small animals

    International Nuclear Information System (INIS)

    With computed tomography in 44 small animals some well defined anatomical structures and pathological processes of the central nervous system are described. Computed tomography is not only necessary for the diagnosis of tumors; malformations, inflammatory, degenerative and vascular diseases and traumas are also visible

  19. Fast point-based method of a computer-generated hologram for a triangle-patch model by using a graphics processing unit.

    Science.gov (United States)

    Sugawara, Takuya; Ogihara, Yuki; Sakamoto, Yuji

    2016-01-20

    The point-based method and the fast-Fourier-transform-based method are commonly used calculation methods for computer-generated holograms. This paper proposes a novel fast calculation method for a patch model, which uses the point-based method. The method provides a calculation time that is proportional to the number of patches but not to the number of point light sources, which makes it suitable for quickly calculating a wide area covered by patches. Experiments using a graphics processing unit indicated that the proposed method is about 8 times or more faster than the ordinary point-based method. PMID:26835949
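
    The point-based method itself amounts to summing, at every hologram pixel, the spherical-wave contributions of all point light sources, which is why its cost scales with the product of pixels and points. The CUDA kernel below is a minimal sketch of that inner loop, not the authors' patch-based acceleration; the pixel pitch, real-valued fringe accumulation and point layout are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Accumulate the interference fringe at each hologram pixel from all
// point light sources stored as (x, y, z, amplitude). One thread per pixel.
__global__ void cgh_point_based(const float4* points, int n_points,
                                float* hologram, int width, int height,
                                float pitch, float wavenumber)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    float hx = (px - 0.5f * width) * pitch;   // pixel position on the hologram
    float hy = (py - 0.5f * height) * pitch;

    float field = 0.0f;
    for (int j = 0; j < n_points; ++j) {
        float4 p = points[j];
        float dx = hx - p.x, dy = hy - p.y, dz = p.z;
        float r = sqrtf(dx * dx + dy * dy + dz * dz);
        field += p.w * cosf(wavenumber * r);  // spherical-wave phase term
    }
    hologram[py * width + px] = field;
}
```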

  20. Development of new process network for gas chromatograph and analyzers connected with SCADA system and Digital Control Computers at Cernavoda NPP Unit 1

    International Nuclear Information System (INIS)

    The continuous monitoring of the gas mixture concentrations (deuterium/hydrogen/oxygen/nitrogen) accumulated in the 'Moderator Cover Gas', 'Liquid Control Zone' and 'Heat Transport D2O Storage Tank Cover Gas', as well as the continuous monitoring of the heavy water concentration in light water in the 'Boilers Steam', 'Boilers Blown Down', 'Moderator Heat Exchangers' and 'Recirculated Water System' to sense any leaks at Cernavoda NPP Unit 1, led to the requirement of developing a new process network for the gas chromatograph and analyzers connected to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. In 2005 the process network for the gas chromatograph was designed and implemented, connecting the gas chromatograph equipment to the SCADA system and Digital Control Computers of Cernavoda NPP Unit 1. This process network was later extended to also connect the AE13 and AE14 Fourier Transform Infrared (FTIR) analyzers. The gas chromatograph equipment measures the concentrations of the gas mixture (deuterium/hydrogen/oxygen/nitrogen) with high accuracy. The FTIR AE13 and AE14 analyzers measure the heavy water concentration in light water in the Boilers Steam, Boilers Blown Down, Moderator Heat Exchangers and Recirculated Water System, monitoring and signaling any leaks. The gas chromatograph equipment and the FTIR AE13 and AE14 analyzers use the new OPC (Object Linking and Embedding for Process Control) technologies available in ABB's VistaNet network for interoperability with automation equipment. This new process network interconnected the ABB chromatograph and the FTIR analyzers with the plant Digital Control Computers using new technology, resulting in increased reliability, improved inspection capability and improved system safety

  1. Sono-leather technology with ultrasound: a boon for unit operations in leather processing - review of our research work at Central Leather Research Institute (CLRI), India.

    Science.gov (United States)

    Sivakumar, Venkatasubramanian; Swaminathan, Gopalaraman; Rao, Paruchuri Gangadhar; Ramasami, Thirumalachari

    2009-01-01

    Ultrasound is sound with a frequency above the human audible range of 16 Hz to 16 kHz. In recent years, numerous unit operations involving physical as well as chemical processes have been reported to be enhanced by ultrasonic irradiation. The benefits include improved process efficiency, reduced process time, the ability to run processes under milder conditions, and the avoidance of some toxic chemicals, leading to cleaner processing. Ultrasonic irradiation can therefore serve as an advanced technique for augmenting these processes; the important point is that it is a physical method of activation rather than one relying on additional chemical entities. Detailed studies have been made of the unit operations related to leather, such as diffusion rate enhancement through the porous leather matrix, cleaning, degreasing, tanning, dyeing, fatliquoring, oil-water emulsification and solid-liquid tannin extraction from vegetable tanning materials, as well as of precipitation reactions in wastewater treatment. The fundamental mechanism involved in these processes is ultrasonic cavitation in the liquid medium, and some process-specific mechanisms also contribute to the enhancement. For instance, possible real-time reversible pore-size changes during ultrasound propagation through the skin/leather matrix could explain the diffusion rate enhancement in leather processing, as reported for the first time. Exhaustive scientific research has been carried out in this area by our group in the Chemical Engineering Division of CLRI, and most of these benefits have been demonstrated in publications in peer-reviewed international journals. The overall results indicate an approximately 2-5-fold increase in process efficiency due to ultrasound under the given process conditions for various unit operations, with additional benefits. Scale-up studies are underway to convert these concepts into a viable larger-scale operation. In

  2. Tandem processes promoted by a hydrogen shift in 6-arylfulvenes bearing acetalic units at ortho position: a combined experimental and computational study

    Science.gov (United States)

    Alajarin, Mateo; Marin-Luna, Marta; Vidal, Angel

    2016-01-01

    6-Phenylfulvenes bearing (1,3-dioxolan or dioxan)-2-yl substituents at ortho position convert into mixtures of 4- and 9-(hydroxy)alkoxy-substituted benz[f]indenes as a result of cascade processes initiated by a thermally activated hydrogen shift. Structurally related fulvenes with non-cyclic acetalic units afforded mixtures of 4- and 9-alkoxybenz[f]indenes under similar thermal conditions. Mechanistic paths promoted by an initial [1,4]-, [1,5]-, [1,7]- or [1,9]-H shift are conceivable for explaining these conversions. Deuterium labelling experiments exclude the [1,4]-hydride shift as the first step. A computational study scrutinized the reaction channels of these tandem conversions starting by [1,5]-, [1,7]- and [1,9]-H shifts, revealing that this first step is the rate-determining one and that the [1,9]-H shift is the one with the lowest energy barrier. PMID:26977185

  3. Tandem processes promoted by a hydrogen shift in 6-arylfulvenes bearing acetalic units at ortho position: a combined experimental and computational study.

    Science.gov (United States)

    Alajarin, Mateo; Marin-Luna, Marta; Sanchez-Andrada, Pilar; Vidal, Angel

    2016-01-01

    6-Phenylfulvenes bearing (1,3-dioxolan or dioxan)-2-yl substituents at ortho position convert into mixtures of 4- and 9-(hydroxy)alkoxy-substituted benz[f]indenes as result of cascade processes initiated by a thermally activated hydrogen shift. Structurally related fulvenes with non-cyclic acetalic units afforded mixtures of 4- and 9-alkoxybenz[f]indenes under similar thermal conditions. Mechanistic paths promoted by an initial [1,4]-, [1,5]-, [1,7]- or [1,9]-H shift are conceivable for explaining these conversions. Deuterium labelling experiments exclude the [1,4]-hydride shift as the first step. A computational study scrutinized the reaction channels of these tandem conversions starting by [1,5]-, [1,7]- and [1,9]-H shifts, revealing that this first step is the rate-determining one and that the [1,9]-H shift is the one with the lowest energy barrier. PMID:26977185

  4. 2011 floods of the central United States

    Science.gov (United States)

    U.S. Geological Survey

    2013-01-01

    The Central United States experienced record-setting flooding during 2011, with floods that extended from headwater streams in the Rocky Mountains, to transboundary rivers in the upper Midwest and Northern Plains, to the deep and wide sand-bedded lower Mississippi River. The U.S. Geological Survey (USGS), as part of its mission, collected extensive information during and in the aftermath of the 2011 floods to support scientific analysis of the origins and consequences of extreme floods. The information collected for the 2011 floods, combined with decades of past data, enables scientists and engineers from the USGS to provide syntheses and scientific analyses to inform emergency managers, planners, and policy makers about life-safety, economic, and environmental-health issues surrounding flood hazards for the 2011 floods and future floods like it. USGS data, information, and scientific analyses provide context and understanding of the effect of floods on complex societal issues such as ecosystem and human health, flood-plain management, climate-change adaptation, economic security, and the associated policies enacted for mitigation. Among the largest societal questions is "How do we balance agricultural, economic, life-safety, and environmental needs in and along our rivers?" To address this issue, many scientific questions have to be answered including the following: * How do the 2011 weather and flood conditions compare to the past weather and flood conditions and what can we reasonably expect in the future for flood magnitudes?

  5. Data Sorting Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    M. J. Mišić

    2012-06-01

    Graphics processing units (GPUs) have been increasingly used for general-purpose computation in recent years. The GPU accelerated applications are found in both scientific and commercial domains. Sorting is considered as one of the very important operations in many applications, so its efficient implementation is essential for the overall application performance. This paper represents an effort to analyze and evaluate the implementations of the representative sorting algorithms on the graphics processing units. Three sorting algorithms (Quicksort, Merge sort, and Radix sort) were evaluated on the Compute Unified Device Architecture (CUDA) platform that is used to execute applications on NVIDIA graphics processing units. Algorithms were tested and evaluated using an automated test environment with input datasets of different characteristics. Finally, the results of this analysis are briefly discussed.
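
    For reference, the least-effort way to sort on a CUDA device is through the Thrust library bundled with the toolkit, which typically dispatches to a parallel radix sort for primitive keys. The snippet below is a generic illustration and is not the test environment used in the paper; the dataset size is an assumption.

```cuda
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>

int main()
{
    const int n = 1 << 20;                 // assumed dataset size
    thrust::host_vector<int> h_keys(n);
    for (int i = 0; i < n; ++i) h_keys[i] = std::rand();

    thrust::device_vector<int> d_keys = h_keys;  // copy keys to the GPU
    thrust::sort(d_keys.begin(), d_keys.end());  // parallel sort on device

    thrust::copy(d_keys.begin(), d_keys.end(), h_keys.begin());  // copy back
    return 0;
}
```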

  6. Mobility in process calculi and natural computing

    CERN Document Server

    Aman, Bogdan

    2011-01-01

    The design of formal calculi in which fundamental concepts underlying interactive systems can be described and studied has been a central theme of theoretical computer science in recent decades, while membrane computing, a rule-based formalism inspired by biological cells, is a more recent field that belongs to the general area of natural computing. This is the first book to establish a link between these two research directions while treating mobility as the central topic. In the first chapter the authors offer a formal description of mobility in process calculi, noting the entities that move

  7. THOR Particle Processing Unit PPU

    Science.gov (United States)

    Federica Marcucci, Maria; Bruno, Roberto; Consolini, Giuseppe; D'Amicis, Raffaella; De Lauretis, Marcello; De Marco, Rossana; De Michelis, Paola; Francia, Patrizia; Laurenza, Monica; Materassi, Massimo; Vellante, Massimo; Valentini, Francesco

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. On board THOR, data collected by the Turbulent Electron Analyser, the Ion Mass Spectrum analyser and the Cold Solar Wind ion analyser instruments will be processed by a common digital processor unit, the Particle Processing Unit (PPU). PPU architecture will be based on the state of the art space flight processors and will be fully redundant, in order to efficiently and safely handle the data from the numerous sensors of the instruments suite. The approach of a common processing unit for particle instruments is very important for the enabling of an efficient management for correlative plasma measurements, also facilitating interoperation with other instruments on the spacecraft. Moreover, it permits technical and programmatic synergies giving the possibility to optimize and save spacecraft resources.

  8. Computer Processed Evaluation.

    Science.gov (United States)

    Griswold, George H.; Kapp, George H.

    A student testing system was developed consisting of computer generated and scored equivalent but unique repeatable tests based on performance objectives for undergraduate chemistry classes. The evaluation part of the computer system, made up of four separate programs written in FORTRAN IV, generates tests containing varying numbers of multiple…

  9. Parallel Computers in Signal Processing

    Directory of Open Access Journals (Sweden)

    Narsingh Deo

    1985-07-01

    Signal processing often requires a great deal of raw computing power for which it is important to take a look at parallel computers. The paper reviews various types of parallel computer architectures from the viewpoint of signal and image processing.

  10. Magnetohydrodynamics simulations on graphics processing units

    CERN Document Server

    Wong, Hon-Cheng; Feng, Xueshang; Tang, Zesheng

    2009-01-01

    Magnetohydrodynamics (MHD) simulations based on the ideal MHD equations have become a powerful tool for modeling phenomena in a wide range of applications including laboratory, astrophysical, and space plasmas. In general, high-resolution methods for solving the ideal MHD equations are computationally expensive and Beowulf clusters or even supercomputers are often used to run the codes that implemented these methods. With the advent of the Compute Unified Device Architecture (CUDA), modern graphics processing units (GPUs) provide an alternative approach to parallel computing for scientific simulations. In this paper we present, to the authors' knowledge, the first implementation to accelerate computation of MHD simulations on GPUs. Numerical tests have been performed to validate the correctness of our GPU MHD code. Performance measurements show that our GPU-based implementation achieves speedups of 2 (1D problem with 2048 grids), 106 (2D problem with 1024^2 grids), and 43 (3D problem with 128^3 grids), respec...

  11. Representation of Musical Computer Processes

    OpenAIRE

    Fober, Dominique; Orlarey, Yann; Letz, Stéphane

    2014-01-01

    The paper presents a study about the representation of musical computer processes within a music score. The idea is to provide performers with information that could be useful especially in the context of interactive music. The paper starts with a characterization of a musical computer process in order to define the values to be represented. Next it proposes an approach to time representation suitable for representing asynchronous processes.

  12. Graphics Processing Units for HEP trigger systems

    Science.gov (United States)

    Ammendola, R.; Bauce, M.; Biagioni, A.; Chiozzi, S.; Cotta Ramusino, A.; Fantechi, R.; Fiorini, M.; Giagu, S.; Gianoli, A.; Lamanna, G.; Lonardo, A.; Messina, A.; Neri, I.; Paolucci, P. S.; Piandani, R.; Pontisso, L.; Rescigno, M.; Simula, F.; Sozzi, M.; Vicini, P.

    2016-07-01

    General-purpose computing on GPUs (Graphics Processing Units) is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerator in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughput, the use of such devices for real-time applications in high-energy physics data acquisition and trigger systems is becoming ripe. We will discuss the use of online parallel computing on GPU for synchronous low level trigger, focusing on CERN NA62 experiment trigger system. The use of GPU in higher level trigger system is also briefly considered.

  13. Computer program for pulmonoscintigram processing

    International Nuclear Information System (INIS)

    The paper is concerned with an algorithm of a program for pulmonoscintigram processing. The program, developed in the algorithmic languages Basic and Assembler on the basis of this algorithm, made it possible to facilitate the data processing, to reduce the computer-operator skill required of the physician, to raise the objectivity of the diagnosis of pulmonary diseases, etc. The program is intended for the computers VIP-450, VIP-550 and MCS-560 (Technicare, USA)

  14. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU’s application-specific architecture, harnessing the GPU’s computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1 000x1 000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
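
    The computational workhorse behind GPU-accelerated LSA is dense linear algebra on the term-document matrix, which on the GPU reduces to cuBLAS calls. The snippet below shows a single SGEMM (C = A·B for square column-major matrices) as a generic illustration, not the authors' full SVD pipeline; the matrix size and memory layout are assumptions.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>

// C = A * B for n x n column-major matrices already resident on the GPU.
void gemm_on_gpu(const float* d_A, const float* d_B, float* d_C, int n)
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n,               // m, n, k
                &alpha, d_A, n,        // A and its leading dimension
                        d_B, n,        // B and its leading dimension
                &beta,  d_C, n);       // C and its leading dimension

    cublasDestroy(handle);
}
```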

  15. Partial wave analysis using graphics processing units

    International Nuclear Information System (INIS)

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.

  16. Guide to Computational Geometry Processing

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Gravesen, Jens; Anton, François;

    Optical scanning is rapidly becoming ubiquitous. From industrial laser scanners to medical CT, MR and 3D ultrasound scanners, numerous organizations now have easy access to optical acquisition devices that provide huge volumes of image data. However, the raw geometry data acquired must first be processed before it is useful. This Guide to Computational Geometry Processing reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. This is balanced with an introduction to the mathematical background, including metric spaces, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations. The book reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces; examines techniques for computing curvature from polygonal meshes; and describes

  17. Grace: A cross-platform micromagnetic simulator on graphics processing units

    Science.gov (United States)

    Zhu, Ru

    2015-12-01

    A micromagnetic simulator running on graphics processing units (GPUs) is presented. Unlike the GPU implementations of other research groups, which predominantly run on NVidia's CUDA platform, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, and achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude. The simulator paves the way for running large-size micromagnetic simulations on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards, and is freely available to download.

  18. Grace: a Cross-platform Micromagnetic Simulator On Graphics Processing Units

    CERN Document Server

    Zhu, Ru

    2014-01-01

    A micromagnetic simulator running on a graphics processing unit (GPU) is presented. It achieves a significant performance boost compared to previous central processing unit (CPU) simulators, up to two orders of magnitude for large input problems. Different from the GPU implementations of other research groups, this simulator is developed with C++ Accelerated Massive Parallelism (C++ AMP) and is hardware platform independent. It runs on GPUs from vendors including NVidia, AMD and Intel, which paves the way for fast micromagnetic simulation on both high-end workstations with dedicated graphics cards and low-end personal computers with integrated graphics cards. A copy of the simulator software is publicly available.

  19. Tandem processes promoted by a hydrogen shift in 6-arylfulvenes bearing acetalic units at ortho position: a combined experimental and computational study

    OpenAIRE

    ALAJARIN, Mateo; Marin-Luna, Marta; Sanchez-Andrada, Pilar; Vidal, Angel

    2016-01-01

    6-Phenylfulvenes bearing (1,3-dioxolan or dioxan)-2-yl substituents at ortho position convert into mixtures of 4- and 9-(hydroxy)alkoxy-substituted benz[f]indenes as result of cascade processes initiated by a thermally activated hydrogen shift. Structurally related fulvenes with non-cyclic acetalic units afforded mixtures of 4- and 9-alkoxybenz[f]indenes under similar thermal conditions. Mechanistic paths promoted by an initial [1,4]-, [1,5]-, [1,7]- or [1,9]-H shift are conceivable for expla...

  20. Retinoblastoma protein: a central processing unit

    Indian Academy of Sciences (India)

    M Poznic

    2009-06-01

    The retinoblastoma protein (pRb) is one of the key cell-cycle regulating proteins and its inactivation leads to neoplastic transformation and carcinogenesis. This protein regulates critical G1-to-S phase transition through interaction with the E2F family of cell-cycle transcription factors repressing transcription of genes required for this cell-cycle check-point transition. Its activity is regulated through network sensing intracellular and extracellular signals which block or permit phosphorylation (inactivation) of the Rb protein. Mechanisms of Rb-dependent cell-cycle control have been widely studied over the past couple of decades. However, recently it was found that pRb also regulates apoptosis through the same interaction with E2F transcription factors and that Rb–E2F complexes play a role in regulating the transcription of genes involved in differentiation and development.

  1. CHARACTERISTICS OF FARMLAND LEASING IN THE NORTH CENTRAL UNITED STATES

    OpenAIRE

    Patterson, Brian; Hanson, Steven D.; Robison, Lindon J.

    1998-01-01

    Leasing behavior differs across the North Central United States. Survey data is used to characterize leasing activity in the region. Data is collected on the amount of leased farmland, amount of cash and share leased land, and common output share levels. Factors influencing leasing and arrangements are also identified.

  2. Fast blood flow visualization of high-resolution laser speckle imaging data using graphics processing unit.

    Science.gov (United States)

    Liu, Shusen; Li, Pengcheng; Luo, Qingming

    2008-09-15

    Laser speckle contrast analysis (LASCA) is a non-invasive, full-field optical technique that produces a two-dimensional map of blood flow in biological tissue by analyzing speckle images captured by a CCD camera. Due to the heavy computation required for speckle contrast analysis, video-frame-rate visualization of blood flow, which is essential for medical usage, is hardly achievable for high-resolution image data using the CPU (Central Processing Unit) of an ordinary PC (Personal Computer). In this paper, we introduced the GPU (Graphics Processing Unit) into our data processing framework for laser speckle contrast imaging to achieve fast and high-resolution blood flow visualization on PCs by exploiting the high floating-point processing power of commodity graphics hardware. By using the GPU, a 12-60-fold performance enhancement is obtained in comparison to optimized CPU implementations. PMID:18794967
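
    The per-pixel work in speckle contrast analysis is the local ratio K = stddev/mean over a small sliding window, which is embarrassingly parallel across pixels. The CUDA kernel below is a minimal sketch of that computation, not the authors' optimized implementation; the window radius and border handling are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Speckle contrast K = stddev/mean over a (2*radius+1)^2 window,
// one thread per output pixel. Border pixels are simply skipped.
__global__ void speckle_contrast(const float* img, float* contrast,
                                 int width, int height, int radius)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < radius || y < radius ||
        x >= width - radius || y >= height - radius) return;

    float sum = 0.0f, sum_sq = 0.0f;
    int n = (2 * radius + 1) * (2 * radius + 1);

    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            float v = img[(y + dy) * width + (x + dx)];
            sum += v;
            sum_sq += v * v;
        }

    float mean = sum / n;
    float var = sum_sq / n - mean * mean;        // population variance
    contrast[y * width + x] = sqrtf(fmaxf(var, 0.0f)) / mean;
}
```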

  3. Computational Intelligence in Image Processing

    CERN Document Server

    Siarry, Patrick

    2013-01-01

    Computational intelligence based techniques have firmly established themselves as viable, alternate, mathematical tools for more than a decade. They have been extensively employed in many systems and application domains, among these signal processing, automatic control, industrial and consumer electronics, robotics, finance, manufacturing systems, electric power systems, and power electronics. Image processing is also an extremely potent area which has attracted the attention of many researchers who are interested in the development of new computational intelligence-based techniques and their suitable applications, in both research problems and in real-world problems. Part I of the book discusses several image preprocessing algorithms; Part II broadly covers image compression algorithms; Part III demonstrates how computational intelligence-based techniques can be effectively utilized for image analysis purposes; and Part IV shows how pattern recognition, classification and clustering-based techniques can ...

  4. Environmental Engineering Unit Operations and Unit Processes Laboratory Manual.

    Science.gov (United States)

    O'Connor, John T., Ed.

    This manual was prepared for the purpose of stimulating the development of effective unit operations and unit processes laboratory courses in environmental engineering. Laboratory activities emphasizing physical operations, biological, and chemical processes are designed for various educational and equipment levels. An introductory section reviews…

  5. Computer Modelling of Dynamic Processes

    Directory of Open Access Journals (Sweden)

    B. Rybakin

    2000-10-01

    Results of numerical modeling of dynamic problems are summed up in the article. These problems are characteristic of various areas of human activity, in particular of problem solving in ecology. The following problems are considered in the present work: computer modeling of dynamic effects on elastic-plastic bodies, calculation and determination of the performance of gas streams in gas cleaning equipment, and modeling of biogas formation processes.

  6. Strategy as Central and Peripheral Processes

    OpenAIRE

    Juul Andersen, Torben; Fredens, Kjeld

    2012-01-01

    Corporate entrepreneurship is deemed essential to uncover opportunities that shape the future strategic path and adapt the firm to environmental change (e.g., Covin and Miles, 1999; Wolcott and Lippitz, 2007). At the same time, rational central processes are important to execute strategic actions in a coordinated manner (e.g., Baum and Wally, 2003; Brews and Hunt, 1999; Goll and Rasheed, 1997). That is, the organization’s adaptive responses and dynamic capabilities are embedded...

  7. The Executive Process, Grade Eight. Resource Unit (Unit III).

    Science.gov (United States)

    Minnesota Univ., Minneapolis. Project Social Studies Curriculum Center.

    This resource unit, developed by the University of Minnesota's Project Social Studies, introduces eighth graders to the executive process. The unit uses case studies of presidential decision making such as the decision to drop the atomic bomb on Hiroshima, the Cuba Bay of Pigs and quarantine decisions, and the Little Rock decision. A case study of…

  8. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    OpenAIRE

    Xu, Ji; Ren, Ying; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool to study the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on GPU ar...

  9. Programming Graphic Processing Units (GPUs)

    OpenAIRE

    Bakke, Glenn Ruben Årthun

    2009-01-01

    In this thesis we do a broad study of the languages, libraries and frameworks for general purpose computations on graphics processors. We have also studied the different graphics processor architectures that have been developed over the last decade. Eight example programs in OpenGL, CUDA, MPI and OpenMP have been written to emphasize the mechanisms for parallelization and memory management. The example programs have been benchmarked and their source lines counted. We found out that programs for th...

  10. Reducing Central Line-Associated Bloodstream Infections on Inpatient Oncology Units Using Peer Review.

    Science.gov (United States)

    Zavotsky, Kathleen Evanovich; Malast, Tracey; Festus, Onyekachi; Riskie, Vickie

    2015-12-01

    The purpose of this article is to describe a peer-to-peer program and the outcomes of interventions to reduce the incidence of central line-associated bloodstream infections in patients in bone marrow transplantation, medical, and surgical oncology units. The article reviews the process and describes tools used to achieve success in a Magnet®-designated academic medical center. PMID:26583628

  11. Computing with impure numbers - Automatic consistency checking and units conversion using computer algebra

    Science.gov (United States)

    Stoutemyer, D. R.

    1977-01-01

    The computer algebra language MACSYMA enables the programmer to include symbolic physical units in computer calculations, and features automatic detection of dimensionally-inhomogeneous formulas and conversion of inconsistent units in a dimensionally homogeneous formula. Some examples illustrate these features.

  12. Real-time imaging implementation of the Army Research Laboratory synchronous impulse reconstruction radar on a graphics processing unit architecture

    Science.gov (United States)

    Park, Song Jun; Nguyen, Lam H.; Shires, Dale R.; Henz, Brian J.

    2009-05-01

    High computing requirements for the synchronous impulse reconstruction (SIRE) radar algorithm present a challenge for near real-time processing, particularly the calculations involved in output image formation. Forming an image requires a large number of parallel and independent floating-point computations. To reduce the processing time and exploit the abundant parallelism of image processing, a graphics processing unit (GPU) architecture is considered for the imaging algorithm. Widely available off the shelf, high-end GPUs offer inexpensive technology that exhibits great capacity of computing power in one card. To address the parallel nature of graphics processing, the GPU architecture is designed for high computational throughput realized through multiple computing resources to target data parallel applications. Due to a leveled or in some cases reduced clock frequency in mainstream single and multi-core general-purpose central processing units (CPUs), GPU computing is becoming a competitive option for compute-intensive radar imaging algorithm prototyping. We describe the translation and implementation of the SIRE radar backprojection image formation algorithm on a GPU platform. The programming model for GPU's parallel computing and hardware-specific memory optimizations are discussed in the paper. A considerable level of speedup is available from the GPU implementation resulting in processing at real-time acquisition speeds.
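
    The image-formation step referred to here is a backprojection: each output pixel independently accumulates range-compressed samples over all aperture positions, which is exactly the independent, floating-point-heavy pattern that suits a GPU. The kernel below is a heavily simplified schematic (nearest-neighbour range lookup, assumed geometry arrays, pixels assumed at ground level), not the ARL SIRE implementation.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Schematic backprojection: one thread per image pixel, summing the radar
// return sampled at the round-trip range to each aperture position.
__global__ void backproject(const float* data,       // [n_pulses * n_range]
                            const float3* aperture,  // antenna positions
                            const float2* pixel_xy,  // ground-plane pixels
                            float* image, int n_pixels,
                            int n_pulses, int n_range, float range_res)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= n_pixels) return;

    float2 px = pixel_xy[p];
    float acc = 0.0f;

    for (int k = 0; k < n_pulses; ++k) {
        float3 a = aperture[k];
        float dx = px.x - a.x, dy = px.y - a.y, dz = a.z;   // pixel at z = 0
        float r = sqrtf(dx * dx + dy * dy + dz * dz);
        int bin = (int)(r / range_res + 0.5f);              // nearest range bin
        if (bin >= 0 && bin < n_range)
            acc += data[k * n_range + bin];
    }
    image[p] = acc;
}
```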

  13. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), a computing platform for Graphic Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, according to the different features of the different GPU memory spaces, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results prove that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
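
    A shared-memory version of the sharpening kernel stages a tile of the image plus a one-pixel halo into on-chip memory before applying the 4-neighbour Laplacian, so each pixel is read from global memory only once per block. The sketch below illustrates that pattern under simplifying assumptions (tile size, image dimensions divisible by the tile size, output = input minus Laplacian); it is not the authors' exact kernel.

```cuda
#include <cuda_runtime.h>

#define TILE 16   // assumed tile width; launch with TILE x TILE thread blocks

// Laplacian sharpening using a shared-memory tile with a 1-pixel halo.
// For simplicity the image width and height are assumed to be multiples
// of TILE, so every thread maps to a valid pixel.
__global__ void laplacian_sharpen(const unsigned char* in, unsigned char* out,
                                  int width, int height)
{
    __shared__ float tile[TILE + 2][TILE + 2];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    int lx = threadIdx.x + 1, ly = threadIdx.y + 1;

    // Centre pixel plus halo rows/columns (clamped at the image border).
    tile[ly][lx] = in[y * width + x];
    if (threadIdx.x == 0)        tile[ly][0]        = in[y * width + max(x - 1, 0)];
    if (threadIdx.x == TILE - 1) tile[ly][TILE + 1] = in[y * width + min(x + 1, width - 1)];
    if (threadIdx.y == 0)        tile[0][lx]        = in[max(y - 1, 0) * width + x];
    if (threadIdx.y == TILE - 1) tile[TILE + 1][lx] = in[min(y + 1, height - 1) * width + x];
    __syncthreads();

    float lap = tile[ly - 1][lx] + tile[ly + 1][lx]
              + tile[ly][lx - 1] + tile[ly][lx + 1]
              - 4.0f * tile[ly][lx];
    float sharpened = tile[ly][lx] - lap;           // subtract the Laplacian
    out[y * width + x] = (unsigned char)fminf(fmaxf(sharpened, 0.0f), 255.0f);
}
```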

  14. Ecosystem process interactions between central Chilean habitats

    Directory of Open Access Journals (Sweden)

    Meredith Root-Bernstein

    2015-01-01

    Understanding ecosystem processes is vital for developing dynamic adaptive management of human-dominated landscapes. We focus on conservation and management of the central Chilean silvopastoral savanna habitat called “espinal”, which often occurs near matorral, a shrub habitat. Although matorral, espinal and native sclerophyllous forest are linked successionally, they are not jointly managed and conserved. Management goals in “espinal” include increasing woody cover, particularly of the dominant tree Acacia caven, improving herbaceous forage quality, and increasing soil fertility. We asked whether adjacent matorral areas contribute to espinal ecosystem processes related to the three main espinal management goals. We examined input and outcome ecosystem processes related to these goals in matorral and espinal with and without shrub understory. We found that matorral had the largest sets of inputs to ecosystem processes, and espinal with shrub understory had the largest sets of outcomes. Moreover, we found that these outcomes were broadly in the directions preferred by management goals. This supports our prediction that matorral acts as an ecosystem process bank for espinal. We recommend that management plans for landscape resilience consider espinal and matorral as a single landscape cover class that should be maintained as a dynamic mosaic. Joint management of espinal and matorral could create new management and policy opportunities.

  15. Empirical Foundation of Central Concepts for Computer Science Education

    Science.gov (United States)

    Zendler, Andreas; Spannagel, Christian

    2008-01-01

    The design of computer science curricula should rely on central concepts of the discipline rather than on technical short-term developments. Several authors have proposed lists of basic concepts or fundamental ideas in the past. However, these catalogs were based on subjective decisions without any empirical support. This article describes the…

  16. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However the chemical and physical processes that take place in the cupola furnace are highly complex making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004 under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp. a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "Expert System" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the "Cupola Handbook", Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  17. The south-central United States magnetic anomaly

    Science.gov (United States)

    Starich, P. J.; Hinze, W. J.; Braile, L. W.

    1985-01-01

    A positive magnetic anomaly, which dominates the MAGSAT scalar field over the south-central United States, results from the superposition of magnetic effects from several geologic sources and tectonic structures in the crust. The highly magnetic basement rocks of this region show good correlation with increased crustal thickness, above average crustal velocity and predominantly negative free-air gravity anomalies, all of which are useful constraints for modeling the magnetic sources. The positive anomaly is composed of two primary elements. The western-most segment is related to middle Proterozoic granite intrusions, rhyolite flows and interspersed metamorphic basement rocks in the Texas panhandle and eastern New Mexico. The anomaly and the magnetic crust are bounded to the west by the north-south striking Rio Grande Rift. The anomaly extends eastward over the Grenville age basement rocks of central Texas, and is terminated to the south and east by the buried extension of the Ouachita System. The northern segment of the anomaly extends eastward across Oklahoma and Arkansas to the Mississippi Embayment. It corresponds to a general positive magnetic region associated with the Wichita Mountains igneous complex in south-central Oklahoma and 1.2 to 1.5 Ga. felsic terrane to the north.

  18. Large-scale Ferrofluid Simulations on Graphics Processing Units

    OpenAIRE

    Polyakov, A. Yu.; Lyutyy, T. V.; Denisov, S.(State Research Center Institute for High Energy Physics, Protvino, Russia); Reva, V. V.; Hanggi, P.

    2012-01-01

    We present an approach to molecular-dynamics simulations of ferrofluids on graphics processing units (GPUs). Our numerical scheme is based on a GPU-oriented modification of the Barnes-Hut (BH) algorithm designed to increase the parallelism of computations. For an ensemble consisting of one million ferromagnetic particles, the performance of the proposed algorithm on a Tesla M2050 GPU demonstrated a computational-time speed-up of four orders of magnitude compared to the performance of the se...

  19. Reflector antenna analysis using physical optics on Graphics Processing Units

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd;

    2014-01-01

    The Physical Optics approximation is a widely used asymptotic method for calculating the scattering from electrically large bodies. It requires significant computational work and little memory, and is thus well suited for application on a Graphics Processing Unit. Here, we investigate the performance of an implementation and demonstrate that while there are some implementational pitfalls, a careful implementation can result in impressive improvements.

  20. Offline Processing in the Online Computer Farm

    Science.gov (United States)

    Cardoso, L. G.; Gaspar, C.; Callot, O.; Closier, J.; Neufeld, N.; Frank, M.; Jost, B.; Charpentier, P.; Liu, G.

    2012-12-01

    LHCb is one of the 4 experiments at the LHC accelerator at CERN. LHCb has approximately 1500 PCs for processing the High Level Trigger (HLT) during physics data acquisition. During periods when data acquisition is not required or the resources needed for data acquisition are reduced most of these PCs are idle or very little used. In these periods it is possible to profit from the unused processing capacity to reprocess earlier datasets with the newest applications (code and calibration constants), thus reducing the CPU capacity needed on the Grid. The offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control) to process physics data on the Grid. In DIRAC, agents are started on Worker Nodes, pull available jobs from the DIRAC central WMS (Workload Management System) and process them on the available resources. A Control System was developed which is able to launch, control and monitor the agents for the offline data processing on the HLT Farm. It can do so without overwhelming the offline resources (e.g. DBs) and in case of change of the accelerator planning it can easily return the used resources for online purposes. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA and the FSM toolkit.

  1. Accelerating NBODY6 with Graphics Processing Units

    CERN Document Server

    Nitadori, Keigo

    2012-01-01

    We describe the use of Graphics Processing Units (GPUs) for speeding up the code NBODY6 which is widely used for direct $N$-body simulations. Over the years, the $N^2$ nature of the direct force calculation has proved a barrier for extending the particle number. Following an early introduction of force polynomials and individual time-steps, the calculation cost was first reduced by the introduction of a neighbour scheme. After a decade of GRAPE computers which speeded up the force calculation further, we are now in the era of GPUs where relatively small hardware systems are highly cost-effective. A significant gain in efficiency is achieved by employing the GPU to obtain the so-called regular force which typically involves some 99 percent of the particles, while the remaining local forces are evaluated on the host. However, the latter operation is performed up to 20 times more frequently and may still account for a significant cost. This effort is reduced by parallel SSE/AVX procedures where each interaction ...
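
    The 'regular' force mentioned above is, at its core, a direct pairwise sum over most of the particles, which is the part that maps so well onto the GPU. The kernel below is the textbook unoptimized version of that sum (one thread per target particle, with a softening term purely for illustration; NBODY6 itself relies on regularization and neighbour lists rather than softening), not the library actually used by the code.

```cuda
#include <cuda_runtime.h>
#include <math.h>

// Direct-summation gravitational acceleration, one thread per particle.
// pos[i] = (x, y, z, mass); eps2 is an (assumed) softening length squared,
// which also makes the i == j term harmless (it contributes zero force).
__global__ void regular_force(const float4* pos, float3* acc, int n, float eps2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = pos[i];
    float ax = 0.0f, ay = 0.0f, az = 0.0f;

    for (int j = 0; j < n; ++j) {
        float4 pj = pos[j];
        float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
        float r2 = dx * dx + dy * dy + dz * dz + eps2;
        float inv_r = rsqrtf(r2);
        float w = pj.w * inv_r * inv_r * inv_r;   // m_j / r^3
        ax += w * dx; ay += w * dy; az += w * dz;
    }
    acc[i] = make_float3(ax, ay, az);
}
```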

  2. New Generation General Purpose Computer (GPC) compact IBM unit

    Science.gov (United States)

    1991-01-01

    New Generation General Purpose Computer (GPC) compact IBM unit replaces a two-unit earlier generation computer. The new IBM unit is documented in table top views alone (S91-26867, S91-26868), with the onboard equipment it supports including the flight deck CRT screen and keypad (S91-26866), and next to the two earlier versions it replaces (S91-26869).

  3. Parallel processing for scientific computations

    Science.gov (United States)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.

  4. Tensors in image processing and computer vision

    CERN Document Server

    De Luis García, Rodrigo; Tao, Dacheng; Li, Xuelong

    2009-01-01

    Tensor signal processing is an emerging field with important applications to computer vision and image processing. This book presents the developments in this branch of signal processing, offering research and discussions by experts in the area. It is suitable for advanced students working in the area of computer vision and image processing.

  5. Computer program developed for flowsheet calculations and process data reduction

    Science.gov (United States)

    Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.

    1969-01-01

    Computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.

  6. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip;

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoretical understanding of chromatographic behavior can augment available experimental data and aid in the design of specific experiments to develop a more complete understanding of the behavior of a unit operation.

  7. Using Parallel Computing Methods in Business Processes

    OpenAIRE

    Machek, Ondrej; Hejda, Jan

    2012-01-01

    In computer science, engineers deal with the issue of how to accelerate the execution of extensive tasks with parallel computing algorithms, which are executed on large networks of cooperating processors. The business world forms large networks of business units, too, and in business management, managers often face similar problems. The aim of this paper is to consider the possibilities of using parallel computing methods in business networks. In the first part, we introduce the issue and make some...

  8. Origin of haze in the central United States and its effect on solar irradiation

    International Nuclear Information System (INIS)

    The depletion by atmospheric haze of solar irradiation at the earth's surface in the central United States is estimated and some aspects of the origin of the haze are investigated. Observed optical properties of the haze are reviewed and their relation to visual range measurements demonstrated. An approximate radiative transfer model relates visual range and mixing-height observations to solar irradiance at the ground, and the relation is validated against detailed irradiance observations on two days, and against observed monthly and annual irradiation at one station. Statistics of irradiation depletion are computed for 24 stations. The annual average depletion is approximately 7.5%.

  9. Optimization models of the supply of power structures’ organizational units with centralized procurement

    Directory of Open Access Journals (Sweden)

    Sysoiev Volodymyr

    2013-01-01

    Management of the state power structures' organizational units for materiel and technical support requires the use of effective tools for supporting decisions, due to the complexity, interdependence, and dynamism of supply in the market economy. The corporate nature of power structures is of particular interest to centralized procurement management, as it provides significant advantages through coordination, eliminating duplication, and economy of scale. This article presents optimization models of the supply of state power structures' organizational units with centralized procurement, for different levels of the simulated materiel and technical support processes. The models allow us to find the most profitable supply options for state power structures' organizational units in a centre-oriented logistics system under changing needs, volumes of allocated funds, and the logistics costs that accompany the supply process, either by maximizing the level of provision of organizational units with the necessary material and technical resources over the entire planning period of supply or by minimizing the total logistics costs, taking into account the diverse nature and the different priorities of the organizational units and of the material and technical resources.

  10. Improved usage of the LOFT process computer

    International Nuclear Information System (INIS)

    This paper describes work recently done to upgrade usage of the plant process computer at the Loss-of-Fluid Test (LOFT) facility. The use of computers to aid reactor operators in understanding plant status and diagnosing plant difficulties is currently being widely studied by the nuclear industry. In this regard, an effort was initiated to improve LOFT process computer usage, since the existing plant process computer has been an available but only lightly used resource for aiding LOFT reactor operators. This is a continuing effort and has, to date, produced improvements in data collection, data display for operators, and methods of computer operation.

  11. Air pollution modelling using a graphics processing unit with CUDA

    CERN Document Server

    Molnar, Ferenc; Meszaros, Robert; Lagzi, Istvan; 10.1016/j.cpc.2009.09.008

    2010-01-01

    The Graphics Processing Unit (GPU) is a powerful tool for parallel computing. In the past years the performance and capabilities of GPUs have increased, and the Compute Unified Device Architecture (CUDA) - a parallel computing architecture - has been developed by NVIDIA to utilize this performance in general purpose computations. Here we show for the first time a possible application of GPU for environmental studies serving as a basis for decision making strategies. A stochastic Lagrangian particle model has been developed on CUDA to estimate the transport and the transformation of the radionuclides from a single point source during an accidental release. Our results show that parallel implementation achieves typical acceleration values in the order of 80-120 times compared to CPU using a single-threaded implementation on a 2.33 GHz desktop computer. Only very small differences have been found between the results obtained from GPU and CPU simulations, which are comparable with the effect of stochastic tran...
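    To make the approach concrete, the sketch below shows the general shape of a stochastic Lagrangian particle step on a GPU: each CUDA thread advances one particle by a deterministic wind-advection term plus a Gaussian random-walk term representing turbulent diffusion. This is a minimal illustration, not the authors' code; the wind components, diffusivity, and time step are arbitrary placeholder values.

```cuda
// Minimal sketch of a GPU stochastic Lagrangian particle step (illustrative only).
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

struct Particle { float x, y, z; };

__global__ void init_rng(curandState *states, unsigned long long seed, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) curand_init(seed, i, 0, &states[i]);
}

// u, v, w: placeholder wind components [m/s]; K: eddy diffusivity [m^2/s]; dt: time step [s]
__global__ void advect_diffuse(Particle *p, curandState *states, int n,
                               float u, float v, float w, float K, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    curandState s = states[i];
    float sigma = sqrtf(2.0f * K * dt);           // random-walk step size
    p[i].x += u * dt + sigma * curand_normal(&s); // advection + turbulent diffusion
    p[i].y += v * dt + sigma * curand_normal(&s);
    p[i].z += w * dt + sigma * curand_normal(&s);
    states[i] = s;
}

int main() {
    const int n = 1 << 20;                     // one million particles
    Particle *p; curandState *rng;
    cudaMalloc(&p, n * sizeof(Particle));
    cudaMemset(p, 0, n * sizeof(Particle));    // release from a point source at the origin
    cudaMalloc(&rng, n * sizeof(curandState));
    int block = 256, grid = (n + block - 1) / block;
    init_rng<<<grid, block>>>(rng, 1234ULL, n);
    for (int step = 0; step < 100; ++step)
        advect_diffuse<<<grid, block>>>(p, rng, n, 5.f, 1.f, 0.f, 10.f, 1.f);
    cudaDeviceSynchronize();
    cudaFree(p); cudaFree(rng);
    return 0;
}
```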

  12. Graphics processing unit-assisted lossless decompression

    Energy Technology Data Exchange (ETDEWEB)

    Loughry, Thomas A.

    2016-04-12

    Systems and methods for decompressing compressed data that has been compressed by way of a lossless compression algorithm are described herein. In a general embodiment, a graphics processing unit (GPU) is programmed to receive compressed data packets and decompress such packets in parallel. The compressed data packets are compressed representations of an image, and the lossless compression algorithm is a Rice compression algorithm.

  13. Central nervous system infections in the intensive care unit

    Directory of Open Access Journals (Sweden)

    B. Vengamma

    2014-04-01

    Neurological infections constitute an uncommon, but important, aetiological cause requiring admission to an intensive care unit (ICU). In addition, health-care associated neurological infections may develop in critically ill patients admitted to an ICU for other indications. Central nervous system infections can develop as complications in ICU patients, including post-operative neurosurgical patients. While bacterial infections are the most common cause, mycobacterial and fungal infections are also frequently encountered. Delay in institution of specific treatment is considered to be the single most important poor prognostic factor. Empirical antibiotic therapy must be initiated while awaiting specific culture and sensitivity results. The choice of empirical antimicrobial therapy should take into consideration the most likely pathogens involved, locally prevalent drug-resistance patterns, underlying predisposing and co-morbid conditions, and other factors such as age and immune status. Further, the antibiotic should adequately penetrate the blood-brain and blood-cerebrospinal fluid barriers. The presence of a focal collection of pus warrants immediate surgical drainage. Strict aseptic precautions during surgery, hand hygiene, and care of catheters and devices constitute important preventive measures. A high index of clinical suspicion and aggressive efforts at identification of the aetiological cause and early institution of specific treatment in patients with neurological infections can be life saving.

  14. Sandia's computer support units: The first three years

    Energy Technology Data Exchange (ETDEWEB)

    Harris, R.N. [Sandia National Labs., Albuquerque, NM (United States). Labs. Computing Dept.

    1997-11-01

    This paper describes the method by which Sandia National Laboratories has deployed information technology to the line organizations and to the desktop as part of the integrated information services organization under the direction of the Chief Information Officer. This deployment has been done by the Computer Support Unit (CSU) Department. The CSU approach is based on the principle of providing local customer service with a corporate perspective. Success required an approach that was both customer-compelled at times and market- or corporate-focused in most cases. Above all, a complete solution was required that included a comprehensive method of technology choices and development, process development, technology implementation, and support. It is the authors' hope that this information will be useful in the development of a customer-focused business strategy for information technology deployment and support. Descriptions of current status reflect the status as of May 1997.

  15. New FORTRAN computer programs to acquire and process isotopic mass-spectrometric data

    International Nuclear Information System (INIS)

    The computer programs described in New Computer Programs to Acquire and Process Isotopic Mass Spectrometric Data have been revised. This report describes in some detail the operation of these programs, which acquire and process isotopic mass spectrometric data. Both functional and overall design aspects are addressed. The three basic program units - file manipulation, data acquisition, and data processing - are discussed in turn. Step-by-step instructions are included where appropriate, and each subsection is described in enough detail to give a clear picture of its function. Organization of file structure, which is central to the entire concept, is extensively discussed with the help of numerous tables. Appendices contain flow charts and outline file structure to help a programmer unfamiliar with the programs to alter them with a minimum of lost time

  16. Organization of international market introduction: Can cooperation between central units and local product management influence success

    OpenAIRE

    Baumgarten, Antje; Herstatt, Cornelius; Fantapié Altobelli, Claudia

    2006-01-01

    When organizing international market introductions, multinational companies face coordination problems between the leading central organizational unit and local product management. Based on the assumption that international market introductions are initiated and managed by a central unit, we examine the impact of cooperation between the central unit and local product management on success. Our survey of 51 international market introductions reveals that the quality of the cooperation with local...

  17. Molecular dynamics simulation of moderately coupled Yukawa liquids on graphics processing units

    International Nuclear Information System (INIS)

    During the past decade Graphics Processing Unit (GPU) architectures have seen not only continuous performance increases, but a completely new horizon through general purpose computing as well. Thus, being integrated inside personal computers (PCs), besides high-performance graphics applications they provide a new platform for scientific computing, too, at moderate cost. The single instruction multiple data (SIMD) parallelism of GPUs is attractive for molecular simulations, as particle methods can largely be parallelized. We have developed a molecular dynamics (MD) simulation code for the NVIDIA Compute Unified Device Architecture (CUDA) GPU architecture that allows massive parallel computing, thereby permitting relatively big systems to be simulated on PC class computers, compared to the traditional Central Processing Unit (CPU) computations. We have carried out simulations of moderately coupled (0.1 ≤ Γ ≤ 10) 3-dimensional Yukawa liquids [2], using particle numbers in the 10^5-10^6 range. Besides the MD simulations we have also obtained pair correlation functions using the Hypernetted Chain (HNC) approximation, and have compared the results with the GPU-MD data. The analysis of the asymptotic long-range behaviour of the pair correlation functions (transition between monotonic vs. oscillating decay) confirmed the results of [3]. Figure 1 shows pair correlation functions obtained from the numerical simulations and the theoretical HNC method, in which the bridge function was set to zero. We find a very good agreement between the curves at Γ = 0.1 and 1, over several orders of magnitude. The only difference, seen at Γ = 10, is the (expected) slightly higher correlation peak amplitude obtained from the MD simulation, compared to the HNC result. We thank OTKA for supporting this work (grant K77653) and Dr A. Archer for useful discussions.
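    As a rough illustration of what the innermost loop of such a simulation computes (not the authors' code), the kernel below evaluates brute-force Yukawa forces in reduced units, phi(r) = exp(-kappa r)/r, with one thread per particle; a production MD code would add neighbour lists or cell decomposition, periodic boundaries, and a time integrator.

```cuda
// Illustrative brute-force Yukawa (screened Coulomb) force kernel in reduced units.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void yukawa_forces(const float3 *pos, float3 *force, int n, float kappa) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x, dy = pi.y - pos[j].y, dz = pi.z - pos[j].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float r  = sqrtf(r2);
        // -dphi/dr for phi(r) = exp(-kappa r)/r is exp(-kappa r)(1/r^2 + kappa/r);
        // the extra 1/r normalizes the direction vector (dx, dy, dz).
        float fmag = expf(-kappa * r) * (1.0f / r2 + kappa / r) / r;
        fx += fmag * dx; fy += fmag * dy; fz += fmag * dz;
    }
    force[i] = make_float3(fx, fy, fz);
}

int main() {
    const int n = 4096;                 // 16 x 16 x 16 particles
    const float kappa = 1.0f;           // screening parameter (placeholder)
    float3 *pos, *force;
    cudaMallocManaged(&pos, n * sizeof(float3));
    cudaMallocManaged(&force, n * sizeof(float3));
    for (int i = 0; i < n; ++i)         // simple cubic lattice as an initial condition
        pos[i] = make_float3((float)(i % 16), (float)((i / 16) % 16), (float)(i / 256));
    yukawa_forces<<<(n + 255) / 256, 256>>>(pos, force, n, kappa);
    cudaDeviceSynchronize();
    printf("f[0] = (%g, %g, %g)\n", force[0].x, force[0].y, force[0].z);
    cudaFree(pos); cudaFree(force);
    return 0;
}
```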

  18. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    OpenAIRE

    Francois Bodin; Stephane Bihan

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allow...

  19. Porting a Hall MHD Code to a Graphic Processing Unit

    Science.gov (United States)

    Dorelli, John C.

    2011-01-01

    We present our experience porting a Hall MHD code to a Graphics Processing Unit (GPU). The code is a 2nd order accurate MUSCL-Hancock scheme which makes use of an HLL Riemann solver to compute numerical fluxes and second-order finite differences to compute the Hall contribution to the electric field. The divergence of the magnetic field is controlled with Dedner's hyperbolic divergence cleaning method. Preliminary benchmark tests indicate a speedup (relative to a single Nehalem core) of 58x for a double precision calculation. We discuss scaling issues which arise when distributing work across multiple GPUs in a CPU-GPU cluster.
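    For readers unfamiliar with the scheme, the sketch below shows the standard HLL approximate Riemann flux that such a solver evaluates at every cell interface. This is not the code described in the abstract; the number of conserved variables and the left/right wave-speed estimates SL and SR are placeholders.

```cuda
// Standard HLL flux: upwind when all waves travel one way, otherwise the HLL average.
#include <cstdio>

#define NVAR 8   // e.g. density, momentum, energy and magnetic field components in MHD

__host__ __device__
void hll_flux(const float UL[NVAR], const float UR[NVAR],
              const float FL[NVAR], const float FR[NVAR],
              float SL, float SR, float Fhll[NVAR]) {
    if (SL >= 0.0f) {                 // all waves move right: use the left flux
        for (int k = 0; k < NVAR; ++k) Fhll[k] = FL[k];
    } else if (SR <= 0.0f) {          // all waves move left: use the right flux
        for (int k = 0; k < NVAR; ++k) Fhll[k] = FR[k];
    } else {                          // intermediate case: HLL average state flux
        float inv = 1.0f / (SR - SL);
        for (int k = 0; k < NVAR; ++k)
            Fhll[k] = (SR * FL[k] - SL * FR[k] + SL * SR * (UR[k] - UL[k])) * inv;
    }
}

int main() {
    // Tiny host-side check with made-up left/right states, fluxes and wave speeds.
    float UL[NVAR], UR[NVAR], FL[NVAR], FR[NVAR], F[NVAR];
    for (int k = 0; k < NVAR; ++k) { UL[k] = 1.0f; UR[k] = 0.5f; FL[k] = 0.8f; FR[k] = 0.3f; }
    hll_flux(UL, UR, FL, FR, -1.2f, 1.5f, F);
    printf("F[0] = %f\n", F[0]);
    return 0;
}
```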

  20. Semi-automatic film processing unit

    International Nuclear Information System (INIS)

    The design concept applied in the development of a semi-automatic film processing unit needs creativity and user support in channelling the required information to select the materials and operation system that suit the design produced. Low cost and efficient operation are the challenges that need to be faced abreast of fast technological advancement. In producing this processing unit, there are a few elements which need to be considered in order to produce a high quality image. Consistent movement and correct time coordination for developing and drying are a few of the elements which need to be controlled. Other elements which need serious attention are temperature, liquid density and the amount of time for the chemical liquids to react. Subsequent chemical reactions that take place will cause the liquid chemicals to age, and this will adversely affect the quality of the image produced. This unit is also equipped with a liquid chemical drainage system and a disposal chemical tank. This unit would be useful in GP clinics, especially in rural areas which practice a manual system for developing and require low operational cost. (Author)

  1. Pn Tomography of the Central and Eastern United States

    Science.gov (United States)

    Zhang, Q.; Sandvol, E. A.; Liu, M.

    2005-12-01

    Approximately 44,000 Pn phase readings from the ISC and NEIC catalogs and 750 hand-picked arrivals were inverted to map the velocity structure of the mantle lithosphere in the Central and Eastern United States (CEUS). Overall we have a high density of ray paths within the active seismic zones in the eastern and southern parts of the CEUS, while ray coverage is relatively poor to the west of the Great Lakes as well as along the eastern and southern coastlines of the U.S. The average Pn velocity in the CEUS is approximately 8.03 km/s. High Pn velocities (~8.18 km/s) within the northeastern part of the North American shield are reliable, while the resolution of the velocity image of the American shield around the mid-continent rift (MCR) is relatively low due to the poor ray coverage. Under the East Continent Rift (EC), the northern part of the Reelfoot Rift Zone (RRZ), and the South Oklahoma Aulacogen (SO), we also observe high velocity lithospheric mantle (~8.13-8.18 km/s). Typical Pn velocities (~7.98 km/s) are found between those three high velocity blocks. Low velocities are shown in the northern and southern Appalachians (~7.88-7.98 km/s) as well as the Rio Grande Rift (~7.88 km/s). In the portion of our model with the highest ray density, the Pn azimuthal anisotropy seems to be robust. These fast directions appear to mirror the boundaries of the low Pn velocity zone and parallel the Appalachians down to the southwest.

  2. Molecular Dynamics Simulation of Macromolecules Using Graphics Processing Unit

    CERN Document Server

    Xu, Ji; Ge, Wei; Yu, Xiang; Yang, Xiaozhen; Li, Jinghai

    2010-01-01

    Molecular dynamics (MD) simulation is a powerful computational tool to study the behavior of macromolecular systems. But many simulations in this field are limited in spatial or temporal scale by the available computational resources. In recent years, the graphics processing unit (GPU) has provided unprecedented computational power for scientific applications. Many MD algorithms suit the multithreaded nature of the GPU. In this paper, MD algorithms for macromolecular systems that run entirely on the GPU are presented. Compared to the MD simulation with the free software GROMACS on a single CPU core, our codes achieve about 10 times speed-up on a single GPU. For validation, we have performed MD simulations of polymer crystallization on the GPU, and the results observed perfectly agree with computations on the CPU. Therefore, our single GPU codes have already provided an inexpensive alternative for macromolecular simulations on traditional CPU clusters and they can also be used as a basis to develop parallel GPU programs to further spee...

  3. Introduction to computer image processing

    Science.gov (United States)

    Moik, J. G.

    1973-01-01

    Theoretical backgrounds and digital techniques for a class of image processing problems are presented. Image formation in the context of linear system theory, image evaluation, noise characteristics, and mathematical operations on images and their implementation are discussed. Various techniques for image restoration and image enhancement are presented. Methods for object extraction and the problem of pictorial pattern recognition and classification are discussed.

  4. Study of multi-programming scheduling of batch processed jobs on third generation computers

    International Nuclear Information System (INIS)

    This research thesis addresses technical aspects of the organisation, management and operation of a computer fleet. The main issue is the search for an appropriate compromise between throughput and turnaround time, i.e. the possibility to increase throughput while taking the time constraints of each computing centre into account. The author first presents the different systems and properties of third-generation computers (those developed after 1964). He analyses and discusses problems related to multi-programming for these systems (the concept of multi-programming, design issues regarding memory organisation and resource allocation, operational issues regarding memory use, the use of the central processing unit, conflicts between peripheral resources, and so on). He addresses scheduling issues (presentation of the IBM/370 system, internal and external scheduling techniques), and presents a simulator, its parameters related to the use of resources, and the job generation software. He presents a micro-planning pre-processor, describes its operation, and comments on test results.

  5. Molecular dynamics simulations using graphics processing units

    OpenAIRE

    Baker, J.A.; Hirst, J.D.

    2011-01-01

    It is increasingly easy to develop software that exploits Graphics Processing Units (GPUs). The molecular dynamics simulation community has embraced this recent opportunity. Herein, we outline the current approaches that exploit this technology. In the context of biomolecular simulations, we discuss some of the algorithms that have been implemented and some of the aspects that distinguish the GPU from previous parallel environments. The ubiquity of GPUs and the ingenuity of the simulation com...

  6. Central Data Processing System (CDPS) users manual: solar heating and cooling program

    Energy Technology Data Exchange (ETDEWEB)

    1976-09-01

    The Central Data Processing System (CDPS) provides the software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple remote sites. The instrumentation data associated with these systems is collected, processed, and presented in a form which supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: communication interface computer, central data processing computer, and performance evaluation data base. The CDPS Users Manual identifies users of the performance data base, procedures for operation, and guidelines for software maintenance. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  7. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Fast digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing was done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules
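    As a simplified illustration of this style of multi-channel, low-latency GPU signal processing (not the HBT-EP control code, whose filtering details are not given in the abstract), the kernel below applies an FIR filter independently to each channel, one thread per channel; the channel count, tap count, and coefficients are placeholders.

```cuda
// Illustrative per-channel FIR filtering step on the GPU (one thread per channel).
#include <cstdio>
#include <cuda_runtime.h>

#define NCHAN 40    // number of input channels (placeholder)
#define NTAPS 16    // FIR filter length (placeholder)

// Push the newest sample into a per-channel ring buffer and evaluate the FIR sum.
__global__ void fir_step(const float *coeff, float *history, const float *input,
                         float *output, int head) {
    int ch = blockIdx.x * blockDim.x + threadIdx.x;
    if (ch >= NCHAN) return;
    history[ch * NTAPS + head] = input[ch];
    float acc = 0.0f;
    for (int t = 0; t < NTAPS; ++t) {
        int idx = (head - t + NTAPS) % NTAPS;   // walk backwards in time
        acc += coeff[t] * history[ch * NTAPS + idx];
    }
    output[ch] = acc;
}

int main() {
    float *coeff, *history, *input, *output;
    cudaMallocManaged(&coeff, NTAPS * sizeof(float));
    cudaMallocManaged(&history, NCHAN * NTAPS * sizeof(float));
    cudaMallocManaged(&input, NCHAN * sizeof(float));
    cudaMallocManaged(&output, NCHAN * sizeof(float));
    for (int t = 0; t < NTAPS; ++t) coeff[t] = 1.0f / NTAPS;       // moving-average filter
    cudaMemset(history, 0, NCHAN * NTAPS * sizeof(float));
    for (int step = 0; step < 100; ++step) {                       // one call per sample period
        for (int ch = 0; ch < NCHAN; ++ch) input[ch] = (float)ch;  // stand-in for digitizer data
        fir_step<<<1, NCHAN>>>(coeff, history, input, output, step % NTAPS);
        cudaDeviceSynchronize();
    }
    printf("output[0] = %f\n", output[0]);
    cudaFree(coeff); cudaFree(history); cudaFree(input); cudaFree(output);
    return 0;
}
```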

  8. Computer Support for Document Management in the Danish Central Government

    DEFF Research Database (Denmark)

    Hertzum, Morten

    1995-01-01

    Document management systems are generally assumed to hold a potential for delegating the recording and retrieval of documents to professionals such as civil servants and for supporting the coordination and control of work, so-called workflow management. This study investigates the use...... and organizational impact of document management systems in the Danish central government. The currently used systems unfold around the recording of incoming and outgoing paper mail and have typically not been accompanied by organizational changes. Rather, document management tends to remain an appendix...... to the primary work and be delegated to a specialized organizational unit. Several factors contribute to the present document management practices, for example it takes an extraordinary effort to achieve the benefits, and few institutions are forced to pursue them. Furthermore, document and workflow management...

  9. Data processing in high energy physics and vector processing computers

    International Nuclear Information System (INIS)

    The data handling done in high energy physics in order to extract the results from the large volumes of data collected in typical experiments is a very large consumer of computing capacity. More than 70 vector processing computers have now been installed and many fields of applications have been tried on such computers as the ILLIAC IV, the TI ASC, the CDC STAR-100 and more recently on the CRAY-1, the CDC Cyber 205, the ICL DAP and the CRAY X-MP. This paper attempts to analyze the reasons for the lack of use of these computers in processing results from high energy physics experiments. Little work has been done to look at the possible vectorisation of the large codes in this field, but the motivation to apply vector processing computers in high energy physics data handling may be increasing as the gap between the scalar performance and the vector performance offered by large computers available on the market widens

  10. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Vol. 33, September (2012), pp. 160-167. ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords: neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012

  11. Five Computational Actions in Information Processing

    Directory of Open Access Journals (Sweden)

    Stefan Vladutescu

    2014-12-01

    This study is situated within information science. The aim of the research is twofold: (a) to define the concept of an action of computational information processing, and (b) to design a taxonomy of such actions. Our thesis is that any information processing is computational processing. First, the investigation tries to demonstrate that the computational actions of information processing, or informational actions, are computational-investigative configurations for structuring information: clusters of highly aggregated operations that are carried out in a unitary manner, operate convergently, and behave like a single computational device. From a methodological point of view, they belong to the category of analytical instruments for the informational processing of raw material, of data, and of vague, confused, unstructured informational elements. In their internal articulation, the actions are patterns for the integrated carrying out of operations of informational investigation. Second, we propose an inventory and a description of five basic informational computational actions: exploring, grouping, anticipation, schematization, and inferential structuring. R. S. Wyer and T. K. Srull (2014) speak about "four information processing". We would like to continue with further investigation of the relationship between operations, actions, strategies and mechanisms of informational processing.

  12. Controlling Laboratory Processes From A Personal Computer

    Science.gov (United States)

    Will, H.; Mackin, M. A.

    1991-01-01

    Computer program provides natural-language process control from IBM PC or compatible computer. Sets up process-control system that either runs without operator or is run by workers who have limited programming skills. Includes three smaller programs. Two of them, written in FORTRAN 77, record data and control research processes. Third program, written in Pascal, generates FORTRAN subroutines used by other two programs to identify user commands with device-driving routines written by user. Also includes set of input data allowing user to define user commands to be executed by computer. Requires personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. Also requires FORTRAN 77 compiler and device drivers written by user.

  13. Overview of Central Auditory Processing Deficits in Older Adults.

    Science.gov (United States)

    Atcherson, Samuel R; Nagaraj, Naveen K; Kennett, Sarah E W; Levisee, Meredith

    2015-08-01

    Although there are many reported age-related declines in the human body, the notion that a central auditory processing deficit exists in older adults has not always been clear. Hearing loss and both structural and functional central nervous system changes with advancing age are contributors to how we listen, hear, and process auditory information. Even older adults with normal or near normal hearing sensitivity may exhibit age-related central auditory processing deficits as measured behaviorally and/or electrophysiologically. The purpose of this article is to provide an overview of assessment and rehabilitative approaches for central auditory processing deficits in older adults. It is hoped that the outcome of the information presented here will help clinicians with older adult patients who do not exhibit the typical auditory processing behaviors exhibited by others at the same age and with comparable hearing sensitivity all in the absence of other health-related conditions. PMID:27516715

  14. Environmental process tomography in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Daily, W.; Ramirez, A.

    1994-01-01

    The US Government is supporting development of new technology and transfer of existing technology from other disciplines to apply to the problem. Part of this effort is development of geophysical tools used for underground imaging. These tools are closely related to many of those used in industrial process tomography. Both seismic and electromagnetic methods are used for underground imaging. In either case, sensitivity and resolution are greatly improved by making measurements from boreholes instead of only from the surface. Seismic signals are usually more sensitive to subsurface structure such as lithologic boundaries, but recent work has also shown seismic tomography to be sensitive to the degree of saturation. Electrical methods can be useful for delineation of aquitards such as clay layers. Electrical tomography is shown to be particularly sensitive to movement of fluids such as steam. Examples of both seismic and electromagnetic process tomography will be discussed in relation to environmental remediation of soils and ground water in the United States.

  15. Accelerating Radio Astronomy Cross-Correlation with Graphics Processing Units

    CERN Document Server

    Clark, M A; Greenhill, L J

    2011-01-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from "Large-N" arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi architecture, sustaining up to 79% of the peak single precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared to ASIC and FPGA implementations have the potential to greatly shorten the cycle of correlator development and deployment, for case...
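    A stripped-down version of the X-engine's core operation is sketched below: one thread per baseline at a single frequency channel, with none of the tiling or pipelining described above, so it is only an illustration of the cross-multiply-and-accumulate step, not the optimized implementation. The antenna count and sample layout are placeholders.

```cuda
// Naive cross-correlation (X-engine) kernel: vis(i,j) = sum_t x_i(t) * conj(x_j(t)).
#include <cstdio>
#include <cuComplex.h>
#include <cuda_runtime.h>

#define NANT 64
#define NBASE (NANT * (NANT + 1) / 2)   // antenna pairs, including autocorrelations

__global__ void xengine(const cuFloatComplex *samples,   // layout: [time][antenna]
                        cuFloatComplex *vis, int ntime) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= NBASE) return;
    int b = idx, i = 0;                  // map linear baseline index -> pair (i, j), i <= j
    while (b >= NANT - i) { b -= NANT - i; ++i; }
    int j = i + b;
    cuFloatComplex acc = make_cuFloatComplex(0.f, 0.f);
    for (int t = 0; t < ntime; ++t)
        acc = cuCaddf(acc, cuCmulf(samples[t * NANT + i], cuConjf(samples[t * NANT + j])));
    vis[idx] = acc;
}

int main() {
    const int ntime = 1024;
    cuFloatComplex *samples, *vis;
    cudaMallocManaged(&samples, ntime * NANT * sizeof(cuFloatComplex));
    cudaMallocManaged(&vis, NBASE * sizeof(cuFloatComplex));
    for (int k = 0; k < ntime * NANT; ++k)
        samples[k] = make_cuFloatComplex(1.0f, 0.5f);          // stand-in for digitized voltages
    xengine<<<(NBASE + 255) / 256, 256>>>(samples, vis, ntime);
    cudaDeviceSynchronize();
    printf("vis[0] = %f + %fi\n", cuCrealf(vis[0]), cuCimagf(vis[0]));
    cudaFree(samples); cudaFree(vis);
    return 0;
}
```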

  16. Computer programmes for mineral processing presentation

    OpenAIRE

    Krstev, Aleksandar; Krstev, Boris; Golomeov, Blagoj; Golomeova, Mirjana

    2009-01-01

    This paper presents the computer application of the software packages Minteh-5, Minteh-6 and Cyclone, written in Visual Basic (Visual Studio), for the presentation of two products for some closed circuits of grinding-classifying processes. These methods make possible an appropriate, fast and reliable presentation of some complex circuits in mineral processing technologies.

  17. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...

  18. Modern process computers in supervision of NPPs

    International Nuclear Information System (INIS)

    The implementation of a modern process computer system is not necessarily lengthy and costly. For a successful replacement project, the implementation of the system should be divided into several development phases. The modern technology of distributed systems facilitates a very easy gradual implementation of a process computer system, allowing the basic functions to be implemented in a start-up configuration where only a limited scope of safety related monitoring functions, like CFMS and reactor monitoring, are included. Based on a proven process management system structure, this can easily be expanded according to the operational requirements of the power plant, without a risk of disturbance to plant operation.

  19. Algorithms for image processing and computer vision

    CERN Document Server

    Parker, J R

    2010-01-01

    A cookbook of algorithms for common image processing applications Thanks to advances in computer hardware and software, algorithms have been developed that support sophisticated image processing without requiring an extensive background in mathematics. This bestselling book has been fully updated with the newest of these, including 2D vision methods in content-based searches and the use of graphics cards as image processing computational aids. It's an ideal reference for software engineers and developers, advanced programmers, graphics programmers, scientists, and other specialists wh

  20. A Performance Comparison of Different Graphics Processing Units Running Direct N-Body Simulations

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2013-01-01

    Hybrid computational architectures based on the joint power of Central Processing Units and Graphic Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a comparison of the performance of various GPUs available on the market when applied to the numerical integration of the classic, gravitational, N-body problem. To do this, we developed an OpenCL version of the parallel code (HiGPUs) to use for these tests, because this version is the only one able to work on GPUs of different makes. The main general result is that we confirm the reliability, speed and cheapness of GPUs when applied to the examined kind of problems (i.e. when the forces to evaluate are dependent on the mutual distances, as it happens in gravitational physics and molecular dynamics). More specifically, we find that also the cheap GPUs built to be employed just for gaming applications are very performant in terms of computing speed...

  1. A performance comparison of different graphics processing units running direct N-body simulations

    Science.gov (United States)

    Capuzzo-Dolcetta, R.; Spera, M.

    2013-11-01

    Hybrid computational architectures based on the joint power of Central Processing Units (CPUs) and Graphic Processing Units (GPUs) are becoming popular and powerful hardware tools for a wide range of simulations in biology, chemistry, engineering, physics, etc. In this paper we present a performance comparison of various GPUs available on the market when applied to the numerical integration of the classic, gravitational, N-body problem. To do this, we developed an OpenCL version of the parallel code HiGPUs used for these tests, because this portable version is the only one able to work on GPUs of different makes. The main general result is that we confirm the reliability, speed and cheapness of GPUs when applied to the examined kind of problems (i.e. when the forces to evaluate are dependent on the mutual distances, as it happens in gravitational physics and molecular dynamics). More specifically, we find that also the cheap GPUs built to be employed just for gaming applications are very performant in terms of computing speed in scientific applications and, although with some limitations concerning on-board memory, can be a good choice to build a cheap and efficient machine for scientific applications.
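    The heart of any direct-summation N-body code of this kind is an O(N^2) force loop like the sketch below: one thread per body, Plummer softening, and G = 1 in N-body units. This is only an illustration of the technique, not the HiGPUs code itself.

```cuda
// Illustrative direct N-body acceleration kernel (one thread per body).
#include <cstdio>
#include <cuda_runtime.h>

__global__ void accelerations(const float4 *body,   // xyz = position, w = mass
                              float3 *acc, int n, float eps2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 bi = body[i];
    float ax = 0.f, ay = 0.f, az = 0.f;
    for (int j = 0; j < n; ++j) {
        float4 bj = body[j];
        float dx = bj.x - bi.x, dy = bj.y - bi.y, dz = bj.z - bi.z;
        float r2 = dx * dx + dy * dy + dz * dz + eps2;   // Plummer softening removes the i == j singularity
        float invr = rsqrtf(r2);
        float s = bj.w * invr * invr * invr;             // G = 1 in N-body units
        ax += s * dx; ay += s * dy; az += s * dz;
    }
    acc[i] = make_float3(ax, ay, az);
}

int main() {
    const int n = 8192;
    const float eps2 = 1e-4f;                            // softening length squared (placeholder)
    float4 *body; float3 *acc;
    cudaMallocManaged(&body, n * sizeof(float4));
    cudaMallocManaged(&acc, n * sizeof(float3));
    for (int i = 0; i < n; ++i)                          // toy initial condition: unit masses on a line
        body[i] = make_float4(0.01f * i, 0.f, 0.f, 1.f);
    accelerations<<<(n + 255) / 256, 256>>>(body, acc, n, eps2);
    cudaDeviceSynchronize();
    printf("a[0].x = %g\n", acc[0].x);
    cudaFree(body); cudaFree(acc);
    return 0;
}
```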

  2. Centralized processing of contact-handled TRU waste feasibility analysis

    International Nuclear Information System (INIS)

    This report presents work for the feasibility study of central processing of contact-handled TRU waste. Scenarios, transportation options, a summary of cost estimates, and institutional issues are a few of the subjects discussed.

  3. Heterogeneous Multicore Parallel Programming for Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Francois Bodin

    2009-01-01

    Hybrid parallel multicore architectures based on graphics processing units (GPUs) can provide tremendous computing power. Current NVIDIA and AMD Graphics Product Group hardware display a peak performance of hundreds of gigaflops. However, exploiting GPUs from existing applications is a difficult task that requires non-portable rewriting of the code. In this paper, we present HMPP, a Heterogeneous Multicore Parallel Programming workbench with compilers, developed by CAPS entreprise, that allows the integration of heterogeneous hardware accelerators in an unintrusive manner while preserving the legacy code.

  4. Graphics processing unit-based alignment of protein interaction networks.

    Science.gov (United States)

    Xie, Jiang; Zhou, Zhonghua; Ma, Jin; Xiang, Chaojuan; Nie, Qing; Zhang, Wu

    2015-08-01

    Network alignment is an important bridge to understanding human protein-protein interactions (PPIs) and functions through model organisms. However, the underlying subgraph isomorphism problem complicates and increases the time required to align protein interaction networks (PINs). Parallel computing technology is an effective solution to the challenge of aligning large-scale networks via sequential computing. In this study, the typical Hungarian-Greedy Algorithm (HGA) is used as an example for PIN alignment. The authors propose a HGA with 2-nearest neighbours (HGA-2N) and implement its graphics processing unit (GPU) acceleration. Numerical experiments demonstrate that HGA-2N can find alignments that are close to those found by HGA while dramatically reducing computing time. The GPU implementation of HGA-2N optimises the parallel pattern, computing mode and storage mode and it improves the computing time ratio between the CPU and GPU compared with HGA when large-scale networks are considered. By using HGA-2N in GPUs, conserved PPIs can be observed, and potential PPIs can be predicted. Among the predictions based on 25 common Gene Ontology terms, 42.8% can be found in the Human Protein Reference Database. Furthermore, a new method of reconstructing phylogenetic trees is introduced, which shows the same relationships among five herpes viruses that are obtained using other methods. PMID:26243827

  5. Soft Computing Techniques for Process Control Applications

    Directory of Open Access Journals (Sweden)

    Rahul Malhotra

    2011-09-01

    Technological innovations in soft computing techniques have brought automation capabilities to new levels of applications. Process control is an important application in any industry for controlling complex system parameters, which can greatly benefit from such advancements. Conventional control theory is based on mathematical models that describe the dynamic behaviour of process control systems. Due to their lack of comprehensibility, conventional controllers are often inferior to intelligent controllers. Soft computing techniques provide the ability to make decisions and to learn from reliable data or an expert's experience. Moreover, soft computing techniques can cope with a variety of environmental and stability-related uncertainties. This paper explores the different areas of soft computing techniques, viz. fuzzy logic, genetic algorithms and the hybridization of the two, and abridges the results of different process control case studies. It is inferred from the results that the soft computing controllers provide better control of errors than conventional controllers. Further, hybrid fuzzy-genetic algorithm controllers have more successfully optimized the errors than standalone soft computing and conventional techniques.

  6. Understanding the Functional Central Limit Theorems with Some Applications to Unit Root Testing with Structural Change

    Directory of Open Access Journals (Sweden)

    Juan Carlos Aquino

    2013-06-01

    The application of different unit root statistics is by now a standard practice in empirical work. Even when it is a practical issue, these statistics have complex nonstandard distributions depending on functionals of certain stochastic processes, and their derivations represent a barrier even for many theoretical econometricians. These derivations are based on rigorous and fundamental statistical tools which are not (very) well known by standard econometricians. This paper aims to fill this gap by explaining in a simple way one of these fundamental tools: namely, the Functional Central Limit Theorem. To this end, this paper analyzes the foundations and applicability of two versions of the Functional Central Limit Theorem within the framework of a unit root with a structural break. Initial attention is focused on the probabilistic structure of the time series to be considered. Thereafter, attention is focused on the asymptotic theory for nonstationary time series proposed by Phillips (1987a), which is applied by Perron (1989) to study the effects of an (assumed) exogenous structural break on the power of the augmented Dickey-Fuller test and by Zivot and Andrews (1992) to criticize the exogeneity assumption and propose a method for estimating an endogenous breakpoint. A systematic method for dealing with efficiency issues is introduced by Perron and Rodriguez (2003), which extends the Generalized Least Squares detrending approach due to Elliot et al. (1996). An empirical application is provided.
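    For reference, a textbook statement of the result the article explains (not a quotation from it) can be written as follows: the scaled partial-sum process of the errors converges weakly to standard Brownian motion, which in turn delivers the nonstandard Dickey-Fuller limit under the unit-root null.

```latex
% Partial-sum process of errors u_t (variance sigma^2) and Donsker's theorem:
X_T(r) \;=\; \frac{1}{\sigma\sqrt{T}} \sum_{t=1}^{\lfloor Tr \rfloor} u_t
\;\Longrightarrow\; W(r), \qquad r \in [0,1],

% where W is standard Brownian motion. Under the null y_t = y_{t-1} + u_t,
% the continuous mapping theorem then gives the Dickey-Fuller-type limit
T(\hat{\rho} - 1) \;\Longrightarrow\;
\frac{\tfrac{1}{2}\bigl(W(1)^2 - 1\bigr)}{\int_0^1 W(r)^2 \, dr}.
```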

  7. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    International Nuclear Information System (INIS)

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the benchmark values. The computational time scaling of two GPUs (NVIDIA Tesla C2075 and Kepler K20) versus two CPUs (Intel Core i5 and Intel Xeon E5-2687W), and the critical issues related to the GPU implementation, are discussed. The resulting reduction in computational time and power consumption is significant, and semiclassical GPU calculations are shown to be environmentally friendly.

  8. Digital image processing mathematical and computational methods

    CERN Document Server

    Blackledge, J M

    2005-01-01

    This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research

  9. CSI computer system/remote interface unit acceptance test results

    Science.gov (United States)

    Sparks, Dean W., Jr.

    1992-01-01

    The validation tests conducted on the Control/Structures Interaction (CSI) Computer System (CCS)/Remote Interface Unit (RIU) are discussed. The CCS/RIU consists of a commercially available, Langley Research Center (LaRC) programmed, space flight qualified computer and a flight data acquisition and filtering computer developed at LaRC. The tests were performed in the Space Structures Research Laboratory (SSRL) and included open loop excitation, closed loop control, safing, RIU digital filtering, and RIU stand alone testing with the CSI Evolutionary Model (CEM) Phase-0 testbed. The test results indicated that the CCS/RIU system is comparable to ground based systems in performing real-time control-structure experiments.

  10. Risk for Disability and Poverty Among Central Asians in the United States

    OpenAIRE

    Carlos Siordia; Athena K. Ramos

    2016-01-01

    Understanding the disability-poverty relationship among minority groups within the United States (US) populations may help inform interventions aimed at reducing health disparities. Limited information exists on risk factors for disability and poverty among “Central Asians” (immigrants born in Kazakhstan, Uzbekistan, and other Central Asian regions of the former Soviet Union) in the US. The current cross-sectional analysis used information on 6,820 Central Asians to identify risk factors for ...

  11. INTEGRATION PROCESSES IN CENTRAL ASIA. PROSPECTS FOR A COMMON MARKET

    OpenAIRE

    Rakhmatullina, Gulnur

    2005-01-01

    Globalization processes have a growing effect on the development of individual countries and the world economy, with the Central Asian states, among others, being drawn into their orbit. The advantages of globalization are realized precisely at the integration and regional levels. That is why it is so important today to implement the initiative launched by President Islam Karimov of Uzbekistan for creating a Central Asian Common Market (CACM). The idea is that this market should include Kazak...

  12. Evaluating Computer Technology Integration in a Centralized School System

    Science.gov (United States)

    Eteokleous, N.

    2008-01-01

    The study evaluated the current situation in Cyprus elementary classrooms regarding computer technology integration in an attempt to identify ways of expanding teachers' and students' experiences with computer technology. It examined how Cypriot elementary teachers use computers, and the factors that influence computer integration in their…

  13. The First Prototype for the FastTracker Processing Unit

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time consuming pattern recognition problem, generall...

  14. Computer processing of Moessbauer spectrum data

    International Nuclear Information System (INIS)

    Computer processing was adopted to pick up significant signals from the undefined Moessbauer spectra. A program, by which smoothing and curve fitting were made possible, was devised and applied to the analysis of the Moessbauer spectra of 57Fe enriched iron and other specimens. Although this processing sometimes distorted the absorption peaks, it was quite effective for the elimination of noise and the finding of exact positions of absorption peaks. The availability of the processing was demonstrated by several examples obtained for 57Fe enriched iron, natural iron, calcined ferric oxyhydroxides, red mud residue and its calcined product. (auth.)

  15. Adaptive image processing a computational intelligence perspective

    CERN Document Server

    Guan, Ling; Wong, Hau San

    2002-01-01

    Adaptive image processing is one of the most important techniques in visual information processing, especially in early vision such as image restoration, filtering, enhancement, and segmentation. While existing books present some important aspects of the issue, there is not a single book that treats this problem from a viewpoint that is directly linked to human perception - until now. This reference treats adaptive image processing from a computational intelligence viewpoint, systematically and successfully, from theory to applications, using the synergies of neural networks, fuzzy logic, and

  16. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...... architectural projects. At the core lies the formulation of a methodology that is based upon the idea of human and computational selection in accordance with pre-defined performance criteria that can be adapted to different requirements by the mere change of parameter input in order to reach location specific...

  17. Fluid Centrality: A Social Network Analysis of Social-Technical Relations in Computer-Mediated Communication

    Science.gov (United States)

    Enriquez, Judith Guevarra

    2010-01-01

    In this article, centrality is explored as a measure of computer-mediated communication (CMC) in networked learning. Centrality measure is quite common in performing social network analysis (SNA) and in analysing social cohesion, strength of ties and influence in CMC, and computer-supported collaborative learning research. It argues that measuring…

  18. 2008 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-03-01

    This report presents the 2008 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of the CNTA was transferred from the DOE Office of Environmental Management (DOE-EM) to DOE-LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 2005) entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site during fiscal year 2008. This is the second groundwater monitoring report prepared by DOE-LM for the CNTA.

  19. FamSeq: a variant calling program for family-based sequencing data using graphics processing units.

    Directory of Open Access Journals (Sweden)

    Gang Peng

    2014-10-01

    Various algorithms have been developed for variant calling using next-generation sequencing data, and various methods have been applied to reduce the associated false positive and false negative rates. Few variant calling programs, however, utilize the pedigree information when the family-based sequencing data are available. Here, we present a program, FamSeq, which reduces both false positive and false negative rates by incorporating the pedigree information from the Mendelian genetic model into variant calling. To accommodate variations in data complexity, FamSeq consists of four distinct implementations of the Mendelian genetic model: the Bayesian network algorithm, a graphics processing unit version of the Bayesian network algorithm, the Elston-Stewart algorithm and the Markov chain Monte Carlo algorithm. To make the software efficient and applicable to large families, we parallelized the Bayesian network algorithm that copes with pedigrees with inbreeding loops without losing calculation precision on an NVIDIA graphics processing unit. In order to compare the difference in the four methods, we applied FamSeq to pedigree sequencing data with family sizes that varied from 7 to 12. When there is no inbreeding loop in the pedigree, the Elston-Stewart algorithm gives analytical results in a short time. If there are inbreeding loops in the pedigree, we recommend the Bayesian network method, which provides exact answers. To improve the computing speed of the Bayesian network method, we parallelized the computation on a graphics processing unit. This allowed the Bayesian network method to process the whole genome sequencing data of a family of 12 individuals within two days, which was a 10-fold time reduction compared to the time required for this computation on a central processing unit.
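    To give a flavour of the pedigree information such a caller exploits (this is an illustration, not FamSeq's code), the function below returns the Mendelian transmission probability P(child genotype | mother, father) for a biallelic autosomal site, with genotypes coded as the number of alternate alleles and de novo mutation ignored; terms of this form are what a Bayesian network or Elston-Stewart peeling computation multiplies together across the pedigree.

```cuda
// Mendelian transmission probability for a biallelic autosomal site (illustrative only).
#include <cstdio>

__host__ __device__
float mendelian_prob(int child, int mother, int father) {
    float pm = mother / 2.0f;   // probability the mother transmits the alternate allele
    float pf = father / 2.0f;   // probability the father transmits the alternate allele
    switch (child) {
        case 0: return (1.f - pm) * (1.f - pf);
        case 1: return pm * (1.f - pf) + (1.f - pm) * pf;
        case 2: return pm * pf;
        default: return 0.f;
    }
}

int main() {
    // Both parents heterozygous (genotype 1): children are 0/1/2 with probabilities 0.25/0.5/0.25.
    for (int c = 0; c <= 2; ++c)
        printf("P(child=%d | mother=1, father=1) = %.2f\n", c, mendelian_prob(c, 1, 1));
    return 0;
}
```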

  20. Computer processing of dynamic scintigraphic studies

    International Nuclear Information System (INIS)

    The methods of computer processing of dynamic scintigraphic studies which were developed, studied or implemented by the authors within research task no. 30-02-03 in nuclear medicine during the five-year plan 1981 to 85 are discussed. These were mainly the methods of computer processing of radionuclide angiography, phase radioventriculography, regional lung ventilation, dynamic sequential scintigraphy of the kidneys and radionuclide uroflowmetry. The problems of the automatic definition of fields of interest and the methodology of absolute heart chamber volumes in radionuclide cardiology are discussed, and the design and uses of the multipurpose dynamic phantom of heart activity for radionuclide angiocardiography and ventriculography developed within the said research task are described. All methods are documented with many figures showing typical clinical (normal and pathological) and phantom measurements. (V.U.)

  1. Managing internode data communications for an uninitialized process in a parallel computer

    Science.gov (United States)

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-05-20

    A parallel computer includes nodes, each having main memory and a messaging unit (MU). Each MU includes computer memory, which in turn includes MU message buffers. Each MU message buffer is associated with an uninitialized process on the compute node. In the parallel computer, managing internode data communications for an uninitialized process includes: receiving, by an MU of a compute node, one or more data communications messages in an MU message buffer associated with an uninitialized process on the compute node; determining, by an application agent, that the MU message buffer associated with the uninitialized process is full prior to initialization of the uninitialized process; establishing, by the application agent, a temporary message buffer for the uninitialized process in main computer memory; and moving, by the application agent, data communications messages from the MU message buffer associated with the uninitialized process to the temporary message buffer in main computer memory.

  2. Sanitary Engineering Unit Operations and Unit Processes Laboratory Manual.

    Science.gov (United States)

    American Association of Professors in Sanitary Engineering.

    This manual contains a compilation of experiments in Physical Operations, Biological and Chemical Processes for various education and equipment levels. The experiments are designed to be flexible so that they can be adapted to fit the needs of a particular program. The main emphasis is on hands-on student experiences to promote understanding.…

  3. Implicit Theories of Creativity in Computer Science in the United States and China

    Science.gov (United States)

    Tang, Chaoying; Baer, John; Kaufman, James C.

    2015-01-01

    To study implicit concepts of creativity in computer science in the United States and mainland China, we first asked 308 Chinese computer scientists for adjectives that would describe a creative computer scientist. Computer scientists and non-computer scientists from China (N = 1069) and the United States (N = 971) then rated how well those…

  4. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity...

  5. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    OpenAIRE

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th International Conference on Web-based Learning (ICWL 2008) (pp. 132-144). August, 20-22, 2008, Jinhua, China: Lecture Notes in Computer Science 5145 Springer 2008, ISBN 978-3-540-85032-8.

  6. Chemical computing with reaction-diffusion processes.

    Science.gov (United States)

    Gorecki, J; Gizynski, K; Guzowski, J; Gorecka, J N; Garstecki, P; Gruenert, G; Dittrich, P

    2015-07-28

    Chemical reactions are responsible for information processing in living organisms. It is believed that the basic features of biological computing activity are reflected by a reaction-diffusion medium. We illustrate the ideas of chemical information processing considering the Belousov-Zhabotinsky (BZ) reaction and its photosensitive variant. The computational universality of information processing is demonstrated. For different methods of information coding constructions of the simplest signal processing devices are described. The function performed by a particular device is determined by the geometrical structure of oscillatory (or of excitable) and non-excitable regions of the medium. In a living organism, the brain is created as a self-grown structure of interacting nonlinear elements and reaches its functionality as the result of learning. We discuss whether such a strategy can be adopted for generation of chemical information processing devices. Recent studies have shown that lipid-covered droplets containing solution of reagents of BZ reaction can be transported by a flowing oil. Therefore, structures of droplets can be spontaneously formed at specific non-equilibrium conditions, for example forced by flows in a microfluidic reactor. We describe how to introduce information to a droplet structure, track the information flow inside it and optimize medium evolution to achieve the maximum reliability. Applications of droplet structures for classification tasks are discussed. PMID:26078345

  7. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities

    International Nuclear Information System (INIS)

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model, (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  8. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    Energy Technology Data Exchange (ETDEWEB)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller; Laura L. Glaser; Kathryn L. Hanson; Ross D. Hartleb; William R. Lettis; Scott C. Lindvall; Stephen M. McDuffie; Robin K. McGuire; Gerry L. Stirewalt; Gabriel R. Toro; Robert R. Youngs; David L. Slayter; Serkan B. Bozkurt; Randolph J. Cumbest; Valentina Montaldo Falero; Roseanne C. Perman; Allison M. Shumway; Frank H. Syms; Martitia (Tish) P. Tuttle

    2012-01-31

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model, (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  9. Integration Process for the Habitat Demonstration Unit

    Science.gov (United States)

    Gill, Tracy; Merbitz, Jerad; Kennedy, Kriss; Tn, Terry; Toups, Larry; Howe, A. Scott; Smitherman, David

    2011-01-01

    The Habitat Demonstration Unit (HDU) is an experimental exploration habitat technology and architecture test platform designed for analog demonstration activities. The HDU previously served as a test bed for testing technologies and sub-systems in a terrestrial surface environment in 2010, in the Pressurized Excursion Module (PEM) configuration. Due to the amount of work involved in making the HDU project successful, the project has required a team to integrate a variety of contributions from NASA centers and outside collaborators. The size of the team and the number of systems involved with the HDU make integration a complicated process. However, because the HDU shell manufacturing is complete, the team has a head start on FY-11 integration activities and can focus on integrating upgrades to existing systems as well as integrating new additions. To complete the development of the FY-11 HDU from conception to rollout for operations in July 2011, a cohesive integration strategy has been developed to integrate the various systems of the HDU and the payloads. The highlighted HDU work for FY-11 will focus on performing upgrades to the PEM configuration, adding the X-Hab as a second level, adding a new porch providing the astronauts a larger work area outside the HDU for EVA preparations, and adding a Hygiene module. Together these upgrades result in a prototype configuration of the Deep Space Habitat (DSH), an element under evaluation by NASA's Human Exploration Framework Team (HEFT). Scheduled activities include early fit-checks and the utilization of a Habitat avionics test bed prior to installation into the HDU. A coordinated effort to utilize modeling and simulation systems has aided in design and integration concept development. Modeling tools have been effective in hardware systems layout, cable routing, sub-system interface length estimation and human factors analysis. Decision processes on integration and use of all new subsystems will be defined early in the project to

  10. Hydrologic Terrain Processing Using Parallel Computing

    Science.gov (United States)

    Tarboton, D. G.; Watson, D. W.; Wallace, R. M.; Schreuders, K.; Tesfa, T. K.

    2009-12-01

    Topography in the form of Digital Elevation Models (DEMs), is widely used to derive information for the modeling of hydrologic processes. Hydrologic terrain analysis augments the information content of digital elevation data by removing spurious pits, deriving a structured flow field, and calculating surfaces of hydrologic information derived from the flow field. The increasing availability of high-resolution terrain datasets for large areas poses a challenge for existing algorithms that process terrain data to extract this hydrologic information. This paper will describe parallel algorithms that have been developed to enhance hydrologic terrain pre-processing so that larger datasets can be more efficiently computed. Message Passing Interface (MPI) parallel implementations have been developed for pit removal, flow direction, and generalized flow accumulation methods within the Terrain Analysis Using Digital Elevation Models (TauDEM) package. The parallel algorithm works by decomposing the domain into striped or tiled data partitions where each tile is processed by a separate processor. This method also reduces the memory requirements of each processor so that larger size grids can be processed. The parallel pit removal algorithm is adapted from the method of Planchon and Darboux that starts from a high elevation then progressively scans the grid, lowering each grid cell to the maximum of the original elevation or the lowest neighbor. The MPI implementation reconciles elevations along process domain edges after each scan. Generalized flow accumulation extends flow accumulation approaches commonly available in GIS through the integration of multiple inputs and a broad class of algebraic rules into the calculation of flow related quantities. It is based on establishing a flow field through DEM grid cells, that is then used to evaluate any mathematical function that incorporates dependence on values of the quantity being evaluated at upslope (or downslope) grid cells

  11. Insulating process for HT-7U central solenoid model coils

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The HT-7U superconducting Tokamak is a fully superconducting magnetically confined fusion device. The insulating system of its central solenoid coils is critical to its properties. In this paper the forming of the insulating system and the vacuum-pressure impregnation (VPI) are introduced, and the whole insulating process is verified under superconducting experimental conditions.

  12. Central Data Processing System (CDPS) user's manual: Solar heating and cooling program

    Science.gov (United States)

    1976-01-01

    The software and data base management system required to assess the performance of solar heating and cooling systems installed at multiple sites is presented. The instrumentation data associated with these systems are collected, processed, and presented in a form which supports continuity of performance evaluation across all applications. The CDPS consists of three major elements: a communication interface computer, a central data processing computer, and a performance evaluation data base. Users of the performance data base are identified, and procedures for operation and guidelines for software maintenance are outlined. The manual also defines the output capabilities of the CDPS in support of external users of the system.

  13. Central Diffractive Processes at the Tevatron, RHIC and LHC

    CERN Document Server

    Harland-Lang, L A; Ryskin, M G; Stirling, W J

    2011-01-01

    Central exclusive production (CEP) processes in high-energy hadron collisions offer a very promising framework for studying both novel aspects of QCD and new physics signals. We report on the results of a theoretical study of the CEP of heavy quarkonia (chi and eta) at the Tevatron, RHIC and LHC. These processes provide important information on the physics of bound states and can probe the current ideas and methods of QCD, such as effective field theories and lattice QCD.

  14. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    Science.gov (United States)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.

  15. A computer program for processing microdosimetry spectra

    International Nuclear Information System (INIS)

    A small computer program for processing a microdosimetry single-event energy deposition spectrum is presented. The program can perform smoothing of a spectrum and present a comparison of the smoothed and unsmoothed spectrum in order to detect distortions introduced by excessively rough smoothing. To increase the resolution of the spectrum and to reduce the influence of the uncertainty in the zero-point setting of the multichannel analyzer, spectra are usually measured with different gain settings and are thereafter overlapped into one spectrum. The program can perform such an overlapping and make a chi-square analysis of the overlapping region. Such an analysis may reveal unsatisfactory experimental conditions, such as drifts in the gain between the two measurements, pile-up effects or an improper zero-point setting of the multichannel analyzer. A method of dealing with the last-mentioned problem is also presented. The program was written for a Nuclear Data computer (ND 812) with a memory of 12 k but it should be easy to apply it to other computers. (author)

  16. Technical evaluation of proposed Ukrainian Central Radioactive Waste Processing Facility

    International Nuclear Information System (INIS)

    This technical report is a comprehensive evaluation of the proposal by the Ukrainian State Committee on Nuclear Power Utilization to create a central facility for radioactive waste (not spent fuel) processing. The central facility is intended to process liquid and solid radioactive wastes generated from all of the Ukrainian nuclear power plants and the waste generated as a result of Chernobyl 1, 2 and 3 decommissioning efforts. In addition, this report provides general information on the quantity and total activity of radioactive waste in the 30-km Zone and the Sarcophagus from the Chernobyl accident. Processing options are described that may ultimately be used in the long-term disposal of selected 30-km Zone and Sarcophagus wastes. A detailed report on the issues concerning the construction of a Ukrainian Central Radioactive Waste Processing Facility (CRWPF) from the Ukrainian Scientific Research and Design institute for Industrial Technology was obtained and incorporated into this report. This report outlines various processing options, their associated costs and construction schedules, which can be applied to solving the operating and decommissioning radioactive waste management problems in Ukraine. The costs and schedules are best estimates based upon the most current US industry practice and vendor information. This report focuses primarily on the handling and processing of what is defined in the US as low-level radioactive wastes

  17. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons available on the web in order to increase the robustness of their coverage. One problem related to the task of automatically unifying the different scores of sentiment lexicons is that there are multiple lexical entries for which the classification as positive, negative, or neutral {P,N,Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, of which 35,201 are considered positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for the 95,430 lexical entries; this represents a threefold reduction in computing time for the UnifiedMetrics procedure.
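
    As a hedged illustration of the per-entry parallelism described above (one lexical entry per GPU thread, each computing a Pearson correlation between the entry's scores in two source lexicons), the CUDA sketch below is offered. The array layout, fixed score dimension and function names are assumptions for illustration, not the authors' UnifiedMetrics code.

```cuda
// Hedged sketch (not the authors' code): one thread per lexical entry
// computes the Pearson correlation between that entry's scores in two
// source lexicons. Array layout and names are illustrative assumptions.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void pearsonPerEntry(const float *a, const float *b, // [nEntries * dim]
                                float *r, int nEntries, int dim)
{
    int e = blockIdx.x * blockDim.x + threadIdx.x;
    if (e >= nEntries) return;

    const float *x = a + e * dim;
    const float *y = b + e * dim;
    float sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    for (int i = 0; i < dim; ++i) {
        sx += x[i];  sy += y[i];
        sxx += x[i] * x[i];  syy += y[i] * y[i];  sxy += x[i] * y[i];
    }
    float n = (float)dim;
    float cov = sxy - sx * sy / n;
    float vx  = sxx - sx * sx / n;
    float vy  = syy - sy * sy / n;
    r[e] = cov / sqrtf(vx * vy + 1e-12f);   // +eps guards against zero variance
}

int main()
{
    const int nEntries = 2, dim = 4;
    float hA[nEntries * dim] = {1, 2, 3, 4,   1, 0, 1, 0};
    float hB[nEntries * dim] = {2, 4, 6, 8,   0, 1, 0, 1};
    float hR[nEntries];

    float *dA, *dB, *dR;
    cudaMalloc(&dA, sizeof(hA)); cudaMalloc(&dB, sizeof(hB)); cudaMalloc(&dR, sizeof(hR));
    cudaMemcpy(dA, hA, sizeof(hA), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);

    pearsonPerEntry<<<1, 64>>>(dA, dB, dR, nEntries, dim);
    cudaMemcpy(hR, dR, sizeof(hR), cudaMemcpyDeviceToHost);
    printf("r[0] = %.3f (expect 1), r[1] = %.3f (expect -1)\n", hR[0], hR[1]);

    cudaFree(dA); cudaFree(dB); cudaFree(dR);
    return 0;
}
```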

  18. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  19. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.
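
    Step (iii) above, optimizing memory traffic between the host and the GPU, is commonly addressed with pinned host memory and asynchronous copies overlapped with kernel execution in separate CUDA streams. The sketch below is a generic, hedged illustration of that pattern and is not VASP code; the scale() kernel simply stands in for a real compute routine.

```cuda
// Generic illustration (not VASP code) of step (iii): overlapping
// host<->device transfers with computation using pinned host memory
// and two CUDA streams. scale() stands in for a real compute kernel.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *x, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main()
{
    const int n = 1 << 20, nChunks = 2, chunk = n / nChunks;
    float *h, *d;
    cudaHostAlloc(&h, n * sizeof(float), cudaHostAllocDefault); // pinned host buffer
    cudaMalloc(&d, n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    cudaStream_t s[nChunks];
    for (int c = 0; c < nChunks; ++c) cudaStreamCreate(&s[c]);

    // Copy-in, compute and copy-out of each chunk proceed in its own stream,
    // so transfers of one chunk can overlap with computation on another.
    for (int c = 0; c < nChunks; ++c) {
        float *hc = h + c * chunk, *dc = d + c * chunk;
        cudaMemcpyAsync(dc, hc, chunk * sizeof(float), cudaMemcpyHostToDevice, s[c]);
        scale<<<(chunk + 255) / 256, 256, 0, s[c]>>>(dc, chunk, 2.0f);
        cudaMemcpyAsync(hc, dc, chunk * sizeof(float), cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();
    printf("h[0] = %.1f (expect 2.0)\n", h[0]);

    for (int c = 0; c < nChunks; ++c) cudaStreamDestroy(s[c]);
    cudaFreeHost(h); cudaFree(d);
    return 0;
}
```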

  20. Accelerating compartmental modeling on a graphical processing unit

    Directory of Open Access Journals (Sweden)

    Alon Korngreen

    2013-03-01

    Full Text Available Compartmental modeling is a widely used tool in neurophysiology but the detail and scope of such models is frequently limited by lack of computational resources. Here we implement compartmental modeling on low cost Graphical Processing Units (GPUs). We use NVIDIA's CUDA, which significantly increases simulation speed compared to NEURON. Testing two methods for solving the current diffusion equation system revealed which method is more useful for specific neuron morphologies. Regions of applicability were investigated using a range of simulations from a single membrane potential trace simulated in a simple fork morphology to multiple traces on multiple realistic cells. A runtime peak 150-fold faster than NEURON was achieved. This application can be used for statistical analysis and data fitting optimizations of compartmental models and may be used for simultaneously simulating large populations of neurons. Since GPUs are forging ahead and proving to be more cost effective than CPUs, this may significantly decrease the cost of computation power and open new computational possibilities for laboratories with limited budgets.

  1. Accelerating compartmental modeling on a graphical processing unit.

    Science.gov (United States)

    Ben-Shalom, Roy; Liberman, Gilad; Korngreen, Alon

    2013-01-01

    Compartmental modeling is a widely used tool in neurophysiology but the detail and scope of such models is frequently limited by lack of computational resources. Here we implement compartmental modeling on low cost Graphical Processing Units (GPUs), which significantly increases simulation speed compared to NEURON. Testing two methods for solving the current diffusion equation system revealed which method is more useful for specific neuron morphologies. Regions of applicability were investigated using a range of simulations from a single membrane potential trace simulated in a simple fork morphology to multiple traces on multiple realistic cells. A runtime peak 150-fold faster than the CPU was achieved. This application can be used for statistical analysis and data fitting optimizations of compartmental models and may be used for simultaneously simulating large populations of neurons. Since GPUs are forging ahead and proving to be more cost-effective than CPUs, this may significantly decrease the cost of computation power and open new computational possibilities for laboratories with limited budgets. PMID:23508232

  2. Seismic risk assessment and application in the central United States

    Science.gov (United States)

    Wang, Z.

    2011-01-01

    Seismic risk is a somewhat subjective, but important, concept in earthquake engineering and other related decision-making. Another important concept that is closely related to seismic risk is seismic hazard. Although seismic hazard and seismic risk have often been used interchangeably, they are fundamentally different: seismic hazard describes the natural phenomenon or physical property of an earthquake, whereas seismic risk describes the probability of loss or damage that could be caused by a seismic hazard. The distinction between seismic hazard and seismic risk is of practical significance because measures for seismic hazard mitigation may differ from those for seismic risk reduction. Seismic risk assessment is a complicated process and starts with seismic hazard assessment. Although probabilistic seismic hazard analysis (PSHA) is the most widely used method for seismic hazard assessment, recent studies have found that PSHA is not scientifically valid. Use of PSHA will lead to (1) artifact estimates of seismic risk, (2) misleading use of the annual probability of exceedance (i.e., the probability of exceedance in one year) as a frequency (per year), and (3) numerical creation of extremely high ground motion. An alternative approach, which is similar to those used for flood and wind hazard assessments, has been proposed. © 2011 ASCE.

  3. Computer-based remote programming and control of stimulation units

    OpenAIRE

    Passama, Robin; Andreu, David; Guiraud, David

    2011-01-01

    This paper describes the architecture of the functional electrical stimulation (FES) systems developed in the context of the TIME European project. The contributions are the definition of a generic FES architecture and the specialization of this architecture, depending on the application context, through the deployment, programming and control of hardware units, notably stimulation units. This specialization process is managed by a dedicated software environment, named SENIS Manager.

  4. 15 CFR 971.209 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    Title 15, Commerce and Foreign Trade; Applications, Contents; § 971.209 Processing outside the United States. (a) Except as provided in this section and § 971.408, the processing of hard minerals recovered pursuant to a permit shall be...

  5. Establishing a central waste processing and storage facility in Ghana

    International Nuclear Information System (INIS)

    regulations. About 50 delegates from various ministries and establishments participated in the seminar. The final outcome of the draft regulation was sent to the Attorney General's office for the necessary legal review before being presented to Parliament through the Ministry of Environment, Science and Technology. A radiation sources and radioactive waste inventory has been established using the Regulatory Authority Information System (RAIS) and the Sealed Radiation Sources Registry System (SRS). A central waste processing and storage facility was constructed in the mid sixties to handle waste from a 2 MW reactor that was never installed. The facility consists of a decontamination unit, two concrete vaults (about 5x15 m and 4 m deep) intended for low and intermediate level waste storage, and 60 wells (about 0.5 m diameter x 4.6 m) for storage of spent fuel. This facility will require significant rehabilitation. Safety and performance assessment studies have been carried out with the help of three IAEA experts. The recommendations from the assessment indicate that the vaults are too old and deteriorated to be considered for any future waste storage. However, the decontamination unit and the wells are still in good condition and were earmarked for refurbishment and use as waste processing and storage facilities, respectively. The decontamination unit has a surface area of 60 m2 and a laboratory of surface area 10 m2. The decontamination unit will have four technological areas: an area for cementation of non-compactable solid waste and spent sealed sources; an area for compaction of compactable solid waste; and a controlled area for conditioned wastes in 200 L drums. Provision has been made to condition liquid waste. There will be a section for receipt and segregation of the waste. The laboratory will be provided with the necessary equipment for quality control. Research to support technological processes will be carried out in the laboratory. A quality assurance and control system shall

  6. Language and central temporal auditory processing in childhood epilepsies.

    Science.gov (United States)

    Boscariol, Mirela; Casali, Raquel L; Amaral, M Isabel R; Lunardi, Luciane L; Matas, Carla G; Collela-Santos, M Francisca; Guerreiro, Marilisa M

    2015-12-01

    Because of the relationship between the rolandic, temporoparietal, and centrotemporal areas and language and auditory processing, the aim of this study was to investigate language and central temporal auditory processing in children with epilepsy (rolandic epilepsy and temporal lobe epilepsy) and compare these with those of children without epilepsy. Thirty-five children aged between eight and 14 years were studied. Two groups of children participated in this study: a group with childhood epilepsy (n=19), and a control group without epilepsy or linguistic changes (n=16). There was a significant difference between the two groups, with the worst performance in children with epilepsy on the gaps-in-noise test (right ear), the receptive vocabulary (PPVT) task (p<0.001) and the phonological working memory (nonword repetition) task (p=0.001). We conclude that impairment of central temporal auditory processing and of language skills may be comorbidities in children with rolandic epilepsy and temporal lobe epilepsy. PMID:26580215

  7. Polymer Dynamic Field Theory on Graphics Processing Units

    International Nuclear Information System (INIS)

    This paper explores the application of graphics processing units (GPUs) to a study of the dynamics of diblock copolymer (DBCP) melts. DBCP melts exhibit ordered mesophases with potential applications in nanotechnology, but are often found out of equilibrium. The length scales involved have previously rendered the numerical modelling of these materials intractable in certain regimes. We adapt a code originally written in parallel using the Message Passing Interface to GPUs using the NVIDIA® CUDA™ architecture. We gain a factor of 30 performance improvement over the original code at large system size. We then use this performance improvement to study DBCP melts in two computationally time-intensive regimes: droplet nucleation close to phase coexistence, and dynamics under confinement in a small cylindrical nanopore.

  8. Accelerating radio astronomy cross-correlation with graphics processing units

    Science.gov (United States)

    Clark, M. A.; LaPlante, P. C.; Greenhill, L. J.

    2013-05-01

    We present a highly parallel implementation of the cross-correlation of time-series data using graphics processing units (GPUs), which is scalable to hundreds of independent inputs and suitable for the processing of signals from 'large-N' arrays of many radio antennas. The computational part of the algorithm, the X-engine, is implemented efficiently on NVIDIA's Fermi architecture, sustaining up to 79% of the peak single-precision floating-point throughput. We compare performance obtained for hardware- and software-managed caches, observing significantly better performance for the latter. The high performance reported involves use of a multi-level data tiling strategy in memory and use of a pipelined algorithm with simultaneous computation and transfer of data from host to device memory. The speed of code development, flexibility, and low cost of the GPU implementations compared with application-specific integrated circuit (ASIC) and field programmable gate array (FPGA) implementations have the potential to greatly shorten the cycle of correlator development and deployment, for cases where some power-consumption penalty can be tolerated.
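
    A hedged sketch of the X-engine's core operation, accumulating the complex product of one antenna's samples with the conjugate of another's over a block of time, is given below in CUDA. It deliberately omits the multi-level tiling, software-managed caching and host-device pipelining that give the paper its reported performance; names and array layouts are assumptions.

```cuda
// Hedged sketch of an X-engine's core operation (not the tiled, pipelined
// implementation described in the paper): one thread per antenna pair
// accumulates sum_t v_i(t) * conj(v_j(t)) over a block of time samples.
#include <cstdio>
#include <cmath>
#include <cuComplex.h>
#include <cuda_runtime.h>

__global__ void xcorr(const cuFloatComplex *v,  // [nTime * nAnt], time-major
                      cuFloatComplex *out,      // [nAnt * nAnt]
                      int nAnt, int nTime)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y;
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nAnt || j >= nAnt || j < i) return;   // upper triangle only

    cuFloatComplex acc = make_cuFloatComplex(0.0f, 0.0f);
    for (int t = 0; t < nTime; ++t)
        acc = cuCaddf(acc, cuCmulf(v[t * nAnt + i], cuConjf(v[t * nAnt + j])));
    out[i * nAnt + j] = acc;
}

int main()
{
    const int nAnt = 4, nTime = 256;
    cuFloatComplex *hV = new cuFloatComplex[nTime * nAnt];
    for (int t = 0; t < nTime; ++t)              // made-up test signals
        for (int a = 0; a < nAnt; ++a)
            hV[t * nAnt + a] = make_cuFloatComplex(cosf(0.1f * t * (a + 1)),
                                                   sinf(0.1f * t * (a + 1)));

    cuFloatComplex *dV, *dOut;
    cudaMalloc(&dV, nTime * nAnt * sizeof(cuFloatComplex));
    cudaMalloc(&dOut, nAnt * nAnt * sizeof(cuFloatComplex));
    cudaMemset(dOut, 0, nAnt * nAnt * sizeof(cuFloatComplex));
    cudaMemcpy(dV, hV, nTime * nAnt * sizeof(cuFloatComplex), cudaMemcpyHostToDevice);

    dim3 threads(8, 8), blocks((nAnt + 7) / 8, (nAnt + 7) / 8);
    xcorr<<<blocks, threads>>>(dV, dOut, nAnt, nTime);

    cuFloatComplex hOut[nAnt * nAnt];
    cudaMemcpy(hOut, dOut, sizeof(hOut), cudaMemcpyDeviceToHost);
    printf("V(0,1) = %.2f %+.2fi\n",
           cuCrealf(hOut[0 * nAnt + 1]), cuCimagf(hOut[0 * nAnt + 1]));

    cudaFree(dV); cudaFree(dOut); delete[] hV;
    return 0;
}
```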

  9. On the use of Graphics Processing Units (GPUs) for molecular dynamics simulation of spherical particles

    OpenAIRE

    Cruz-Hidalgo, R. (Raúl); Kanzaki, T.; Alonso-Marroquin, F.; Luding, S.

    2013-01-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybrid CPU-GPU implementation takes into account all the degrees of freedom, including the quaternion representation of 3D rotations. For additional versatility, the contact interaction bet...

  10. On the use of graphics processing units (GPUs) for molecular dynamics simulation of spherical particles

    OpenAIRE

    Hidalgo, R. C.; Kanzaki, T.; Alonso-Marroquin, F.; Yu, A.; Dong, K.; Yang, R; Luding, S.

    2013-01-01

    General-purpose computation on Graphics Processing Units (GPU) on personal computers has recently become an attractive alternative to parallel computing on clusters and supercomputers. We present the GPU-implementation of an accurate molecular dynamics algorithm for a system of spheres. The new hybrid CPU-GPU implementation takes into account all the degrees of freedom, including the quaternion representation of 3D rotations. For additional versatility, the contact interaction between particl...

  11. Pyrolysis oil upgrading for Co-processing in standard refinery units

    OpenAIRE

    De Miguel Mercader, Ferran

    2010-01-01

    This thesis considers the route that comprises the upgrading of pyrolysis oil (produced from ligno-cellulosic biomass) and its further co-processing in standard refineries to produce transportation fuels. In the present concept, pyrolysis oil is produced where biomass is available and then transported to a central upgrading unit. This unit is located next to or inside a standard petroleum refinery, enabling the use of existing facilities. The obtained product can be further distributed using exi...

  12. Kinematics of the New Madrid seismic zone, central United States, based on stepover models

    Science.gov (United States)

    Pratt, Thomas L.

    2012-01-01

    Seismicity in the New Madrid seismic zone (NMSZ) of the central United States is generally attributed to a stepover structure in which the Reelfoot thrust fault transfers slip between parallel strike-slip faults. However, some arms of the seismic zone do not fit this simple model. Comparison of the NMSZ with an analog sandbox model of a restraining stepover structure explains all of the arms of seismicity as only part of the extensive pattern of faults that characterizes stepover structures. Computer models show that the stepover structure may form because differences in the trends of lower crustal shearing and inherited upper crustal faults make a step between en echelon fault segments the easiest path for slip in the upper crust. The models predict that the modern seismicity occurs only on a subset of the faults in the New Madrid stepover structure, that only the southern part of the stepover structure ruptured in the A.D. 1811–1812 earthquakes, and that the stepover formed because the trends of older faults are not the same as the current direction of shearing.

  13. Report on the Fourth Reactor Refueling. Laguna Verde Nuclear Central. Unit 1. April-May 1995

    International Nuclear Information System (INIS)

    The fourth refueling of Unit 1 of the Laguna Verde Nuclear Central was executed in the period of April 17 to May 31, 1995, with the participation of a task group of 358 persons, including technicians, radiation protection officials and auxiliaries. Radiation monitoring and radiological surveillance of the workers were maintained throughout the refueling process, always in keeping with the ALARA criteria. The check points for radiation levels were set at: the primary container or dry well, the reloading floor, the decontamination room (level 10.5), the turbine building and the radioactive waste building. Taking advantage of the refueling outage, rooms 203 and 213 of the turbine building were subjected to inspection and maintenance work on valves, heaters and heater drains. Management aspects such as personnel selection and training, costs and accounting are also presented in this report. Owing to the high man-hour cost of the ININ staff members, their participation in the refueling process was smaller than in previous years. (Author)

  14. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  15. Feature extraction & image processing for computer vision

    CERN Document Server

    Nixon, Mark

    2012-01-01

    This book is an essential guide to the implementation of image processing and computer vision techniques, with tutorial introductions and sample code in Matlab. Algorithms are presented and fully explained to enable complete understanding of the methods and techniques demonstrated. As one reviewer noted, "The main strength of the proposed book is the exemplar code of the algorithms." Fully updated with the latest developments in feature extraction, including expanded tutorials and new techniques, this new edition contains extensive new material on Haar wavelets, Viola-Jones, bilateral filt

  16. Viscoelastic Finite Difference Modeling Using Graphics Processing Units

    Science.gov (United States)

    Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.

    2014-12-01

    Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether the adoption of GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License. This implementation is based on a second-order centred-difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code from Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by kernel overheads and delays introduced by memory transfers to and from the GPU through the PCI-E bus. Those tests indicate that the GPU memory size
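
    As a minimal, hedged illustration of the thread-per-cell stencil pattern underlying such GPU finite-difference codes, the CUDA sketch below advances a scalar 2D wave field with a second-order centred difference in time and space. The actual code solves the full viscoelastic system on staggered grids with higher-order spatial operators using OpenCL and MPI; the scalar simplification and all names here are assumptions.

```cuda
// Minimal hedged sketch: a second-order centred-difference time step for a
// scalar 2D wave field, one thread per grid cell. This reduced example only
// illustrates the thread-per-cell stencil pattern, not the viscoelastic system.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void step(const float *pPrev, const float *pCur, float *pNext,
                     int nx, int ny, float c2dt2_h2)   // (c*dt/h)^2
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix <= 0 || iy <= 0 || ix >= nx - 1 || iy >= ny - 1) return;

    int id = iy * nx + ix;
    float lap = pCur[id - 1] + pCur[id + 1] + pCur[id - nx] + pCur[id + nx]
              - 4.0f * pCur[id];
    pNext[id] = 2.0f * pCur[id] - pPrev[id] + c2dt2_h2 * lap;
}

int main()
{
    const int nx = 256, ny = 256, nSteps = 100;
    const float c2dt2_h2 = 0.2f;               // chosen to satisfy the CFL condition
    size_t bytes = nx * ny * sizeof(float);

    float *p0, *p1, *p2;
    cudaMalloc(&p0, bytes); cudaMalloc(&p1, bytes); cudaMalloc(&p2, bytes);
    cudaMemset(p0, 0, bytes); cudaMemset(p1, 0, bytes); cudaMemset(p2, 0, bytes);

    float one = 1.0f;                          // point source at the grid centre
    cudaMemcpy(p1 + (ny / 2) * nx + nx / 2, &one, sizeof(float), cudaMemcpyHostToDevice);

    dim3 threads(16, 16), blocks((nx + 15) / 16, (ny + 15) / 16);
    for (int t = 0; t < nSteps; ++t) {
        step<<<blocks, threads>>>(p0, p1, p2, nx, ny, c2dt2_h2);
        float *tmp = p0; p0 = p1; p1 = p2; p2 = tmp;   // rotate time levels
    }

    float sample;
    cudaMemcpy(&sample, p1 + (ny / 2) * nx + nx / 2, sizeof(float), cudaMemcpyDeviceToHost);
    printf("field at source after %d steps: %g\n", nSteps, sample);

    cudaFree(p0); cudaFree(p1); cudaFree(p2);
    return 0;
}
```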

  17. Handling geophysical flows: Numerical modelling using Graphical Processing Units

    Science.gov (United States)

    Garcia-Navarro, Pilar; Lacasta, Asier; Juez, Carmelo; Morales-Hernandez, Mario

    2016-04-01

    Computational tools may help engineers in the assessment of sediment transport during the decision-making processes. The main requirements are that the numerical results have to be accurate and simulation models must be fast. The present work is based on the 2D shallow water equations in combination with the 2D Exner equation [1]. The resulting numerical model accuracy was already discussed in previous work. Regarding the speed of the computation, the Exner equation slows down the already costly 2D shallow water model as the number of variables to solve is increased and the numerical stability is more restrictive. On the other hand, the movement of poorly sorted material over steep areas constitutes a hazardous environmental problem. Computational tools help in the predictions of such landslides [2]. In order to overcome this problem, this work proposes the use of Graphical Processing Units (GPUs) for decreasing significantly the simulation time [3, 4]. The numerical scheme implemented in GPU is based on a finite volume scheme. The mathematical model and the numerical implementation are compared against experimental and field data. In addition, the computational times obtained with the Graphical Hardware technology are compared against Single-Core (sequential) and Multi-Core (parallel) CPU implementations. References [Juez et al. (2014)] Juez, C., Murillo, J., & García-Navarro, P. (2014). A 2D weakly-coupled and efficient numerical model for transient shallow flow and movable bed. Advances in Water Resources, 71, 93-109. [Juez et al. (2013)] Juez, C., Murillo, J., & García-Navarro, P. (2013). 2D simulation of granular flow over irregular steep slopes using global and local coordinates. Journal of Computational Physics, 225, 166-204. [Lacasta et al. (2014)] Lacasta, A., Morales-Hernández, M., Murillo, J., & García-Navarro, P. (2014). An optimized GPU implementation of a 2D free surface simulation model on unstructured meshes. Advances in Engineering Software, 78, 1-15. [Lacasta

  18. Codifications of anaesthetic information for computer processing.

    Science.gov (United States)

    Harrison, M J; Johnson, F

    1981-07-01

    In order for any decision-making process to be computer-assisted it is necessary for the information to be encodable in some way so that the computer can manipulate the data using logical operations. In this paper the information used to generate an anaesthetic regimen is examined. A method is presented for obtaining a suitable set of statements to describe the patient's history and surgical requirements. These statements are then sorted by an algorithm which uses standard Boolean operators to produce a protocol for six phases of anaesthetic procedure. An example is given of the system in operation. The system incorporates knowledge at the level of a consultant anaesthetist. The program used 428 statements to encode patient data, and drew upon a list of 163 possible prescriptions. The program ran on an LSI-11/2 computer using one disc drive. The scheme has direct application in the training of junior anaesthetists, as well as in producing guidelines for application in other areas of medicine where a similar codification may be possible. PMID:7306370

  19. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  20. Data processing device for computed tomography system

    International Nuclear Information System (INIS)

    A data processing device applied to a computed tomography system which examines a living body utilizing radiation of X-rays is disclosed. The X-rays which have penetrated the living body are converted into electric signals in a detecting section. The electric signals are acquired and converted from an analog form into a digital form in a data acquisition section, and then supplied to a matrix data-generating section included in the data processing device. This matrix data-generating section generates matrix data which correspond to a plurality of projection data. These matrix data are supplied to a partial sum-producing section, in which the partial sums respectively corresponding to groups of the matrix data are calculated and then supplied to an accumulation section. In the accumulation section, the final value corresponding to the total sum of the matrix data is calculated, whereby the calculation for image reconstruction is performed
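
    The partial-sum and accumulation stages described above resemble, at least in spirit, a standard two-stage parallel reduction. The hedged CUDA sketch below (not the patented device's design) has each block reduce its slice of the data in shared memory, after which the host accumulates the per-block partial sums into the total.

```cuda
// Hedged analogy (not the patented device): the "partial sum" and
// "accumulation" stages resemble a two-stage parallel reduction. Each block
// produces a partial sum in shared memory; the host (or a second kernel)
// accumulates the per-block partial sums into the total.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void partialSums(const float *data, float *blockSums, int n)
{
    extern __shared__ float s[];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    s[tid] = (i < n) ? data[i] : 0.0f;
    __syncthreads();

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride) s[tid] += s[tid + stride];
        __syncthreads();
    }
    if (tid == 0) blockSums[blockIdx.x] = s[0];
}

int main()
{
    const int n = 1 << 16, threads = 256, blocks = (n + threads - 1) / threads;
    float *h = new float[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d, *dPart;
    cudaMalloc(&d, n * sizeof(float));
    cudaMalloc(&dPart, blocks * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    partialSums<<<blocks, threads, threads * sizeof(float)>>>(d, dPart, n);

    float *hPart = new float[blocks], total = 0.0f;
    cudaMemcpy(hPart, dPart, blocks * sizeof(float), cudaMemcpyDeviceToHost);
    for (int b = 0; b < blocks; ++b) total += hPart[b];   // final accumulation
    printf("total = %.0f (expect %d)\n", total, n);

    cudaFree(d); cudaFree(dPart); delete[] h; delete[] hPart;
    return 0;
}
```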

  1. Stochastic Analysis of a Queue Length Model Using a Graphics Processing Unit

    Czech Academy of Sciences Publication Activity Database

    Přikryl, Jan; Kocijan, J.

    2012-01-01

    Roč. 5, č. 2 (2012), s. 55-62. ISSN 1802-971X R&D Projects: GA MŠk(CZ) MEB091015 Institutional support: RVO:67985556 Keywords : graphics processing unit * GPU * Monte Carlo simulation * computer simulation * modeling Subject RIV: BC - Control Systems Theory http://library.utia.cas.cz/separaty/2012/AS/prikryl-stochastic analysis of a queue length model using a graphics processing unit.pdf

  2. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    International Nuclear Information System (INIS)

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
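
    As a hedged, greatly simplified illustration of the photon-history parallelism described above (not the PENELOPE-based physics of the actual code), the CUDA sketch below assigns one thread per photon, samples an exponential free path with cuRAND in a homogeneous slab, and tallies uncollided photons; the result can be checked against the analytic attenuation law.

```cuda
// Hedged sketch of GPU photon-history parallelism (not the PENELOPE-based
// physics of the paper): one thread per photon samples an exponential free
// path in a homogeneous slab and counts uncollided (transmitted) photons.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void photonHistories(unsigned long long seed, float mu, float thickness,
                                int nPhotons, int *transmitted)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nPhotons) return;

    curandState rng;
    curand_init(seed, p, 0, &rng);

    // Distance to first interaction: s = -ln(u)/mu. If s exceeds the slab
    // thickness the photon is counted as transmitted (scattering ignored).
    float s = -logf(curand_uniform(&rng)) / mu;
    if (s > thickness) atomicAdd(transmitted, 1);
}

int main()
{
    const int n = 1 << 20;
    const float mu = 0.2f, thickness = 5.0f;   // 1/cm and cm, illustrative values
    int *dCount, hCount = 0;
    cudaMalloc(&dCount, sizeof(int));
    cudaMemcpy(dCount, &hCount, sizeof(int), cudaMemcpyHostToDevice);

    photonHistories<<<(n + 255) / 256, 256>>>(1234ULL, mu, thickness, n, dCount);
    cudaMemcpy(&hCount, dCount, sizeof(int), cudaMemcpyDeviceToHost);

    printf("uncollided fraction = %.4f (analytic exp(-mu*t) = %.4f)\n",
           (float)hCount / n, expf(-mu * thickness));
    cudaFree(dCount);
    return 0;
}
```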

  3. Effects of sleep deprivation on central auditory processing

    OpenAIRE

    Liberalesso Paulo Breno; D’Andrea Karlin Fabianne; Cordeiro Mara L; Zeigelboim Bianca; Marques Jair; Jurkiewicz Ari

    2012-01-01

    Abstract. Background: Sleep deprivation is extremely common in contemporary society, and is considered to be a frequent cause of disturbances in behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse informat...

  4. Central limit theorem for Fourier transform of stationary processes

    CERN Document Server

    Peligrad, Magda

    2009-01-01

    We consider the asymptotic behavior of Fourier transforms of stationary ergodic sequences with finite second moments. We establish the central limit theorem (CLT) for almost all frequencies and also the annealed CLT. The theorems hold for all regular sequences. Our results shed new light on the foundations of spectral analysis and on the asymptotic distribution of the periodogram, and they provide a nice blend of harmonic analysis, the theory of stationary processes and the theory of martingales.
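
    For orientation, the quenched (almost-all-frequencies) statement is roughly of the following form; this is a paraphrase under the normalization commonly used in this line of work, and the paper should be consulted for the exact regularity conditions and constants.

```latex
% Rough paraphrase of the type of statement established; exact conditions
% and constants are in the paper.
For a stationary ergodic sequence $(X_k)$ with finite second moment and
spectral density $f$, let
\[
  S_n(\theta) = \sum_{k=1}^{n} X_k \, e^{ik\theta}.
\]
Then, for almost every frequency $\theta \in (0,2\pi)$,
\[
  \frac{S_n(\theta)}{\sqrt{n}} \;\xrightarrow{\;d\;}\;
  \sqrt{\pi f(\theta)}\,\bigl(N_1 + i\,N_2\bigr),
\]
with $N_1, N_2$ independent standard normal random variables.
```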

  5. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    Science.gov (United States)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.

  6. Computer aided design of fast neutron therapy units

    International Nuclear Information System (INIS)

    Conceptual design of a radiation-therapy unit using fusion neutrons is presently being considered by KMS Fusion, Inc. As part of this effort, a powerful and versatile computer code, TBEAM, has been developed which enables the user to determine physical characteristics of the fast neutron beam generated in the facility under consideration, using certain given design parameters of the facility as inputs. TBEAM uses the method of statistical sampling (Monte Carlo) to solve the space, time and energy dependent neutron transport equation relating to the conceptual design described by the user-supplied input parameters. The code traces the individual source neutrons as they propagate throughout the shield-collimator structure of the unit, and it keeps track of each interaction by type, position and energy. In its present version, TBEAM is applicable to homogeneous and laminated shields of spherical geometry, to collimator apertures of conical shape, and to neutrons emitted by point sources or such plate sources as are used in neutron generators of various types. TBEAM-generated results comparing the performance of point or plate sources in otherwise identical shield-collimator configurations are presented in numerical form. (H.K.)

  7. Centralized computer-based controls of the Nova Laser Facility

    International Nuclear Information System (INIS)

    This article introduces the overall architecture of the computer-based Nova Laser Control System and describes its basic components. Use of standard hardware and software components ensures that the system, while specialized and distributed throughout the facility, is adaptable. 9 references, 6 figures

  8. Perceptual weights for loudness reflect central spectral processing

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    Weighting patterns for loudness obtained using the reverse correlation method are thought to reveal the relative contributions of different frequency regions to total loudness, the equivalent of specific loudness. Current models of loudness assume that specific loudness is determined by peripheral processes such as compression and masking. Here we test this hypothesis using 20-tone harmonic complexes (200 Hz f0, 200 to 4000 Hz, 250 ms, 65 dB/component) added in opposite phase relationships (Schroeder positive and negative). Due to the varying degree of envelope modulations, these time-reversed harmonic ... processes and reflect a central frequency weighting template.

  9. Optimization models of the supply of power structures' organizational units with centralized procurement

    OpenAIRE

    Sysoiev Volodymyr

    2013-01-01

    Management of the state power structures' organizational units for materiel and technical support requires the use of effective tools for supporting decisions, due to the complexity, interdependence, and dynamism of supply in the market economy. The corporate nature of power structures is of particular interest to centralized procurement management, as it provides significant advantages through coordination, eliminating duplication, and economy of scale. Th...

  10. Optical signal processing using photonic reservoir computing

    Science.gov (United States)

    Salehi, Mohammad Reza; Dehyadegari, Louiza

    2014-10-01

    As a new approach to recognition and classification problems, photonic reservoir computing has such advantages as parallel information processing, power efficiency and high speed. In this paper, a photonic structure is proposed for reservoir computing and is investigated using a simple, yet non-partial, noisy time series prediction task. This study includes the application of a suitable topology with self-feedbacks in a network of SOAs - which lends the system a strong memory - and leads to adjusting adequate parameters resulting in perfect recognition accuracy (100%) for noise-free time series, which shows a 3% improvement over previous results. For the classification of noisy time series, the accuracy showed a 4% increase and amounted to 96%. Furthermore, an analytical approach is suggested for solving the rate equations, which leads to a substantial decrease in simulation time - an important consideration in the classification of large signals, such as in speech recognition - and better results compared with previous works.

  11. Innovative Processes in Computer Assisted Language Learning

    Directory of Open Access Journals (Sweden)

    Khaled M. Alhawiti

    2015-02-01

    Full Text Available Reading ability is considered one of the major components of language competency. From this perspective, selecting suitable texts for second language learners is a demanding task for the language instructor. This mixed (qualitative and quantitative) research study addresses innovative processes in computer-assisted language learning by surveying the reading level of ESL students and streamlining the content used in classrooms designed for them. The study is based on empirical research measuring the reading level of ESL students. The findings reveal that language-preparation procedures such as text shortening, together with assessment tools used for automatic text simplification, are beneficial for both the ESL students and the teachers.

  12. Standard candle central exclusive processes at the Tevatron and LHC

    CERN Document Server

    Harland-Lang, L A; Ryskin, M G; Stirling, W J

    2010-01-01

    Central exclusive production (CEP) processes in high-energy proton -- (anti)proton collisions offer a very promising framework within which to study both novel aspects of QCD and new physics signals. Among the many interesting processes that can be studied in this way, those involving the production of heavy (c,b) quarkonia and gamma gamma states have sufficiently well understood theoretical properties and sufficiently large cross sections that they can serve as `standard candle' processes with which we can benchmark predictions for new physics CEP at the CERN Large Hadron Collider. Motivated by the broad agreement with theoretical predictions of recent CEP measurements at the Fermilab Tevatron, we perform a detailed quantitative study of heavy quarkonia (chi and eta) and gamma gamma production at the Tevatron, RHIC and LHC, paying particular attention to the various uncertainties in the calculations. Our results confirm the rich phenomenology that these production processes offer at present and future high-e...

  13. Accelerating glassy dynamics using graphics processing units

    CERN Document Server

    Colberg, Peter H

    2009-01-01

    Modern graphics hardware offers peak performances close to 1 Tflop/s, and NVIDIA's CUDA provides a flexible and convenient programming interface to exploit these immense computing resources. We demonstrate the ability of GPUs to perform high-precision molecular dynamics simulations for nearly a million particles running stably over many days. Particular emphasis is put on the numerical long-time stability in terms of energy and momentum conservation. Floating point precision is a crucial issue here, and sufficient precision is maintained by double-single emulation of the floating point arithmetic. As a demanding test case, we have reproduced the slow dynamics of a binary Lennard-Jones mixture close to the glass transition. The improved numerical accuracy permits us to follow the relaxation dynamics of a large system over 4 non-trivial decades in time. Further, our data provide evidence for a negative power-law decay of the velocity autocorrelation function with exponent 5/2 in the close vicinity of the transi...
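
    The "double-single emulation" mentioned above represents each value as a pair of single-precision floats so that rounding error is carried along explicitly. A minimal sketch of that idea (a Knuth-style error-free two-sum plus a double-single addition, demonstrated on a long accumulation) is given below; the GPU kernel details are not reproduced and the accumulation example is purely illustrative.

        import numpy as np

        def two_sum(a, b):
            """Error-free transformation: return (s, e) with s = fl(a + b)
            and a + b = s + e exactly (Knuth two-sum), in single precision."""
            a = np.float32(a); b = np.float32(b)
            s = np.float32(a + b)
            v = np.float32(s - a)
            e = np.float32((a - np.float32(s - v)) + np.float32(b - v))
            return s, e

        def ds_add(a_hi, a_lo, b_hi, b_lo):
            """Add two double-single numbers (hi, lo pairs of float32)."""
            s, e = two_sum(a_hi, b_hi)
            e = np.float32(e + a_lo + b_lo)
            hi, lo = two_sum(s, e)
            return hi, lo

        # Classic demo: accumulate many small terms; plain float32 drifts,
        # the double-single accumulator stays close to the exact value (100.0).
        hi = np.float32(0.0); lo = np.float32(0.0); naive = np.float32(0.0)
        for _ in range(10**6):
            hi, lo = ds_add(hi, lo, np.float32(1e-4), np.float32(0.0))
            naive = np.float32(naive + np.float32(1e-4))
        print("double-single:", float(hi) + float(lo), " plain float32:", float(naive))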

  14. Eros details enhanced by computer processing

    Science.gov (United States)

    2000-01-01

    The NEAR camera's ability to show details of Eros's surface is limited by the spacecraft's distance from the asteroid. That is, the closer the spacecraft is to the surface, the more details are visible. However, mission scientists regularly use computer processing to squeeze an extra measure of information from returned data. In a technique known as 'superresolution', many images of the same scene acquired at very slightly different camera pointings are carefully overlain and processed to bring out details even smaller than would normally be visible. In this rendition, constructed out of 20 image frames acquired Feb. 12, 2000, the images have first been enhanced ('high-pass filtered') to accentuate small-scale details. Superresolution was then used to bring out features below the normal ability of the camera to resolve. Built and managed by The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, NEAR was the first spacecraft launched in NASA's Discovery Program of low-cost, small-scale planetary missions. See the NEAR web page at http://near.jhuapl.edu for more details.
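
    A minimal sketch of the shift-and-add flavour of superresolution described above is given below, assuming synthetic frames with known sub-pixel offsets; real pipelines estimate the offsets by image registration, and the NEAR processing chain is certainly more sophisticated.

        import numpy as np
        from scipy.ndimage import shift as subpixel_shift, zoom

        rng = np.random.default_rng(3)

        # Synthetic "truth" scene and 20 low-resolution frames with sub-pixel offsets
        truth = np.zeros((128, 128)); truth[40:60, 50:90] = 1.0
        offsets = rng.uniform(-0.5, 0.5, size=(20, 2))
        frames = [zoom(subpixel_shift(truth, off, order=3), 0.25, order=3) for off in offsets]

        # Shift-and-add: upsample each frame, undo its (here, known) offset, and average
        registered = []
        for frame, off in zip(frames, offsets):
            big = zoom(frame, 4, order=3)                  # back onto the high-resolution grid
            registered.append(subpixel_shift(big, -off, order=3))
        sr = np.mean(registered, axis=0)
        print(sr.shape)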

  15. System concept for electronic data processing in nuclear medicine

    International Nuclear Information System (INIS)

    Electronic data processing is seen here as a network. Parts of this network are measuring equipment, data pre-processing units conceived especially for measuring equipment, as well as a central computer of the clinic (called 'clinic computer' in the following). Good communication between data pre-processing, the clinic computer and the appropriate evaluation units guarantees a high degree of operational safety and reliability. This is very important for the use of data processing in clinical diagnostics. (orig./WB)

  16. Radiation processing in the United States

    International Nuclear Information System (INIS)

    Animal feeding studies, including the very large feeding studies on radiation-sterilized poultry products irradiated with a sterilizing dose of 58 kGy, revealed no harmful effects. This finding is corroborated by the very extensive analysis of the radiolytic products, which indicated that the radiolytic products, in the quantities found in the food, could not be expected to produce any toxic effect. It thus appears to be proven with reasonable certainty that no harm will result from the proposed use of the process. Accordingly, FDA is moving forward with approvals while allowing the required time for hearings and objections. On July 5, 1983 FDA permitted gamma irradiation for control of microbial contamination in dried spices and dehydrated vegetable seasoning at doses up to 10 kGy; on June 19, 1984 the approval was expanded to cover insect infestation; and additional seasonings and irradiation of dry or dehydrated enzyme preparations were approved on February 12 and June 4, 1985, respectively. In addition, in July 1985, FDA cleared irradiation of pork products with doses of 0.3 to 1 kGy for eliminating trichinosis. Approvals by other agencies, including the Food and Drug Administration, Department of Agriculture, the Nuclear Regulatory Commission, Occupational Safety and Health Administration, Department of Transportation, Environmental Protection Agency, and state and local authorities, are usually of a technological nature and can be obtained if the process is technologically feasible. (Namekawa, K.)

  17. Accelerating chemical database searching using graphics processing units.

    Science.gov (United States)

    Liu, Pu; Agrafiotis, Dimitris K; Rassokhin, Dmitrii N; Yang, Eric

    2011-08-22

    The utility of chemoinformatics systems depends on the accurate computer representation and efficient manipulation of chemical compounds. In such systems, a small molecule is often digitized as a large fingerprint vector, where each element indicates the presence/absence or the number of occurrences of a particular structural feature. Since in theory the number of unique features can be exceedingly large, these fingerprint vectors are usually folded into much shorter ones using hashing and modulo operations, allowing fast "in-memory" manipulation and comparison of molecules. There is increasing evidence that lossless fingerprints can substantially improve retrieval performance in chemical database searching (substructure or similarity), which has led to the development of several lossless fingerprint compression algorithms. However, any gains in storage and retrieval afforded by compression need to be weighed against the extra computational burden required for decompression before these fingerprints can be compared. Here we demonstrate that graphics processing units (GPU) can greatly alleviate this problem, enabling the practical application of lossless fingerprints on large databases. More specifically, we show that, with the help of a ~$500 ordinary video card, the entire PubChem database of ~32 million compounds can be searched in ~0.2-2 s on average, which is 2 orders of magnitude faster than a conventional CPU. If multiple query patterns are processed in batch, the speedup is even more dramatic (less than 0.02-0.2 s/query for 1000 queries). In the present study, we use the Elias gamma compression algorithm, which results in a compression ratio as high as 0.097. PMID:21696144
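
    For orientation, the sketch below shows the basic fingerprint-similarity search that such systems accelerate: bit-packed binary fingerprints compared with the Tanimoto coefficient. It is a CPU/NumPy illustration with random fingerprints and hypothetical sizes; the paper's lossless fingerprints, Elias gamma compression and GPU kernels are not reproduced.

        import numpy as np

        rng = np.random.default_rng(2)

        N_MOLS, N_BITS = 50_000, 1024              # hypothetical database size and fingerprint length
        db = rng.random((N_MOLS, N_BITS)) < 0.05    # sparse random binary fingerprints
        db_packed = np.packbits(db, axis=1)         # bit-packed fingerprint store

        # Lookup table: number of set bits for every possible byte value
        popcount = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(1)

        def tanimoto_search(query_bits, top_k=5):
            """Return indices and scores of the top_k fingerprints by Tanimoto similarity."""
            q_packed = np.packbits(query_bits)
            inter = popcount[np.bitwise_and(db_packed, q_packed)].sum(axis=1)
            db_on = popcount[db_packed].sum(axis=1)
            q_on = popcount[q_packed].sum()
            union = db_on + q_on - inter
            sim = inter / np.maximum(union, 1)
            idx = np.argsort(sim)[::-1][:top_k]
            return idx, sim[idx]

        idx, scores = tanimoto_search(db[12345])
        print(idx, scores)   # the query molecule itself should come back with similarity 1.0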

  18. 2009 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-09-01

    This report presents the 2009 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of CNTA was transferred from the DOE Office of Environmental Management to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site from October 2008 through December 2009. It also represents the first year of the enhanced monitoring network and begins the new 5-year proof-of-concept monitoring period that is intended to validate the compliance boundary

  19. Closure Report Central Nevada Test Area Subsurface Corrective Action Unit 443 January 2016

    Energy Technology Data Exchange (ETDEWEB)

    Findlay, Rick [US Department of Energy, Washington, DC (United States). Office of Legacy Management

    2015-11-01

    The U.S. Department of Energy (DOE) Office of Legacy Management (LM) prepared this Closure Report for the subsurface Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA), Nevada, Site. CNTA was the site of a 0.2- to 1-megaton underground nuclear test in 1968. Responsibility for the site’s environmental restoration was transferred from the DOE, National Nuclear Security Administration, Nevada Field Office to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order (FFACO 1996, as amended 2011) and all applicable Nevada Division of Environmental Protection (NDEP) policies and regulations. This Closure Report provides justification for closure of CAU 443 and provides a summary of completed closure activities; describes the selected corrective action alternative; provides an implementation plan for long-term monitoring with well network maintenance and approaches/policies for institutional controls (ICs); and presents the contaminant, compliance, and use-restriction boundaries for the site.

  20. 2010 Groundwater Monitoring Report Central Nevada Test Area, Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2011-02-01

    This report presents the 2010 groundwater monitoring results collected by the U.S. Department of Energy (DOE) Office of Legacy Management (LM) for the Central Nevada Test Area (CNTA) Subsurface Corrective Action Unit (CAU) 443. Responsibility for the environmental site restoration of CNTA was transferred from the DOE Office of Environmental Management to LM on October 1, 2006. The environmental restoration process and corrective action strategy for CAU 443 are conducted in accordance with the Federal Facility Agreement and Consent Order entered into by DOE, the U.S. Department of Defense, and the State of Nevada. The corrective action strategy for the site includes proof-of-concept monitoring in support of site closure. This report summarizes investigation activities associated with CAU 443 that were conducted at the site from December 2009 through December 2010. It also represents the second year of the enhanced monitoring network and the 5-year proof-of-concept monitoring period that is intended to validate the compliance boundary

  1. Evaluation of the Central Hearing Process in Parkinson Patients

    Directory of Open Access Journals (Sweden)

    Santos, Rosane Sampaio

    2011-04-01

    Full Text Available Introduction: Parkinson disease (PD) is a degenerative disease with an insidious character, impairing the central nervous system and causing biological, psychological and social changes. It shows motor signs and symptoms characterized by trembling, postural instability, rigidity and bradykinesia. Objective: To evaluate the central hearing function in PD patients. Method: A descriptive, prospective and transversal study, in which 10 individuals diagnosed with PD, named the study group (SG), and 10 normally hearing individuals, named the control group (CG), were evaluated, with an average age of 63.8 years (SD 5.96). Both groups went through otorhinolaryngological and ordinary audiological evaluations, and the dichotic test of alternate disyllables (SSW). Results: In the quantitative analysis, the CG showed 80% normality on competitive right-ear hearing (RC) and 60% on competitive left-ear hearing (LC), in comparison with the SG, which presented 70% on RC and 40% on LC. In the qualitative analysis, the biggest percentage of errors in the SG was evident in the order effect. The results showed a difficulty in identifying a sound when there is another competing sound, and in memory ability. Conclusion: A qualitative and quantitative difference was observed in the SSW test between the evaluated groups, although the statistical data do not show significant differences. The importance of evaluating the central hearing process is emphasized, as it contributes to the procedures to be taken during therapeutic follow-up.

  2. Invalidation: a central process underlying maltreatment of women with disabilities.

    Science.gov (United States)

    Hassouneh-Phillips, Dena; McNeff, Elizabeth; Powers, Laurie; Curry, Mary Ann

    2005-01-01

    Recent qualitative studies indicate that maltreatment of women with disabilities by health care providers is a serious quality of care issue. To begin to address this problem, we conducted a secondary analysis of data derived from three qualitative studies of abuse of women with disabilities. Findings identified Invalidation as a central process underlying maltreatment. Invalidation was characterized by health care providers Taking Over care, Discounting, Objectifying, and Hurting women with disabilities during health care encounters. These findings highlight the need to educate health care providers about social and interpersonal aspects of disability and address the problem of Invalidation in health care settings. PMID:16048867

  3. General circulation model simulations of recent cooling in the east-central United States

    Science.gov (United States)

    Robinson, Walter A.; Reudy, Reto; Hansen, James E.

    2002-12-01

    In ensembles of retrospective general circulation model (GCM) simulations, surface temperatures in the east-central United States cool between 1951 and 1997. This cooling, which is broadly consistent with observed surface temperatures, is present in GCM experiments driven by observed time varying sea-surface temperatures (SSTs) in the tropical Pacific, whether or not increasing greenhouse gases and other time varying climate forcings are included. Here we focus on ensembles with fixed radiative forcing and with observed varying SST in different regions. In these experiments the trend and variability in east-central U.S. surface temperatures are tied to tropical Pacific SSTs. Warm tropical Pacific SSTs cool U.S. temperatures by diminishing solar heating through an increase in cloud cover. These associations are embedded within a year-round response to warm tropical Pacific SST that features tropospheric warming throughout the tropics and regions of tropospheric cooling in midlatitudes. Precipitable water vapor over the Gulf of Mexico and the Caribbean and the tropospheric thermal gradient across the Gulf Coast of the United States increase when the tropical Pacific is warm. In observations, recent warming in the tropical Pacific is also associated with increased precipitable water over the southeast United States. The observed cooling in the east-central United States, relative to the rest of the globe, is accompanied by increased cloud cover, though year-to-year variations in cloud cover, U.S. surface temperatures, and tropical Pacific SST are less tightly coupled in observations than in the GCM.

  4. GPU-accelerated micromagnetic simulations using cloud computing

    Science.gov (United States)

    Jermain, C. L.; Rowlands, G. E.; Buhrman, R. A.; Ralph, D. C.

    2016-03-01

    Highly parallel graphics processing units (GPUs) can improve the speed of micromagnetic simulations significantly as compared to conventional computing using central processing units (CPUs). We present a strategy for performing GPU-accelerated micromagnetic simulations by utilizing cost-effective GPU access offered by cloud computing services with an open-source Python-based program for running the MuMax3 micromagnetics code remotely. We analyze the scaling and cost benefits of using cloud computing for micromagnetics.

  5. Steroid induced central serous retinopathy following follicular unit extraction in androgenic alopecia

    Directory of Open Access Journals (Sweden)

    Rakesh Tilak Raj

    2016-06-01

    Full Text Available Corticosteroids are commonly used by dermatologists worldwide for various conditions and procedures. The development of central serous retinopathy is a lesser known complication, occurring in <10% of the cases with steroid use. This case report highlights the development of central serous retinopathy after prescribing a low dose of prednisolone 20 mg per day for androgenic alopecia during post-surgical follow-up of follicular unit extraction (FUE) surgery, which recovered spontaneously after gradual withdrawal of steroids. Therefore, awareness is required for its early detection and management, as it has the potential of causing irreversible visual impairment. [Int J Basic Clin Pharmacol 2016; 5(3): 1152-1155]

  6. Horizontal velocities in the central and eastern United States from GPS surveys during the 1987-1996 interval

    International Nuclear Information System (INIS)

    The National Geodetic Survey and the Nuclear Regulatory Commission jointly organized GPS surveys in 1987, 1990, 1993, and 1996 to search for crustal deformation in the central and eastern United States (east of longitude 108 degrees W). We have analyzed the data of these four surveys in combination with VLBI data observed during the 1979-1995 interval and GPS data for 22 additional surveys observed during the 1990-1996 interval. These latter GPS surveys served to establish accurately positioned geodetic marks in various states. Accordingly, we have computed horizontal velocities for 64 GPS sites and 12 VLBI sites relative to a reference frame for which the interior of the North American plate is considered fixed on average. None of our derived velocities exceeds 6 mm/yr in magnitude. Moreover, the derived velocity at each GPS site is statistically zero at the 95% confidence level except for the site BOLTON in central Ohio and the site BEARTOWN in southeastern Pennsylvania. However, as statistical theory would allow approximately 5% of the 64 GPS sites to fail our zero-velocity hypothesis, we are uncertain whether or not these estimated velocities for BOLTON and BEARTOWN reflect actual motion relative to the North American plate. We also computed horizontal strain rates for the cells formed by a 1 degree by 1 degree grid spanning the central and eastern United States. Corresponding shearing rates are everywhere less than 60 nanoradians/yr in magnitude, and no shearing rate differs statistically from zero at the 95% confidence level except for a grid cell near BEARTOWN whose rate is 57 ± 26 nanoradians/yr. Also, corresponding areal dilatation rates are everywhere less than 40 nanostrain/yr in magnitude, and no dilatation rate differs statistically from zero at the 95% confidence level.

  7. Central Computer Science Concepts to Research-Based Teacher Training in Computer Science: An Experimental Study

    Science.gov (United States)

    Zendler, Andreas; Klaudt, Dieter

    2012-01-01

    The significance of computer science for economics and society is undisputed. In particular, computer science is acknowledged to play a key role in schools (e.g., by opening multiple career paths). The provision of effective computer science education in schools is dependent on teachers who are able to properly represent the discipline and whose…

  8. 77 FR 15397 - Dominican Republic-Central America-United States Free Trade Agreement; Notice of Determination...

    Science.gov (United States)

    2012-03-15

    ... of the Secretary Dominican Republic-Central America-United States Free Trade Agreement; Notice of... Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR). Father Christopher Hartley.... Department of Labor. ACTION: Notice. SUMMARY: The Office of Trade and Labor Affairs (OTLA) gives notice...

  9. 77 FR 51828 - Dominican Republic-Central America-United States Free Trade Agreement; Notice of Extension of the...

    Science.gov (United States)

    2012-08-27

    ... of the Secretary Dominican Republic--Central America--United States Free Trade Agreement; Notice of... Republic--Central America--United States Free Trade Agreement (CAFTA-DR). On December 22, 2011, OTLA... International Labor Affairs, U.S. Department of Labor. ACTION: Notice. The Office of Trade and Labor...

  10. Efficient Nonbonded Interactions for Molecular Dynamics on a Graphics Processing Unit

    OpenAIRE

    Eastman, Peter; Pande, Vijay S.

    2010-01-01

    We describe an algorithm for computing nonbonded interactions with cutoffs on a graphics processing unit (GPU). We have incorporated it into OpenMM, a library for performing molecular simulations on high performance computer architectures. We benchmark it on a variety of systems including boxes of water molecules, proteins in explicit solvent, a lipid bilayer, and proteins with implicit solvent. The results demonstrate that its performance scales linearly with the number of atoms over a wide ...
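
    A much-simplified CPU sketch of the underlying idea, computing a cutoff Lennard-Jones energy with a cell list so that only nearby pairs are examined, is shown below; the box size, cutoff and particle count are assumptions, and the actual OpenMM GPU algorithm (neighbour tiles, exclusions, force accumulation) is considerably more involved.

        import numpy as np

        rng = np.random.default_rng(4)

        N, BOX, CUTOFF = 2000, 30.0, 3.0     # hypothetical particle count, box edge, cutoff
        EPS, SIGMA = 1.0, 1.0
        pos = rng.uniform(0, BOX, size=(N, 3))

        def lj_energy_with_cutoff(pos):
            """Total Lennard-Jones energy using a cell list so only nearby pairs are tested."""
            ncell = int(BOX // CUTOFF)
            cell_of = np.floor(pos / (BOX / ncell)).astype(int) % ncell
            cells = {}
            for i, c in enumerate(map(tuple, cell_of)):
                cells.setdefault(c, []).append(i)

            energy = 0.0
            for (cx, cy, cz), members in cells.items():
                # gather atoms in this cell and its 26 periodic neighbours
                neigh = []
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            neigh += cells.get(((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell), [])
                neigh = np.array(neigh)
                for i in members:
                    js = neigh[neigh > i]                    # count each pair only once
                    d = pos[js] - pos[i]
                    d -= BOX * np.round(d / BOX)             # minimum-image convention
                    r2 = (d * d).sum(axis=1)
                    r2 = r2[r2 < CUTOFF ** 2]
                    sr6 = (SIGMA ** 2 / r2) ** 3
                    energy += np.sum(4 * EPS * (sr6 ** 2 - sr6))
            return energy

        print(lj_energy_with_cutoff(pos))

    The cell list keeps the pair search local (each atom is compared only with atoms in its own and adjacent cells), which is the same reason the GPU version scales well with system size.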

  11. An Overview of Computer-Based Natural Language Processing.

    Science.gov (United States)

    Gevarter, William B.

    Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

  12. Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU)

    CERN Document Server

    Hagiwara, K; Okamura, N; Rainwater, D L; Stelzer, T

    2009-01-01

    We use a graphics processing unit (GPU) for fast calculations of helicity amplitudes of quark and gluon scattering processes in massless QCD. New HEGET ({\\bf H}ELAS {\\bf E}valuation with {\\bf G}PU {\\bf E}nhanced {\\bf T}echnology) codes for gluon self-interactions are introduced, and a C++ program to convert the MadGraph generated FORTRAN codes into HEGET codes in CUDA (a C-platform for general purpose computing on GPU) is created. Because of the proliferation of the number of Feynman diagrams and the number of independent color amplitudes, the maximum number of final state jets we can evaluate on a GPU is limited to 4 for pure gluon processes ($gg\\to 4g$), or 5 for processes with one or more quark lines such as $q\\bar{q}\\to 5g$ and $qq\\to qq+3g$. Compared with the usual CPU-based programs, we obtain 60-100 times better performance on the GPU, except for 5-jet production processes and the $gg\\to 4g$ processes for which the GPU gain over the CPU is about 20.

  13. Kinematic modelling of disc galaxies using graphics processing units

    Science.gov (United States)

    Bekiaris, G.; Glazebrook, K.; Fluke, C. J.; Abraham, R.

    2016-01-01

    With large-scale integral field spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the graphics processing unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and nested sampling algorithms, but also a naive brute-force approach based on nested grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ˜100 when compared to a single-threaded CPU, and up to a factor of ˜10 when compared to a multithreaded dual CPU configuration. Our method's accuracy, precision and robustness are assessed by successfully recovering the kinematic properties of simulated data, and also by verifying the kinematic modelling results of galaxies from the GHASP and DYNAMO surveys as found in the literature. The resulting GBKFIT code is available for download from: http://supercomputing.swin.edu.au/gbkfit.
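
    The model-fitting step referred to above can be illustrated with a toy example: fitting an arctangent rotation-curve model to a mock line-of-sight velocity field using SciPy's Levenberg-Marquardt implementation. The model form, parameter values and noise level below are assumptions for illustration only and do not reproduce GBKFIT's full disc model or its GPU implementation.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)

        def rotation_velocity(r, v_max, r_turn):
            """Simple arctangent rotation curve."""
            return v_max * (2 / np.pi) * np.arctan(r / r_turn)

        def los_velocity(params, x, y):
            """Line-of-sight velocity of a thin rotating disc (position angle fixed at 0)."""
            v_sys, v_max, r_turn, inc = params
            r = np.hypot(x, y / np.cos(inc))                     # deprojected radius
            cos_theta = np.divide(x, r, out=np.zeros_like(r), where=r > 0)
            return v_sys + rotation_velocity(r, v_max, r_turn) * np.sin(inc) * cos_theta

        # Mock observed velocity field
        x, y = np.meshgrid(np.linspace(-20, 20, 40), np.linspace(-20, 20, 40))
        true_params = [10.0, 220.0, 3.0, np.radians(60)]
        v_obs = los_velocity(true_params, x, y) + rng.normal(0, 5, x.shape)

        def residuals(params):
            return (los_velocity(params, x, y) - v_obs).ravel()

        fit = least_squares(residuals, x0=[0.0, 150.0, 5.0, np.radians(45)], method="lm")
        print(fit.x)     # should land close to the true parameters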

  14. Parallelizing Kernel Polynomial Method Applying Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Shinichi Yamagiwa

    2012-01-01

    Full Text Available The Kernel Polynomial Method (KPM) is one of the fast diagonalization methods used for simulations of quantum systems in the research fields of condensed matter physics and chemistry. The algorithm is difficult to parallelize on a cluster computer or a supercomputer due to its fine-grain recursive calculations. This paper proposes an implementation of the KPM on recent graphics processing units (GPUs), where the recursive calculations can be parallelized in a massively parallel environment. The paper also describes performance evaluations for cases in which actual simulation parameters are applied, one chosen to increase the computational intensity and another to increase the amount of memory used. Moreover, the impact of applying the Compressed Row Storage (CRS) format to the KPM algorithm is also discussed. Finally, it concludes that the GPU promises very high performance compared to the CPU and reduces the overall simulation time.
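
    The recursive calculation at the core of KPM is the Chebyshev moment recursion. A minimal CPU sketch is given below for a hypothetical sparse tight-binding Hamiltonian, using a stochastic trace estimate; the GPU parallelization and the CRS-specific optimizations discussed in the paper are not reproduced.

        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(6)

        # Hypothetical sparse Hamiltonian: 1D tight-binding chain, rescaled so the spectrum lies in (-1, 1)
        L = 2000
        H = sp.diags([np.ones(L - 1), np.ones(L - 1)], [-1, 1], format="csr")
        H = H / 2.1

        def kpm_moments(H, n_moments=256, n_vectors=10):
            """Stochastic estimate of the KPM moments mu_n = Tr T_n(H) / dim."""
            dim = H.shape[0]
            mu = np.zeros(n_moments)
            for _ in range(n_vectors):
                r = rng.choice([-1.0, 1.0], size=dim)   # random-phase vector
                t_prev, t_curr = r, H @ r               # T_0|r>, T_1|r>
                mu[0] += r @ t_prev
                mu[1] += r @ t_curr
                for n in range(2, n_moments):           # Chebyshev recursion T_n = 2 H T_{n-1} - T_{n-2}
                    t_prev, t_curr = t_curr, 2 * (H @ t_curr) - t_prev
                    mu[n] += r @ t_curr
            return mu / (n_vectors * dim)

        mu = kpm_moments(H)
        print(mu[:5])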

  15. Kinematic Modelling of Disc Galaxies using Graphics Processing Units

    CERN Document Server

    Bekiaris, Georgios; Fluke, Christopher J; Abraham, Roberto

    2015-01-01

    With large-scale Integral Field Spectroscopy (IFS) surveys of thousands of galaxies currently under-way or planned, the astronomical community is in need of methods, techniques and tools that will allow the analysis of huge amounts of data. We focus on the kinematic modelling of disc galaxies and investigate the potential use of massively parallel architectures, such as the Graphics Processing Unit (GPU), as an accelerator for the computationally expensive model-fitting procedure. We review the algorithms involved in model-fitting and evaluate their suitability for GPU implementation. We employ different optimization techniques, including the Levenberg-Marquardt and Nested Sampling algorithms, but also a naive brute-force approach based on Nested Grids. We find that the GPU can accelerate the model-fitting procedure up to a factor of ~100 when compared to a single-threaded CPU, and up to a factor of ~10 when compared to a multi-threaded dual CPU configuration. Our method's accuracy, precision and robustness a...

  16. Graphics Processing Unit Acceleration of Gyrokinetic Turbulence Simulations

    Science.gov (United States)

    Hause, Benjamin; Parker, Scott; Chen, Yang

    2013-10-01

    We find a substantial increase in on-node performance using Graphics Processing Unit (GPU) acceleration in gyrokinetic delta-f particle-in-cell simulation. Optimization is performed on a two-dimensional slab gyrokinetic particle simulation using the Portland Group Fortran compiler with the OpenACC compiler directives and Fortran CUDA. Mixed implementation of both OpenACC and CUDA is demonstrated. CUDA is required for optimizing the particle deposition algorithm. We have implemented the GPU acceleration on a third generation Core i7 gaming PC with two NVIDIA GTX 680 GPUs. We find comparable, or better, acceleration relative to the NERSC DIRAC cluster with the NVIDIA Tesla C2050 computing processor. The Tesla C2050 is about 2.6 times more expensive than the GTX 580 gaming GPU. We also see enormous speedups (10 or more) on the Titan supercomputer at Oak Ridge with Kepler K20 GPUs. Results show speed-ups comparable to or better than those of OpenMP models utilizing multiple cores. The use of hybrid OpenACC, CUDA Fortran, and MPI models across many nodes will also be discussed. Optimization strategies will be presented. We will discuss progress on optimizing the comprehensive three dimensional general geometry GEM code.

  17. Marrying Content and Process in Computer Science Education

    Science.gov (United States)

    Zendler, A.; Spannagel, C.; Klaudt, D.

    2011-01-01

    Constructivist approaches to computer science education emphasize that as well as knowledge, thinking skills and processes are involved in active knowledge construction. K-12 computer science curricula must not be based on fashions and trends, but on contents and processes that are observable in various domains of computer science, that can be…

  18. Unit cell-based computer-aided manufacturing system for tissue engineering

    International Nuclear Information System (INIS)

    Scaffolds play an important role in the regeneration of artificial tissues or organs. A scaffold is a porous structure with a micro-scale inner architecture in the range of several to several hundreds of micrometers. Therefore, computer-aided construction of scaffolds should provide sophisticated functionality for porous structure design and a tool path generation strategy that can achieve micro-scale architecture. In this study, a new unit cell-based computer-aided manufacturing (CAM) system was developed for the automated design and fabrication of a porous structure with micro-scale inner architecture that can be applied to composite tissue regeneration. The CAM system was developed by first defining a data structure for the computing process of a unit cell representing a single pore structure. Next, an algorithm and software were developed and applied to construct porous structures with a single or multiple pore design using solid freeform fabrication technology and a 3D tooth/spine computer-aided design model. We showed that this system is quite feasible for the design and fabrication of a scaffold for tissue engineering. (paper)
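
    As a toy illustration of the unit-cell data structure idea described above, the sketch below defines a cubic pore unit cell and a scaffold as a lattice of repeated cells; the field names, dimensions and porosity estimate are assumptions and do not reflect the actual CAM system or its tool-path generation.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class UnitCell:
            """A single pore unit: a cubic cell with an edge length and strut width (micrometres)."""
            edge: float          # outer edge length of the cell
            strut: float         # strut (wall) thickness

            @property
            def pore_size(self) -> float:
                return self.edge - 2 * self.strut

        @dataclass
        class Scaffold:
            cell: UnitCell
            nx: int
            ny: int
            nz: int

            def cell_origins(self) -> List[Tuple[float, float, float]]:
                """Origins of every repeated unit cell in the scaffold lattice."""
                e = self.cell.edge
                return [(i * e, j * e, k * e)
                        for i in range(self.nx)
                        for j in range(self.ny)
                        for k in range(self.nz)]

            def porosity(self) -> float:
                """Approximate porosity: open pore volume over total cell volume."""
                return (self.cell.pore_size ** 3) / (self.cell.edge ** 3)

        scaffold = Scaffold(UnitCell(edge=500.0, strut=100.0), nx=10, ny=10, nz=4)
        print(len(scaffold.cell_origins()), f"{scaffold.porosity():.2f}")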

  19. A Central Line Care Maintenance Bundle for the Prevention of Central Line-Associated Bloodstream Infection in Non-Intensive Care Unit Settings.

    Science.gov (United States)

    O'Neil, Caroline; Ball, Kelly; Wood, Helen; McMullen, Kathleen; Kremer, Pamala; Jafarzadeh, S Reza; Fraser, Victoria; Warren, David

    2016-06-01

    OBJECTIVE To evaluate a central line care maintenance bundle to reduce central line-associated bloodstream infection (CLABSI) in non-intensive care unit settings. DESIGN Before-after trial with 12-month follow-up period. SETTING A 1,250-bed teaching hospital. PARTICIPANTS Patients with central lines on 8 general medicine wards. Four wards received the intervention and 4 served as controls. INTERVENTION A multifaceted catheter care maintenance bundle consisting of educational programs for nurses, update of hospital policies, visual aids, a competency assessment, process monitoring, regular progress reports, and consolidation of supplies necessary for catheter maintenance. RESULTS Data were collected for 25,542 catheter-days including 43 CLABSI (rate, 1.68 per 1,000 catheter-days) and 4,012 catheter dressing observations. Following the intervention, a 2.5% monthly decrease in the CLABSI incidence density was observed on intervention floors, but this was not statistically significant (95% CI, -5.3% to 0.4%). On control floors, there was a smaller but marginally significant decrease in CLABSI incidence during the study (change in monthly rate, -1.1%; 95% CI, -2.1% to -0.1%). Implementation of the bundle was associated with improvement in catheter dressing compliance on intervention wards (78.8% compliance before intervention vs 87.9% during intervention/follow-up) as well as on control wards (84.9% compliance before intervention vs 90.9% during intervention/follow-up; P=.001). CONCLUSIONS A multifaceted program to improve catheter care was associated with improvement in catheter dressing care but no change in CLABSI rates. Additional study is needed to determine strategies to prevent CLABSI in non-intensive care unit patients. Infect Control Hosp Epidemiol 2016;37:692-698. PMID:26999746

  20. Organic facies characteristics of the Pliocene coaly units, central Anatolia, Ilgin (Konya / Turkey)

    Science.gov (United States)

    Altunsoy, Mehmet; Ozdoğan, Meltem; Ozcelik, Orhan; Ünal, Neslihan

    2015-04-01

    This study aims to determine the organic facies characteristics of the Pliocene coaly units in the Ilgın (Konya, Central Anatolia, Turkey) area. The Pliocene units (Dursunlu Formation) are composed of sandstone, siltstone, marl, mudstone and coal in the region. The lignite-bearing interval has a thickness varying between 100 and 300 m. Organic matter is composed predominantly of terrestrial material, with a minor contribution of algal and amorphous material. Organic matter in these units generally has low hydrogen index (HI) values and high oxygen index (OI) values, mostly characteristic of type III kerogen (partly type II kerogen). Organic matter in the samples is immature to marginally mature in terms of organic maturation. Total organic carbon (TOC) values are generally between 0.03 and 51.7%, but reach 53.4% in the formation. Tmax values vary between 392 and 433 °C. Organic facies types C, CD and D were identified in the investigated units. The C, CD and D facies are related to marl, mudstone and coal lithofacies. These facies are characterized by average HI values around 102 (equivalent to type II/III kerogen), TOC around 12.2%, and an average S2 of 14.6 mg HC/g of rock. The organic matter is terrestrial, partly oxidized/oxidized/highly oxidized, decomposed and reworked. Organic facies C and CD are the "gas-prone" facies, whereas organic facies D is nongenerative. Keywords: Central Anatolia, Pliocene, Organic Facies, Ilgın, Coal

  1. Hydroclimatological Processes in the Central American Dry Corridor

    Science.gov (United States)

    Hidalgo, H. G.; Duran-Quesada, A. M.; Amador, J. A.; Alfaro, E. J.; Mora, G.

    2015-12-01

    This work studies the hydroclimatological variability and the climatic precursors of drought in the Central American Dry Corridor (CADC), a subregion located on the Pacific coast of Southern Mexico and Central America. Droughts are frequent in the CADC, which features higher climatological aridity than the highlands and Caribbean coast of Central America. The CADC region presents large social vulnerability to hydroclimatological impacts originating from dry conditions, as a large part of the population depends on subsistence agriculture. The influence of large-scale climatic precursors such as ENSO, the Caribbean Low-Level Jet (CLLJ), low frequency signals from the Pacific and Caribbean, and some intra-seasonal signals such as the MJO is evaluated. Previous work by the authors identified a connection between the CLLJ and CADC precipitation. This connection is more complex than a simple rain-shadow effect, and instead it was suggested that convection at the exit of the jet in the Costa Rica and Nicaragua Caribbean coasts and consequent subsidence in the Pacific could be playing a role in this connection. During summer, when the CLLJ is stronger than normal, the Inter-Tropical Convergence Zone (located mainly in the Pacific) displaces to a more southern position, and vice-versa, suggesting a connection between these two processes that has not been fully explained yet. The role of the Western Hemisphere Warm Pool also needs more research. All this is important, as it suggests a working hypothesis that during summer, the effect of the Caribbean wind strength may be responsible for the dry climate of the CADC. Another previous analysis by the authors was based on downscaled precipitation and temperature from GCMs and the NCEP/NCAR reanalysis. The data were later used in a hydrological model. Results showed a negative trend in reanalysis' runoff for 1980-2012 in San José (Costa Rica) and Tegucigalpa (Honduras). This highly significant drying trend

  2. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  3. Method and unit for processing end-of-life tyres

    OpenAIRE

    López Gómez, Félix Antonio; Alguacil, Francisco José; Álvarez Centeno, Teresa; Lobato Ortega, Belén; Grau Almirall, José; Grau García, Roger; Grau García, Ferrán; Grau García, Oriol

    2010-01-01

    [EN] The invention relates to a method and unit for processing granulated end-of-life tyres using a process comprising: distillation of the constituent polymer materials of the end-of-life tyres, such as natural and synthetic rubber; and gasification of the carbon black or solid residue.

  4. A 1.5 GFLOPS Reciprocal Unit for Computer Graphics

    DEFF Research Database (Denmark)

    Nannarelli, Alberto; Rasmussen, Morten Sleth; Stuart, Matthias Bo

    2006-01-01

    The reciprocal operation 1/d is a frequent operation performed in graphics processors (GPUs). In this work, we present the design of a radix-16 reciprocal unit based on the algorithm combining the traditional digit-by-digit algorithm and the approximation of the reciprocal by one Newton-Raphson iteration. We design a fully pipelined single-precision unit to be used in GPUs. The results of the implementation show that the proposed unit can sustain a higher throughput than that of a unit implementing the normal Newton-Raphson approximation, and its area is smaller....
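
    The Newton-Raphson refinement mentioned above uses the iteration x <- x(2 - d*x), which roughly doubles the number of correct bits per step. The sketch below applies one such refinement to a coarse seed in single precision; the seed construction merely stands in for the paper's radix-16 digit-recurrence stage, and the table width is an assumption.

        import numpy as np

        def reciprocal(d, table_bits=4):
            """Approximate 1/d for d in [1, 2): coarse seed followed by
            one Newton-Raphson refinement x <- x * (2 - d * x)."""
            d = np.float32(d)
            # coarse seed: reciprocal of d truncated to `table_bits` fractional bits
            # (stands in for a lookup table or a digit-recurrence first stage)
            d_trunc = np.floor(d * (1 << table_bits)) / np.float32(1 << table_bits)
            x = np.float32(1.0) / np.float32(d_trunc + 2.0 ** -(table_bits + 1))
            # one Newton-Raphson iteration roughly squares the relative error
            x = np.float32(x * (np.float32(2.0) - d * x))
            return x

        for d in (1.0, 1.25, 1.5, 1.9375):
            approx = reciprocal(np.float32(d))
            print(d, float(approx), abs(float(approx) - 1.0 / d))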

  5. Computer-aided modeling of aluminophosphate zeolites as packings of building units

    KAUST Repository

    Peskov, Maxim

    2012-03-22

    New building schemes of aluminophosphate molecular sieves from packing units (PUs) are proposed. We have investigated 61 framework types discovered in zeolite-like aluminophosphates and have identified important PU combinations using a recently implemented computational algorithm of the TOPOS package. All PUs whose packing completely determines the overall topology of the aluminophosphate framework were described and catalogued. We have enumerated 235 building models for the aluminophosphates belonging to 61 zeolite framework types, from ring- or cage-like PU clusters. It is indicated that PUs can be considered as precursor species in the zeolite synthesis processes. © 2012 American Chemical Society.

  6. Operating The Central Process Systems At Glenn Research Center

    Science.gov (United States)

    Weiler, Carly P.

    2004-01-01

    As a research facility, the Glenn Research Center (GRC) trusts and expects all the systems controlling its facilities to run properly and efficiently in order for its research and operations to occur proficiently and on time. While there are many systems necessary for the operations at GRC, one of the most vital is the Central Process Systems (CPS). The CPS controls operations used by GRC's wind tunnels, propulsion systems lab, engine components research lab, and compressor, turbine and combustor test cells. Used widely throughout the lab, it operates equipment such as exhausters, chillers, cooling towers, compressors, dehydrators, and other such equipment. Through parameters such as pressure, temperature, speed, flow, etc., it performs its primary operations on the major systems of Electrical Dispatch (ED), Central Air Dispatch (CAD), Central Air Equipment Building (CAEB), and Engine Research Building (ERB). In order for the CPS to continue its operations at Glenn, a new contract must be awarded. Consequently, one of my primary responsibilities was assisting the Source Evaluation Board (SEB) with the process of awarding the recertification contract of the CPS. The job of the SEB was to evaluate the proposals of the contract bidders and then to present their findings to the Source Selecting Official (SSO). Before the evaluations began, the Center Director established the level of the competition. For this contract, the competition was limited to those companies classified as small, disadvantaged businesses. After an industry briefing that explained to qualified companies the CPS and the type of work required, each of the interested companies then submitted proposals addressing three components: Mission Suitability, Cost, and Past Performance. These proposals were based on the Statement of Work (SOW) written by the SEB. After companies submitted their proposals, the SEB reviewed all three components and then presented their results to the SSO. While the

  7. Study guide to accompany computers data and processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Study Guide to Accompany Computer and Data Processing provides information pertinent to the fundamental aspects of computers and computer technology. This book presents the key benefits of using computers.Organized into five parts encompassing 19 chapters, this book begins with an overview of the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. This text then introduces computer hardware and describes the processor. Other chapters describe how microprocessors are made and describe the physical operation of computers. This book discusses as w

  8. A Framework for Smart Distribution of Bio-signal Processing Units in M-Health

    OpenAIRE

    Mei, Hailiang; Widya, Ing; Broens, Tom; Pawar, Pravin; Halteren, van, AT; Shishkov, Boris; Sinderen, van, Marten

    2007-01-01

    This paper introduces the Bio-Signal Processing Unit (BSPU) as a functional component that hosts (part of) the bio-signal information processing algorithms that are needed for an m-health application. With our approach, the BSPUs can be dynamically assigned to available nodes between the bio-signal source and the application to optimize the use of computation and communication resources. The main contributions of this paper are: (1) it presents the supporting architecture (e.g. components an...

  9. Image-Processing Software For A Hypercube Computer

    Science.gov (United States)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    Concurrent Image Processing Executive (CIPE) is software system intended to develop and use image-processing application programs on concurrent computing environment. Designed to shield programmer from complexities of concurrent-system architecture, it provides interactive image-processing environment for end user. CIPE utilizes architectural characteristics of particular concurrent system to maximize efficiency while preserving architectural independence from user and programmer. CIPE runs on Mark-IIIfp 8-node hypercube computer and associated SUN-4 host computer.

  10. Mapping dynamic programming algorithms on graphics processing units

    OpenAIRE

    Hanif, Muhammad Kashif

    2014-01-01

    Alignment is the fundamental operation used to compare biological sequences. It also serves to identify regions of similarity that are eventually consequences of structural, functional, or evolutionary relationships. Today, the processing of sequences from large DNA or protein databases is a big challenge. Graphics Processing Units (GPUs) are based on a highly parallel, many-core streaming architecture and can be used to tackle the processing of large biological data. In the thesis, progressi...

  11. Sensor-based mapping of soil quality on degraded claypan landscapes of the central United States

    Science.gov (United States)

    Claypan soils (Epiaqualfs) in the central USA have experienced severe erosion as a result of tillage practices of the late 1800s and 1900s. Because of the site-specific nature of erosion processes within claypan fields, research is needed to achieve cost-effective sensing and mapping of soil and lan...

  12. Advanced computational modelling for drying processes – A review

    International Nuclear Information System (INIS)

    Highlights: • Understanding the product dehydration process is a key aspect in drying technology. • Advanced modelling thereof plays an increasingly important role for developing next-generation drying technology. • Dehydration modelling should be more energy-oriented. • An integrated “nexus” modelling approach is needed to produce more energy-smart products. • Multi-objective process optimisation requires development of more complete multiphysics models. - Abstract: Drying is one of the most complex and energy-consuming chemical unit operations. R and D efforts in drying technology have skyrocketed in the past decades, as new drivers emerged in this industry next to procuring prime product quality and high throughput, namely reduction of energy consumption and carbon footprint as well as improving food safety and security. Solutions are sought in optimising existing technologies or developing new ones which increase energy and resource efficiency, use renewable energy, recuperate waste heat and reduce product loss, thus also the embodied energy therein. Novel tools are required to push such technological innovations and their subsequent implementation. Particularly computer-aided drying process engineering has a large potential to develop next-generation drying technology, including more energy-smart and environmentally-friendly products and dryers systems. This review paper deals with rapidly emerging advanced computational methods for modelling dehydration of porous materials, particularly for foods. Drying is approached as a combined multiphysics, multiscale and multiphase problem. These advanced methods include computational fluid dynamics, several multiphysics modelling methods (e.g. conjugate modelling), multiscale modelling and modelling of material properties and the associated propagation of material property variability. Apart from the current challenges for each of these, future perspectives should be directed towards material property

  13. Globalized Computing Education: Europe and the United States

    Science.gov (United States)

    Scime, A.

    2008-01-01

    As computing makes the world a smaller place there will be an increase in the mobility of information technology workers and companies. The European Union has recognized the need for mobility and is instituting educational reforms to provide recognition of worker qualifications. Within computing there have been a number of model curricula proposed…

  14. Effects of sleep deprivation on central auditory processing

    Directory of Open Access Journals (Sweden)

    Liberalesso Paulo Breno

    2012-07-01

    Full Text Available Abstract Background Sleep deprivation is extremely common in contemporary society, and is considered to be a frequent cause of disturbances in behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse information. In the present study, thirty healthy adult volunteers (17 females and 13 males, aged 30.75 ± 7.14 years) were subjected to a pure tone audiometry test, a speech recognition threshold test, a speech recognition task, the Staggered Spondaic Word Test (SSWT), and the Random Gap Detection Test (RGDT). Baseline (BSL) performance was compared to performance after 24 hours of sleep deprivation (24hSD) using the Student's t test. Results Mean RGDT score was elevated in the 24hSD condition (8.0 ± 2.9 ms) relative to the BSL condition for the whole cohort (6.4 ± 2.8 ms; p = 0.0005), for males (p = 0.0066), and for females (p = 0.0208). Sleep deprivation reduced SSWT scores for the whole cohort in both ears (right: BSL 98.4% ± 1.8% vs. 24hSD 94.2% ± 6.3%, p = 0.0005; left: BSL 96.7% ± 3.1% vs. 24hSD 92.1% ± 6.1%). Conclusion Sleep deprivation impairs RGDT and SSWT performance. These findings confirm that sleep deprivation has central effects that may impair performance in other areas of life.

  15. AUTOMATION OF INVENTORY PROCESS OF PERSONAL COMPUTERS

    Directory of Open Access Journals (Sweden)

    A. I. Zaharenko

    2013-01-01

    Full Text Available The modern information infrastructure of a large or medium-sized enterprise is inconceivable without an effective system for inventorying computer equipment and fictitious assets. This article considers an example of building such a system that is simple to implement and has a low cost of ownership.

  16. On-line satellite/central computer facility of the Multiparticle Argo Spectrometer System

    International Nuclear Information System (INIS)

    An on-line satellite/central computer facility has been developed at Brookhaven National Laboratory as part of the Multiparticle Argo Spectrometer System (MASS). This facility, consisting of a PDP-9 and a CDC-6600, has been successfully used in the study of proton-proton interactions at 28.5 GeV/c. (U.S.)

  17. The Organization and Evaluation of a Computer-Assisted, Centralized Immunization Registry.

    Science.gov (United States)

    Loeser, Helen; And Others

    1983-01-01

    Evaluation of a computer-assisted, centralized immunization registry after one year shows that 93 percent of eligible health practitioners initially agreed to provide data and that 73 percent continue to do so. Immunization rates in audited groups have improved significantly. (GC)

  18. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable medium includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable medium includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
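
    The detection idea sketched in this abstract is that identical chaotic-map computations run on healthy units must agree exactly, so any divergence flags a fault. Below is a minimal single-process illustration using the logistic map, with a hypothetical "node" whose trajectory is perturbed by a tiny injected error; the actual patented mechanism, maps and thresholds are not reproduced.

        import numpy as np

        def logistic_trajectory(x0, steps, r=3.9, fault_at=None):
            """Iterate the logistic map x <- r*x*(1-x); optionally inject a tiny
            perturbation at step `fault_at` to mimic a faulty compute unit."""
            x = x0
            traj = np.empty(steps)
            for k in range(steps):
                x = r * x * (1.0 - x)
                if fault_at is not None and k == fault_at:
                    x += 1e-12                # an error comparable to a low-order bit flip
                traj[k] = x
            return traj

        STEPS, X0 = 200, 0.123456
        reference = logistic_trajectory(X0, STEPS)                    # trusted reference trajectory
        nodes = {
            "node0": logistic_trajectory(X0, STEPS),                  # healthy node
            "node1": logistic_trajectory(X0, STEPS, fault_at=50),     # simulated faulty node
        }

        TOL = 1e-6
        for name, traj in nodes.items():
            bad = np.abs(traj - reference) > TOL
            if bad.any():
                print(name, "fault detected near step", int(np.argmax(bad)))
            else:
                print(name, "healthy")

    Because the map is chaotic, even a perturbation near machine precision is amplified to a detectable level within a few dozen iterations, which is what makes the comparison sensitive.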

  19. Automated extraction of natural drainage density patterns for the conterminous United States through high performance computing

    Science.gov (United States)

    Stanislawski, Larry V.; Falgout, Jeff T.; Buttenfield, Barbara P.

    2015-01-01

    Hydrographic networks form an important data foundation for cartographic base mapping and for hydrologic analysis. Drainage density patterns for these networks can be derived to characterize local landscape, bedrock and climate conditions, and further inform hydrologic and geomorphological analysis by indicating areas where too few headwater channels have been extracted. But natural drainage density patterns are not consistently available in existing hydrographic data for the United States because compilation and capture criteria historically varied, along with climate, during the period of data collection over the various terrain types throughout the country. This paper demonstrates an automated workflow that is being tested in a high-performance computing environment by the U.S. Geological Survey (USGS) to map natural drainage density patterns at the 1:24,000-scale (24K) for the conterminous United States. Hydrographic network drainage patterns may be extracted from elevation data to guide corrections for existing hydrographic network data. The paper describes three stages in this workflow including data pre-processing, natural channel extraction, and generation of drainage density patterns from extracted channels. The workflow is concurrently implemented by executing procedures on multiple subbasin watersheds within the U.S. National Hydrography Dataset (NHD). Pre-processing defines parameters that are needed for the extraction process. Extraction proceeds in standard fashion: filling sinks, developing flow direction and weighted flow accumulation rasters. Drainage channels with assigned Strahler stream order are extracted within a subbasin and simplified. Drainage density patterns are then estimated with 100-meter resolution and subsequently smoothed with a low-pass filter. The extraction process is found to be of better quality in higher slope terrains. Concurrent processing through the high performance computing environment is shown to facilitate and refine
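
    The channel-extraction stage described above rests on flow-direction and flow-accumulation computations. The sketch below shows a minimal D8 version on a small synthetic DEM with a hypothetical accumulation threshold for channel cells; sink filling, Strahler ordering, density estimation and the high-performance-computing workflow are omitted.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic DEM: a tilted plane plus noise (stands in for real elevation data)
        ny, nx = 60, 80
        y, x = np.mgrid[0:ny, 0:nx]
        dem = 0.5 * y + 0.1 * x + rng.normal(0, 0.05, (ny, nx))

        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

        def d8_downstream(dem):
            """For each cell, flattened index of the steepest-descent neighbour (-1 at pits/edges)."""
            down = np.full((ny, nx), -1, dtype=int)
            for i in range(ny):
                for j in range(nx):
                    best, best_drop = -1, 0.0
                    for di, dj in OFFSETS:
                        ii, jj = i + di, j + dj
                        if 0 <= ii < ny and 0 <= jj < nx:
                            drop = (dem[i, j] - dem[ii, jj]) / np.hypot(di, dj)
                            if drop > best_drop:
                                best, best_drop = ii * nx + jj, drop
                    down[i, j] = best
            return down

        def flow_accumulation(dem):
            """Number of upslope cells draining through each cell (processed high to low)."""
            down = d8_downstream(dem)
            acc = np.ones(ny * nx)
            for idx in np.argsort(dem, axis=None)[::-1]:     # highest cells first
                rec = down.flat[idx]
                if rec >= 0:
                    acc[rec] += acc[idx]
            return acc.reshape(ny, nx)

        acc = flow_accumulation(dem)
        channels = acc > 50          # hypothetical accumulation threshold for channel cells
        print(channels.sum(), "channel cells")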

  20. Ultra-Fast Displaying Spectral Domain Optical Doppler Tomography System Using a Graphics Processing Unit

    OpenAIRE

    Jeong-Yeon Kim; Changho Lee; Hyosang Jeong; Unsang Jung; Nam Hyun Cho; Jeehyun Kim

    2012-01-01

    We demonstrate a spectral domain optical Doppler tomography (SD-ODT) system with ultrafast display using graphics processing unit (GPU) computing. The calculation of the FFT and the Doppler frequency shift is accelerated by the GPU. Our system can display processed OCT and ODT images simultaneously in real time at 120 fps for 1,024 pixels × 512 lateral A-scans. The computing time for the Doppler information was dependent on the size of the moving average window, but with a window size of 32 pixels the ODT ...
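
    Doppler information in SD-ODT is commonly obtained from the phase difference between adjacent A-lines after the spectral FFT. The sketch below illustrates that processing chain on synthetic fringe data, including a lateral moving-average window; all signal parameters are assumptions and the GPU implementation is not reproduced.

        import numpy as np

        rng = np.random.default_rng(8)

        N_PIX, N_ALINES = 1024, 512        # spectrometer pixels x lateral A-scans (as in the abstract)
        k = np.arange(N_PIX)
        depth_px, phase_step = 200, 0.3    # hypothetical reflector depth and per-A-line Doppler phase

        # Synthetic spectral fringes from a single moving reflector plus noise
        fringes = np.array([np.cos(2 * np.pi * depth_px * k / N_PIX + a * phase_step)
                            for a in range(N_ALINES)]) + 0.1 * rng.normal(size=(N_ALINES, N_PIX))

        # Structural OCT image: FFT along the spectral axis
        a_lines = np.fft.fft(fringes, axis=1)[:, :N_PIX // 2]
        oct_image = 20 * np.log10(np.abs(a_lines) + 1e-9)

        # Doppler image: phase difference between adjacent A-lines
        dphi = np.angle(a_lines[1:] * np.conj(a_lines[:-1]))

        # A moving-average window along the lateral direction reduces phase noise
        WINDOW = 32
        kernel = np.ones(WINDOW) / WINDOW
        dphi_avg = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, dphi)

        print(oct_image.shape, dphi_avg.shape, dphi_avg[:, depth_px].mean())   # mean ~ phase_step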

  1. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. Presentation at the ICWL 2008 conference. August, 20, 2008, Jinhua, China.

  2. Characterization of the Temporal Clustering of Flood Events across the Central United States in terms of Climate States

    Science.gov (United States)

    Mallakpour, Iman; Villarini, Gabriele; Jones, Michael; Smith, James

    2016-04-01

    The central United States is a region of the country that has been plagued by frequent catastrophic flooding (e.g., flood events of 1993, 2008, 2013, and 2014), with large economic and social repercussions (e.g., fatalities, agricultural losses, flood losses, water quality issues). The goal of this study is to examine whether it is possible to describe the occurrence of flood events at the sub-seasonal scale in terms of variations in the climate system. Daily streamflow time series from 774 USGS stream gage stations over the central United States (defined here to include North Dakota, South Dakota, Nebraska, Kansas, Missouri, Iowa, Minnesota, Wisconsin, Illinois, West Virginia, Kentucky, Ohio, Indiana, and Michigan) with a record of at least 50 years and ending no earlier than 2011 are used for this study. We use a peak-over-threshold (POT) approach to identify flood peaks so that we have, on average two events per year. We model the occurrence/non-occurrence of a flood event over time using regression models based on Cox processes. Cox processes are widely used in biostatistics and can be viewed as a generalization of Poisson processes. Rather than assuming that flood events occur independently of the occurrence of previous events (as in Poisson processes), Cox processes allow us to account for the potential presence of temporal clustering, which manifests itself in an alternation of quiet and active periods. Here we model the occurrence/non-occurrence of flood events using two climate indices as climate time-varying covariates: the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). The results of this study show that NAO and/or PNA can explain the temporal clustering in flood occurrences in over 90% of the stream gage stations we considered. Analyses of the sensitivity of the results to different average numbers of flood events per year (from one to five) are also performed and lead to the same conclusions. The findings of this work
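
    The peak-over-threshold step described above can be sketched simply: for each gauge, pick a discharge threshold that yields on average two independent peaks per year, with a minimum separation between peaks so that events can be treated as distinct. The snippet below illustrates only that extraction step, not the Cox-process regression itself; the separation window and the threshold heuristic are assumptions.

        import numpy as np

        def pot_flood_events(discharge, years, events_per_year=2, min_sep_days=14):
            # Peak-over-threshold extraction from a daily discharge series.
            #   discharge : 1-D array of daily streamflow values
            #   years     : record length in years
            # Returns indices of the selected flood peaks and the threshold used.
            target = int(round(events_per_year * years))
            # Rough heuristic: keep ~3x the target number of exceedance days
            # before declustering, then thin them below.
            threshold = np.quantile(discharge, 1.0 - target * 3.0 / len(discharge))
            above = np.where(discharge > threshold)[0]
            peaks = []
            for idx in above:
                if not peaks or idx - peaks[-1] >= min_sep_days:
                    peaks.append(idx)                 # new, well-separated event
                elif discharge[idx] > discharge[peaks[-1]]:
                    peaks[-1] = idx                   # same event, keep the larger peak
            return np.array(peaks), threshold

        q = np.random.gamma(2.0, 50.0, size=365 * 50)   # toy 50-year daily series
        peaks, thr = pot_flood_events(q, years=50)
        print(len(peaks), "events above", round(thr, 1))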

  3. United States Atomic Energy Commission Radiation Processing of Foods Programme

    International Nuclear Information System (INIS)

    The current progress of the United States Atomic Energy Commission's Radiation Processing of Food Programme, with emphasis on the clearance of such foods for general human consumption, product development, facility design, process conditions and economics, and commercial aspects are discussed. Semi-production processing for a number of products has now become feasible. The goal is to test laboratory data under near-commercial-scale process conditions, and to obtain cost data. Either completed, or nearing completion are semi-production facilities capable of processing various foods in quantities of thousands of pounds per hour. Among them are the Marine Products Development Irradiator, the Mobile Gamma Irradiator and the Grain Products Irradiator, for bulk and packaged grain. Plans for a Hawaiian Development Irradiator are also discussed. Activities in the United States, which are related to the commercialization of radiation processing of foods, including the use of radiation for processing fresh fish and fruits, sterilized meats and other food products, are discussed. For example, a project is under way in which several agencies of the United States Government are attempting to establish a co-operative programme with industry, aimed at the development of a pilot-plant meat irradiator. These efforts are directed towards the establishment of a large facility operated by industry which would: (a) provide necessary radiation-sterilized meats for the armed services; (b) establish process conditions and economics; and (c) introduce some of the product into the civilian economy, for commercial purposes. (author)

  4. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n^2). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
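
    In the document-flocking scheme described above, each document is a "bird" whose velocity is nudged toward documents with similar content and away from dissimilar ones; the O(n^2) cost comes from the all-pairs similarity and neighbour checks, which is exactly what maps well to the GPU. The sketch below shows one sequential update step with illustrative weights, radii, and similarity cutoff; it is not the CUDA implementation from the paper.

        import numpy as np

        def flocking_step(pos, vel, features, radius=5.0, sim_cut=0.3,
                          w_cohesion=0.05, w_separation=0.1, dt=1.0):
            # One update of the document-flocking simulation.
            #   pos, vel : (n, 2) positions/velocities of the document "birds"
            #   features : (n, d) document vectors (e.g. tf-idf), rows unit-normalised
            n = len(pos)
            sim = features @ features.T                     # all-pairs cosine similarity
            for i in range(n):
                d = np.linalg.norm(pos - pos[i], axis=1)
                neigh = (d < radius) & (d > 0)
                if not neigh.any():
                    continue
                alike = neigh & (sim[i] > sim_cut)
                unlike = neigh & (sim[i] <= sim_cut)
                if alike.any():                             # cohere with similar documents
                    vel[i] += w_cohesion * (pos[alike].mean(axis=0) - pos[i])
                if unlike.any():                            # separate from dissimilar ones
                    vel[i] -= w_separation * (pos[unlike].mean(axis=0) - pos[i])
            return pos + dt * vel, vel

        docs = np.random.rand(300, 50)
        docs /= np.linalg.norm(docs, axis=1, keepdims=True)
        pos, vel = np.random.rand(300, 2) * 100, np.zeros((300, 2))
        pos, vel = flocking_step(pos, vel, docs)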

  5. Evaluation of global impact models' ability to reproduce runoff characteristics over the central United States

    Science.gov (United States)

    Giuntoli, Ignazio; Villarini, Gabriele; Prudhomme, Christel; Mallakpour, Iman; Hannah, David M.

    2015-09-01

    The central United States experiences a wide array of hydrological extremes, with the 1993, 2008, 2013, and 2014 flooding events and the 1988 and 2012 droughts representing some of the most recent extremes, and is an area where water availability is critical for agricultural production. This study aims to evaluate the ability of a set of global impact models (GIMs) from the Water Model Intercomparison Project to reproduce the regional hydrology of the central United States for the period 1963-2001. Hydrological indices describing annual daily maximum, medium and minimum flow, and their timing are extracted from both modeled daily runoff data by nine GIMs and from observed daily streamflow measured at 252 river gauges. We compare trend patterns for these indices, and their ability to capture runoff volume differences for the 1988 drought and 1993 flood. In addition, we use a subset of 128 gauges and corresponding grid cells to perform a detailed evaluation of the models on a gauge-to-grid cell basis. Results indicate that these GIMs capture the overall trends in high, medium, and low flows well. However, the models differ from observations with respect to the timing of high and medium flows. More specifically, GIMs that only include water balance tend to be closer to the observations than GIMs that also include the energy balance. In general, as it would be expected, the performance of the GIMs is the best when describing medium flows, as opposed to the two ends of the runoff spectrum. With regards to low flows, some of the GIMs have considerably large pools of zeros or low values in their time series, undermining their ability in capturing low flow characteristics and weakening the ensemble's output. Overall, this study provides a valuable examination of the capability of GIMs to reproduce observed regional hydrology over a range of quantities for the central United States.

  6. Computer simulation of gear tooth manufacturing processes

    Science.gov (United States)

    Mavriplis, Dimitri; Huston, Ronald L.

    1990-01-01

    The use of computer graphics to simulate gear tooth manufacturing procedures is discussed. An analytical basis for the simulation is established for spur gears. The simulation itself, however, is developed not only for spur gears, but for straight bevel gears as well. The applications of the developed procedure extend from the development of finite element models of heretofore intractable geometrical forms, to exploring the fabrication of nonstandard tooth forms.

  7. Coastal processes of Central Tamil Nadu, India: clues from grain size studies

    Directory of Open Access Journals (Sweden)

    Nimalanathan Angusamy

    2007-03-01

    Full Text Available The sediments of the beaches along the central coast of Tamil Nadu from Pondicherry to Vedaranyam were studied for their textural variation. 108 sediment samples collected from the low-, mid-, and high-tidal zones, as well as the berms and dunes of different beach morpho-units were analysed. The study area was divided into three sectors (northern, central and southern on the basis of prevailing energy conditions and oceanographic parameters. The poorly sorted, negatively skewed, coarser sediments of the northern sector are indicative of denudational processes taking place there. Medium-to-fine, moderately-to-well sorted, positive-symmetrically skewed sediments dominate the central sector, probably as a result of the influence of palaeo-sediments deposited by rivers from inland as well as by waves and currents from offshore. Fine, poorly sorted, positive-symmetrically skewed sediments dominate the southern sector, highlighting depositional processes. Linear Discriminant Function Analysis (LDF of the samples indicates a shallow marine environment origin for all the three sectors. These results show that reworked sediments, submerged during the Holocene marine transgression, are being deposited on present-day beaches by waves, currents and rivers in the study area.

  8. Coating Process Monitoring Using Computer Vision

    OpenAIRE

    Veijola, Erik

    2013-01-01

    The aim of this Bachelor’s Thesis was to make a prototype system for Metso Paper Inc. for monitoring a paper roll coating process. If the coating is done badly and there are faults, the process has to be redone, which lowers the profits of the company since the process is costly. The work was proposed by Seppo Parviainen in December of 2012. The resulting system was to alert the personnel to faults in the process, specifically if the system that is applying the synthetic resin on to the roll...

  9. X/Qs and unit dose calculations for Central Waste Complex interim safety basis effort

    International Nuclear Information System (INIS)

    The objective for this problem is to calculate the ground-level release dispersion factors (X/Q) and unit doses for an onsite facility and for offsite receptors at the site boundary and at Highway 240 for plume meander, building wake effect, plume rise, and the combined effect. The release location is at Central Waste Complex Building P4 in the 200 West Area. The onsite facility is located at Building P7. Acute ground-level release 99.5 percentile dispersion factors (X/Q) were generated using the GXQ code. The unit doses were calculated using the GENII code. The dimensions of Building P4 are 15 m (W) x 24 m (L) x 6 m (H).

  10. From Graphic Processing Unit to General Purpose Graphic Processing Unit

    Institute of Scientific and Technical Information of China (English)

    刘金硕; 刘天晓; 吴慧; 曾秋梅; 任梦菲; 顾宜淳

    2013-01-01

    This paper defines GPU (graphics processing unit), general-purpose computation on the GPU (GPGPU), and the associated programming models and environments. It divides the development of the GPU into four stages and describes the evolution of its architecture from the non-unified rendering architecture to the unified rendering architecture and on to the newer Fermi architecture. It then compares the GPGPU architecture with multi-core CPU architectures and distributed cluster architectures from both the software and the hardware perspective. The analysis indicates that medium-grained, thread-level, data-intensive parallel workloads are best served by multi-core, multi-threaded parallelism; coarse-grained, network-intensive parallel workloads by cluster parallelism; and fine-grained, compute-intensive parallel workloads by general-purpose GPU computing. Finally, the paper outlines future research hotspots and directions for GPGPU, namely automatic parallelization for GPGPU, CUDA support for multiple languages, and CUDA performance optimization, and introduces some typical GPGPU applications.

  11. Central washout sign in computer-aided evaluation of breast MRI: preliminary results

    International Nuclear Information System (INIS)

    Background: Although computer-aided evaluation (CAE) programs were introduced to help differentiate benign tumors from malignant ones, the set of CAE-measured parameters that best predicts malignancy has not yet been established. Purpose: To assess the value of the central washout sign on CAE color overlay images of breast MRI. Material and Methods: We evaluated the frequency of the central washout sign using CAE. The central washout sign was defined as thin, rim-like persistent kinetics seen in the periphery of the tumor, followed sequentially by plateau and washout kinetics. Two additional CAE delayed-kinetic variables were compared with the central washout sign to assess diagnostic utility: the predominant enhancement type (washout, plateau, or persistent) and the most suspicious enhancement type (any washout > any plateau > any persistent kinetics). Results: One hundred and forty-nine pathologically proven breast lesions (130 malignant, 19 benign) were evaluated. A central washout sign was associated with 87% of malignant lesions but only 11% of benign lesions. Significant differences were found when delayed-phase kinetics were categorized by the most suspicious enhancement type (P< 0.001) and by the presence of the central washout sign (P< 0.001). Under the criteria of the most suspicious kinetics, 68% of benign lesions were assigned a plateau or washout pattern. Conclusion: The central washout sign is a reliable indicator of malignancy on CAE color overlay images of breast MRI.

  12. Computer Supported Collaborative Processes in Virtual Organizations

    OpenAIRE

    Paszkiewicz, Zbigniew; Cellary, Wojciech

    2012-01-01

    In global economy, turbulent organization environment strongly influences organization's operation. Organizations must constantly adapt to changing circumstances and search for new possibilities of gaining competitive advantage. To face this challenge, small organizations base their operation on collaboration within Virtual Organizations (VOs). VO operation is based on collaborative processes. Due to dynamism and required flexibility of collaborative processes, existing business information s...

  13. Design, implementation and evaluation of a central unit for controlling climatic conditions in the greenhouse

    Directory of Open Access Journals (Sweden)

    Gh. Zarei

    2016-02-01

    Full Text Available In greenhouse culture, in addition to increasing the quantity and quality of crop production in comparison with traditional methods, the agricultural inputs are saved, too. Recently, using new methods, designs and materials, and higher automation in greenhouses, better management has become possible for enhancing yield and improving the quality of greenhouse crops. The constructed and evaluated central controller unit (CCU is a central controller system and computerized monitoring unit for greenhouse application. Several sensors, one CCU, several operators, and a data-collection and recorder unit were the major components of this system. The operators included heating, cooling, spraying, ventilation and lighting systems, and the sensors are for temperature, humidity, carbon dioxide, oxygen and light in inside and outside the greenhouse. Environmental conditions were measured by the accurate sensors and transmitted to the CCU. Based on this information, the CCU changed variables to optimize the greenhouse environmental conditions to predetermined ranges. This system was totally made of local instruments and parts and had the ability to integrate with the needs of the client. The designed and implemented CCU was tested in a greenhouse located in Agriculture and Natural Resources Research Center of Khuzestan Province during summer season of 2011. The CCU was operated successfully for controlling greenhouse temperature in the range of 22-29 ˚C, relative humidity of 35-55%, artificial lighting in the case of receiving radiation of less than 800 Lux and turning on the ventilation units if the concentration of carbon dioxide was more than 800 mg/L.
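
    A minimal sketch of the control logic implied by the evaluation above is given below, using the set ranges reported for the test greenhouse (22-29 C temperature, 35-55 % relative humidity, supplemental lighting below 800 Lux, ventilation above 800 mg/L CO2). The sensor-reading dictionary and the actuator callables are hypothetical placeholders; the real CCU is a dedicated hardware controller, not this code.

        def control_step(readings, actuators):
            # One pass of a simple bang-bang greenhouse control loop.
            #   readings  : dict with 'temp' (C), 'rh' (%), 'light' (Lux), 'co2' (mg/L)
            #   actuators : dict of callables; the keys are hypothetical placeholders
            if readings["temp"] > 29:
                actuators["cooling"](True)
            elif readings["temp"] < 22:
                actuators["heating"](True)
            else:
                actuators["cooling"](False); actuators["heating"](False)

            # Relative humidity kept in the 35-55 % band via the spraying system.
            actuators["spraying"](readings["rh"] < 35)

            # Supplemental lighting when incident radiation drops below 800 Lux.
            actuators["lighting"](readings["light"] < 800)

            # Ventilate when CO2 concentration exceeds 800 mg/L.
            actuators["ventilation"](readings["co2"] > 800)

        # Example with no-op actuators:
        noop = {k: (lambda on: None) for k in
                ["cooling", "heating", "spraying", "lighting", "ventilation"]}
        control_step({"temp": 31, "rh": 40, "light": 600, "co2": 900}, noop)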

  14. Structural Determination of (Al2O3)n (n = 1-15) Clusters Based on Graphic Processing Unit.

    Science.gov (United States)

    Zhang, Qiyao; Cheng, Longjiu

    2015-05-26

    Global optimization algorithms have been widely used in the field of chemistry to search the global minimum structures of molecular and atomic clusters, which is a nondeterministic polynomial problem with the increasing sizes of clusters. Considering that the computational ability of a graphic processing unit (GPU) is much better than that of a central processing unit (CPU), we developed a GPU-based genetic algorithm for structural prediction of clusters and achieved a high acceleration ratio compared to a CPU. On the one-dimensional (1D) operation of a GPU, taking (Al2O3)n clusters as test cases, the peak acceleration ratio in the GPU is about 220 times that in a CPU in single precision and the value is 103 for double precision in calculation of the analytical interatomic potential. The peak acceleration ratio is about 240 and 107 on the block operation, and it is about 77 and 35 on the 2D operation compared to a CPU in single precision and double precision, respectively. And the peak acceleration ratio of the whole genetic algorithm program is about 35 compared to CPU at double precision. Structures of (Al2O3)n clusters at n = 1-10 reported in previous works are successfully located, and their low-lying structures at n = 11-15 are predicted. PMID:25928795
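
    The speed-ups reported above come from evaluating the analytical interatomic potential for many candidate cluster structures in parallel on the GPU. The sketch below shows the corresponding bottleneck, a vectorised pairwise-energy evaluation over a genetic-algorithm population, in NumPy; swapping the NumPy arrays for a GPU array library (for example CuPy) is the usual route to this kind of acceleration. The Lennard-Jones pair potential here is a generic stand-in, not the Al-O potential used in the paper.

        import numpy as np

        def population_energy(population, eps=1.0, sigma=2.0):
            # Pairwise (Lennard-Jones stand-in) energy for a whole GA population.
            #   population : (n_structures, n_atoms, 3) candidate cluster geometries
            diff = population[:, :, None, :] - population[:, None, :, :]
            r = np.linalg.norm(diff, axis=-1)                  # (n, atoms, atoms)
            iu = np.triu_indices(population.shape[1], k=1)     # unique atom pairs
            rij = r[:, iu[0], iu[1]]
            sr6 = (sigma / rij) ** 6
            return np.sum(4.0 * eps * (sr6**2 - sr6), axis=1)  # energy per structure

        pop = np.random.rand(64, 25, 3) * 10.0    # 64 candidate cluster geometries
        fitness = -population_energy(pop)          # GA keeps the lowest-energy structures
        print(fitness.shape)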

  15. How mobility increases mobile cloud computing processing capacity

    OpenAIRE

    Nguyen, Anh-Dung; Sénac, Patrick; Ramiro, Victor; Diaz, Michel

    2011-01-01

    In this paper, we address an important and still unanswered question in mobile cloud computing: "how does mobility impact the distributed processing power of network and computing clouds formed from mobile ad-hoc networks?". Indeed, mobile ad-hoc networks potentially offer an aggregate cloud of resources collectively delivering processing, storage and networking resources. We demonstrate that mobility can significantly increase the performance of distributed computation in such networks. In...

  16. 15 CFR 971.408 - Processing outside the United States.

    Science.gov (United States)

    2010-01-01

    15 CFR 971.408, Processing outside the United States (Title 15, Commerce and Foreign Trade, Vol. 3, 2010 edition). Factors considered include: (1) the national interest in an adequate supply of minerals; (2) the foreign policy interests of the...

  17. High performance direct gravitational N-body simulations on graphics processing units II: An implementation in CUDA

    NARCIS (Netherlands)

    R.G. Belleman; J. Bédorf; S.F. Portegies Zwart

    2008-01-01

    We present the results of gravitational direct N-body simulations using the graphics processing unit (GPU) on a commercial NVIDIA GeForce 8800GTX designed for gaming computers. The force evaluation of the N-body problem is implemented in "Compute Unified Device Architecture" (CUDA) using the GPU to
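
    The direct N-body method evaluated on the GPU above computes all pairwise gravitational forces, an O(N^2) operation that parallelises naturally over bodies. A plain NumPy sketch of that force evaluation is shown below for reference; the CUDA kernel in the paper performs the same per-body summation, and the softening length and unit system here are illustrative assumptions.

        import numpy as np

        def nbody_accelerations(pos, mass, G=1.0, soft=1e-3):
            # Direct-summation gravitational accelerations.
            #   pos : (N, 3) positions, mass : (N,) masses
            diff = pos[None, :, :] - pos[:, None, :]                # r_j - r_i
            dist2 = np.sum(diff**2, axis=-1) + soft**2              # softened distance^2
            inv_d3 = dist2 ** -1.5
            np.fill_diagonal(inv_d3, 0.0)                           # no self-interaction
            return G * np.sum(diff * (mass[None, :, None] * inv_d3[:, :, None]), axis=1)

        pos = np.random.randn(1024, 3)
        mass = np.full(1024, 1.0 / 1024)
        acc = nbody_accelerations(pos, mass)
        print(acc.shape)      # (1024, 3)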

  18. Factors Promoting and Hindering Performance of Unit Nurse Managers at Kamuzu and Queen Elizabeth Central Hospitals in Malawi

    OpenAIRE

    Caroline Chitsulo; Mercy Pindani; Idesi Chilinda; Alfred Maluwa

    2014-01-01

    Unit nurse managers in Malawi experience many challenges in the course of performing their roles. This affects their performance and service delivery including the quality of nursing care to patients. This study was conducted to determine the factors that hindered performance of unit managers in relation to expected quality of nursing services at two referral facilities (Kamuzu and Queen Elizabeth Central hospitals) in Malawi. These two central hospitals have the same structural settings...

  19. Genkai Nuclear Power Station Units 1 and 2. Upgrading of central instrumentation equipment

    International Nuclear Information System (INIS)

    Genkai Unit No.1 started commercial operation in October 1975 and Genkai Unit No.2 in March 1981. They are two-loop PWR plants with an electrical power output of 559 MW each. Units No.1 and 2 have been operated successfully and have accumulated good results to date. Meanwhile, striving to maintain and enhance reliability, Kyushu Electric Power Company has been systematically implementing upgrade and repair works, reflecting knowledge acquired from nuclear power plant operating experience in Japan and overseas and the outcome of technological developments. The main control boards had been modified several times, as had other equipment, before this upgrading project started. Although there was no significant problem in safe and stable plant operation using the boards as they were, their scalability and maintainability had deteriorated. This would become a problem in the future in view of the continuation of safe and stable plant operation over the long term. To further enhance reliability, operability and monitorability, it was decided to replace the main control boards with new ones equipped with more CRTs, of the same type as those used in the latest Genkai Units No.3 and 4 located at the same site. In addition, the related systems, including the primary and secondary system control systems, plant computers, and alarm and monitor cabinets, were replaced with units featuring the latest technology. Hereafter, this project is referred to as CBR for short. The replacement work was implemented to coincide with the 20th refueling outage (March 6 to August 18, 2001) for Unit No.1 and with the 16th refueling outage (March 16 to September 20, 2001) for Unit No.2. (author)

  20. Farm Process (FMP) Parameters used in the Central Valley Hydrologic Model (CVHM)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This digital dataset defines the farm-process parameters used in the transient hydrologic model of the Central Valley flow system. The Central Valley encompasses an...

  1. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Lindley, G.

    1998-02-01

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square-root of the seismic moment, from about 3 bars at 10^20 dyne-cm to 690 bars at 10^25 dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations have been analysed to determine Q_Lg as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, using spectral analysis, estimates have been made for the anelastic attenuation of four regional phases, and estimates have been made for the source parameters of 27 earthquakes, including the M_b 5.6, 14 April, 1995, West Texas earthquake.
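
    The square-root scaling quoted above can be checked directly from the two endpoints reported in the abstract: going from 10^20 to 10^25 dyne-cm while the stress drop rises from about 3 to 690 bars corresponds to a scaling exponent of roughly 0.47, close to 0.5. A short Python check (the numbers are taken from the abstract; the power-law form is the stated approximation):

        import math

        # Assume stress drop ~ M0**b and solve for b from the two quoted endpoints.
        b = math.log10(690.0 / 3.0) / math.log10(1e25 / 1e20)
        print(round(b, 2))   # ~0.47, i.e. approximately square-root scaling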

  2. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    International Nuclear Information System (INIS)

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square-root of the seismic moment, from about 3 bars at 10^20 dyne-cm to 690 bars at 10^25 dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations have been analysed to determine Q_Lg as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, using spectral analysis, estimates have been made for the anelastic attenuation of four regional phases, and estimates have been made for the source parameters of 27 earthquakes, including the M_b 5.6, 14 April, 1995, West Texas earthquake.

  3. A monolithic 3D integrated nanomagnetic co-processing unit

    Science.gov (United States)

    Becherer, M.; Breitkreutz-v. Gamm, S.; Eichwald, I.; Žiemys, G.; Kiermaier, J.; Csaba, G.; Schmitt-Landsiedel, D.

    2016-01-01

    As CMOS scaling becomes more and more challenging there is strong impetus for beyond CMOS device research to add new functionality to ICs. In this article, a promising technology with non-volatile ferromagnetic computing states - the so-called Perpendicular Nanomagnetic Logic (pNML) - is reviewed. After introducing the 2D planar implementation of NML with magnetization perpendicular to the surface, the path to monolithically 3D integrated systems is discussed. Instead of CMOS substitution, additional functionality is added by a co-processor architecture as a prospective back-end-of-line (BEOL) process, where the computing elements are clocked by a soft-magnetic on-chip inductor. The unconventional computation in the ferromagnetic domain can lead to highly dense computing structures without leakage currents, attojoule dissipation per bit operation and data-throughputs comparable to state-of-the-art high-performance CMOS CPUs. In appropriate applications and with specialized computing architectures they might even circumvent the bottleneck of time-consuming memory access, as computation is inherently performed with non-volatile computing states.

  4. NUMATH: a nuclear-material-holdup estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    A computer program, NUMATH (Nuclear Material Holdup Estimator), has been developed to estimate compositions of materials in vessels involved in unit operations and chemical processes. This program has been implemented in a remotely operated nuclear fuel processing plant. NUMATH provides estimates of the steady-state composition of materials residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are cataloged in container-oriented files. The estimated compositions represent materials collected in applicable vessels - including consideration for materials previously acknowledged in these vessels. The program utilizes process measurements and simple performance models to estimate material holdup and distribution within unit operations. In simulated run-testing, NUMATH typically produced estimates within 5% of the measured inventories for uranium and within 8% of the measured inventories for thorium during steady-state process operation.

  5. Computer-aided software development process design

    Science.gov (United States)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

    The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as a modeling methodology. The resulting Software Life-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.

  6. Soft computing in big data processing

    CERN Document Server

    Park, Seung-Jong; Lee, Jee-Hyong

    2014-01-01

    Big data is an essential key to building a smart world, understood as the streaming, continuous integration of large-volume, high-velocity data from all sources to final destinations. Big data work ranges from data mining to data analysis and decision making, drawing statistical rules and mathematical patterns through systematic or automated reasoning. Big data helps serve our life better, clarify our future and deliver greater value. We can discover how to capture and analyze data. Readers will be guided through processing-system integrity and the implementation of intelligent systems. With intelligent systems, we deal with the fundamental data management and visualization challenges in the effective management of dynamic and large-scale data, and the efficient processing of real-time and spatio-temporal data. Advanced intelligent systems have led to managing data monitoring, data processing and decision-making in a realistic and effective way. Considering a big size of data, variety of data and frequent chan...

  7. A Computational Chemistry Database for Semiconductor Processing

    Science.gov (United States)

    Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)

    1998-01-01

    The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere, however, if codes do not come with a reliable database of chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database and instead leave the task of finding whatever is needed to the user. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area. Our effort focuses on the following five areas: (1) thermal CVD reaction mechanisms and rate constants; (2) thermochemical properties; (3) transport properties; (4) electron-molecule collision cross sections; and (5) gas-surface interactions.

  8. Leveraging EarthScope USArray with the Central and Eastern United States Seismic Network

    Science.gov (United States)

    Busby, R.; Sumy, D. F.; Woodward, R.; Frassetto, A.; Brudzinski, M.

    2015-12-01

    Recent earthquakes, such as the 2011 M5.8 Mineral, Virginia earthquake, raised awareness of the comparative lack of knowledge about seismicity, site response to ground shaking, and the basic geologic underpinnings in this densely populated region. With this in mind, the National Science Foundation, United States Geological Survey, United States Nuclear Regulatory Commission, and Department of Energy supported the creation of the Central and Eastern United States Seismic Network (CEUSN). These agencies, along with the IRIS Consortium who operates the network, recognized the unique opportunity to retain EarthScope Transportable Array (TA) seismic stations in this region beyond the standard deployment duration of two years per site. The CEUSN project supports 159 broadband TA stations, more than 30 with strong motion sensors added, that are scheduled to operate through 2017. Stations were prioritized in regions of elevated seismic hazard that have not been traditionally heavily monitored, such as the Charlevoix and Central Virginia Seismic Zones, and in regions proximal to nuclear power plants and other critical facilities. The stations (network code N4) transmit data in real time, with broadband and strong motion sensors sampling at 100 samples per second. More broadly the CEUSN concept also recognizes the existing backbone coverage of permanently operating seismometers in the CEUS, and forms a network of over 300 broadband stations. This multi-agency collaboration is motivated by the opportunity to use one facility to address multiple missions and needs in a way that is rarely possible, and to produce data that enables both researchers and federal agencies to better understand seismic hazard potential and associated seismic risks. In June 2015, the CEUSN Working Group (www.usarray.org/ceusn_working_group) was formed to review and provide advice to IRIS Management on the performance of the CEUSN as it relates to the target scientific goals and objectives. Map shows

  9. EEG processing and its application in brain-computer interface

    Institute of Scientific and Technical Information of China (English)

    Wang Jing; Xu Guanghua; Xie Jun; Zhang Feng; Li Lili; Han Chengcheng; Li Yeping; Sun Jingjing

    2013-01-01

    Electroencephalogram (EEG) is an efficient tool in exploring human brains. It plays a very important role in the diagnosis of disorders related to epilepsy and in the development of new interaction techniques between machines and human beings, namely, the brain-computer interface (BCI). The purpose of this review is to illustrate recent research in EEG processing and EEG-based BCI. First, we outline several methods for removing artifacts from EEGs, and classical algorithms for fatigue detection are discussed. Then, two BCI paradigms, including motor imagery and steady-state motion visual evoked potentials (SSMVEP) produced by oscillating Newton's rings, are introduced. Finally, BCI systems including wheelchair control and electronic car navigation are elaborated. As a new technique to control equipment, BCI has promising potential in the rehabilitation of disorders of the central nervous system, such as stroke and spinal cord injury, the treatment of attention deficit hyperactivity disorder (ADHD) in children, and the development of novel games such as brain-controlled auto racing.

  10. New photosensitizer with phenylenebisthiophene central unit and cyanovinylene 4-nitrophenyl terminal units for dye-sensitized solar cells

    International Nuclear Information System (INIS)

    Graphical abstract: A novel dye D was synthesized and used as a photosensitizer for quasi solid state dye-sensitized solar cells. A power conversion efficiency of 4.4% was obtained, which was improved to 5.52% when diphenylphosphinic acid (DPPA) was added as coadsorbent. Highlights: (1) A new low band gap photosensitizer with cyanovinylene 4-nitrophenyl terminal units was synthesized. (2) A power conversion efficiency of 4.4% was obtained for the dye-sensitized solar cell based on this photosensitizer. (3) The power conversion efficiency of the dye-sensitized solar cell was further improved to 5.52% when diphenylphosphinic acid was added as coadsorbent. Abstract: A new low band gap photosensitizer, D, which contains a 2,2'-(1,4-phenylene) bisthiophene central unit and cyanovinylene 4-nitrophenyl terminal units at both sides, was synthesized. The two carboxyls attached to the 2,5-positions of the phenylene ring act as anchoring groups. Dye D was soluble in common organic solvents, showed a long-wavelength absorption maximum at 620-636 nm and an optical band gap of 1.72 eV. The electrochemical parameters, i.e. the highest occupied molecular orbital (HOMO) (-5.1 eV) and the lowest unoccupied molecular orbital (LUMO) (-3.3 eV) energy levels of D, show that this dye is suitable as a molecular sensitizer. The quasi solid state dye-sensitized solar cell (DSSC) based on D shows a short circuit current (Jsc) of 9.95 mA/cm^2, an open circuit voltage (Voc) of 0.70 V, and a fill factor (FF) of 0.64, corresponding to an overall power conversion efficiency (PCE) of 4.40% under 100 mW/cm^2 irradiation. The overall PCE has been further improved to 5.52% when the diphenylphosphinic acid (DPPA) coadsorbent is incorporated into the D solution. This increased PCE has been attributed to the enhancement in electron lifetime and the reduced recombination of injected electrons with the iodide ions present in the electrolyte when DPPA is used as coadsorbent. The electrochemical impedance

  11. Ambient Ammonia Monitoring in the Central United States Using Passive Diffusion Samplers

    Science.gov (United States)

    Caughey, M.; Gay, D.; Sweet, C.

    2008-12-01

    Environmental scientists and governmental authorities are increasingly aware of the need for more comprehensive measurements of ambient ammonia in urban, rural and remote locations. As the predominant alkaline gas, ammonia plays a critical role in atmospheric chemistry by reacting readily with acidic gases and particles. Ammonium salts often comprise a major portion of the aerosols that impair visibility, not only in urban areas, but also in national parks and other Class I areas. Ammonia is also important as a plant nutrient that directly or indirectly affects terrestrial and aquatic biomes. Successful computer simulations of important environmental processes require an extensive representative data set of ambient ammonia measurements in the range of 0.1 ppbv or greater. Generally instruments with that level of sensitivity are not only expensive, but also require electrical connections, an enclosed shelter and, in many instances, frequent attention from trained technicians. Such requirements significantly restrict the number and locations of ambient ammonia monitors that can be supported. As an alternative we have employed simple passive diffusion samplers to measure ambient ammonia at 9 monitoring sites in the central U.S. over the past 3 years. Passive samplers consist of a layer of an acidic trapping medium supported at a fixed distance behind a microporous barrier for which the diffusive properties are known. Ammonia uptake rates are determined by the manufacturer under controlled laboratory conditions. (When practical, field results are compared against those from collocated conventional samplers, e.g., pumped annular denuders.) After a known exposure time at the sampling site, the sampler is resealed in protective packaging and shipped to the analytical laboratory where the ammonia captured in the acidic medium is carefully extracted and quantified. Because passive samplers are comparatively inexpensive and do not require electricity or other facilities they

  12. Interventions on central computing services during the weekend of 21 and 22 August

    CERN Multimedia

    2004-01-01

    As part of the planned upgrade of the computer centre infrastructure to meet the LHC computing needs, approximately 150 servers, hosting in particular the NICE home directories, Mail services and Web services, will need to be physically relocated to another part of the computing hall during the weekend of the 21 and 22 August. On Saturday 21 August, starting from 8:30 a.m., interruptions of typically 60 minutes will take place on the following central computing services: NICE and the whole Windows infrastructure, Mail services, file services (including home directories and DFS workspaces), Web services, VPN access, Windows Terminal Services. During any interruption, incoming mail from outside CERN will be queued and delivered as soon as the service is operational again. All services should be available again on Saturday 21 at 17:30, but a few additional interruptions will be possible after that time and on Sunday 22 August. IT Department

  13. Application of genetic algorithm to computer-aided process

    OpenAIRE

    Amrinder Chahal

    2012-01-01

    Process planning is a task of transforming design specifications into manufacturing instructions. It is an engineering task that determines the detailed manufacturing requirements for transforming a raw material into a completed part, within the available machining resources. The output of process planning generally includes operations, machine tools, cutting tools, fixtures, machining parameters, etc. Computer-aided process planning (CAPP) is an important interface between computer-aided desig...

  14. Evaluation of the Central Hearing Process in Parkinson Patients

    OpenAIRE

    Santos, Rosane Sampaio; Teive, Hélio A. Ghizoni; Gorski, Leslie Palma; Klagenberg, Karlin Fabianne; Muñoz, Monica Barby; Zeigelboim, Bianca Simone

    2011-01-01

    Introduction: Parkinson disease (PD) is a degenerative disease with an insidious onset, impairing the central nervous system and causing biological, psychological and social changes. It shows motor signs and symptoms characterized by trembling, postural instability, rigidity and bradykinesia. Objective: To evaluate the central hearing function in PD patients. Method: A descriptive, prospective and transversal study, in which 10 individuals diagnosed with PD formed the study group (SG) and 10 normall...

  15. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    2016-01-01

    , communicating O(n log* n) elements in the small field and performing O(n log n log log n) operations on small field elements. The fourth main result of the dissertation is a generic and efficient protocol for proving knowledge of a witness for circuit satisfiability in Zero-Knowledge. We prove our... Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned for... yields an astonishingly fast evaluation per AES block of 400 μs = 400 x 10^-6 seconds. Our techniques focus on AES but work in general. In particular we reduce round complexity of the protocol using oblivious table lookup to take care of the non-linear parts. At first glance one might expect table lookup to...

  16. Computer simulation of surface and film processes

    Science.gov (United States)

    Tiller, W. A.; Halicioglu, M. T.

    1984-01-01

    All the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems with discrete particles that interact via well defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov chain ensemble averaging technique to model equilibrium properties of a system); and molecular statics (provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of a triatomic cluster were investigated. The multilayer relaxation phenomena for low index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for the Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
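
    Of the three simulation techniques listed above, molecular dynamics is the most algorithmically explicit: positions and velocities are advanced by integrating Newton's equations of motion, typically with a velocity-Verlet step. A minimal Lennard-Jones sketch is given below; the pair potential, time step, and small 2-D cluster are illustrative assumptions, not the Si/SiC potentials used in the study.

        import numpy as np

        def lj_forces(pos, eps=1.0, sigma=1.0):
            # Lennard-Jones forces for a small set of particles (open boundaries).
            diff = pos[:, None, :] - pos[None, :, :]
            r2 = np.sum(diff**2, axis=-1)
            np.fill_diagonal(r2, np.inf)                       # skip self-interaction
            sr6 = (sigma**2 / r2) ** 3
            fmag = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2      # |F| / r
            return np.sum(fmag[:, :, None] * diff, axis=1)

        def velocity_verlet(pos, vel, dt=1e-3, steps=1000, mass=1.0):
            # Classical MD integration of Newton's equations of motion.
            f = lj_forces(pos)
            for _ in range(steps):
                pos = pos + vel * dt + 0.5 * (f / mass) * dt**2
                f_new = lj_forces(pos)
                vel = vel + 0.5 * (f + f_new) / mass * dt
                f = f_new
            return pos, vel

        # Small 2-D cluster on a loose square lattice (spacing 1.2 sigma).
        cluster = np.array([[i, j] for i in range(4) for j in range(4)], dtype=float) * 1.2
        pos, vel = velocity_verlet(cluster, np.zeros_like(cluster))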

  17. Oaks were the historical foundation genus of the east-central United States

    Science.gov (United States)

    Hanberry, Brice B.; Nowacki, Gregory J.

    2016-08-01

    Foundation tree species are dominant and define ecosystems. Because of the historical importance of oaks (Quercus) in east-central United States, it was unlikely that oak associates, such as pines (Pinus), hickories (Carya) and chestnut (Castanea), rose to this status. We used 46 historical tree studies or databases (ca. 1620-1900) covering 28 states, 1.7 million trees, and 50% of the area of the eastern United States to examine importance of oaks compared to pines, hickories, and chestnuts. Oak was the most abundant genus, ranging from 40% to 70% of total tree composition at the ecological province scale and generally increasing in dominance from east to west across this area. Pines, hickories, and chestnuts were co-dominant (ratio of oak composition to other genera of United States, and thus by definition, were not foundational. Although other genera may be called foundational because of localized abundance or perceptions resulting from inherited viewpoints, they decline from consideration when compared to overwhelming oak abundance across this spatial extent. The open structure and high-light conditions of oak ecosystems uniquely supported species-rich understories. Loss of oak as a foundation genus has occurred with loss of open forest ecosystems at landscape scales.

  18. Use of parallel computing in mass processing of laser data

    Science.gov (United States)

    Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.

    2015-12-01

    The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.

  19. Launch Site Computer Simulation and its Application to Processes

    Science.gov (United States)

    Sham, Michael D.

    1995-01-01

    This paper provides an overview of computer simulation, the Lockheed developed STS Processing Model, and the application of computer simulation to a wide range of processes. The STS Processing Model is an icon driven model that uses commercial off the shelf software and a Macintosh personal computer. While it usually takes one year to process and launch 8 space shuttles, with the STS Processing Model this process is computer simulated in about 5 minutes. Facilities, orbiters, or ground support equipment can be added or deleted and the impact on launch rate, facility utilization, or other factors measured as desired. This same computer simulation technology can be used to simulate manufacturing, engineering, commercial, or business processes. The technology does not require an 'army' of software engineers to develop and operate, but instead can be used by the layman with only a minimal amount of training. Instead of making changes to a process and realizing the results after the fact, with computer simulation, changes can be made and processes perfected before they are implemented.

  20. Algorithms and Heuristics for Scalable Betweenness Centrality Computation on Multi-GPU Systems

    OpenAIRE

    Vella, Flavio; Carbone, Giancarlo; Bernaschi, Massimo

    2016-01-01

    Betweenness Centrality (BC) is steadily growing in popularity as a metric of the influence of a vertex in a graph. The BC score of a vertex is proportional to the number of all-pairs shortest paths passing through it. However, complete and exact BC computation for a large-scale graph is an extraordinary challenge that requires high performance computing techniques to provide results in a reasonable amount of time. Our approach combines bi-dimensional (2-D) decomposition of the graph and mult...
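
    Betweenness centrality as defined above can be computed exactly on small graphs with Brandes' algorithm, which is what multi-GPU implementations parallelise over source vertices and graph partitions. A CPU reference using networkx is shown below for orientation only; it is not the multi-GPU code from the paper, and the random test graph is an arbitrary choice.

        import networkx as nx

        # Exact betweenness centrality (Brandes' algorithm) on a small test graph.
        g = nx.erdos_renyi_graph(n=200, p=0.05, seed=42)
        bc = nx.betweenness_centrality(g, normalized=True)

        # The most "influential" vertices lie on the largest share of shortest paths.
        top = sorted(bc, key=bc.get, reverse=True)[:5]
        print(top, [round(bc[v], 4) for v in top])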

  1. Clinical radiodiagnosis of metastases of central lung cancer in regional lymph nodes using computers

    International Nuclear Information System (INIS)

    On the basis of literature data and clinical examination (112 patients), methods of clinical radiodiagnosis of metastases of central lung cancer in regional lymph nodes using computers were developed. The methods were tested on control clinical material (110 patients). Using computers (Bayes and Wald methods), 57.3% and 65.5% correct answers, respectively, were obtained, which is 14.6% and 22.8% higher than the level achieved by clinical diagnosis of metastases. Diagnostic errors are analysed. Complexes of clinical-radiological signs indicative of metastases are outlined.

  2. Contributions to Parallel Simulation of Equation-Based Models on Graphics Processing Units

    OpenAIRE

    Stavåker, Kristian

    2011-01-01

    In this thesis we investigate techniques and methods for parallel simulation of equation-based, object-oriented (EOO) Modelica models on graphics processing units (GPUs). Modelica is being developed through an international effort via the Modelica Association. With Modelica it is possible to build computationally heavy models; simulating such models, however, might take a considerable amount of time. Therefore, techniques for utilizing parallel multi-core architectures for simulation are desirable...

  3. Acceleration of Early-Photon Fluorescence Molecular Tomography with Graphics Processing Units

    OpenAIRE

    Xin Wang; Bin Zhang; Xu Cao; Fei Liu; Jianwen Luo; Jing Bai

    2013-01-01

    Fluorescence molecular tomography (FMT) with early photons can improve the spatial resolution and fidelity of the reconstructed results. However, its computational scale is large, which limits its applications. In this paper, we introduce an acceleration strategy for early-photon FMT with graphics processing units (GPUs). Following this procedure, the whole FMT solution was divided into several modules and the time consumption of each module was studied. In this strategy, two most...

  4. Central Nervous System Based Computing Models for Shelf Life Prediction of Soft Mouth Melting Milk Cakes

    Directory of Open Access Journals (Sweden)

    Gyanendra Kumar Goyal

    2012-04-01

    Full Text Available This paper presents the latency and potential of central nervous system based intelligent computing models for detecting the shelf life of soft mouth melting milk cakes stored at 10 °C. Soft mouth melting milk cakes are an exquisite sweetmeat cuisine made out of heat- and acid-thickened, solidified sweetened milk. In today's highly competitive market consumers look for good quality food products. Shelf life is a good and accurate indicator of food quality and safety. To achieve good quality of food products, detection of shelf life is important. A central nervous system based intelligent computing model was developed which detected a shelf life of 19.82 days, as against the experimental shelf life of 21 days.

  5. Software Graphics Processing Unit (sGPU) for Deep Space Applications

    Science.gov (United States)

    McCabe, Mary; Salazar, George; Steele, Glen

    2015-01-01

    A graphics processing capability will be required for deep space missions and must include a range of applications, from safety-critical vehicle health status to telemedicine for crew health. However, preliminary radiation testing of commercial graphics processing cards suggests they cannot operate in the deep space radiation environment. Investigation into a Software Graphics Processing Unit (sGPU) comprised of commercial-equivalent radiation hardened/tolerant single board computers, field programmable gate arrays, and safety-critical display software shows promising results. Preliminary performance of approximately 30 frames per second (FPS) has been achieved. Use of multi-core processors may provide a significant increase in performance.

  6. Central Nervous System Based Computing Models for Shelf Life Prediction of Soft Mouth Melting Milk Cakes

    OpenAIRE

    Gyanendra Kumar Goyal; Sumit Goyal

    2012-01-01

    This paper presents the latency and potential of central nervous system based intelligent computing models for detecting the shelf life of soft mouth melting milk cakes stored at 10 °C. Soft mouth melting milk cakes are an exquisite sweetmeat cuisine made out of heat- and acid-thickened, solidified sweetened milk. In today's highly competitive market consumers look for good quality food products. Shelf life is a good and accurate indicator of food quality and safety. To achieve g...

  7. Fermilab Central Computing Facility: Energy conservation report and mechanical systems design optimization and cost analysis study

    Energy Technology Data Exchange (ETDEWEB)

    Krstulovich, S.F.

    1986-11-12

    This report is developed as part of the Fermilab Central Computing Facility Project Title II Design Documentation Update under the provisions of DOE Document 6430.1, Chapter XIII-21, Section 14, paragraph a. As such, it concentrates primarily on HVAC mechanical systems design optimization and cost analysis and should be considered as a supplement to the Title I Design Report dated March 1986, wherein energy-related issues are discussed pertaining to building envelope and orientation as well as electrical systems design.

  8. Beginner's manual of TSS terminal in connection with the INS central computer

    International Nuclear Information System (INIS)

    One of the facilities of the newly installed INS central computer system, FACOM M-180IIAD, is the TSS (Time Sharing System) service. It seemed necessary to prepare a manual that makes it easy for beginners to use the TSS terminal without special education or knowledge. This manual describes how to handle the terminal, with several basic and practical examples, and gives an explanation of selected important commands. (author)

  9. Role of centralized review processes for making reimbursement decisions on new health technologies in Europe

    Directory of Open Access Journals (Sweden)

    Stafinski T

    2011-08-01

    Full Text Available Tania Stafinski1, Devidas Menon2, Caroline Davis1, Christopher McCabe3; 1Health Technology and Policy Unit, 2Health Policy and Management, School of Public Health, University of Alberta, Edmonton, Alberta, Canada; 3Academic Unit of Health Economics, Leeds Institute for Health Sciences, University of Leeds, Leeds, UK. Background: The purpose of this study was to compare centralized reimbursement/coverage decision-making processes for health technologies in 23 European countries, according to: mandate, authority, structure, and policy options; mechanisms for identifying, selecting, and evaluating technologies; clinical and economic evidence expectations; committee composition, procedures, and factors considered; available conditional reimbursement options for promising new technologies; and the manufacturers' roles in the process. Methods: A comprehensive review of publicly available information from the peer-reviewed literature (using a variety of bibliographic databases) and gray literature (e.g., working papers, committee reports, presentations, and government documents) was conducted. Policy experts in each of the 23 countries were also contacted. All information collected was reviewed by two independent researchers. Results: Most European countries have established centralized reimbursement systems for making decisions on health technologies. However, the scope of technologies considered, as well as the processes for identifying, selecting, and reviewing them, varies. All systems include an assessment of clinical evidence, compiled in accordance with their own guidelines or internationally recognized published ones. In addition, most systems require an economic evaluation. The quality of such information is typically assessed by content and methodological experts. Committees responsible for formulating recommendations or decisions are multidisciplinary. While the criteria used by committees appear transparent, how they are operationalized during deliberations

  10. Research on Three Dimensional Computer Assistance Assembly Process Design System

    Institute of Scientific and Technical Information of China (English)

    HOU Wenjun; YAN Yaoqi; DUAN Wenjia; SUN Hanxu

    2006-01-01

    Computer-aided process planning will certainly play a significant role in the success of enterprise informationization, and 3-dimensional design will promote 3-dimensional process planning. This article analyzes the current situation and problems of assembly process planning, presents a 3-dimensional computer-aided assembly process planning system (3D-VAPP), and investigates product information extraction, assembly sequence and path planning in visual interactive assembly process design, dynamic simulation of assembly and process verification, assembly animation output and automatic exploded-view generation, interactive craft filling and craft knowledge management, etc. It also gives a multi-layer collision detection and multi-perspective automatic camera switching algorithm. Experiments were done to validate the feasibility of the technology and algorithms, which established a foundation for 3-dimensional computer-aided process planning.

  11. Accelerating Image Reconstruction in Three-Dimensional Optoacoustic Tomography on Graphics Processing Units

    CERN Document Server

    Wang, Kun; Kao, Yu-Jiun; Chou, Cheng-Ying; Oraevsky, Alexander A; Anastasio, Mark A; 10.1118/1.4774361

    2013-01-01

    Purpose: Optoacoustic tomography (OAT) is inherently a three-dimensional (3D) inverse problem. However, most studies of OAT image reconstruction still employ two-dimensional (2D) imaging models. One important reason is that 3D image reconstruction is computationally burdensome. The aim of this work is to accelerate existing image reconstruction algorithms for 3D OAT by use of parallel programming techniques. Methods: Parallelization strategies are proposed to accelerate a filtered backprojection (FBP) algorithm and two different pairs of projection/backprojection operations that correspond to two different numerical imaging models. The algorithms are designed to fully exploit the parallel computing power of graphics processing units (GPUs). In order to evaluate the parallelization strategies for the projection/backprojection pairs, an iterative image reconstruction algorithm is implemented. Computer-simulation and experimental studies are conducted to investigate the computational efficiency and numerical a...

  12. Artificial intelligence, expert systems, computer vision, and natural language processing

    Science.gov (United States)

    Gevarter, W. B.

    1984-01-01

    An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, natural language processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

  13. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    OpenAIRE

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core.
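
    As a rough illustration of the data parallelism such GPU codes exploit (this is not the authors' implementation), the following NumPy sketch evaluates all-atom Lennard-Jones forces so that each atom's force row is independent of the others; the epsilon/sigma parameters and random positions are invented for demonstration.

```python
# Minimal CPU sketch (assumed parameters) of the per-atom parallelism that a GPU MD
# code exploits: every atom's Lennard-Jones force can be accumulated independently.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Return (N, 3) Lennard-Jones forces for positions pos of shape (N, 3)."""
    disp = pos[:, None, :] - pos[None, :, :]          # pairwise displacement vectors
    r2 = np.sum(disp**2, axis=-1)
    np.fill_diagonal(r2, np.inf)                      # ignore self-interaction
    inv_r6 = (sigma**2 / r2) ** 3
    # F_i = sum_j 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * (r_i - r_j)
    coeff = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2
    return np.sum(coeff[:, :, None] * disp, axis=1)

pos = np.random.default_rng(0).uniform(0.0, 5.0, size=(64, 3))
forces = lj_forces(pos)
print(forces.shape)  # (64, 3); on a GPU each row would typically map to one thread
```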

  14. Graphics processing units accelerated semiclassical initial value representation molecular dynamics

    OpenAIRE

    Tamascelli, Dario; Dambrosio, Francesco S.; Conte, Riccardo; Ceotto, Michele

    2013-01-01

    This paper presents a Graphics Processing Units (GPUs) implementation of the Semiclassical Initial Value Representation (SC-IVR) propagator for vibrational molecular spectroscopy calculations. The time-averaging formulation of the SC-IVR for power spectrum calculations is employed. Details about the GPU implementation of the semiclassical code are provided. Four molecules with an increasing number of atoms are considered and the GPU-calculated vibrational frequencies perfectly match the bench...

  15. Distributed match-making for processes in computer networks

    OpenAIRE

    Mullender, Sape; Vitányi, Paul

    1986-01-01

    In the very large multiprocessor systems and, on a grander scale, the computer networks now emerging, processes are not tied to fixed processors but run on processors taken from a pool of processors. Processors are released when a process dies, migrates, or crashes. In distributed operating systems using the service concept, processes can be clients asking for a service, servers giving a service, or both. Establishing communication between a process asking for a service and a proce...

  16. Computational Tools for Accelerating Carbon Capture Process Development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, David; Sahinidis, N V; Cozad, A; Lee, A; Kim, H; Morinelly, J; Eslick, J; Yuan, Z

    2013-06-04

    This presentation reports the development of advanced computational tools to accelerate next-generation technology development. These tools are intended to develop an optimized process using rigorous models. They include: Process Models; Simulation-Based Optimization; Optimized Process; Uncertainty Quantification; Algebraic Surrogate Models; and Superstructure Optimization (Determine Configuration).

  17. Proton computed tomography from multiple physics processes

    Science.gov (United States)

    Bopp, C.; Colin, J.; Cussol, D.; Finck, Ch; Labalme, M.; Rousseau, M.; Brasse, D.

    2013-10-01

    Proton CT (pCT) nowadays aims at improving hadron therapy treatment planning by mapping the relative stopping power (RSP) of materials with respect to water. The RSP depends mainly on the electron density of the materials. The main information used is the energy of the protons. However, during a pCT acquisition, the spatial and angular deviation of each particle is recorded and the information about its transmission is implicitly available. The potential use of those observables in order to get information about the materials is being investigated. Monte Carlo simulations of protons sent into homogeneous materials were performed, and the influence of the chemical composition on the outputs was studied. A pCT acquisition of a head phantom scan was simulated. Brain lesions with the same electron density but different concentrations of oxygen were used to evaluate the different observables. Tomographic images from the different physics processes were reconstructed using a filtered back-projection algorithm. Preliminary results indicate that information is present in the reconstructed images of transmission and angular deviation that may help differentiate tissues. However, the statistical uncertainty on these observables generates further challenge in order to obtain an optimal reconstruction and extract the most pertinent information.
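
    As a hedged illustration of the filtered back-projection step mentioned above (not the authors' reconstruction code), the short Python sketch below reconstructs a simulated sinogram with scikit-image; the Shepp-Logan phantom, scaling factor, and angle grid are assumptions chosen purely for demonstration.

```python
# Hedged sketch: filtered back-projection of a simulated sinogram with scikit-image,
# standing in for the FBP step applied to the pCT observables described above.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)        # small phantom for speed
theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles in degrees

sinogram = radon(phantom, theta=theta)                # forward projections
reconstruction = iradon(sinogram, theta=theta)        # ramp-filtered back-projection

error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```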

  18. Fluid Management Plan Central Nevada Test Area Corrective Action Unit 443

    International Nuclear Information System (INIS)

    The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office initiated the Offsites Project to characterize the risk posed to human health and the environment as a result of underground nuclear testing at sites in Alaska, Colorado, Mississippi, Nevada, and New Mexico. Responsibility for environmental restoration of the sites that constitute the Offsites Project was transferred from the DOE Office of Environmental Management to the DOE Office of Legacy Management (LM) on October 1, 2006. The scope of this Fluid Management Plan (FMP) is to support subsurface investigations at the Central Nevada Test Area (CNTA) Corrective Action Unit (CAU) 443, in accordance with the Federal Facility Agreement and Consent Order (FFACO) (1996). The subsurface CAU 443 is associated with the underground nuclear testing conducted at UC-1 and is located approximately 30 miles north of Warm Springs in Nye County, Nevada.

  19. Software used at the central diagnostic unit for main circulation pump diagnostics

    International Nuclear Information System (INIS)

    The software used at the central diagnostic unit consists of 6 basic software packages for in-service vibro-acoustic diagnostics of the main circulation pump. The programs are designed for evaluating vibrations of the main circulation pump, evaluating spectra, drawing orbits from two orthogonal accelerometers of deflection sensors for monitoring the time development of the frequency spectra of nonstationary signals, for detecting slips of an asynchronous motor, and for calculating estimates of the frequency spectra. Four more software packages were prepared for the analysis of the pulse components of signals, for determining the basic statistical parameters of the signals, for assessing correct operation of the regulating assemblies of WWER-440 reactors in the free drop mode, and for a quick frequency spectrum estimate for the purposes of operative evaluation. (J.B.). 5 refs

  20. Design of Central Management & Control Unit for Onboard High-Speed Data Handling System

    Institute of Scientific and Technical Information of China (English)

    LI Yan-qin; JIN Sheng-zhen; NING Shu-nian

    2007-01-01

    The Main Optical Telescope (MOT) is an important payload of the Space Solar Telescope (SST), with various instruments and observation modes, and its real-time data handling, management, and control tasks are arduous. Based on advanced techniques from other countries, an improved structure of the onboard data handling system feasible for the SST is proposed. This article concentrates on the development of a Central Management & Control Unit (MCU) based on an FPGA and a DSP. By reconfiguring the FPGA and DSP programs, the prototype can perform different tasks, so the inheritability of the whole system is improved. The completed dual-channel prototype proves that the system meets all requirements of the MOT. Its high reliability and safety features also meet the requirements of harsh conditions such as mine detection.

  1. Computer Data Processing of the Hydrogen Peroxide Decomposition Reaction

    Institute of Scientific and Technical Information of China (English)

    余逸男; 胡良剑

    2003-01-01

    Two methods of computer data processing, linear fitting and nonlinear fitting, are applied to compute the rate constant of the hydrogen peroxide decomposition reaction. The results indicate that the new methods not only remove the need to measure the final oxygen volume but also markedly reduce the fitting errors.
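
    A minimal Python sketch of the two fitting strategies, under the assumption of first-order kinetics V(t) = V_inf(1 - exp(-kt)); the data values, and the use of SciPy rather than the authors' own software, are illustrative assumptions.

```python
# Hypothetical illustration of the two fitting strategies for a first-order H2O2
# decomposition, V(t) = V_inf * (1 - exp(-k*t)). Data values are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])      # min (hypothetical)
V = np.array([3.9, 7.2, 10.0, 12.4, 14.4, 16.1])    # mL O2 evolved (hypothetical)

# (a) Classical linear fit: requires the final oxygen volume V_inf to be measured.
V_inf_measured = 30.0                                # mL (hypothetical)
slope, intercept, r, *_ = linregress(t, np.log(V_inf_measured - V))
k_linear = -slope

# (b) Nonlinear fit: V_inf is a free parameter, so it need not be measured.
model = lambda t, V_inf, k: V_inf * (1.0 - np.exp(-k * t))
(V_inf_fit, k_nonlinear), _ = curve_fit(model, t, V, p0=(20.0, 0.1))

print(f"k (linear, V_inf measured) = {k_linear:.4f} 1/min")
print(f"k (nonlinear, V_inf fitted) = {k_nonlinear:.4f} 1/min, V_inf = {V_inf_fit:.1f} mL")
```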

  2. Computer and control applications in a vegetable processing plant

    Science.gov (United States)

    There are many advantages to the use of computers and control in the food industry. Software in the food industry takes two forms: general-purpose commercial computer software and software for specialized applications, such as drying and thermal processing of foods. Many applied simulation models for d...

  3. Parallel Computer Vision Algorithms for Graphics Processing Units

    OpenAIRE

    Berjón Díez, Daniel

    2016-01-01

    The evolution of smartphones equipped with digital cameras is creating a growing demand for ever more complex applications that require real-time computer vision algorithms; since the size of video signals only keeps increasing while the performance of single-core processors has stagnated, new computer vision algorithms must be designed to be parallel so that they can run on multiple processo...

  4. Well Installation Report for Corrective Action Unit 443, Central Nevada Test Area, Nye County, Nevada

    International Nuclear Information System (INIS)

    A Corrective Action Investigation (CAI) was performed in several stages from 1999 to 2003, as set forth in the ''Corrective Action Investigation Plan for the Central Nevada Test Area Subsurface Sites, Corrective Action Unit 443'' (DOE/NV, 1999). Groundwater modeling was the primary activity of the CAI. Three phases of modeling were conducted for the Faultless underground nuclear test. The first phase involved the gathering and interpretation of geologic and hydrogeologic data, and inputting the data into a three-dimensional numerical model to depict groundwater flow. The output from the groundwater flow model was used in a transport model to simulate the migration of a radionuclide release (Pohlmann et al., 2000). The second phase of modeling (known as a Data Decision Analysis [DDA]) occurred after NDEP reviewed the first model. This phase was designed to respond to concerns regarding model uncertainty (Pohll and Mihevc, 2000). The third phase of modeling updated the original flow and transport model to incorporate the uncertainty identified in the DDA, and focused the model domain on the region of interest to the transport predictions. This third phase culminated in the calculation of contaminant boundaries for the site (Pohll et al., 2003). Corrective action alternatives were evaluated and an alternative was submitted in the ''Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 443: Central Nevada Test Area-Subsurface'' (NNSA/NSO, 2004). Based on the results of this evaluation, the preferred alternative for CAU 443 is Proof-of-Concept and Monitoring with Institutional Controls. This alternative was judged to meet all requirements for the technical components evaluated and will control inadvertent exposure to contaminated groundwater at CAU 443

  5. Computer Forensics Field Triage Process Model

    Directory of Open Access Journals (Sweden)

    Marcus K. Rogers

    2006-06-01

    Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability that once the initial field triage is concluded, the system(s)/storage media be transported back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real world cases, and its investigative importance and pragmatic approach has been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model’s forensic soundness, investigative support capabilities and practical considerations.

  6. Seismic proving test of process computer systems with a seismic floor isolation system

    Energy Technology Data Exchange (ETDEWEB)

    Fujimoto, S.; Niwa, H.; Kondo, H. [Toshiba Corp., Kawasaki (Japan)] [and others

    1995-12-01

    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results for evaluating functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system to isolate the horizontal motion was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table of NUPEC. From the test results, the functional capabilities during large earthquakes of computer systems with a seismic floor isolation system were verified.

  7. Seismic proving test of process computer systems with a seismic floor isolation system

    International Nuclear Information System (INIS)

    The authors have carried out seismic proving tests for process computer systems as a Nuclear Power Engineering Corporation (NUPEC) project sponsored by the Ministry of International Trade and Industry (MITI). This paper presents the seismic test results for evaluating functional capabilities of process computer systems with a seismic floor isolation system. The seismic floor isolation system to isolate the horizontal motion was composed of a floor frame (13 m x 13 m), ball bearing units, and spring-damper units. A series of seismic excitation tests was carried out using a large-scale shaking table of NUPEC. From the test results, the functional capabilities during large earthquakes of computer systems with a seismic floor isolation system were verified

  8. Theoretic computing model of combustion process of asphalt smoke

    Institute of Scientific and Technical Information of China (English)

    HUANG Rui; CHAI Li-yuan; HE De-wen; PENG Bing; WANG Yun-yan

    2005-01-01

    Based on data and methods from the research literature, a discretized mathematical model of the combustion process of asphalt smoke is established by theoretical analysis. Through computer programming, the dynamic combustion process of asphalt smoke is calculated to simulate an experimental model. The computed result shows that the temperature and concentration of asphalt smoke influence its burning temperature in an approximately linear manner, and the quantity of fuel consumed to ignite the asphalt smoke can be estimated from these two factors.

  9. Computer-based tools for radiological assessment of emergencies at nuclear facilities in the United Kingdom

    International Nuclear Information System (INIS)

    HMNII is responsible for regulating the activities at licensed nuclear sites in the UK. In the event of an emergency being declared at any of these sites, HMNII would mobilize an emergency response team. This team would, inter alia, monitor the activities of the operator at the affected site, assess the actual or potential radiological consequences of the event, and provide briefings to senior members of government. Central to this response to an emergency is the assessment effort that would be provided by the Bootle Headquarters Emergency Room. To facilitate the assessments carried out at Bootle, computer-based tools have been developed. The major licensed nuclear facilities in the UK fall into two broad groups, civil power reactors and nuclear chemical plant. These two types of facilities pose different levels of radiological hazard as a result of their different radioactive inventories and the different physical processes in operation. Furthermore, these two groups of facilities pose different problems in assessing the radiological hazard in emergency situations. This paper describes the differences in approach used in designing and using computer-based tools to assess the radiological consequences of emergencies at power reactor and chemical plant sites

  10. Mathematical modelling in the computer-aided process planning

    Science.gov (United States)

    Mitin, S.; Bochkarev, P.

    2016-04-01

    This paper presents new approaches to organizing manufacturing preparation and mathematical models related to the development of a computer-aided multi-product process planning (CAMPP) system. The CAMPP system has some distinctive features compared to existing computer-aided process planning (CAPP) systems: fully formalized development of machining operations; the capacity to create and formalize the interrelationships among design, process planning, and process implementation; and procedures for taking real manufacturing conditions into account. The paper describes the structure of the CAMPP system and presents the mathematical models and methods used to formalize the design procedures.

  11. Oaks were the historical foundation genus of the east-central United States

    Science.gov (United States)

    Hanberry, Brice B.; Nowacki, Gregory J.

    2016-08-01

    Foundation tree species are dominant and define ecosystems. Because of the historical importance of oaks (Quercus) in the east-central United States, it was unlikely that oak associates, such as pines (Pinus), hickories (Carya) and chestnut (Castanea), rose to this status. We used 46 historical tree studies or databases (ca. 1620-1900) covering 28 states, 1.7 million trees, and 50% of the area of the eastern United States to examine the importance of oaks compared to pines, hickories, and chestnuts. Oak was the most abundant genus, ranging from 40% to 70% of total tree composition at the ecological province scale and generally increasing in dominance from east to west across this area. Pines, hickories, and chestnuts were co-dominant with oak only in limited areas; whether their perceived importance stems from local abundance or from perceptions resulting from inherited viewpoints, they decline from consideration when compared to the overwhelming oak abundance across this spatial extent. The open structure and high-light conditions of oak ecosystems uniquely supported species-rich understories. Loss of oak as a foundation genus has occurred with loss of open forest ecosystems at landscape scales.

  12. Fallout 137Cs in cultivated and noncultivated north central United States watersheds

    International Nuclear Information System (INIS)

    The cesium (137Cs) concentrations were measured in the soils and sediments of 14 watersheds, 7 cultivated and 7 noncultivated, in the North Central United States. The 137Cs concentration in watershed soils ranged from 56 to 149 nCi/m2, with cultivated watersheds averaging 75 nCi/m2 and noncultivated watersheds averaging 104 nCi/m2. The 137Cs concentration in the reservoir sediments ranged from 74 to 1,280 nCi/m2, with a mean of 676 nCi/m2 for the cultivated watersheds and 365 nCi/m2 for the noncultivated watersheds. The 137Cs concentrations per unit area in sediments were 0.8 to 18.7 times greater than those found in the contributing watershed soils. This indicated that some 137Cs is moving within the watersheds and that the reservoirs are acting as ''traps'' or ''sinks.'' The factors accounting for the variation in 137Cs concentration in the soils and sediments of the watersheds are (i) the erosion potential of the watershed, (ii) the sites for adsorption of 137Cs, and (iii) the input of radioactivity into the watershed

  13. Groundwater flow functioning in arid zones with thick volcanic aquifer units: North-Central Mexico

    International Nuclear Information System (INIS)

    Population growth in the arid zones of Mexico has led to a 450% increase, relative to the 1950s, in the number of cities with more than 50,000 inhabitants. Due to the arid nature of the environment, the once-sufficient springs and shallow groundwater are becoming inadequate to supply those cities. An answer to this problem lies with the sustainable development of deep groundwater. The geological features of the country include fractured volcanic aquifer units that are more than 1,500 m thick and are regionally continuous over several hundred thousand square kilometres. Groundwater development decisions need to consider, over the long term, inter-basin groundwater flow and the need to prevent environmental impacts at distant sites hydraulically connected with extraction centres. Radiocarbon is an excellent tool that has initially been applied to characterize groundwater in thick aquifer units in central Mexico, providing evidence on the hierarchy of flow (local/regional) and on water age, from which the distance to regional recharge was inferred. Radiocarbon also helps constrain flow path length, which can then be used to characterize inter-basin groundwater communication, and it has large potential for future research and water management applications. (author)

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per RAW event. The central collisions are more complex and...

  15. Distributed trace using central performance counter memory

    Science.gov (United States)

    Satterfield, David L.; Sexton, James C.

    2013-01-22

    A plurality of processing cores and a central storage unit having at least one memory are connected in a daisy chain manner, forming a daisy chain ring layout on an integrated chip. At least one of the plurality of processing cores places trace data on the daisy chain connection for transmitting the trace data to the central storage unit, and the central storage unit detects the trace data and stores the trace data in the memory co-located with the central storage unit.
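
    The following toy Python simulation (an assumption-laden sketch, not the patented hardware) illustrates the daisy-chain idea: cores drop trace packets onto the ring, packets advance one hop per cycle, and the central storage unit drains whatever reaches the end of the chain into its memory.

```python
# Toy simulation of a daisy-chain trace ring: cores forward packets hop by hop,
# and the central storage unit collects trace packets into its memory.
from collections import deque

class TraceRing:
    def __init__(self, n_cores):
        self.links = [deque() for _ in range(n_cores + 1)]  # one link per hop; last feeds storage
        self.memory = []                                    # central storage unit's trace memory

    def emit(self, core_id, payload):
        """A core places trace data on its outgoing daisy-chain link."""
        self.links[core_id].append((core_id, payload))

    def step(self):
        """Advance at most one packet per link one hop; packets at the end are stored."""
        for hop in range(len(self.links) - 1, 0, -1):
            if self.links[hop - 1]:
                self.links[hop].append(self.links[hop - 1].popleft())
        while self.links[-1]:
            self.memory.append(self.links[-1].popleft())

ring = TraceRing(n_cores=4)
ring.emit(0, "branch-mispredict")
ring.emit(2, "cache-miss")
for _ in range(6):          # enough cycles for both packets to reach central storage
    ring.step()
print(ring.memory)          # [(2, 'cache-miss'), (0, 'branch-mispredict')]
```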

  16. Ontology-based metrics computation for business process analysis

    OpenAIRE

    Carlos Pedrinaci; John Domingue

    2009-01-01

    Business Process Management (BPM) aims to support the whole life-cycle necessary to deploy and maintain business processes in organisations. Crucial within the BPM lifecycle is the analysis of deployed processes. Analysing business processes requires computing metrics that can help determining the health of business activities and thus the whole enterprise. However, the degree of automation currently achieved cannot support the level of reactivity and adaptation demanded by businesses. In thi...

  17. Software-based Approximate Computation Of Signal Processing Tasks

    OpenAIRE

    Anastasia, D.

    2012-01-01

    This thesis introduces a new dimension in performance scaling of signal processing systems by proposing software frameworks that achieve increased processing throughput when producing approximate results. The first contribution of this work is a new theory for accelerated computation of multimedia processing based on the concept of tight packing (Chapter 2). Usage of this theory accelerates small-dynamic-range linear signal processing tasks (such as convolution and transform decomposition) th...

  18. Value of computed tomography and magnetic resonance imaging in diagnosis of central nervous system

    International Nuclear Information System (INIS)

    Systemic sclerosis is an autoimmune connective tissue disease characterized by vascular abnormalities and fibrotic changes in the skin and internal organs. The aim of the study was to investigate involvement of the central nervous system in systemic sclerosis and the value of computed tomography (CT) and magnetic resonance imaging (MRI) in evaluating central nervous system involvement in systemic sclerosis. 26 patients with neuropsychiatric symptoms in the course of systemic sclerosis were investigated for central nervous system abnormalities by computed tomography (CT) and magnetic resonance imaging (MRI). Among these 26 symptomatic patients, lesions were present in brain MRI and CT examinations in 54% and 50% of patients, respectively. The most common findings (in 46% of all patients) were signs of cortical and subcortical atrophy, seen in both MRI and CT. Single and multiple focal lesions, predominantly in the white matter, were detected by MRI significantly more frequently than by CT (62% and 15% of patients, respectively). These data indicate that brain involvement is common in patients with severe systemic sclerosis. MRI shows significantly higher sensitivity than CT in detecting focal brain lesions in these patients. (author)

  19. COST OF PROCESSING CARROT PRODUCTION IN WEST CENTRAL MICHIGAN

    OpenAIRE

    Dartt, Barbara; Black, J. Roy; Breinling, Jim; Morrone, Vicki

    2002-01-01

    This bulletin represents a tool that can help producers, consultants, educators, and agribusinesses working with producers estimate costs of production and expected profit based on "typical" carrot management strategies found in west central Michigan. The budget included in this bulletin will allow users to revise inputs based on their management strategies and calculate their expected cost and profit. This flexibility provides a decision aid to search for systems that generate higher net ret...

  20. Proceedings: Distributed digital systems, plant process computers, and networks

    International Nuclear Information System (INIS)

    These are the proceedings of a workshop on Distributed Digital Systems, Plant Process Computers, and Networks held in Charlotte, North Carolina on August 16--18, 1994. The purpose of the workshop was to provide a forum for technology transfer, technical information exchange, and education. The workshop was attended by more than 100 representatives of electric utilities, equipment manufacturers, engineering service organizations, and government agencies. The workshop consisted of three days of presentations, exhibitions, a panel discussion and attendee interactions. Original plant process computers at the nuclear power plants are becoming obsolete resulting in increasing difficulties in their effectiveness to support plant operations and maintenance. Some utilities have already replaced their plant process computers by more powerful modern computers while many other utilities intend to replace their aging plant process computers in the future. Information on recent and planned implementations are presented. Choosing an appropriate communications and computing network architecture facilitates integrating new systems and provides functional modularity for both hardware and software. Control room improvements such as CRT-based distributed monitoring and control, as well as digital decision and diagnostic aids, can improve plant operations. Commercially available digital products connected to the plant communications system are now readily available to provide distributed processing where needed. Plant operations, maintenance activities, and engineering analyses can be supported in a cost-effective manner. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database

  1. Opportunities in the United States' gas processing industry

    International Nuclear Information System (INIS)

    To keep up with the increasing amount of natural gas that will be required by the market and with the decreasing quality of the gas at the well-head, the gas processing industry must look to new technologies to stay competitive. The Gas Research Institute (GRI) is managing a research, development, design and deployment program that is projected to save the industry US$230 million/year in operating and capital costs from gas processing related activities in NGL extraction and recovery, dehydration, acid gas removal/sulfur recovery, and nitrogen rejection. Three technologies are addressed here. Multivariable Control (MVC) technology for predictive process control and optimization is installed or in design at fourteen facilities treating a combined total of over 30x10^9 normal cubic meters per year (BN m3/y) [1.1x10^12 standard cubic feet per year (Tcf/y)]. Simple paybacks are typically under 6 months. A new acid gas removal process based on n-formyl morpholine (NFM) is being field tested that offers 40-50% savings in operating costs and 15-30% savings in capital costs relative to a commercially available physical solvent. The GRI-MemCalcTM Computer Program for Membrane Separations and the GRI-Scavenger CalcBaseTM Computer Program for Scavenging Technologies are screening tools that engineers can use to determine the best practice for treating their gas. (au) 19 refs

  2. Potential energy savings and environmental impacts of energy efficiency standards for vapor compression central air conditioning units in China

    Energy Technology Data Exchange (ETDEWEB)

    Lu Wei [Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Tsinghua University, Beijing 100084 (China)]. E-mail: tjluwei@163.com

    2007-03-15

    Owing to the rapid development of the economy and the steady improvement of people's living standards, central air conditioning units are broadly used in China. This not only consumes a large amount of energy but also gives rise to adverse energy-related environmental issues. Energy efficiency standards are accepted, effective policy tools to reduce energy consumption and pollutant emissions. Recently, China issued two national energy efficiency standards, GB19577-2004 and GB19576-2004, for vapor compression central air conditioning units for the first time. This paper first reviews the two standards, and then establishes a mathematical model to evaluate the potential energy savings and environmental impacts of the standards. The estimated results indicate that implementing these standards will save substantial energy and greatly benefit the environment. It is therefore clearly important to implement energy efficiency standards for central air conditioning units in China.
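
    A back-of-the-envelope Python sketch of the kind of savings estimate such a model produces; the stock size, cooling load, operating hours, and COP values below are hypothetical, not figures from the paper or the GB standards.

```python
# Back-of-the-envelope model (all figures hypothetical) of the annual electricity
# savings from raising the minimum coefficient of performance (COP) of central
# air-conditioning units.
def annual_savings_gwh(stock_units, cooling_load_kw, hours_per_year,
                       cop_baseline, cop_standard):
    """Electricity saved per year (GWh) when the unit stock meets the new COP floor."""
    energy_baseline = stock_units * cooling_load_kw * hours_per_year / cop_baseline
    energy_standard = stock_units * cooling_load_kw * hours_per_year / cop_standard
    return (energy_baseline - energy_standard) / 1e6   # kWh -> GWh

# Hypothetical inputs: 200,000 units, 300 kW cooling load, 1,500 h/yr of operation.
print(f"{annual_savings_gwh(200_000, 300.0, 1_500, 4.2, 4.7):.0f} GWh/yr saved")
```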

  3. Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2008-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2008). Modeling Units of Assessment for Sharing Assessment Process Information: towards an Assessment Process Specification. In F. W. B. Li, J. Zhao, T. K. Shih, R. W. H. Lau, Q. Li & D. McLeod (Eds.), Advances in Web Based Learning - Proceedings of the 7th Inter

  4. Magma chamber processes in central volcanic systems of Iceland

    DEFF Research Database (Denmark)

    Þórarinsson, Sigurjón Böðvar; Tegner, Christian

    2009-01-01

    olivine basalts from Iceland that had undergone about 20% crystallisation of olivine, plagioclase and clinopyroxene and that the macrorhythmic units formed from thin magma layers not exceeding 200-300 m. Such a "mushy" magma chamber is akin to volcanic plumbing systems in settings of high magma supply...

  5. An Investigation of the Artifacts and Process of Constructing Computers Games about Environmental Science in a Fifth Grade Classroom

    Science.gov (United States)

    Baytak, Ahmet; Land, Susan M.

    2011-01-01

    This study employed a case study design (Yin, "Case study research, design and methods," 2009) to investigate the processes used by 5th graders to design and develop computer games within the context of their environmental science unit, using the theoretical framework of "constructionism." Ten fifth graders designed computer games using "Scratch"…

  6. The on-board computer in diagnosis of satellite power unit

    Science.gov (United States)

    Bel'giy, V. V.; Bugrovskiy, V. V.; Kovachich, Yu. V.; Petrov, B. N.; Shevyakov, A. A.

    Diagnosis of a space thermoemission power unit incorporating a Topaz-type reactor converter is hindered by the limited capability of the measurement system. The missing information is reconstructed by computation from the measurement data. Examples of dynamic-mode diagnosis with reconstruction of the temperature field information are given. The power unit diagnosis algorithms are implemented in the onboard computer, whose throughput is about 200,000 operations per second. Memory and computing requirements are determined for algorithms of different diagnostic depth. Results of a study of the necessary computer component redundancy are given for different models of system degradation. The redundancy level should ensure that the core of the computer system, with a minimally necessary 4K-word memory, remains in operation three years into the mission.

  7. Computers in Public Schools: Changing the Image with Image Processing.

    Science.gov (United States)

    Raphael, Jacqueline; Greenberg, Richard

    1995-01-01

    The kinds of educational technologies selected can make the difference between uninspired, rote computer use and challenging learning experiences. University of Arizona's Image Processing for Teaching Project has worked with over 1,000 teachers to develop image-processing techniques that provide students with exciting, open-ended opportunities for…

  8. Computer-Assisted Regulation of Emotional and Social Processes

    OpenAIRE

    Vanhala, Toni; Surakka, Veikko

    2008-01-01

    The current work presented a model for a computer system that supports the regulation of emotion related processes during exposure to provoking stimuli. We identified two main challenges for these kinds of systems. First, emotions as such are complex, multi-component processes that are measured with several complementary methods. The amount of

  9. Computer presentation in mineral processing by software comuputer packets

    OpenAIRE

    Krstev, Aleksandar; Krstev, Boris; Golomeov, Blagoj; Golomeova, Mirjana

    2009-01-01

    This paper shows the application of the software packages Minteh-1, Minteh-2 and Minteh-3, written in Visual Basic within Visual Studio, for presenting two products for some closed circuits of grinding-classifying processes. These methods make possible an appropriate, fast and reliable presentation of some complex circuits in mineral processing technologies.

  10. Cognitive Workload of Computerized Nursing Process in Intensive Care Units.

    Science.gov (United States)

    Dal Sasso, Grace Marcon; Barra, Daniela Couto Carvalho

    2015-08-01

    The aim of this work was to measure the cognitive workload required to complete the printed nursing process versus the computerized nursing process based on the International Classification for Nursing Practice in intensive care units. It is a quantitative, before-and-after quasi-experimental design, with a sample of 30 participants. Workload was assessed using the National Aeronautics and Space Administration Task Load Index. Six cognitive categories were measured. "Temporal demand" was the largest contributor to the cognitive workload, and in the "performance" category the printed nursing process scored higher than the computerized nursing process. It was concluded that the computerized nursing process contributes to a lower cognitive workload for nurses because it is a decision-support system based on the International Classification for Nursing Practice. The computerized nursing process, as a logical structure of data, information, diagnoses, interventions and results, becomes a reliable option for improving healthcare, because it can enhance safe nurse decision making with the intent of reducing harm and adverse events to patients in intensive care.
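
    For readers unfamiliar with the instrument, a weighted NASA-TLX workload score is simply a pairwise-weighted mean of the six subscale ratings; the Python sketch below (with invented ratings and weights, not the study's data) shows the arithmetic.

```python
# Illustrative NASA-TLX computation: ratings are 0-100 per subscale, and weights are
# the tallies from the 15 pairwise comparisons among the six subscales.
def nasa_tlx(ratings, weights):
    """Return the weighted NASA-TLX workload score."""
    assert set(ratings) == set(weights) and sum(weights.values()) == 15
    return sum(ratings[k] * weights[k] for k in ratings) / 15.0

ratings = {"mental": 70, "physical": 30, "temporal": 85,   # hypothetical values
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 3, "physical": 1, "temporal": 5,      # 15 pairwise comparisons
           "performance": 2, "effort": 3, "frustration": 1}
print(f"Weighted workload: {nasa_tlx(ratings, weights):.1f} / 100")
```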

  11. Use of simulation for planning the organization of the computational process on digital computing systems

    International Nuclear Information System (INIS)

    A technique is proposed for choosing how jobs are processed on a given computer system structure in real time. These tasks are accomplished, with both limited and unlimited memory buffers, by choosing dispatching disciplines and managing their use so as to meet the parameters of a given set of objectives. The characteristics of the computational process have been calculated using a simulation program designed and written in GPSS

  12. Translator-computer interaction in action:An observational process study of computer-aided translation

    OpenAIRE

    Bundgaard, Kristine; Christensen, Tina Paulsen; Schjoldager, Anne

    2016-01-01

    Though we lack empirically-based knowledge of the impact of computer-aided translation (CAT) tools on translation processes, it is generally agreed that all professional translators are now involved in some kind of translator-computer interaction (TCI), using O’Brien’s (2012) term. Taking a TCI perspective, this paper investigates the relationship between machines and humans in the field of translation, analysing a CAT process in which machine-translation (MT) technology was integrated into a...

  13. Splash, pop, sizzle: Information processing with phononic computing

    Directory of Open Access Journals (Sweden)

    Sophia R. Sklan

    2015-05-01

    Full Text Available Phonons, the quanta of mechanical vibration, are important to the transport of heat and sound in solid materials. Recent advances in the fundamental control of phonons (phononics have brought into prominence the potential role of phonons in information processing. In this review, the many directions of realizing phononic computing and information processing are examined. Given the relative similarity of vibrational transport at different length scales, the related fields of acoustic, phononic, and thermal information processing are all included, as are quantum and classical computer implementations. Connections are made between the fundamental questions in phonon transport and phononic control and the device level approach to diodes, transistors, memory, and logic.

  14. Computer Processing Of Tunable-Diode-Laser Spectra

    Science.gov (United States)

    May, Randy D.

    1991-01-01

    A tunable-diode-laser spectrometer measuring the transmission spectrum of a gas operates under the control of a computer, which also processes the measurement data. Measurements in three channels are processed into spectra. The computer controls the current supplied to the tunable diode laser, stepping it through small increments of wavelength while processing the spectral measurements at each step. The program includes a library of routines for general manipulation and plotting of spectra, least-squares fitting of direct-transmission and harmonic-absorption spectra, and deconvolution for determination of the laser linewidth and for removal of instrumental broadening of spectral lines.
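
    As a hedged sketch of the least-squares fitting step (not the spectrometer's actual routines), the following Python example fits a single Gaussian absorption line in direct transmission with SciPy; the line parameters, baseline, and noise level are assumptions.

```python
# Hedged sketch of direct-transmission least-squares fitting: one Gaussian line,
# T(nu) = baseline * exp(-depth * g(nu)), fitted to a simulated scan.
import numpy as np
from scipy.optimize import curve_fit

def transmission(nu, depth, center, width, baseline):
    """Beer-Lambert transmission for one Gaussian line on a flat baseline."""
    gauss = np.exp(-0.5 * ((nu - center) / width) ** 2)
    return baseline * np.exp(-depth * gauss)

rng = np.random.default_rng(1)
nu = np.linspace(-1.0, 1.0, 400)                      # wavenumber offset (cm^-1)
data = transmission(nu, 0.8, 0.05, 0.12, 1.0) + rng.normal(0.0, 0.01, nu.size)

popt, pcov = curve_fit(transmission, nu, data, p0=(0.5, 0.0, 0.1, 1.0))
depth, center, width, baseline = popt
print(f"line center = {center:.3f} cm^-1, optical depth = {depth:.2f}")
```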

  15. Splash, pop, sizzle: Information processing with phononic computing

    International Nuclear Information System (INIS)

    Phonons, the quanta of mechanical vibration, are important to the transport of heat and sound in solid materials. Recent advances in the fundamental control of phonons (phononics) have brought into prominence the potential role of phonons in information processing. In this review, the many directions of realizing phononic computing and information processing are examined. Given the relative similarity of vibrational transport at different length scales, the related fields of acoustic, phononic, and thermal information processing are all included, as are quantum and classical computer implementations. Connections are made between the fundamental questions in phonon transport and phononic control and the device level approach to diodes, transistors, memory, and logic. 

  16. Applications of evolutionary computation in image processing and pattern recognition

    CERN Document Server

    Cuevas, Erik; Perez-Cisneros, Marco

    2016-01-01

    This book presents the use of efficient Evolutionary Computation (EC) algorithms for solving diverse real-world image processing and pattern recognition problems. It provides an overview of the different aspects of evolutionary methods in order to enable the reader in reaching a global understanding of the field and, in conducting studies on specific evolutionary techniques that are related to applications in image processing and pattern recognition. It explains the basic ideas of the proposed applications in a way that can also be understood by readers outside of the field. Image processing and pattern recognition practitioners who are not evolutionary computation researchers will appreciate the discussed techniques beyond simple theoretical tools since they have been adapted to solve significant problems that commonly arise on such areas. On the other hand, members of the evolutionary computation community can learn the way in which image processing and pattern recognition problems can be translated into an...

  17. Quantum information processing in nanostructures Quantum optics; Quantum computing

    CERN Document Server

    Reina-Estupinan, J H

    2002-01-01

    Since information has been regarded as a physical entity, the field of quantum information theory has blossomed. This brings novel applications, such as quantum computation. This field has attracted the attention of numerous researchers with backgrounds ranging from computer science, mathematics and engineering, to the physical sciences. Thus, we now have an interdisciplinary field where great efforts are being made in order to build devices that should allow for the processing of information at a quantum level, and also in the understanding of the complex structure of some physical processes at a more basic level. This thesis is devoted to the theoretical study of structures at the nanometer-scale, 'nanostructures', through physical processes that mainly involve the solid-state and quantum optics, in order to propose reliable schemes for the processing of quantum information. Initially, the main results of quantum information theory and quantum computation are briefly reviewed. Next, the state-of-the-art of ...

  18. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation are...... widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately this is not...... been to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC...

  19. Improving management decision processes through centralized communication linkages

    Science.gov (United States)

    Simanton, D. F.; Garman, J. R.

    1985-01-01

    Information flow is a critical element to intelligent and timely decision-making. At NASA's Johnson Space Center the flow of information is being automated through the use of a centralized backbone network. The theoretical basis of this network, its implications to the horizontal and vertical flow of information, and the technical challenges involved in its implementation are the focus of this paper. The importance of the use of common tools among programs and some future concerns related to file transfer, graphics transfer, and merging of voice and data are also discussed.

  20. Evaluating historical climate and hydrologic trends in the Central Appalachian region of the United States

    Science.gov (United States)

    Gaertner, B. A.; Zegre, N.

    2015-12-01

    Climate change is surfacing as one of the most important environmental and social issues of the 21st century. Over the last 100 years, observations show increasing trends in global temperatures and intensity and frequency of precipitation events such as flooding, drought, and extreme storms. Global circulation models (GCM) show similar trends for historic and future climate indicators, albeit with geographic and topographic variability at regional and local scale. In order to assess the utility of GCM projections for hydrologic modeling, it is important to quantify how robust GCM outputs are compared to robust historical observations at finer spatial scales. Previous research in the United States has primarily focused on the Western and Northeastern regions due to dominance of snow melt for runoff and aquifer recharge but the impact of climate warming in the mountainous central Appalachian Region is poorly understood. In this research, we assess the performance of GCM-generated historical climate compared to historical observations primarily in the context of forcing data for macro-scale hydrologic modeling. Our results show significant spatial heterogeneity of modeled climate indices when compared to observational trends at the watershed scale. Observational data is showing considerable variability within maximum temperature and precipitation trends, with consistent increases in minimum temperature. The geographic, temperature, and complex topographic gradient throughout the central Appalachian region is likely the contributing factor in temperature and precipitation variability. Variable climate changes are leading to more severe and frequent climate events such as temperature extremes and storm events, which can have significant impacts on our drinking water supply, infrastructure, and health of all downstream communities.

  1. Corrective Action Plan for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    K. Campbell

    2000-04-01

    This Corrective Action Plan provides methods for implementing the approved corrective action alternative as provided in the Corrective Action Decision Document for the Central Nevada Test Area (CNTA), Corrective Action Unit (CAU) 417 (DOE/NV, 1999). The CNTA is located in the Hot Creek Valley in Nye County, Nevada, approximately 137 kilometers (85 miles) northeast of Tonopah, Nevada. The CNTA consists of three separate land withdrawal areas commonly referred to as UC-1, UC-3, and UC-4, all of which are accessible to the public. CAU 417 consists of 34 Corrective Action Sites (CASs). Results of the investigation activities completed in 1998 are presented in Appendix D of the Corrective Action Decision Document (DOE/NV, 1999). According to the results, the only Constituent of Concern at the CNTA is total petroleum hydrocarbons (TPH). Of the 34 CASs, corrective action was proposed for 16 sites in 13 CASs. In fiscal year 1999, a Phase I Work Plan was prepared for the construction of a cover on the UC-4 Mud Pit C to gather information on cover constructibility and to perform site management activities. With Nevada Division of Environmental Protection concurrence, the Phase I field activities began in August 1999. A multi-layered cover using a Geosynthetic Clay Liner as an infiltration barrier was constructed over the UC-4 Mud Pit. Some TPH impacted material was relocated, concrete monuments were installed at nine sites, signs warning of site conditions were posted at seven sites, and subsidence markers were installed on the UC-4 Mud Pit C cover. Results from the field activities indicated that the UC-4 Mud Pit C cover design was constructable and could be used at the UC-1 Central Mud Pit (CMP). However, because of the size of the UC-1 CMP this design would be extremely costly. An alternative cover design, a vegetated cover, is proposed for the UC-1 CMP.

  2. Discontinuous Galerkin methods on graphics processing units for nonlinear hyperbolic conservation laws

    CERN Document Server

    Fuhry, Martin; Krivodonova, Lilia

    2016-01-01

    We present a novel implementation of the modal discontinuous Galerkin (DG) method for hyperbolic conservation laws in two dimensions on graphics processing units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA). Both flexible and highly accurate, DG methods accommodate parallel architectures well as their discontinuous nature produces element-local approximations. High performance scientific computing suits GPUs well, as these powerful, massively parallel, cost-effective devices have recently included support for double-precision floating point numbers. Computed examples for Euler equations over unstructured triangle meshes demonstrate the effectiveness of our implementation on an NVIDIA GTX 580 device. Profiling of our method reveals performance comparable to an existing nodal DG-GPU implementation for linear problems.

  3. Finite Element Analysis in Concurrent Processing: Computational Issues

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw; Watson, Brian; Vanderplaats, Garrett

    2004-01-01

    The purpose of this research is to investigate the potential application of new methods for solving large-scale static structural problems on concurrent computers. It is well known that traditional single-processor computational speed will be limited by inherent physical limits. The only path to achieve higher computational speeds lies through concurrent processing. Traditional factorization solution methods for sparse matrices are ill suited for concurrent processing because the null entries get filled, leading to high communication and memory requirements. The research reported herein investigates alternatives to factorization that promise a greater potential to achieve high concurrent computing efficiency. Two methods, and their variants, based on direct energy minimization are studied: a) minimization of the strain energy using the displacement method formulation; b) constrained minimization of the complementary strain energy using the force method formulation. Initial results indicated that in the context of the direct energy minimization the displacement formulation experienced convergence and accuracy difficulties while the force formulation showed promising potential.

  4. Rationale awareness for quality assurance in iterative human computation processes

    CERN Document Server

    Xiao, Lu

    2012-01-01

    Human computation refers to the outsourcing of computation tasks to human workers. It offers a new direction for solving a variety of problems and calls for innovative ways of managing human computation processes. The majority of human computation tasks take a parallel approach, whereas the potential of an iterative approach, i.e., having workers iteratively build on each other's work, has not been sufficiently explored. This study investigates whether and how human workers' awareness of previous workers' rationales affects the performance of the iterative approach in a brainstorming task and a rating task. Rather than viewing this work as a conclusive piece, the author believes that this research endeavor is just the beginning of a new research focus that examines and supports meta-cognitive processes in crowdsourcing activities.

  5. Graphics processing unit-assisted density profile calculations in the KSTAR reflectometer

    Science.gov (United States)

    Seo, Seong-Heon; Oh, Dong Keun

    2014-11-01

    Wavelet transform (WT) is widely used in signal processing. The frequency-modulation reflectometer in KSTAR applies this technique to obtain the phase information from the mixer output measurements. Since WT is a time-consuming process, it is difficult to calculate the density profile in real time. The data analysis time, however, can be significantly reduced by using a Graphics Processing Unit (GPU), with its powerful computing capability, for the WT. A bottleneck in the KSTAR data processing exists in the data input and output (IO) process between the CPU and its peripheral devices. In this paper, the details of the GPU-assisted WT implementation in the KSTAR reflectometer are presented and the consequent performance improvement is reported. The real-time density profile calculation from the reflectometer measurements is also discussed.
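
    The phase-extraction idea can be sketched in a few lines: convolve the mixer output with a complex (Morlet-type) wavelet at several scales and read the instantaneous phase from the angle of the coefficients along the ridge of maximum amplitude. The code below is a simplified stand-in under assumed parameters (synthetic chirp, hand-rolled mother wavelet, direct convolution); it is not the KSTAR analysis code or its GPU port.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet mother wavelet evaluated at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt(signal, dt, scales, w0=6.0):
    """Continuous wavelet transform by direct convolution; returns complex
    coefficients of shape (len(scales), len(signal))."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        kernel = np.conj(morlet(t, s, w0))[::-1]
        coeffs[i] = np.convolve(signal, kernel, mode="same") * dt
    return coeffs

# Synthetic chirped beat signal standing in for a mixer output (assumption).
dt = 1e-6
t = np.arange(0, 2e-3, dt)
sig = np.cos(2 * np.pi * (20e3 * t + 5e6 * t ** 2))

scales = np.linspace(2e-5, 2e-4, 40)
W = cwt(sig, dt, scales)
ridge = np.abs(W).argmax(axis=0)                 # scale of maximum energy per time
phase = np.angle(W[ridge, np.arange(len(t))])    # instantaneous phase along the ridge
print(phase[:5])
```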

  6. Activity flow assessment based on a central radiological computer system 'CRCS' and a comprehensive plant-wide radiation monitoring system 'PW-RMS'

    International Nuclear Information System (INIS)

    The comprehensive plant-wide radiation monitoring concept combines radiation measurements of processes and systems in the plant, online environmental measurements, and personal doses in order to allow the assessment of activity transport and to plan event-related job doses. The plant-wide radiation monitoring system 'PW-RMS' consists of a network of radiation monitors and their auxiliary measurements inside the buildings of the plant, at points of release to the environment, and in the vicinity of the plant. Data are transmitted to the central radiological computer system 'CRCS' with its radiological assessment features. CRCS is of a modular design and, if necessary, can be enlarged and modified according to the plant-related performance profile. The system is the result of long-term development based on experience with Radiation Monitoring Systems 'RMS' in power plants and in the environment, and with the Radiological Computer Systems CRCS installed in nuclear facilities and at authorities as well. RM systems with the basic functions of CRCS have been running in German NPPs for a long time. CRCS installations, provided with process data from the plant and its environment, have been operating successfully in several networks in German federal states and in Switzerland. A plant-wide radiation monitoring system based on a combination of Russian and German measuring units and a CRCS has been in operation in a Slovak nuclear power plant since 2002/2003. The new plant Olkiluoto 3 in Finland, which is currently being designed, will be equipped with a plant-wide radiation monitoring system and a CRCS. (authors)

  7. Analytical calculation of heavy quarkonia production processes in computer

    OpenAIRE

    Braguta, V. V.; Likhoded, A. K.; Luchinsky, A. V.; Poslavsky, S. V.

    2013-01-01

    This report is devoted to the analytical calculation, by computer, of heavy quarkonia production processes in modern experiments such as the LHC, B-factories and super-B factories. The theoretical description of heavy quarkonia is based on the factorization theorem. This theorem leads to a special structure of the production amplitudes, which can be used to develop a computer algorithm that calculates these amplitudes automatically. This report describes that algorithm. As an example ...

  8. Image processing with massively parallel computer Quadrics Q1

    International Nuclear Information System (INIS)

    To evaluate the image processing capabilities of the massively parallel computer Quadrics Q1, a convolution algorithm was implemented and is described in this report. First, the mathematical definition of discrete convolution is recalled, together with the main Q1 hardware and software features. Then the different coding forms of the algorithm are described, and the Q1 performance is compared with that obtained on different computers. Finally, the conclusions report the main results and suggestions.
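
    For reference, the discrete 2D convolution recalled in the report is (g * h)[m, n] = sum_i sum_j g[i, j] h[m - i, n - j]. The direct NumPy sketch below implements that definition for a small image and kernel; it is purely illustrative and unrelated to the Quadrics Q1 coding itself.

```python
import numpy as np

def conv2d(image, kernel):
    """Direct 'valid' 2D discrete convolution (kernel is flipped, per the
    mathematical definition). Illustrative only; cost is O(N^2 K^2)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[0.0,  1.0, 0.0],
                   [1.0, -4.0, 1.0],
                   [0.0,  1.0, 0.0]])   # discrete Laplacian as an example kernel
print(conv2d(image, kernel))
```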

  9. Intelligent Computational Systems. Opening Remarks: CFD Application Process Workshop

    Science.gov (United States)

    VanDalsem, William R.

    1994-01-01

    This discussion will include a short review of the challenges that must be overcome if computational physics technology is to have a larger impact on the design cycles of U.S. aerospace companies. Some of the potential solutions to these challenges may come from the information sciences fields. A few examples of potential computational physics/information sciences synergy will be presented, as motivation and inspiration for the Improving The CFD Applications Process Workshop.

  10. Image processing and computer graphics in radiology. Pt. B

    International Nuclear Information System (INIS)

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented toward practice and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG)

  11. Image processing and computer graphics in radiology. Pt. A

    International Nuclear Information System (INIS)

    The reports give a full review of all aspects of digital imaging in radiology which are of significance to image processing and the subsequent picture archiving and communication techniques. The review is strongly oriented toward practice and illustrates the various contributions from specialized areas of the computer sciences, such as computer vision, computer graphics, database systems and information and communication systems, man-machine interactions and software engineering. The methods and models available are explained and assessed for their respective performance and value, and basic principles are briefly explained. (DG)

  12. Advanced Investigation and Comparative Study of Graphics Processing Unit-queries Countered

    Directory of Open Access Journals (Sweden)

    A. Baskar

    2014-10-01

    Full Text Available GPU, the Graphics Processing Unit, is the buzzword ruling the market these days. What it is and how it has gained so much importance is what this research work sets out to answer. The study has been constructed with full attention paid to answering the following questions: What is a GPU? How is it different from a CPU? How good or bad is it computationally when compared to a CPU? Can the GPU replace the CPU, or is that a daydream? How significant is the arrival of the APU (Accelerated Processing Unit) in the market? What tools are needed to make the GPU work? What are the improvement/focus areas for the GPU to stand in the market? All of the above questions are discussed and answered in this study with relevant explanations.

  13. Evidence of a sensory processing unit in the mammalian macula

    Science.gov (United States)

    Chimento, T. C.; Ross, M. D.

    1996-01-01

    We cut serial sections through the medial part of the rat vestibular macula for transmission electron microscopic (TEM) examination, computer-assisted 3-D reconstruction, and compartmental modeling. The ultrastructural research showed that many primary vestibular neurons have an unmyelinated segment, often branched, that extends between the heminode (putative site of the spike initiation zone) and the expanded terminal(s) (calyx, calyces). These segments, termed the neuron branches, and the calyces frequently have spine-like processes of various dimensions with bouton endings that morphologically are afferent, efferent, or reciprocal to other macular neural elements. The major questions posed by this study were whether small details of morphology, such as the size and location of neuronal processes or synapses, could influence the output of a vestibular afferent, and whether a knowledge of morphological details could guide the selection of values for simulation parameters. The conclusions from our simulations are (1) values of 5.0 kΩ cm2 for membrane resistivity and 1.0 nS for synaptic conductance yield simulations that best match published physiological results; (2) process morphology has little effect on orthodromic spread of depolarization from the head (bouton) to the spike initiation zone (SIZ); (3) process morphology has no effect on antidromic spread of depolarization to the process head; (4) synapses do not sum linearly; (5) synapses are electrically close to the SIZ; and (6) all whole-cell simulations should be run with an active SIZ.

  14. Business Process Quality Computation: Computing Non-Functional Requirements to Improve Business Processes

    NARCIS (Netherlands)

    Heidari, F.

    2015-01-01

    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  15. Conceptual Design for the Pilot-Scale Plutonium Oxide Processing Unit in the Radiochemical Processing Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Meier, David E.; Tingey, Joel M.; Casella, Amanda J.; Delegard, Calvin H.; Edwards, Matthew K.; Jones, Susan A.; Rapko, Brian M.

    2014-08-05

    This report describes a conceptual design for a pilot-scale capability to produce plutonium oxide for use as exercise and reference materials, and for use in identifying and validating nuclear forensics signatures associated with plutonium production. This capability is referred to as the Pilot-scale Plutonium oxide Processing Unit (P3U), and it will be located in the Radiochemical Processing Laboratory at the Pacific Northwest National Laboratory. The key unit operations are described, including plutonium dioxide (PuO2) dissolution, purification of the Pu by ion exchange, precipitation, and conversion to oxide by calcination.

  16. Toward optical signal processing using photonic reservoir computing.

    Science.gov (United States)

    Vandoorne, Kristof; Dierckx, Wouter; Schrauwen, Benjamin; Verstraeten, David; Baets, Roel; Bienstman, Peter; Van Campenhout, Jan

    2008-07-21

    We propose photonic reservoir computing as a new approach to optical signal processing in the context of large scale pattern recognition problems. Photonic reservoir computing is a photonic implementation of the recently proposed reservoir computing concept, where the dynamics of a network of nonlinear elements are exploited to perform general signal processing tasks. In our proposed photonic implementation, we employ a network of coupled Semiconductor Optical Amplifiers (SOA) as the basic building blocks for the reservoir. Although they differ in many key respects from traditional software-based hyperbolic tangent reservoirs, we show using simulations that such a photonic reservoir can outperform traditional reservoirs on a benchmark classification task. Moreover, a photonic implementation offers the promise of massively parallel information processing with low power and high speed. PMID:18648434
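
    The reservoir computing concept referenced here is easiest to see in its conventional software form: a fixed random recurrent network is driven by the input and only a linear readout is trained. The sketch below uses a tanh echo-state reservoir as a software stand-in for the photonic SOA network, with invented parameters and a toy delay task; it illustrates the concept, not the photonic simulations of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (software tanh stand-in for the photonic SOA network).
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ np.atleast_1d(ut))
        states[t] = x
    return states

# Toy task (assumption): reproduce the input delayed by 5 steps.
u = rng.uniform(-1, 1, 2000)
target = np.roll(u, 5)
X = run_reservoir(u)[100:]            # discard the initial transient
y = target[100:]

# Only the linear readout is trained, here by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```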

  17. Towards Process Support for Migrating Applications to Cloud Computing

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali

    2012-01-01

    Cloud computing is an active area of research for industry and academia. There are a large number of organizations providing cloud computing infrastructure and services. In order to utilize these infrastructure resources and services, existing applications need to be migrated to clouds. However, a successful migration effort needs well-defined process support. It not only helps to identify and address the challenges associated with migration but also provides a strategy to evaluate different platforms in relation to application- and domain-specific requirements. This paper presents a process framework ... App Engine. We also report the potential challenges, suitable solutions, and lessons learned to support the presented process framework. We expect that the reported experiences can serve as guidelines for those who intend to migrate software applications to cloud computing.

  18. BarraCUDA - a fast short read sequence aligner using graphics processing units

    Directory of Open Access Journals (Sweden)

    Klus Petr

    2012-01-01

    Full Text Available Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  19. Graphical processing unit implementation of an integrated shape-based active contour: Application to digital pathology

    Directory of Open Access Journals (Sweden)

    Sahirzeeshan Ali

    2011-01-01

    Full Text Available Commodity graphics hardware has become a cost-effective parallel platform to solve many general computational problems. In medical imaging, and more so in digital pathology, segmentation of multiple structures on high-resolution images is often a complex and computationally expensive task. Shape-based level set segmentation has recently emerged as a natural solution to segmenting overlapping and occluded objects. However, the flexibility of the level set method has traditionally resulted in long computation times and therefore limited clinical utility. The processing times even for moderately sized images could run into several hours of computation time. Hence there is a clear need to accelerate these segmentation schemes. In this paper, we present a parallel implementation of a computationally heavy segmentation scheme on a graphics processing unit (GPU). The segmentation scheme incorporates level sets with shape priors to segment multiple overlapping nuclei from very large digital pathology images. We report a speedup of 19× compared to multithreaded C and MATLAB-based implementations of the same scheme, albeit with a slight reduction in accuracy. Our GPU-based segmentation scheme was rigorously and quantitatively evaluated for the problem of nuclei segmentation and overlap resolution on digitized histopathology images corresponding to breast and prostate biopsy tissue specimens.

  20. BarraCUDA - a fast short read sequence aligner using graphics processing units

    LENUS (Irish Health Repository)

    Klus, Petr

    2012-01-13

    Abstract Background With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software that is based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. Findings Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to GPU to take advantage of the massive parallelism. As a result, BarraCUDA offers a magnitude of performance boost in alignment throughput when compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. Conclusions BarraCUDA is designed to take advantage of the parallelism of GPU to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline such that the wider scientific community could benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net

  1. Imaging Rayleigh wave attenuation and phase velocity in the western and central United States

    Science.gov (United States)

    Bao, X.; Dalton, C. A.; Jin, G.; Gaherty, J. B.

    2013-12-01

    The EarthScope USArray provides an opportunity to obtain detailed images of the continental upper mantle at an unprecedented scale. The majority of mantle models derived from USArray data to date contain spatial variations in seismic-wave speed; however, little is known about the attenuation structure of the North American upper mantle. Joint interpretation of seismic attenuation and velocity models can improve upon interpretations based only on velocity, and provide important constraints on the temperature, composition, melt content, and volatile content of the mantle. We jointly invert Rayleigh wave phase and amplitude observations for phase velocity and attenuation maps for the western and central United States using USArray data. This approach exploits the phase delays' sensitivity to velocity and the amplitudes' sensitivity to attenuation. The phase and amplitude data are measured in the period range 20-100 s using a new interstation cross-correlation approach, based on the Generalized Seismological Data Functional algorithm, that takes advantage of waveform similarity at nearby stations. The Rayleigh waves are generated by 670 large teleseismic earthquakes that occurred between 2006 and 2012, and are measured at all available Transportable Array stations. We consider two separate and complementary approaches for imaging attenuation variations: (1) Helmholtz tomography (Lin et al., 2012) and (2) two-station path tomography. Results obtained from the two methods are contrasted. We provide a preliminary interpretation based on the observed relationship between Rayleigh wave attenuation and phase velocity.

  2. Carbon Flux of Down Woody Materials in Forests of the North Central United States

    International Nuclear Information System (INIS)

    Across large scales, the carbon (C) flux of down woody material (DWM) detrital pools has largely been simulated based on forest stand attributes (e.g., stand age and forest type). The annual change in forest DWM C stocks and other attributes (e.g., size and decay class changes) was assessed using a forest inventory in the north central United States to provide an empirical assessment of strategic-scale DWM C flux. Using DWM inventory data from the USDA Forest Service's Forest Inventory and Analysis program, DWM C stocks were found to be relatively static across the study region, with an annual flux rate not statistically different from zero. Mean C flux rates across the study area were -0.25, -0.12, -0.01, and -0.04 (Mg/ha/yr) for standing live trees, standing dead trees, coarse woody debris, and fine woody debris, respectively. Flux rates varied in both magnitude and status (emission/sequestration) by forest type, latitude, and DWM component size. Given the complex dynamics of DWM C flux, early implementation of inventory remeasurement, and the relatively low sample size, numerous future research directions are suggested.

  3. Heterogeneous arsenic enrichment in meta-sedimentary rocks in central Maine, United States.

    Science.gov (United States)

    O'Shea, Beth; Stransky, Megan; Leitheiser, Sara; Brock, Patrick; Marvinney, Robert G; Zheng, Yan

    2015-02-01

    Arsenic is enriched up to 28 times the average crustal abundance of 4.8 mg kg(-1) for meta-sedimentary rocks of two adjacent formations in central Maine, USA where groundwater in the bedrock aquifer frequently contains elevated As levels. The Waterville Formation contains higher arsenic concentrations (mean As 32.9 mg kg(-1), median 12.1 mg kg(-1), n=38) than the neighboring Vassalboro Group (mean As 19.1 mg kg(-1), median 6.0 mg kg(-1), n=38). The Waterville Formation is a pelitic meta-sedimentary unit with abundant pyrite either visible or observed by scanning electron microprobe. Concentrations of As and S are strongly correlated (r=0.88, p<0.05) in the low grade phyllite rocks, and arsenic is detected up to 1944 mg kg(-1) in pyrite measured by electron microprobe. In contrast, statistically significant (p<0.05) correlations between concentrations of As and S are absent in the calcareous meta-sediments of the Vassalboro Group, consistent with the absence of arsenic-rich pyrite in the protolith. Metamorphism converts the arsenic-rich pyrite to arsenic-poor pyrrhotite (mean As 1 mg kg(-1), n=15) during de-sulfidation reactions: the resulting metamorphic rocks contain arsenic but little or no sulfur indicating that the arsenic is now in new mineral hosts. Secondary weathering products such as iron oxides may host As, yet the geochemical methods employed (oxidative and reductive leaching) do not conclusively indicate that arsenic is associated only with these. Instead, silicate minerals such as biotite and garnet are present in metamorphic zones where arsenic is enriched (up to 130.8 mg kg(-1) As) where S is 0%. Redistribution of already variable As in the protolith during metamorphism and contemporary water-rock interaction in the aquifers, all combine to contribute to a spatially heterogeneous groundwater arsenic distribution in bedrock aquifers. PMID:24861530

  4. Application of Paleoseismology to Seismic Hazard Analysis in the Central and Eastern United States (CEUS)

    International Nuclear Information System (INIS)

    Paleoseismology techniques have been applied across the CEUS (Central and Eastern United States) to augment seismic data and to improve seismic hazard analyses. Considering paleoseismic data along with historic data may increase the number of events and their maximum magnitudes (Mmax), which may decrease the recurrence time of seismic events included in hazard calculations. More importantly, paleoseismic studies extend the length of the earthquake record, often by thousands to tens of thousands of years, and reduce uncertainties related to the sources, magnitude, and recurrence times of earthquakes. The CEUS Seismic Source Characterization (Technical Report, [108]) draws on a substantial body of paleoseismic data in building the source model for seismic hazard analyses. Most of these data are derived through the study of paleoliquefaction features. Appendix E of the Technical Report compiles data from ten distinct regions in eastern North America where paleoliquefaction features have been used to improve knowledge of regional seismic history. Paleoliquefaction data can significantly impact seismic hazard calculations by better defining earthquake sources, Mmax for those sources, and recurrence rates of large earthquakes

  5. Molecular epidemiology of Acinetobacter baumannii in central intensive care unit in Kosova teaching hospital

    Directory of Open Access Journals (Sweden)

    Lul Raka

    2009-12-01

    Full Text Available Infections caused by bacteria of the genus Acinetobacter pose a significant health care challenge worldwide. Information on molecular epidemiological investigation of outbreaks caused by Acinetobacter species in Kosova is lacking. The present investigation was carried out to elucidate the molecular epidemiology of Acinetobacter baumannii in the Central Intensive Care Unit (CICU) of a University hospital in Kosova using pulsed-field gel electrophoresis (PFGE). During March - July 2006, A. baumannii was isolated from 30 patients, of whom 22 were infected and 8 were colonised. Twenty patients had ventilator-associated pneumonia, one patient had meningitis, and two had coinfection with bloodstream infection and surgical site infection. The most common diagnoses upon admission to the ICU were polytrauma and cerebral hemorrhage. Bacterial isolates were most frequently recovered from endotracheal aspirate (86.7%). First isolation occurred, on average, on day 8 following admission (range 1-26 days). Genotype analysis of A. baumannii isolates identified nine distinct PFGE patterns, with predominance of PFGE clone E, represented by isolates from 9 patients. Eight strains were resistant to carbapenems. The genetic relatedness of the Acinetobacter baumannii isolates was high, indicating cross-transmission within the ICU setting. These results emphasize the need for measures to prevent nosocomial transmission of A. baumannii in the ICU.

  6. Closure Report for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    International Nuclear Information System (INIS)

    This Closure Report provides the documentation for closure of the Central Nevada Test Area (CNTA) surface Corrective Action Unit (CAU) 417. The CNTA is located in Hot Creek Valley in Nye County, Nevada, approximately 22.5 kilometers (14 miles) west of U.S. State Highway 6 near the Moores Station historical site, and approximately 137 kilometers (85 miles) northeast of Tonopah, Nevada. The CNTA consists of three separate land withdrawal areas commonly referred to as UC-1, UC-3, and UC-4, all of which are accessible to the public. A nuclear device for Project Faultless was detonated approximately 975 meters (3,200 feet) below ground surface on January 19, 1968, in emplacement boring UC-1 (Department of Energy, Nevada Operations Office [DOE/NV], 1997). CAU 417 consists of 34 Corrective Action Sites (CASs). Site closure was completed using a Nevada Division of Environmental Protection (NDEP)-approved Corrective Action Plan (CAP) (DOE/NV, 2000), which was based on the recommendations presented in the NDEP-approved Corrective Action Decision Document (DOE/NV, 1999). Closure of CAU 417 was completed in two phases. Phase I field activities were completed with NDEP concurrence during 1999 as outlined in the Phase I Work Plan, Appendix A of the CAP (DOE/NV, 2000), and as summarized in Section 2.1.2 of this document

  7. Impact of climate variability on runoff in the north-central United States

    Science.gov (United States)

    Ryberg, Karen R.; Lin, Wei; Vecchia, Aldo V.

    2014-01-01

    Large changes in runoff in the north-central United States have occurred during the past century, with larger floods and increases in runoff tending to occur from the 1970s to the present. The attribution of these changes is a subject of much interest. Long-term precipitation, temperature, and streamflow records were used to compare changes in precipitation and potential evapotranspiration (PET) to changes in runoff within 25 stream basins. The basins studied were organized into four groups, each one representing basins similar in topography, climate, and historic patterns of runoff. Precipitation, PET, and runoff data were adjusted for near-decadal scale variability to examine longer-term changes. A nonlinear water-balance analysis shows that changes in precipitation and PET explain the majority of multidecadal spatial/temporal variability of runoff and flood magnitudes, with precipitation being the dominant driver. Historical changes in climate and runoff in the region appear to be more consistent with complex transient shifts in seasonal climatic conditions than with gradual climate change. A portion of the unexplained variability likely stems from land-use change.
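
    To make the water-balance reasoning concrete, the toy sketch below estimates annual runoff as precipitation minus actual evapotranspiration, approximating the latter with the classical Ol'dekop curve AET = PET * tanh(P/PET). This is a hedged stand-in for the study's nonlinear water-balance model, and the basin numbers are invented for illustration; it only shows how a modest precipitation increase can produce a proportionally much larger runoff increase.

```python
import numpy as np

def annual_runoff(precip, pet):
    """Toy annual water balance: runoff = P - AET, with AET taken from the
    Ol'dekop curve AET = PET * tanh(P / PET). Stand-in only, not the study's model."""
    aet = pet * np.tanh(precip / pet)
    return precip - aet

# Hypothetical decadal means (mm/yr) for one basin before and after the 1970s shift.
p_before, pet_before = 480.0, 900.0
p_after,  pet_after  = 560.0, 910.0

q_before = annual_runoff(p_before, pet_before)
q_after = annual_runoff(p_after, pet_after)
print(f"runoff before: {q_before:.0f} mm/yr, after: {q_after:.0f} mm/yr "
      f"({100 * (q_after - q_before) / q_before:.0f}% change)")
```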

  8. Final design of the Switching Network Units for the JT-60SA Central Solenoid

    International Nuclear Information System (INIS)

    This paper describes the approved detailed design of the four Switching Network Units (SNUs) of the superconducting Central Solenoid of JT-60SA, the satellite tokamak that will be built in Naka, Japan, in the framework of the “Broader Approach” cooperation agreement between Europe and Japan. The SNUs can interrupt a current of 20 kA DC in less than 1 ms in order to produce a voltage of 5 kV. Such performance is obtained by inserting an electronic static circuit breaker in parallel to an electromechanical contactor and by matching and coordinating their operations. Any undesired transient overvoltage is limited by an advanced snubber circuit optimized for this application. The SNU resistance values can be adapted to the specific operation scenario. In particular, after successful plasma breakdown, the SNU resistance can be reduced by a making switch. The design choices of the main SNU elements are justified by showing and discussing the performed calculations and simulations. In most cases, the developed design is expected to exceed the performances required by the JT-60SA project
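
    The headline ratings fix the SNU's effective breaking resistance directly: commutating 20 kA into a resistor that must develop 5 kV implies R = V/I = 0.25 ohm. The snippet below only documents that arithmetic and the qualitative effect of the post-breakdown making switch that bypasses part of the resistance; the bypass fraction is a made-up placeholder, not a design value from the paper.

```python
# Back-of-the-envelope SNU arithmetic from the quoted ratings.
I_coil = 20e3          # interrupted coil current [A]
V_breakdown = 5e3      # voltage to be produced across the coil [V]

R_snu = V_breakdown / I_coil
print(f"effective SNU breaking resistance: {R_snu:.2f} ohm")   # 0.25 ohm

# After successful plasma breakdown a making switch shorts out part of the
# resistor bank, lowering the applied voltage (fraction below is hypothetical).
bypass_fraction = 0.6
R_reduced = R_snu * (1.0 - bypass_fraction)
print(f"reduced resistance: {R_reduced:.2f} ohm -> "
      f"voltage at 20 kA: {R_reduced * I_coil / 1e3:.1f} kV")
```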

  9. Fractal Characteristics of Geomorphology Units as Bouguer Anomaly Manifestations in Bumiayu, Central Java, Indonesia

    Science.gov (United States)

    Agus Nur, Andi; Syafri, Ildrem; Muslim, Dicky; Hirnawan, Febri; Raditya, Pradnya P.; Sulastri, Murni; Abdulah, Fikri

    2016-01-01

    Bumiayu in Central Java, Indonesia, has distinctive landform characteristics. Differences in topography in each geomorphological unit are indicated by the value of the fractal dimension. This research provides important information on the influence of geomorphological conditions and subsurface geological phenomena in the research area based on the application of fractals. The research methodology relies on laboratory analysis and field observation. The landform is a manifestation of the Bouguer anomaly contours, as indicated by the significant correlation between the Bouguer anomaly contours and the geological cross section, as well as between the topographic contour slope and the Bouguer anomaly contour slope. Based on spatial analysis, the morphology of the research area is dominated by very steep hills (more than 60%). The Bouguer anomaly contour analysis also shows that the research area is dominated by very steep hills (more than 55%). Statistical analysis between the fractal value of lineaments in the Digital Elevation Model and the fractal value of lineaments in the Bouguer anomaly contours, as well as between the fractal value of the topographic contours and the fractal value of the Bouguer anomaly contours, shows that the relationship was not significant. Further, the overall results of this research show clearly that geomorphological conditions represent subsurface geological phenomena.
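
    The fractal dimensions discussed here are typically obtained by box counting: cover a binary lineament or contour map with boxes of decreasing size, count the occupied boxes N(s), and fit log N(s) against log(1/s). The routine below is a generic box-counting sketch applied to a synthetic pattern, offered as an illustration of the method rather than the authors' workflow.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes):
    """Estimate the fractal dimension of a 2D binary pattern by box counting."""
    counts = []
    for s in box_sizes:
        h, w = binary_image.shape
        # Trim so the image tiles exactly into s x s boxes, then count the
        # boxes that contain at least one 'on' pixel.
        trimmed = binary_image[: (h // s) * s, : (w // s) * s]
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Synthetic test pattern: a single straight lineament (expected dimension ~1).
img = np.zeros((512, 512), dtype=bool)
idx = np.arange(512)
img[idx, idx] = True

print("estimated fractal dimension:",
      box_counting_dimension(img, [2, 4, 8, 16, 32, 64]))
```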

  10. Well Completion Report for Corrective Action Unit 443 Central Nevada Test Area Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    None

    2009-12-01

    The drilling program described in this report is part of a new corrective action strategy for Corrective Action Unit (CAU) 443 at the Central Nevada Test Area (CNTA). The drilling program included drilling two boreholes, geophysical well logging, construction of two monitoring/validation (MV) wells with piezometers (MV-4 and MV-5), development of monitor wells and piezometers, recompletion of two existing wells (HTH-1 and UC-1-P-1S), removal of pumps from existing wells (MV-1, MV-2, and MV-3), redevelopment of piezometers associated with existing wells (MV-1, MV-2, and MV-3), and installation of submersible pumps. The new corrective action strategy includes initiating a new 5-year proof-of-concept monitoring period to validate the compliance boundary at CNTA (DOE 2007). The new 5-year proof-of-concept monitoring period begins upon completion of the new monitor wells and collection of samples for laboratory analysis. The new strategy is described in the Corrective Action Decision Document/Corrective Action Plan addendum (DOE 2008a) that the Nevada Division of Environmental Protection approved (NDEP 2008).

  11. Molecular epidemiology of Acinetobacter baumannii in central intensive care unit in Kosova Teaching Hospital.

    Science.gov (United States)

    Raka, Lul; Kalenć, Smilja; Bosnjak, Zrinka; Budimir, Ana; Katić, Stjepan; Sijak, Dubravko; Mulliqi-Osmani, Gjyle; Zoutman, Dick; Jaka, Arbëresha

    2009-12-01

    Infections caused by bacteria of the genus Acinetobacter pose a significant health care challenge worldwide. Information on molecular epidemiological investigation of outbreaks caused by Acinetobacter species in Kosova is lacking. The present investigation was carried out to elucidate the molecular epidemiology of Acinetobacter baumannii in the Central Intensive Care Unit (CICU) of a University hospital in Kosova using pulsed-field gel electrophoresis (PFGE). During March - July 2006, A. baumannii was isolated from 30 patients, of whom 22 were infected and 8 were colonised. Twenty patients had ventilator-associated pneumonia, one patient had meningitis, and two had coinfection with bloodstream infection and surgical site infection. The most common diagnoses upon admission to the ICU were polytrauma and cerebral hemorrhage. Bacterial isolates were most frequently recovered from endotracheal aspirate (86.7%). First isolation occurred, on average, on day 8 following admission (range 1-26 days). Genotype analysis of A. baumannii isolates identified nine distinct PFGE patterns, with predominance of PFGE clone E represented by isolates from 9 patients. Eight strains were resistant to carbapenems. The genetic relatedness of the Acinetobacter baumannii isolates was high, indicating cross-transmission within the ICU setting. These results emphasize the need for measures to prevent nosocomial transmission of A. baumannii in the ICU. PMID:20464330

  12. Some Aspects of Process Computers Configuration Control in Nuclear Power Plant Krsko - Process Computer Signal Configuration Database (PCSCDB)

    International Nuclear Information System (INIS)

    During the operation of NEK and other nuclear power plants it has been recognized that certain issues related to the use of digital equipment and associated software for protection, control and monitoring of the NPP technological process are not adequately addressed in the existing programs and procedures. The term and process of Process Computers Configuration Control join three 10CFR50 Appendix B quality requirements applicable to Process Computers in an NPP: Design Control, Document Control, and Identification and Control of Materials, Parts and Components. This paper describes the Process Computer Signal Configuration Database (PCSCDB), which was developed and implemented in order to resolve some aspects of Process Computer Configuration Control related to the signals, or database points, that exist in the life cycle of the different Process Computer Systems (PCS) in Nuclear Power Plant Krsko. PCSCDB is a controlled master database related to the definition and description of the configurable database points associated with all Process Computer Systems in NEK. PCSCDB holds attributes related to the configuration of addressable and configurable real-time database points, as well as attributes related to signal life cycle references and history data, such as: input/output signals; manually input database points; program constants; setpoints; calculated (by application program or SCADA calculation tools) database points; control flags (for example, enabling or disabling a certain program feature); signal acquisition design references to the DCM (Document Control Module, application software for document control within the Management Information System - MIS) and MECL (Master Equipment and Component List, MIS application software for identification and configuration control of plant equipment and components); usage of particular database points in particular application software packages and in man-machine interface features (display mimics, printout reports, ...); and signal history (EEAR Engineering

  13. Computer presentation of the closed circuits in mineral processing by software computer packets

    OpenAIRE

    Krstev, Aleksandar; Krstev, Boris; Golomeov, Blagoj

    2009-01-01

    In this paper the computer application of the software packages Minteh-1, Minteh-2 and Minteh-3, written in Visual Basic under Visual Studio, is shown for the presentation of two-product closed circuits of grinding-classifying processes. These methods make possible an appropriate, fast and reliable presentation of some complex circuits in mineral processing technologies.

  14. Soft Computing Methodology for Shelf Life Prediction of Processed Cheese

    Directory of Open Access Journals (Sweden)

    Sumit Goyal

    2012-06-01

    Full Text Available Feedforward multilayer models were developed for predicting the shelf life of processed cheese stored at 30°C. The input variables were soluble nitrogen, pH, standard plate count, yeast and mould count, and spore count. Sensory score was taken as the output parameter for developing the feedforward multilayer models. Mean square error, root mean square error, the coefficient of determination and the Nash-Sutcliffe coefficient were implemented as performance measures for testing the prediction potential of the soft computing models. The study revealed that soft computing multilayer models can predict the shelf life of processed cheese.
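
    A minimal feedforward-network regression sketch in the spirit of the models described is shown below. All data are synthetic placeholders (the five inputs and the sensory-score relation are invented), the network architecture is an assumption, and only two of the reported performance measures, RMSE and the Nash-Sutcliffe coefficient, are computed; it is not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic stand-ins for the five inputs (soluble nitrogen, pH, standard plate
# count, yeast and mould count, spore count) and the sensory-score output.
X = rng.uniform(0.0, 1.0, size=(200, 5))
y = 9.0 - 3.0 * X[:, 2] - 2.0 * X[:, 3] + rng.normal(0, 0.2, 200)  # invented relation

train, test = slice(0, 150), slice(150, 200)
model = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0)
model.fit(X[train], y[train])
pred = model.predict(X[test])

rmse = np.sqrt(np.mean((pred - y[test]) ** 2))
nse = 1.0 - np.sum((y[test] - pred) ** 2) / np.sum((y[test] - y[test].mean()) ** 2)
print(f"RMSE: {rmse:.3f}, Nash-Sutcliffe coefficient: {nse:.3f}")
```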

  15. Centralization process in procurement of maintenance, repair and operations (mro) items: Case company X

    OpenAIRE

    Hoang, Huong

    2016-01-01

    This thesis documents the process of a centralization project for Maintenance, Repair and Operations (MRO) procurement and the incentives behind the project, discusses the problem attributes, and recommends solutions for improving the operational side of the project in company X. The research questions seek answers for a particular, standardized process to implement the centralized procurement of MRO items, and for the reasons why MRO items, especially the packaging ...

  16. The University Next Door: Developing a Centralized Unit That Strategically Cultivates Community Engagement at an Urban University

    Science.gov (United States)

    Holton, Valerie L.; Early, Jennifer L.; Resler, Meghan; Trussell, Audrey; Howard, Catherine

    2016-01-01

    Using Kotter's model of change as a framework, this case study will describe the structure and efforts of a centralized unit within an urban, research university to deepen and extend the institutionalization of community engagement. The change model will be described along with details about the implemented strategies and practices that fall…

  17. Evaluation of central catheters and endotracheal tubes in the unit of intensive care with x-ray of portable thorax

    International Nuclear Information System (INIS)

    We carried out a review of the parameters that define the correct location of the different types of central catheters and endotracheal tubes in patients hospitalized in the intensive care unit, by means of evaluation with portable chest X-rays, and describe the complications associated with this vital support

  18. The certification process of the LHCb distributed computing software

    CERN Document Server

    CERN. Geneva

    2015-01-01

    DIRAC contains around 200 thousand lines of Python code, and LHCbDIRAC around 120 thousand. The testing process for each release consists of a number of steps that include static code analysis, unit tests, integration tests, regression tests, and system tests. We dubbed the full p...

  19. Architecture and data processing alternatives for Tse computer. Volume 1: Tse logic design concepts and the development of image processing machine architectures

    Science.gov (United States)

    Rickard, D. A.; Bodenheimer, R. E.

    1976-01-01

    Digital computer components which perform two dimensional array logic operations (Tse logic) on binary data arrays are described. The properties of Golay transforms which make them useful in image processing are reviewed, and several architectures for Golay transform processors are presented with emphasis on the skeletonizing algorithm. Conventional logic control units developed for the Golay transform processors are described. One is a unique microprogrammable control unit that uses a microprocessor to control the Tse computer. The remaining control units are based on programmable logic arrays. Performance criteria are established and utilized to compare the various Golay transform machines developed. A critique of Tse logic is presented, and recommendations for additional research are included.

  20. Replacement of process computers and upgrading of the Loviisa nuclear power station's training simulator

    International Nuclear Information System (INIS)

    The replacement project for the process computer systems at Finland's Imatran Voima Oy (IVO) Loviisa NPP (2x465 MW PWR, in operation since 1977 and 1980) has been started. These systems are of crucial importance to the process monitoring in the control rooms. At the same time the computer systems of the on-site training simulator, operating since 1980, will be upgraded. Both the process computer and the simulator systems were supplied by Nokia Electronics, Finland. Planning of the computer replacement was initiated in 1983 and an agreement with Nokia-Afora was signed in summer 1986 after a hard international competition. The new systems are based on DEC's VAX and Micro VAX computers and a distributed bus (Ethernet) configuration. The high resolution colour display system will be delivered by Ferranti, United Kingdom. The systems will be upgraded so that the simulation computers will be replaced in September 1987, the process computers of Loviisa 1 and the training simulator by the end of 1988 and the Loviisa 2 units by the end of 1989. The paper describes the reasons and justification for replacement, the functional and technical requirements of the new systems, with particular emphasis on the improvements to the present systems, hardware and software solutions and estimated costs of the project. Problems related to the replacement of systems in a running plant are also dealt with. Finally, some ideas concerning further development, such as new functions of the man-machine interface, are presented, as well as plans to minimize the obsolescence problem of the new systems. (author). 2 figs

  1. Development of a computer code, PARC, for simulation of liquid-liquid extraction process in reprocessing

    International Nuclear Information System (INIS)

    A computer code PARC was developed for simulating liquid-liquid extraction process in the PUREX reprocessing plant. PARC is able to predict transient behavior and profiles at equilibrium of uranium, plutonium, neptunium and fission products in several units of pulsed columns and mixer-settlers, which are connected each other in the PUREX plant. In this report, mathematical models of mass transfer and chemical reactions employed in PARC are described and an example of PUREX simulation is given. (author)

  2. Parallel Memetic Algorithm for VLSI Circuit Partitioning Problem using Graphical Processing Units

    Directory of Open Access Journals (Sweden)

    P. Sivakumar

    2012-01-01

    Full Text Available Problem statement: A Memetic Algorithm (MA) is a form of population-based hybrid Genetic Algorithm (GA) coupled with an individual learning procedure capable of performing local refinements. Here we use a genetic algorithm to explore the search space and simulated annealing as a local search method to exploit information in the search region for the optimization of the VLSI netlist bipartitioning problem. However, such algorithms may execute for a long time, because several fitness evaluations must be performed. A promising approach to overcome this limitation is to parallelize these algorithms. General-Purpose computing on Graphics Processing Units (GPGPU) is a major paradigm shift in parallel computing that promises a dramatic increase in performance. Approach: In this study, we propose to implement a parallel MA using graphics cards. Graphics Processing Units (GPUs) have emerged as powerful parallel processors in recent years. Using computers equipped with Graphics Processing Units (GPUs), it is possible to accelerate the evaluation of individuals in genetic programming. Program compilation, fitness case data and fitness execution are spread over the cores of the GPU, allowing for the efficient processing of very large datasets. Results: We perform experiments to compare our parallel MA with a sequential MA and demonstrate that the former is much more effective than the latter. Our results were obtained on an NVIDIA GeForce GTX 9400 GPU card. Conclusion: The results indicate that our approach is on average 5× faster when compared to a CPU-based implementation. With the Tesla C1060 GPU server, our approach would be potentially 10× faster. The correctness of the GPU-based MA has been verified by comparing its results with those of a CPU-based MA.
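
    For readers unfamiliar with the memetic-algorithm structure (genetic exploration plus individual local refinement), the sketch below shows a tiny sequential version for graph bipartitioning, with a greedy pairwise-swap local search standing in for simulated annealing. It is a simplified illustration on a randomly generated graph, not the authors' GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes = 32
adj = np.triu(rng.random((n_nodes, n_nodes)) < 0.15, 1)
adj = adj | adj.T                       # random undirected graph as a netlist stand-in

def cut_size(part):
    """Number of edges crossing the two-way partition (to be minimized)."""
    return int(np.sum(adj[np.ix_(part == 0, part == 1)]))

def random_partition():
    p = np.zeros(n_nodes, dtype=int)
    p[rng.choice(n_nodes, n_nodes // 2, replace=False)] = 1
    return p

def local_search(part):
    """Greedy refinement: accept balanced pair swaps that reduce the cut,
    restarting the scan after every accepted swap."""
    part = part.copy()
    improved = True
    while improved:
        improved = False
        base = cut_size(part)
        zeros, ones = np.where(part == 0)[0], np.where(part == 1)[0]
        for i in zeros:
            for j in ones:
                part[i], part[j] = 1, 0
                if cut_size(part) < base:
                    improved = True
                    break
                part[i], part[j] = 0, 1   # revert swap that did not help
            if improved:
                break
    return part

def crossover(p1, p2):
    """Uniform crossover followed by a balance-repair step."""
    child = np.where(rng.random(n_nodes) < 0.5, p1, p2)
    while child.sum() > n_nodes // 2:
        child[rng.choice(np.where(child == 1)[0])] = 0
    while child.sum() < n_nodes // 2:
        child[rng.choice(np.where(child == 0)[0])] = 1
    return child

population = [local_search(random_partition()) for _ in range(10)]
for _ in range(20):                     # memetic loop: selection + crossover + local search
    population.sort(key=cut_size)
    parents = population[:5]
    children = [local_search(crossover(parents[rng.integers(5)],
                                       parents[rng.integers(5)]))
                for _ in range(5)]
    population = parents + children

print("best cut size:", cut_size(min(population, key=cut_size)))
```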

  3. A control unit for a laser module of optoelectronic computing environment with dynamic architecture

    Directory of Open Access Journals (Sweden)

    Lipinskii A. Y.

    2013-06-01

    Full Text Available The paper presents the developed control unit for the laser modules of an optoelectronic acousto-optic computing environment. The unit is based on an ARM microcontroller of the Cortex-M3 family and allows alternating between recording (erase) and reading modes in accordance with a predetermined algorithm and settings: exposure time and intensity. The schematic circuit diagram of the presented device, the block diagram of the microcontroller algorithm, and an example application of the developed control unit in the layout of the experimental setup are provided.

  4. Bioinformation processing a primer on computational cognitive science

    CERN Document Server

    Peterson, James K

    2016-01-01

    This book shows how mathematics, computer science and science can be usefully and seamlessly intertwined. It begins with a general model of cognitive processes in a network of computational nodes, such as neurons, using a variety of tools from mathematics, computational science and neurobiology. It then moves on to solve the diffusion model from a low-level random walk point of view. It also demonstrates how this idea can be used in a new approach to solving the cable equation, in order to better understand the neural computation approximations. It introduces specialized data for emotional content, which allows a brain model to be built using MatLab tools, and also highlights a simple model of cognitive dysfunction.

  5. A RANDOM FUNCTIONAL CENTRAL LIMIT THEOREM FOR PROCESSES OF PRODUCT SUMS OF LINEAR PROCESSES GENERATED BY MARTINGALE DIFFERENCES

    Institute of Scientific and Technical Information of China (English)

    WANG YUEBAO; YANG YANG; ZHOU HAIYANG

    2003-01-01

    A random functional central limit theorem is obtained for processes of partial sums and product sums of linear processes generated by non-stationary martingale differences. It develops and improves some corresponding results on processes of partial sums of linear processes generated by strictly stationary martingale differences, which can be found in [5].

  6. Implementation of central venous catheter bundle in an intensive care unit in Kuwait: Effect on central line-associated bloodstream infections.

    Science.gov (United States)

    Salama, Mona F; Jamal, Wafaa; Al Mousa, Haifa; Rotimi, Vincent

    2016-01-01

    Central line-associated bloodstream infections (CLABSIs) are an important healthcare-associated infection in critical care units. They cause substantial morbidity and mortality and incur high costs. The use of a central venous line (CVL) insertion bundle has been shown to decrease the incidence of CLABSIs. Our aim was to study the impact of a CVL insertion bundle on the incidence of CLABSI and to study the causative microbial agents in an intensive care unit in Kuwait. Surveillance for CLABSI was conducted by a trained infection control team using National Healthcare Safety Network (NHSN) case definitions and device-days measurement methods. During the intervention period, nursing staff used a central line care bundle consisting of (1) hand hygiene by the inserter, (2) maximal barrier precautions upon insertion by the physician inserting the catheter and a sterile drape covering the patient from head to toe, (3) use of a 2% chlorhexidine gluconate (CHG) in 70% ethanol scrub for the insertion site, (4) optimum catheter site selection, and (5) daily examination of the necessity of the central line. During the pre-intervention period, there were 5367 documented catheter-days and 80 CLABSIs, for an incidence density of 14.9 CLABSIs per 1000 catheter-days. After implementation of the interventions, there were 5052 catheter-days and 56 CLABSIs, for an incidence density of 11.08 per 1000 catheter-days. The reduction in CLABSIs per 1000 catheter-days was not statistically significant (P=0.0859). This study demonstrates that implementation of a central venous catheter post-insertion care bundle was associated with a reduction in CLABSI in an intensive care setting. PMID:26138518
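
    The incidence densities quoted in the abstract follow directly from the raw counts; the short calculation below reproduces them and the crude rate ratio. The reported P value comes from a formal statistical test that is not reproduced here.

```python
# Reproduce the CLABSI incidence densities reported in the abstract.
pre_infections, pre_catheter_days = 80, 5367
post_infections, post_catheter_days = 56, 5052

pre_rate = 1000 * pre_infections / pre_catheter_days     # ~14.9 per 1000 catheter-days
post_rate = 1000 * post_infections / post_catheter_days  # ~11.08 per 1000 catheter-days

print(f"pre-intervention:  {pre_rate:.1f} CLABSIs per 1000 catheter-days")
print(f"post-intervention: {post_rate:.2f} CLABSIs per 1000 catheter-days")
print(f"crude rate ratio:  {post_rate / pre_rate:.2f}")
```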

  7. Experimental data processing techniques by a personal computer

    International Nuclear Information System (INIS)

    A personal computer (16-bit, about 1 MB memory) can be used at low cost for experimental data processing. This report surveys the important techniques for A/D and D/A conversion, display, storage and transfer of experimental data. The items to be considered in the software are also discussed. Practical software programmed in BASIC and Assembler language is given as examples. Here we present some techniques to achieve faster processing in BASIC and show that a system composed of BASIC and Assembler is useful in practical experiments. System performance, such as processing speed and flexibility in setting operating conditions, depends strongly on the programming language. We have tested processing speed for some typical programming languages: BASIC (interpreter), C, FORTRAN and Assembler. For calculation, FORTRAN has the best performance, which is comparable to or better than Assembler even on a personal computer. (author)

  8. Heterogeneous arsenic enrichment in meta-sedimentary rocks in central Maine, United States

    Energy Technology Data Exchange (ETDEWEB)

    O' Shea, Beth, E-mail: bethoshea@sandiego.edu [Department of Marine Science and Environmental Studies, University of San Diego, 5998 Alcala Park, San Diego, CA 92110 (United States); Lamont-Doherty Earth Observatory of Columbia University, 61 Route 9W, Palisades, NY 10964 (United States); Stransky, Megan; Leitheiser, Sara [Department of Marine Science and Environmental Studies, University of San Diego, 5998 Alcala Park, San Diego, CA 92110 (United States); Brock, Patrick [School of Earth and Environmental Sciences, Queens College, City University of New York, 65-30 Kissena Blvd., Flushing, NY 11367 (United States); Marvinney, Robert G. [Maine Geological Survey, 93 State House Station, Augusta, ME 04333 (United States); Zheng, Yan [School of Earth and Environmental Sciences, Queens College, City University of New York, 65-30 Kissena Blvd., Flushing, NY 11367 (United States); Lamont-Doherty Earth Observatory of Columbia University, 61 Route 9W, Palisades, NY 10964 (United States)

    2015-02-01

    Arsenic is enriched up to 28 times the average crustal abundance of 4.8 mg kg{sup −1} for meta-sedimentary rocks of two adjacent formations in central Maine, USA where groundwater in the bedrock aquifer frequently contains elevated As levels. The Waterville Formation contains higher arsenic concentrations (mean As 32.9 mg kg{sup −1}, median 12.1 mg kg{sup −1}, n = 38) than the neighboring Vassalboro Group (mean As 19.1 mg kg{sup −1}, median 6.0 mg kg{sup −1}, n = 38). The Waterville Formation is a pelitic meta-sedimentary unit with abundant pyrite either visible or observed by scanning electron microprobe. Concentrations of As and S are strongly correlated (r = 0.88, p < 0.05) in the low grade phyllite rocks, and arsenic is detected up to 1944 mg kg{sup −1} in pyrite measured by electron microprobe. In contrast, statistically significant (p < 0.05) correlations between concentrations of As and S are absent in the calcareous meta-sediments of the Vassalboro Group, consistent with the absence of arsenic-rich pyrite in the protolith. Metamorphism converts the arsenic-rich pyrite to arsenic-poor pyrrhotite (mean As 1 mg kg{sup −1}, n = 15) during de-sulfidation reactions: the resulting metamorphic rocks contain arsenic but little or no sulfur indicating that the arsenic is now in new mineral hosts. Secondary weathering products such as iron oxides may host As, yet the geochemical methods employed (oxidative and reductive leaching) do not conclusively indicate that arsenic is associated only with these. Instead, silicate minerals such as biotite and garnet are present in metamorphic zones where arsenic is enriched (up to 130.8 mg kg{sup −1} As) where S is 0%. Redistribution of already variable As in the protolith during metamorphism and contemporary water–rock interaction in the aquifers, all combine to contribute to a spatially heterogeneous groundwater arsenic distribution in bedrock aquifers. - Highlights: • Arsenic is enriched up to 138 mg kg

  9. Heterogeneous arsenic enrichment in meta-sedimentary rocks in central Maine, United States

    International Nuclear Information System (INIS)

    Arsenic is enriched up to 28 times the average crustal abundance of 4.8 mg kg−1 for meta-sedimentary rocks of two adjacent formations in central Maine, USA where groundwater in the bedrock aquifer frequently contains elevated As levels. The Waterville Formation contains higher arsenic concentrations (mean As 32.9 mg kg−1, median 12.1 mg kg−1, n = 38) than the neighboring Vassalboro Group (mean As 19.1 mg kg−1, median 6.0 mg kg−1, n = 38). The Waterville Formation is a pelitic meta-sedimentary unit with abundant pyrite either visible or observed by scanning electron microprobe. Concentrations of As and S are strongly correlated (r = 0.88, p < 0.05) in the low grade phyllite rocks, and arsenic is detected up to 1944 mg kg−1 in pyrite measured by electron microprobe. In contrast, statistically significant (p < 0.05) correlations between concentrations of As and S are absent in the calcareous meta-sediments of the Vassalboro Group, consistent with the absence of arsenic-rich pyrite in the protolith. Metamorphism converts the arsenic-rich pyrite to arsenic-poor pyrrhotite (mean As 1 mg kg−1, n = 15) during de-sulfidation reactions: the resulting metamorphic rocks contain arsenic but little or no sulfur indicating that the arsenic is now in new mineral hosts. Secondary weathering products such as iron oxides may host As, yet the geochemical methods employed (oxidative and reductive leaching) do not conclusively indicate that arsenic is associated only with these. Instead, silicate minerals such as biotite and garnet are present in metamorphic zones where arsenic is enriched (up to 130.8 mg kg−1 As) where S is 0%. Redistribution of already variable As in the protolith during metamorphism and contemporary water–rock interaction in the aquifers, all combine to contribute to a spatially heterogeneous groundwater arsenic distribution in bedrock aquifers. - Highlights: • Arsenic is enriched up to 138 mg kg−1 in meta-sedimentary rocks in central Maine.

  10. Integrating post-Newtonian equations on graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Herrmann, Frank; Tiglio, Manuel [Department of Physics, Center for Fundamental Physics, and Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Silberholz, John [Center for Scientific Computation and Mathematical Modeling, University of Maryland, College Park, MD 20742 (United States); Bellone, Matias [Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Cordoba 5000 (Argentina); Guerberoff, Gustavo, E-mail: tiglio@umd.ed [Facultad de Ingenieria, Instituto de Matematica y Estadistica ' Prof. Ing. Rafael Laguardia' , Universidad de la Republica, Montevideo (Uruguay)

    2010-02-07

    We report on early results of a numerical and statistical study of binary black hole inspirals. The two black holes are evolved using post-Newtonian approximations starting with initially randomly distributed spin vectors. We characterize certain aspects of the distribution shortly before merger. In particular we note the uniform distribution of black hole spin vector dot products shortly before merger and a high correlation between the initial and final black hole spin vector dot products in the equal-mass, maximally spinning case. More than 300 million simulations were performed on graphics processing units, and we demonstrate a speed-up of a factor 50 over a more conventional CPU implementation. (fast track communication)

  11. From bentonite powder to engineered barrier units - an industrial process

    International Nuclear Information System (INIS)

    In the framework of the ESDRED Project, a consortium, called GME, dealt with the study and development of all required industrial processes for the fabrication of scale-1 buffer rings and discs, as well as all related means for transporting and handling the rings, the assembly in 4-unit sets, the packaging of buffer-ring assemblies, and all associated procedures. In 2006, a 100-t mould was built in order to compact in a few hours 12 rings and two discs measuring 2.3 m in diameter and 0.5 m in height, and weighing 4 t each. The ring-handling, assembly and transport means were tested successfully in 2007. (author)

  12. PO*WW*ER mobile treatment unit process hazards analysis

    Energy Technology Data Exchange (ETDEWEB)

    Richardson, R.B.

    1996-06-01

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards.

  13. PO*WW*ER mobile treatment unit process hazards analysis

    International Nuclear Information System (INIS)

    The objective of this report is to demonstrate that a thorough assessment of the risks associated with the operation of the Rust Geotech patented PO*WW*ER mobile treatment unit (MTU) has been performed and documented. The MTU was developed to treat aqueous mixed wastes at the US Department of Energy (DOE) Albuquerque Operations Office sites. The MTU uses evaporation to separate organics and water from radionuclides and solids, and catalytic oxidation to convert the hazardous into byproducts. This process hazards analysis evaluated a number of accident scenarios not directly related to the operation of the MTU, such as natural phenomena damage and mishandling of chemical containers. Worst case accident scenarios were further evaluated to determine the risk potential to the MTU and to workers, the public, and the environment. The overall risk to any group from operation of the MTU was determined to be very low; the MTU is classified as a Radiological Facility with low hazards

  14. Non-parallel processing: Gendered attrition in academic computer science

    Science.gov (United States)

    Cohoon, Joanne Louise Mcgrath

    2000-10-01

    This dissertation addresses the issue of disproportionate female attrition from computer science as an instance of gender segregation in higher education. By adopting a theoretical framework from organizational sociology, it demonstrates that the characteristics and processes of computer science departments strongly influence female retention. The empirical data identifies conditions under which women are retained in the computer science major at comparable rates to men. The research for this dissertation began with interviews of students, faculty, and chairpersons from five computer science departments. These exploratory interviews led to a survey of faculty and chairpersons at computer science and biology departments in Virginia. The data from these surveys are used in comparisons of the computer science and biology disciplines, and for statistical analyses that identify which departmental characteristics promote equal attrition for male and female undergraduates in computer science. This three-pronged methodological approach of interviews, discipline comparisons, and statistical analyses shows that departmental variation in gendered attrition rates can be explained largely by access to opportunity, relative numbers, and other characteristics of the learning environment. Using these concepts, this research identifies nine factors that affect the differential attrition of women from CS departments. These factors are: (1) The gender composition of enrolled students and faculty; (2) Faculty turnover; (3) Institutional support for the department; (4) Preferential attitudes toward female students; (5) Mentoring and supervising by faculty; (6) The local job market, starting salaries, and competitiveness of graduates; (7) Emphasis on teaching; and (8) Joint efforts for student success. This work contributes to our understanding of the gender segregation process in higher education. In addition, it contributes information that can lead to effective solutions for an

  15. Developments in medical image processing and computational vision

    CERN Document Server

    Jorge, Renato

    2015-01-01

    This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image, which took place in Funchal, Madeira, Portugal, 14-16 October 2013.  The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimat...

  16. Optical fiber network of the data acquisition sub system of SIIP Integral Information System of Process, Unit 2

    International Nuclear Information System (INIS)

    In this article, a description of the communication network, based in optical fiber, which interlace the data acquisition equipment with the computers of Laguna Verde Nuclear Power Plant of SIIP is made. It is also presented a description of the equipment and accessories which conform the network. The requirements imposed by the Central which stated the selection of optical fiber as interlace mean are also outstanding. SIIP is a computerized, centralized and integrated system which make information functions by means of the acquisition of signals and the required computational process for the continuous evaluation of the nuclear power plant in normal and emergency conditions. Is an exclusive monitoring system with no one action on the generation process; that is to say, it only acquire, process, store information and assist to the personnel in the operational analysis of the nuclear plant. SIIP is a Joint Project with three participant institutions: Federal Electricity Commission/ Electrical Research Institute/ General Electric. (Author)

  17. Computer-Aided Process Model For Carbon/Phenolic Materials

    Science.gov (United States)

    Letson, Mischell A.; Bunker, Robert C.

    1996-01-01

    Computer program implements thermochemical model of processing of carbon-fiber/phenolic-matrix composite materials into molded parts of various sizes and shapes. Directed toward improving fabrication of rocket-engine-nozzle parts, also used to optimize fabrication of other structural components, and material-property parameters changed to apply to other materials. Reduces costs by reducing amount of laboratory trial and error needed to optimize curing processes and to predict properties of cured parts.

  18. Computer-aided analysis of the forging process

    OpenAIRE

    Šraml, Matjaž; Stupan, Janez; Potrč, Iztok; Kramberger, Janez

    2012-01-01

    This paper presents computer simulation of the forging process using the finite volume method (FVM). The process of forging is highly non-linear, where both large deformations and continuously changing boundary conditions occur. In most practical cases, the initial billet shape is relatively simple, but the final shape of the end product is often geometrically complex, to the extent that it is commonly obtained using multiple forming stages. Examples of the numerical simulation of the forged ...

  19. Application of Computer Simulation Modeling to Medication Administration Process Redesign

    OpenAIRE

    Huynh, Nathan; Snyder, Rita; Vidal, Jose M.; Tavakoli, Abbas S.; Cai, Bo

    2012-01-01

    The medication administration process (MAP) is one of the most high-risk processes in health care. MAP workflow redesign can precipitate both unanticipated and unintended consequences that can lead to new medication safety risks and workflow inefficiencies. Thus, it is necessary to have a tool to evaluate the impact of redesign approaches in advance of their clinical implementation. This paper discusses the development of an agent-based MAP computer simulation model that can be used to assess...

  20. Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units

    Science.gov (United States)

    Howard, Michael P.; Anderson, Joshua A.; Nikoubashman, Arash; Glotzer, Sharon C.; Panagiotopoulos, Athanassios Z.

    2016-06-01

    We present an algorithm based on linear bounding volume hierarchies (LBVHs) for computing neighbor (Verlet) lists using graphics processing units (GPUs) for colloidal systems characterized by large size disparities. We compare this to a GPU implementation of the current state-of-the-art CPU algorithm based on stenciled cell lists. We report benchmarks for both neighbor list algorithms in a Lennard-Jones binary mixture with synthetic interaction range disparity and a realistic colloid solution. LBVHs outperformed the stenciled cell lists for systems with moderate or large size disparity and dilute or semidilute fractions of large particles, conditions typical of colloidal systems.

  1. Accelerated Molecular Dynamics Simulations with the AMOEBA Polarizable Force Field on Graphics Processing Units

    OpenAIRE

    Lindert, Steffen; Bucher, Denis; Eastman, Peter; Pande, Vijay; McCammon, J. Andrew

    2013-01-01

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing units (GPUs) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale eve...

  2. Accelerated molecular dynamics simulations with the AMOEBA polarizable force field on graphics processing units

    OpenAIRE

    Lindert, S; Bucher, D; Eastman, P; Pande, V.; McCammon, JA

    2013-01-01

    The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing units (GPUs) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long time-scale eve...

  3. Computational Models of Relational Processes in Cognitive Development

    Science.gov (United States)

    Halford, Graeme S.; Andrews, Glenda; Wilson, William H.; Phillips, Steven

    2012-01-01

    Acquisition of relational knowledge is a core process in cognitive development. Relational knowledge is dynamic and flexible, entails structure-consistent mappings between representations, has properties of compositionality and systematicity, and depends on binding in working memory. We review three types of computational models relevant to…

  4. Future Information Processing Technology--1983, Computer Science and Technology.

    Science.gov (United States)

    Kay, Peg, Ed.; Powell, Patricia, Ed.

    Developed by the Institute for Computer Sciences and Technology and the Defense Intelligence Agency with input from other federal agencies, this detailed document contains the 1983 technical forecast for the information processing industry through 1997. Part I forecasts the underlying technologies of hardware and software, discusses changes in the…

  5. Computer simulation program is adaptable to industrial processes

    Science.gov (United States)

    Schultz, F. E.

    1966-01-01

    The Reaction kinetics ablation program /REKAP/, developed to simulate ablation of various materials, provides mathematical formulations for computer programs which can simulate certain industrial processes. The programs are based on the use of nonsymmetrical difference equations that are employed to solve complex partial differential equation systems.

  6. Tutorial: Signal Processing in Brain-Computer Interfaces

    NARCIS (Netherlands)

    Garcia Molina, G.

    2010-01-01

    Research in Electroencephalogram (EEG) based Brain-Computer Interfaces (BCIs) has been considerably expanding during the last few years. Such an expansion owes to a large extent to the multidisciplinary and challenging nature of BCI research. Signal processing undoubtedly constitutes an essential co

  7. Integrated computer system architecture for Kori Unit 2 I and C upgrade

    International Nuclear Information System (INIS)

    The integrated computer system (ICS) architecture for Kori Unit 2 I and C upgrade has been developed. The integrated computer system of the plant supplies the infrastructure which allows the integration of existing and future computer systems. This infrastructure supports integrated upgrades, provides access to all of the plants information sources, and facilitates common interfaces between the human and the machine. Integration of the plant systems and information is essential to cost-effectively enhance cooperation between systems and to reduce unnecessary duplication of functions and information

  8. Perceptual weights for loudness reflect central spectral processing.

    OpenAIRE

    Joshi, Suyash Narendra; Jesteadt, Walt

    2011-01-01

    Weighting patterns for loudness obtained using the reverse correlation method are thought to reveal the relative contributions of different frequency regions to total loudness, the equivalent of specific loudness. Current models of loudness assume that specific loudness is determined by peripheral processes such as compression and masking. Here we test this hypothesisusing 20-tone harmonic complexes (200Hz f0, 200 to 4000Hz, 250 ms, 65 dB/Component) added in opposite phase relationships (Schr...

  9. Frontier areas and exploration techniques. Frontier uranium exploration in the South-Central United States

    International Nuclear Information System (INIS)

    Selected areas of the South-Central United States outside the known U trends of South Texas have a largely untested potential for the occurrence of significant U mineralization. These areas, underlain by Tertiary and older sediments, include parts of Texas, Oklahoma, Arkansas, Louisiana, Mississippi and Alabama. The commonly accepted criteria employed in U exploration are applicable to these frontier areas but special consideration must also be given to the atypical geologic aspects of such areas as they may apply to relatively unique types of U mineralization or to the development of special exploration criteria for common types of roll-front and fault-and dome-related uranium mineralization. The procedures used in evaluating frontier areas should be based on comprehensive evaluations involving: (1) location and analysis of potential source rocks (e.g., intrusive igneous rocks, bentonitic sediments, unique complexes, etc.); (2) definition of regional variations in the potential host sediments (e.g. marginal marine to nonmarine environments of deposition); (3) review of all available radiometric data in Tertiary or older rocks; (4) local groundwater sampling; (5) widely spaced reconnaissance (or stratigraphic) drilling, coring and borehole geophysical logging to define favorable sedimentary facies and to establish the specific lithologic character of the sediments; and (6) detailed petrographic evaluation of all available samples to define the environment of deposition and diagenetic history of ''favorable'' sediments. If procedures produce favorable results, an expanded exploration program is justified. Depths up to 3,000 feet should be anticipated if up-dip information is favorable. Selected areas are discussed that have: (1) favorable source and host rocks;(2) favorable age; (3) favorable regional and local structure; and (4) radiometric characteristics favorable for U mineralization of potentially economic grade and reserves in the areas

  10. Students' Beliefs about Mobile Devices vs. Desktop Computers in South Korea and the United States

    Science.gov (United States)

    Sung, Eunmo; Mayer, Richard E.

    2012-01-01

    College students in the United States and in South Korea completed a 28-item multidimensional scaling (MDS) questionnaire in which they rated the similarity of 28 pairs of multimedia learning materials on a 10-point scale (e.g., narrated animation on a mobile device Vs. movie clip on a desktop computer) and a 56-item semantic differential…

  11. Our U.S. Energy Future, Student Guide. Computer Technology Program Environmental Education Units.

    Science.gov (United States)

    Northwest Regional Educational Lab., Portland, OR.

    This is the student guide in a set of five computer-oriented environmental/energy education units. Contents are organized into the following parts or lessons: (1) Introduction to the U.S. Energy Future; (2) Description of the "FUTURE" programs; (3) Effects of "FUTURE" decisions; and (4) Exercises on the U.S. energy future. This guide supplements a…

  12. Computer-Assisted Scheduling of Army Unit Training: An Application of Simulated Annealing.

    Science.gov (United States)

    Hart, Roland J.; Goehring, Dwight J.

    This report of an ongoing research project intended to provide computer assistance to Army units for the scheduling of training focuses on the feasibility of simulated annealing, a heuristic approach for solving scheduling problems. Following an executive summary and brief introduction, the document is divided into three sections. First, the Army…

  13. Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners.

    Directory of Open Access Journals (Sweden)

    Carol Q Pham

    Full Text Available Cochlear implant (CI listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking or outside (central masking the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. Contrastingly, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners.

  14. In-silico design of computational nucleic acids for molecular information processing.

    Science.gov (United States)

    Ramlan, Effirul Ikhwan; Zauner, Klaus-Peter

    2013-01-01

    Within recent years nucleic acids have become a focus of interest for prototype implementations of molecular computing concepts. During the same period the importance of ribonucleic acids as components of the regulatory networks within living cells has increasingly been revealed. Molecular computers are attractive due to their ability to function within a biological system; an application area extraneous to the present information technology paradigm. The existence of natural information processing architectures (predominately exemplified by protein) demonstrates that computing based on physical substrates that are radically different from silicon is feasible. Two key principles underlie molecular level information processing in organisms: conformational dynamics of macromolecules and self-assembly of macromolecules. Nucleic acids support both principles, and moreover computational design of these molecules is practicable. This study demonstrates the simplicity with which one can construct a set of nucleic acid computing units using a new computational protocol. With the new protocol, diverse classes of nucleic acids imitating the complete set of boolean logical operators were constructed. These nucleic acid classes display favourable thermodynamic properties and are significantly similar to the approximation of successful candidates implemented in the laboratory. This new protocol would enable the construction of a network of interconnecting nucleic acids (as a circuit) for molecular information processing. PMID:23647621

  15. Stakeholder consultations regarding centralized power procurement processes in Ontario

    International Nuclear Information System (INIS)

    In 2004, Ontario held 4 Requests for Proposals (RFPs) to encourage the development of new clean renewable combined heat and power generation and the implementation of conservation and demand management programs. Details of a stakeholder consultation related to the RFP process were presented were in this paper. The aim of the consultation was to synthesize stakeholder comments and to provide appropriate recommendations for future RFPs held by the Ontario Power Authority (OPA). The financial burden of bidding was discussed, as well as communications procedures and contract ambiguities. Issues concerning the criteria used for qualifying potential bidders and evaluating project submissions were reviewed. Recommendations for future processes included prequalification, a simplification in collusion requirements, and a fixed time response. It was also recommended that the process should not emphasize financing as lenders do not make firm commitments to bidders prior to a bid being accepted. It was suggested that the amount of bid security should vary with the project size and phase of development, and that the contracts for differences format should be refined to allow participants to propose parameters. Issues concerning audit procedures and performance deviations were reviewed. It was suggested that contract terms should be compatible with gas markets. It was also suggested that the OPA should adopt a more simplified approach to co-generation proposals, where proponents can specify amounts of energy and required prices. The adoption of the Swiss challenge approach of allowing other vendors an opportunity to match or beat terms on an offer was recommended. It was suggested that renewables should be acquired through a targeted and volume limited standard-offer process to be set yearly. Conservation and demand management recommendations were also presented. It was suggested that the OPA should serve as a facilitator of clean development mechanism (CDM) programs. It was

  16. Assessment of processes affecting low-flow water quality of Cedar Creek, west-central Illinois

    Science.gov (United States)

    Schmidt, Arthur R.; Freeman, W.O.; McFarlane, R.D.

    1989-01-01

    Water quality and the processes that affect dissolved oxygen, nutrient (nitrogen and phosphorus species), and algal concentrations were evaluated for a 23.8-mile reach of Cedar Creek near Galesburg, west-central Illinois, during periods of warm-weather, low-flow conditions. Water quality samples were collected and stream conditions were measured over a diel (24 hour) period on three occasions during July and August 1985. Analysis of data from the diel-sampling periods indicates that concentrations of iron, copper, manganese, phenols, and total dissolved-solids exceeded Illinois ' general-use water quality standards in some locations. Dissolved-oxygen concentrations were less than the State minimum standard throughout much of the study reach. These data were used to calibrate and verify a one-dimensional, steady-state, water quality model. The computer model was used to assess the relative effects on low-flow water quality of processes such as algal photosynthesis and respiration, ammonia oxidation, biochemical oxygen demand, sediment oxygen demand, and stream reaeration. Results from model simulations and sensitivity analysis indicate that sediment oxygen demand is the principal cause of low dissolved-oxygen concentrations in the creek. (USGS)

  17. Acceleration of High Angular Momentum Electron Repulsion Integrals and Integral Derivatives on Graphics Processing Units.

    Science.gov (United States)

    Miao, Yipu; Merz, Kenneth M

    2015-04-14

    We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivative of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and generation of NVidia GPUs using a previously described algorithm [Miao and Merz J. Chem. Theory Comput. 2013, 9, 965-976.]. Hence, we developed an algorithm to compute f type ERIs and d type ERI derivatives on GPUs. Our benchmarks shows the performance GPU enable ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications. PMID:26574356

  18. A sampler of useful computational tools for applied geometry, computer graphics, and image processing foundations for computer graphics, vision, and image processing

    CERN Document Server

    Cohen-Or, Daniel; Ju, Tao; Mitra, Niloy J; Shamir, Ariel; Sorkine-Hornung, Olga; Zhang, Hao (Richard)

    2015-01-01

    A Sampler of Useful Computational Tools for Applied Geometry, Computer Graphics, and Image Processing shows how to use a collection of mathematical techniques to solve important problems in applied mathematics and computer science areas. The book discusses fundamental tools in analytical geometry and linear algebra. It covers a wide range of topics, from matrix decomposition to curvature analysis and principal component analysis to dimensionality reduction.Written by a team of highly respected professors, the book can be used in a one-semester, intermediate-level course in computer science. It

  19. The ATLAS Fast TracKer Processing Units

    CERN Document Server

    Krizka, Karol; The ATLAS collaboration

    2016-01-01

    The Fast Tracker is a hardware upgrade to the ATLAS trigger and data-acquisition system, with the goal of providing global track reconstruction by the start of the High Level Trigger starts. The Fast Tracker can process incoming data from the whole inner detector at full first level trigger rate, up to 100 kHz, using custom electronic boards. At the core of the system is a Processing Unit installed in a VMEbus crate, formed by two sets of boards: the Associative Memory Board and a powerful rear transition module called the Auxiliary card, while the second set is the Second Stage board. The associative memories perform the pattern matching looking for correlations within the incoming data, compatible with track candidates at coarse resolution. The pattern matching task is performed using custom application specific integrated circuits, called associative memory chips. The auxiliary card prepares the input and reject bad track candidates obtained from from the Associative Memory Board using the full precision a...

  20. Computer-Aided Multiscale Modelling for Chemical Process Engineering

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Gani, Rafiqul

    2007-01-01

    Chemical processes are generally modeled through monoscale approaches, which, while not adequate, satisfy a useful role in product-process design. In this case, use of a multi-dimensional and multi-scale model-based approach has importance in product-process development. A computer-aided framework...... for model generation, analysis, solution and implementation is necessary for the development and application of the desired model-based approach for product-centric process design/analysis. This goal is achieved through the combination of a system for model development (ModDev), and a modelling tool...... (MoT) for model translation, analysis and solution. The integration of ModDev, MoT and ICAS or any other external software or process simulator (using COM-Objects) permits the generation of different models and/or process configurations for purposes of simulation, design and analysis. Consequently, it...

  1. Cassava Processing and Marketing by Rural Women in the Central Region of Cameroon

    OpenAIRE

    SHIOYA, Akiyo

    2013-01-01

    This study examines the development of rural women's commercial activities in Central Cameroon, particularly the Department of Lekié, which is adjacent to Yaoundé, the capital of Cameroon. I focused on cassava processing technologies and the sale of cassavabased processed foods undertaken by women in a suburban farming village. Cassava is one of the main staple foods in central Cameroon, including in urban areas. One of its characteristics is that it keeps for a long period in the ground but ...

  2. Multiplanar and two-dimensional imaging of central airway stenting with multidetector computed tomography

    Directory of Open Access Journals (Sweden)

    Ozgul Mehmet

    2012-08-01

    Full Text Available Abstract Background Multidetector computed tomography (MDCT provides guidance for primary screening of the central airways. The aim of our study was assessing the contribution of multidetector computed tomography- two dimensional reconstruction in the management of patients with tracheobronchial stenosis prior to the procedure and during a short follow up period of 3 months after the endobronchial treatment. Methods This is a retrospective study with data collected from an electronic database and from the medical records. Patients evaluated with MDCT and who had undergone a stenting procedure were included. A Philips RSGDT 07605 model MDCT was used, and slice thickness, 3 mm; overlap, 1.5 mm; matrix, 512x512; mass, 90 and kV, 120 were evaluated. The diameters of the airways 10 mm proximal and 10 mm distal to the obstruction were measured and the stent diameter (D was determined from the average between D upper and D lower. Results Fifty-six patients, 14 (25% women and 42 (75% men, mean age 55.3 ± 13.2 years (range: 16-79 years, were assessed by MDCT and then treated with placement of an endobronchial stent. A computed tomography review was made with 6 detector Philips RSGDT 07605 multidetector computed tomography device. Endobronchial therapy was provided for the patients with endoluminal lesions. Stents were placed into the area of stenosis in patients with external compression after dilatation and debulking procedures had been carried out. In one patient the migration of a stent was detected during the follow up period by using MDCT. Conclusions MDCT helps to define stent size, length and type in patients who are suitable for endobronchial stinting. This is a non-invasive, reliable method that helps decisions about optimal stent size and position, thus reducing complications.

  3. Semi-automatic process partitioning for parallel computation

    Science.gov (United States)

    Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John

    1988-01-01

    On current multiprocessor architectures one must carefully distribute data in memory in order to achieve high performance. Process partitioning is the operation of rewriting an algorithm as a collection of tasks, each operating primarily on its own portion of the data, to carry out the computation in parallel. A semi-automatic approach to process partitioning is considered in which the compiler, guided by advice from the user, automatically transforms programs into such an interacting task system. This approach is illustrated with a picture processing example written in BLAZE, which is transformed into a task system maximizing locality of memory reference.

  4. A learnable parallel processing architecture towards unity of memory and computing

    Science.gov (United States)

    Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.

    2015-08-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area.

  5. A learnable parallel processing architecture towards unity of memory and computing.

    Science.gov (United States)

    Li, H; Gao, B; Chen, Z; Zhao, Y; Huang, P; Ye, H; Liu, L; Liu, X; Kang, J

    2015-01-01

    Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for the data-driven applications such as big data and Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named "iMemComp", where memory and logic are unified with single-type devices. Leveraging nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped "iMemComp" with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on "iMemComp" can improve the speed by 76.8% and the power dissipation by 60.3%, together with a 700 times aggressive reduction in the circuit area. PMID:26271243

  6. Modeling data mining processes in computational multi-agent systems

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Kazík, O.

    New York: ACM, 2011 - (Grosky, W.; Badr, Y.; Chbeir, R.), s. 91-97 ISBN 978-1-4503-1047-5. [MEDES 2011 : The International ACM Conference on Management of Emergent Digital EcoSystems. San Francisco (US), 21.11.2011-23.11.2011] R&D Projects: GA ČR GAP202/11/1368; GA MŠk(CZ) ME10023 Institutional research plan: CEZ:AV0Z10300504 Keywords : data mining * computational MAS * roles * description logic * pre-processing * ontology * closed-world assumption Subject RIV: IN - Informatics, Computer Science

  7. Test bank to accompany Computers data and processing

    CERN Document Server

    Deitel, Harvey M

    1980-01-01

    Test Bank to Accompany Computers and Data Processing provides a variety of questions from which instructors can easily custom tailor exams appropriate for their particular courses. This book contains over 4000 short-answer questions that span the full range of topics for introductory computing course.This book is organized into five parts encompassing 19 chapters. This text provides a very large number of questions so that instructors can produce different exam testing essentially the same topics in succeeding semesters. Three types of questions are included in this book, including multiple ch

  8. Solar augmentation for process heat with central receiver technology

    Science.gov (United States)

    Kotzé, Johannes P.; du Toit, Philip; Bode, Sebastian J.; Larmuth, James N.; Landman, Willem A.; Gauché, Paul

    2016-05-01

    Coal fired boilers are currently one of the most widespread ways to deliver process heat to industry. John Thompson Boilers (JTB) offer industrial steam supply solutions for industry and utility scale applications in Southern Africa. Transport cost add significant cost to the coal price in locations far from the coal fields in Mpumalanga, Gauteng and Limpopo. The Helio100 project developed a low cost, self-learning, wireless heliostat technology that requires no ground preparation. This is attractive as an augmentation alternative, as it can easily be installed on any open land that a client may have available. This paper explores the techno economic feasibility of solar augmentation for JTB coal fired steam boilers by comparing the fuel savings of a generic 2MW heliostat field at various locations throughout South Africa.

  9. Central pain processing in chronic tension-type headache

    DEFF Research Database (Denmark)

    Lindelof, Kim; Ellrich, Jens; Jensen, Rigmor;

    2009-01-01

    OBJECTIVE: Chronic tension-type headache (CTTH) affects 3% of the population. Directly and indirectly it causes high costs and considerable loss of quality of life. The mechanisms of this disorder are poorly understood and the treatment possibilities are therefore limited. The blink reflex (BR) r...... combined homotopic and heterotopic effect of the conditioning pain onto the blink reflex could account for this finding.......) reflects neuronal excitability due to nociceptive input in the brainstem. The aim of this study was to investigate nociceptive processing at the level of the brainstem in an experimental pain model of CTTH symptoms. METHODS: The effect of conditioning pain, 5 min infusion of hypertonic saline into the neck...... muscles, was investigated in 20 patients with CTTH and 20 healthy controls. In addition, a pilot study with isotonic saline was performed with 5 subjects in each group. The BR was elicited by electrical stimuli with an intensity of four times the pain threshold, with a superficial concentric electrode. We...

  10. Future evolution of the Fast TracKer (FTK) processing unit

    CERN Document Server

    Gentsos, C; The ATLAS collaboration; Giannetti, P; Magalotti, D; Nikolaidis, S

    2014-01-01

    The Fast Tracker (FTK) processor [1] for the ATLAS experiment has a computing core made of 128 Processing Units that reconstruct tracks in the silicon detector in a ~100 μsec deep pipeline. The track parameter resolution provided by FTK enables the HLT trigger to identify efficiently and reconstruct significant samples of fermionic Higgs decays. Data processing speed is achieved with custom VLSI pattern recognition, linearized track fitting executed inside modern FPGAs, pipelining, and parallel processing. One large FPGA executes full resolution track fitting inside low resolution candidate tracks found by a set of 16 custom Asic devices, called Associative Memories (AM chips) [2]. The FTK dual structure, based on the cooperation of VLSI dedicated AM and programmable FPGAs, is maintained to achieve further technology performance, miniaturization and integration of the current state of the art prototypes. This allows to fully exploit new applications within and outside the High Energy Physics field. We plan t...

  11. Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)

    International Nuclear Information System (INIS)

    Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in an inhomogeneous media for complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes in a desktop PC. The interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing. This is due to the massive parallelism provided by the latest GPU technologies which is the most promising solution to the challenge of performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using the state-of-the-art GPU technologies. (author)

  12. Ferrofluid Simulations with the Barnes-Hut Algorithm on Graphics Processing Units

    CERN Document Server

    Polyakov, A Yu; Denisov, S; Reva, V V; Hanggi, P

    2012-01-01

    We present an approach to molecular-dynamics simulations of dilute ferrofluids on graphics processing units (GPUs). Our numerical scheme is based on a GPU-oriented modification of the Barnes-Hut (BH) algorithm designed to increase the parallelism of computations. For an ensemble consisting of one million of ferromagnetic particles, the performance of the proposed algorithm on a Tesla M2050 GPU demonstrated a computational-time speed-up of four order of magnitude compared to the performance of the sequential All-Pairs (AP) algorithm on a single-core CPU, and two order of magnitude compared to the performance of the optimized AP algorithm on the GPU. The accuracy of the scheme is corroborated by comparing theoretical predictions with the results of numerical simulations.

  13. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Indian Academy of Sciences (India)

    M. K. Griffiths; V. Fedun; R.Erdélyi

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1–3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  14. Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography

    International Nuclear Information System (INIS)

    Simplified techniques of cerebral angiography using a mobile X-ray unit and computed radiography (CR) are discussed. Computed radiography is a digital radiography system in which an imaging plate is used as an X-ray detector and a final image is displayed on the film. In the angiograms performed with CR, the spatial frequency components can be enhanced for the easy analysis of fine blood vessels. Computed radiography has an automatic sensitivity and a latitude-setting mechanism, thus serving as an 'automatic camera.' This mechanism is useful for radiography with a mobile X-ray unit in hospital wards, intensive care units, or operating rooms where the appropriate setting of exposure conditions is difficult. We applied this mechanism to direct percutaneous carotid angiography and intravenous digital subtraction angiography with a mobile X-ray unit. Direct percutaneous carotid angiography using CR and a mobile X-ray unit were taken after the manual injection of a small amount of a contrast material through a fine needle. We performed direct percutaneous carotid angiography with this method 68 times on 25 cases from August 1986 to December 1987. Of the 68 angiograms, 61 were evaluated as good, compared with conventional angiography. Though the remaining seven were evaluated as poor, they were still diagnostically effective. This method is found useful for carotid angiography in emergency rooms, intensive care units, or operating rooms. Cerebral venography using CR and a mobile X-ray unit was done after the manual injection of a contrast material through the bilateral cubital veins. The cerebral venous system could be visualized from 16 to 24 seconds after the beginning of the injection of the contrast material. We performed cerebral venography with this method 14 times on six cases. These venograms were better than conventional angiograms in all cases. This method may be useful in managing patients suffering from cerebral venous thrombosis. (J.P.N.)

  15. Computing the Efficiency of Decision-Making Units with FuzzyData Using Ideal and Anti-Ideal Decision Making Units

    OpenAIRE

    Mehrdad Nabahat; Fardin Esmaeeli Sangari

    2012-01-01

    Data Envelopment Analysis (DEA) is a nonparametrical, method for evaluating the efficiency of Decision Making Units (DMU) using mathematical programming. There are several methods for analyzing the efficiency of Decision Making Units, among which are Charnes Cooper Rodes (CCR) and Banker Charnes Cooper (BCC), which compute the efficiency of Decision Making Units using the linear programming or Wang’s method which evaluates the efficiency of Decision Making Units using Ideal Decision Making Un...

  16. PREMATH: a Precious-Material Holdup Estimator for unit operations and chemical processes

    International Nuclear Information System (INIS)

    A computer program, PREMATH (Precious Material Holdup Estimator), has been developed to permit inventory estimation in vessels involved in unit operations and chemical processes. This program has been implemented in an operating nuclear fuel processing plant. PREMATH's purpose is to provide steady-state composition estimates for material residing in process vessels until representative samples can be obtained and chemical analyses can be performed. Since these compositions are used for inventory estimation, the results are determined for and cataloged in container-oriented files. The estimated compositions represent material collected in applicable vessels - including consideration for material previously acknowledged in these vessels. The program utilizes process measurements and simple material balance models to estimate material holdups and distribution within unit operations. During simulated run testing, PREMATH-estimated inventories typically produced material balances within 7% of the associated measured material balances for uranium and within 16% of the associated, measured material balances for thorium (a less valuable material than uranium) during steady-state process operation

  17. Computer Science Teacher Professional Development in the United States: A Review of Studies Published between 2004 and 2014

    Science.gov (United States)

    Menekse, Muhsin

    2015-01-01

    While there has been a remarkable interest to make computer science a core K-12 academic subject in the United States, there is a shortage of K-12 computer science teachers to successfully implement computer sciences courses in schools. In order to enhance computer science teacher capacity, training programs have been offered through teacher…

  18. MODEL OF INFORMATION SECURITY FOR CONTROL PROCESSES OF COMPUTER NETWORKS

    Directory of Open Access Journals (Sweden)

    Kucher V. A.

    2015-06-01

    Full Text Available In order to improve the security of information transfer we have offered one of the possible approaches to modeling process control computer networks with elements of intelligent decision support. We proceed from the graph model of network nodes which are network devices with software control agents, and arcs are logical channels of information exchange between the equipment computer systems. We built an addressless sensing technology which ensures the completeness of monitoring of all network equipment. To classify the computer networks state we provided a method for calculating the values of reliability. Development of signal mismatch triggers the control cycle as a result of which the adjustment of the state of network equipment. For existing tools we proposed adding network control expert system consists of a knowledge base, inference mechanism and means of description and fill in the knowledge base

  19. A computer-assisted process for supersonic aircraft conceptual design

    Science.gov (United States)

    Johnson, V. S.

    1985-01-01

    Design methodology was developed and existing major computer codes were selected to carry out the conceptual design of supersonic aircraft. A computer-assisted design process resulted from linking the codes together in a logical manner to implement the design methodology. The process does not perform the conceptual design of a supersonic aircraft but it does provide the designer with increased flexibility, especially in geometry generation and manipulation. Use of the computer-assisted process for the conceptual design of an advanced technology Mach 3.5 interceptor showed the principal benefit of the process to be the ability to use a computerized geometry generator and then directly convert the geometry between formats used in the geometry code and the aerodynamics codes. Results from the interceptor study showed that a Mach 3.5 standoff interceptor with a 1000 nautical-mile mission radius and a payload of eight Phoenix missiles appears to be feasible with the advanced technologies considered. A sensitivity study showed that technologies affecting the empty weight and propulsion system would be critical in the final configuration characteristics with aerodynamics having a lesser effect for small perturbations around the baseline.

  20. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.

  1. First International Conference Multimedia Processing, Communication and Computing Applications

    CERN Document Server

    Guru, Devanur

    2013-01-01

    ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the Conference is ‘Multimedia Processing and its Applications’. Multimedia processing has been an active research area contributing to many frontiers of today’s science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, which is widely used in many disciplines such as Medical Diagnosis, Digital Forensic, Object Recognition, Image and Video Analysis, Robotics, Military, Automotive Industries, Surveillance and Security, Quality Inspection, etc. The book will help the research community gain insight into the overlapping work being carried out across the globe at many medical hospitals and instit...

  2. A Systematic Computer-Aided Framework for Integrated Design and Control of Chemical Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted;

    Chemical processes are conventionally designed through a sequential approach: first, a steady-state process design is obtained and then control structure synthesis, which in most cases is based on heuristics, is performed. Therefore, process design and process ... -defined operational conditions, whereas controllability is considered to maintain desired operating points of the process under any kind of imposed disturbance during normal operating conditions. In this work, a systematic hierarchical computer-aided framework for integrated process design and control of chemical ... reactor-separator-recycle (RSR) system. Next, it is shown that the RSR system can be replaced by an intensified unit operation, a reactive distillation column (RDC), for which the optimal design-control solution is also presented. The operation and control of the RSR and RDC at the optimal designs are compared with other candidate ...

  3. The AMchip04 and the Processing Unit Prototype for the FastTracker

    CERN Document Server

    Andreani, A; The ATLAS collaboration; Beretta, M; Bogdan, M; Citterio, M; Alberti, F; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M; Shochet, M; Stabile, A; Tang, J; Tompkins, L; Volpi, G

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment complexity and the accelerator backgrounds and luminosity increase, we need increasingly complex and exclusive selections. We present the first prototype of a new Processing Unit, the core of the FastTracker processor for ATLAS, whose computing power is such that a couple of hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV in ATLAS events up to Phase II instantaneous luminosities (5×10³⁴ cm⁻² s⁻¹) with an event input rate of 100 kHz and a latency below hundreds of microseconds. We plan extremely powerful, very compact and low-consumption units for the far future, essential to increase the efficiency and purity of the Level 2 selected samples through the intensive use of tracking. This strategy requires massive computing power to minimize the online execution time of complex tracking algorithms. The time-consuming pattern recognition problem, generall...

  4. Computer-assisted cartography using topographic properties: precision and accuracy of local soil maps in central Mexico

    Directory of Open Access Journals (Sweden)

    Gustavo Cruz-Cárdenas

    2011-06-01

    Full Text Available Map units directly related to soil-landscape properties are generated by local soil classes; therefore, taking the knowledge of farmers into consideration is essential to automate the procedure. The aim of this study was to map local soil classes by computer-assisted cartography (CAC), using several combinations of topographic properties produced by GIS (digital elevation model, aspect, slope, and profile curvature). A decision tree was used to find the number of topographic properties required for digital cartography of the local soil classes. The maps produced were evaluated based on the attributes of map quality defined as precision and accuracy of the CAC-based maps. The evaluation was carried out in Central Mexico using three maps of local soil classes with contrasting landscape and climatic conditions (desert, temperate, and tropical). In the three areas the precision (56 %) of the CAC maps based on elevation as topographic feature was higher than when based on slope, aspect and profile curvature. The accuracy of the maps (boundary locations) was however low (33 %); in other words, further research is required to improve this indicator.
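
    As a minimal sketch of the decision-tree step described above (not the authors' workflow; the terrain attributes, class labels and tree depth are placeholder assumptions), a classifier can be trained on DEM-derived attributes and inspected to see which attributes carry most of the information:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Hypothetical training table: one row per field observation with
    # DEM-derived attributes and the farmer-defined local soil class.
    rng = np.random.default_rng(0)
    X = rng.random((300, 4))          # columns: elevation, slope, aspect, profile curvature
    y = rng.integers(0, 3, size=300)  # three local soil classes (placeholder labels)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # A shallow tree makes it easy to see which terrain attributes carry
    # most of the information for separating the local classes.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    print("map accuracy on held-out points:", accuracy_score(y_test, tree.predict(X_test)))
    print("attribute importances:", tree.feature_importances_)
    ```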

  5. Centralized configuration system for a large scale farm of network booted computers

    International Nuclear Information System (INIS)

    The ATLAS trigger and data acquisition online farm is composed of nearly 3,000 computing nodes, with various configurations, functions and requirements. Maintaining such a cluster is a big challenge from the computer administration point of view, thus various tools have been adopted by the System Administration team to help manage the farm efficiently. In particular, a custom central configuration system, ConfDBv2, was developed for the overall farm management. The majority of the systems are network booted, and are running an operating system image provided by a Local File Server (LFS) via the local area network (LAN). This method guarantees the uniformity of the system and allows, in case of issues, very fast recovery of the local disks which could be used as scratch area. It also provides greater flexibility as the nodes can be reconfigured and restarted with a different operating system in a very timely manner. A user-friendly web interface offers a quick overview of the current farm configuration and status, allowing changes to be applied on selected subsets or on the whole farm in an efficient and consistent manner. Also, various actions that would otherwise be time consuming and error prone can be quickly and safely executed. We describe the design, functionality and performance of this system and its web–based interface, including its integration with other CERN and ATLAS databases and with the monitoring infrastructure.

  6. Calibrated Multiple Event Relocations of the Central and Eastern United States

    Science.gov (United States)

    Yeck, W. L.; Benz, H.; McNamara, D. E.; Bergman, E.; Herrmann, R. B.; Myers, S. C.

    2015-12-01

    Earthquake locations are a first-order observable that forms the basis of a wide range of seismic analyses. Currently, the ANSS catalog primarily contains published single-event earthquake locations that rely on assumed 1D velocity models. Increasing the accuracy of cataloged earthquake hypocenter locations and origin times and constraining their associated errors can improve our understanding of Earth structure and have a fundamental impact on subsequent seismic studies. Multiple-event relocation algorithms often increase the precision of relative earthquake hypocenters but are hindered by their limited ability to provide realistic location uncertainties for individual earthquakes. Recently, a Bayesian approach to the multiple event relocation problem has proven to have many benefits including the ability to: (1) handle large data sets; (2) easily incorporate a priori hypocenter information; (3) model phase assignment errors; and, (4) correct for errors in the assumed travel time model. In this study we employ Bayesloc [Myers et al., 2007, 2009] to relocate earthquakes in the Central and Eastern United States from 1964-present. We relocate ~11,000 earthquakes with a dataset of ~439,000 arrival time observations. Our dataset includes arrival-time observations from the ANSS catalog supplemented with arrival-time data from the Reviewed ISC Bulletin (prior to 1981), targeted local studies, and arrival-time data from the TA Array. One significant benefit of the Bayesloc algorithm is its ability to incorporate a priori constraints on the probability distributions of specific earthquake location parameters. To constrain the inversion, we use high-quality calibrated earthquake locations from local studies, including studies from: Raton Basin, Colorado; Mineral, Virginia; Guy, Arkansas; Cheneville, Quebec; Oklahoma; and Mt. Carmel, Illinois. We also add depth constraints to 232 earthquakes from regional moment tensors. Finally, we add constraints from four historic (1964

  7. Corrective Action Decision Document for Corrective Action Unit 417: Central Nevada Test Area Surface, Nevada

    International Nuclear Information System (INIS)

    This Corrective Action Decision Document (CADD) identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 417: Central Nevada Test Area Surface, Nevada, under the Federal Facility Agreement and Consent Order. Located in Hot Creek Valley in Nye County, Nevada, and consisting of three separate land withdrawal areas (UC-1, UC-3, and UC-4), CAU 417 is comprised of 34 corrective action sites (CASs) including 2 underground storage tanks, 5 septic systems, 8 shaker pad/cuttings disposal areas, 1 decontamination facility pit, 1 burn area, 1 scrap/trash dump, 1 outlier area, 8 housekeeping sites, and 16 mud pits. Four field events were conducted between September 1996 and June 1998 to complete a corrective action investigation indicating that the only contaminant of concern was total petroleum hydrocarbon (TPH) which was found in 18 of the CASs. A total of 1,028 samples were analyzed. During this investigation, a statistical approach was used to determine which depth intervals or layers inside individual mud pits and shaker pad areas were above the State action levels for the TPH. Other related field sampling activities (i.e., expedited site characterization methods, surface geophysical surveys, direct-push geophysical surveys, direct-push soil sampling, and rotosonic drilling located septic leachfields) were conducted in this four-phase investigation; however, no further contaminants of concern (COCs) were identified. During and after the investigation activities, several of the sites which had surface debris but no COCs were cleaned up as housekeeping sites, two septic tanks were closed in place, and two underground storage tanks were removed. The focus of this CADD was to identify CAAs which would promote the prevention or mitigation of human exposure to surface and subsurface soils with contaminant

  8. GPU-Based FFT Computation for Multi-Gigabit WirelessHD Baseband Processing

    Directory of Open Access Journals (Sweden)

    Nicholas Hinitt

    2010-01-01

    Full Text Available Next-generation Graphics Processing Units (GPUs) are being considered for non-graphics applications. Millimeter wave (60 GHz) wireless networks that are capable of multi-gigabit per second (Gbps) transfer rates require a significant baseband throughput. In this work, we consider the baseband of WirelessHD, a 60 GHz communications system, which can provide a data rate of up to 3.8 Gbps over a short-range wireless link. Thus, we explore the feasibility of achieving gigabit baseband throughput using GPUs. One of the most computationally intensive functions commonly used in baseband communications, the Fast Fourier Transform (FFT) algorithm, is implemented on an NVIDIA GPU using its general-purpose computing platform, the Compute Unified Device Architecture (CUDA). The paper first investigates the implementation of an FFT algorithm using the GPU hardware and exploiting the computational capability available. It then outlines the limitations discovered and the methods used to overcome these challenges. Finally, a new algorithm to compute the FFT is proposed, which reduces interprocessor communication. It is further optimized by improving memory access, enabling the processing rate to exceed 4 Gbps and achieving a processing time for a 512-point FFT of less than 200 ns using a two-GPU solution.
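
    The batched 512-point transform at the heart of this workload can be prototyped on the CPU before porting it to the GPU. The sketch below is a minimal NumPy reference (the batch size, random data and throughput print-out are illustrative assumptions, not figures from the paper); a GPU implementation would replace the np.fft.fft call with a CUDA/cuFFT or CuPy equivalent while keeping the same data layout:

    ```python
    import time
    import numpy as np

    N = 512        # FFT length discussed for the WirelessHD baseband
    BATCH = 4096   # number of independent symbols transformed per call (assumed)

    # Complex baseband symbols; random placeholder data.
    symbols = (np.random.randn(BATCH, N) + 1j * np.random.randn(BATCH, N)).astype(np.complex64)

    # Batched transform along the last axis. A GPU implementation (CUDA/cuFFT,
    # or CuPy as a drop-in) would replace this call but keep the same data layout.
    spectra = np.fft.fft(symbols, axis=-1)
    print("spectrum batch shape:", spectra.shape)

    # Rough CPU throughput figure, for reference only.
    t0 = time.perf_counter()
    np.fft.fft(symbols, axis=-1)
    dt = time.perf_counter() - t0
    print(f"~{BATCH * N / dt / 1e9:.3f} Gsamples/s on this CPU (reference baseline)")
    ```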

  9. A direct computer control concept for mammalian cell fermentation processes

    OpenAIRE

    Büntemeyer, Heino; Marzahl, Rainer; Lehmann, Jürgen

    1994-01-01

    In the last 10 years, new tasks and the special demands that mammalian cells place on culture conditions have driven the development of complex small-scale fermentation setups. The use of continuous fermentation and cell retention devices requires appropriate process control systems. An arrangement for control and data acquisition of complex laboratory-scale bioreactors is presented. The fundamental idea was the use of a standard personal computer, which is connected to pumps, valves and senso...

  10. Synthesis of computational structures for analog signal processing

    CERN Document Server

    Popa, Cosmin Radu

    2011-01-01

    Presents the most important classes of computational structures for analog signal processing, including differential or multiplier structures, squaring or square-rooting circuits, exponential or Euclidean distance structures and active resistor circuits. Introduces the original concept of the multifunctional circuit, an active structure that is able to implement, starting from the same circuit core, a multitude of continuous mathematical functions. Covers mathematical analysis, design and implementation of a multitude of function generator structures.

  11. Simulation of Stir Casting Process Using Computational Fluid Dynamics

    OpenAIRE

    M. V. S. Pavan Kumar; M. V. Sekhar Babu

    2015-01-01

    The stir casting process is one of the methods used to produce Metal Matrix Composites (MMCs), but the non-homogeneous particle distribution of the material is the greatest problem faced nowadays in producing MMCs. The present simulations were conducted to study how the speed of the stirrer affects the particle distribution of the non-homogeneous material. The simulations were performed using Computational Fluid Dynamics. In this experiment Copper is used as the Semi Solid Metal (SSM) and Silicon-Carbide is used a...

  12. A Parallel Pipelined Computer Architecture for Digital Signal Processing

    OpenAIRE

    Gümüşkaya, Halûk

    1998-01-01

    This paper presents a parallel pipelined computer architecture and its six network configurations targeted for the implementation of a wide range of digital signal processing (DSP) algorithms described by both atomic and large grain data flow graphs. The proposed architecture is considered together with programmability, yielding a system solution that combines extensive concurrency with simple programming. It is an SSIMD (Skewed Single Instruction Multiple Data) or MIMD (Multiple Instruction ...

  13. The multistage nature of labour migration from Eastern and Central Europe (experience of Ukraine, Poland, United Kingdom and Germany during the 2002-2011 period)

    OpenAIRE

    Khrystyna FOGEL

    2015-01-01

    This article examines the consequences of the biggest round of EU Enlargement in 2004 on the labour migration flows from the new accession countries (A8) of the Eastern and Central Europe to Western Europe. The main focus of our research is the unique multistage nature of labour migration in the region. As a case study, we take labour migration from Poland to the United Kingdom and Germany and similar processes taking place in the labour migration from Ukraine to Poland. In particular, a new ...

  14. Analyzing shelf life of processed cheese by soft computing

    Directory of Open Access Journals (Sweden)

    S. Goyal

    2012-09-01

    Full Text Available Feedforward soft computing multilayer models were developed for analyzing the shelf life of processed cheese. The models were trained with 80% of the total observations and validated with the remaining 20% of the data. Mean Square Error, Root Mean Square Error, Coefficient of Determination and Nash-Sutcliffe Coefficient were used in order to compare the prediction ability of the developed models. From the study, it is concluded that feedforward multilayer models are good at predicting the shelf life of processed cheese stored at 7-8 °C.
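
    A minimal sketch of this kind of feedforward model and the four reported measures is shown below (the input features, network size and synthetic data are placeholder assumptions, not the study's dataset); the coefficient of determination is computed here as the squared correlation and the Nash-Sutcliffe coefficient as one minus the ratio of error variance to observed variance:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    def metrics(obs, pred):
        """MSE, RMSE, coefficient of determination (squared correlation)
        and Nash-Sutcliffe efficiency, the four measures named in the abstract."""
        mse = np.mean((obs - pred) ** 2)
        rmse = np.sqrt(mse)
        r2 = np.corrcoef(obs, pred)[0, 1] ** 2
        nse = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - np.mean(obs)) ** 2)
        return mse, rmse, r2, nse

    # Placeholder data standing in for the cheese storage observations:
    # inputs might be compositional/sensory measurements, output the shelf life (days).
    rng = np.random.default_rng(1)
    X = rng.random((120, 5))
    y = 30 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 120)

    # 80 % of the observations for training, 20 % for validation, as in the study.
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, train_size=0.8, random_state=1)

    model = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=1)
    model.fit(X_tr, y_tr)

    print("MSE, RMSE, R^2, NSE:", metrics(y_va, model.predict(X_va)))
    ```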

  15. Decreased Mexican and Central American labor migration to the United States in the context of the crisis

    OpenAIRE

    José Luis Hernández Suárez

    2016-01-01

    This article analyzes the migration of Mexican and Central American workers to the United States, based on the theory of imperialism and underdevelopment, especially as regards the absolute surplus workers given the chronic inability of the underdeveloped capitalist economy to absorb, and the expected depletion of the system, to make room for some of them through international migration, because the law of population of capital installed it makes international labor...

  16. Multiple-User, Multitasking, Virtual-Memory Computer System

    Science.gov (United States)

    Generazio, Edward R.; Roth, Don J.; Stang, David B.

    1993-01-01

    Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.

  17. Computation studies into architecture and energy transfer properties of photosynthetic units from filamentous anoxygenic phototrophs

    Energy Technology Data Exchange (ETDEWEB)

    Linnanto, Juha Matti [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu (Estonia); Freiberg, Arvi [Institute of Physics, University of Tartu, Riia 142, 51014 Tartu, Estonia and Institute of Molecular and Cell Biology, University of Tartu, Riia 23, 51010 Tartu (Estonia)

    2014-10-06

    We have used different computational methods to study structural architecture, and light-harvesting and energy transfer properties of the photosynthetic unit of filamentous anoxygenic phototrophs. Due to the huge number of atoms in the photosynthetic unit, a combination of atomistic and coarse methods was used for electronic structure calculations. The calculations reveal that the light energy absorbed by the peripheral chlorosome antenna complex transfers efficiently via the baseplate and the core B808–866 antenna complexes to the reaction center complex, in general agreement with the present understanding of this complex system.

  18. Central axis dose verification in patients treated with total body irradiation of photons using a Computed Radiography system

    International Nuclear Information System (INIS)

    The aim is to propose and evaluate a method for central axis dose verification in patients treated with total body irradiation (TBI) with photons, using images obtained through a Computed Radiography (CR) system. Readings from Computed Radiography (Fuji) portal imaging cassettes were correlated with the absorbed dose measured in water with an ionization chamber, using 10 x 10 irradiation fields on the 60Co unit. The analytical and graphical expression was obtained with the 'Origin8' software, and the TBI patient portal verification images were processed with ImageJ to obtain the patient dose. To validate the results, the absorbed dose in RW3 phantoms of different thicknesses was measured with an ionization chamber, simulating real TBI conditions. Finally, a retrospective study over the last 4 years was performed, obtaining the patients' absorbed dose from the image readings and comparing it with the planned dose. The analytical equation obtained allows the absorbed dose to be estimated from the image pixel value and the dose measured with the ionization chamber, correlated with patient clinical records. The results were compared with reported evidence, obtaining a difference of less than 2%; the 3 methods were compared and the results agree within 10%. (Author)
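
    The core of the method is a calibration curve relating CR image pixel value to ionization-chamber dose. The sketch below illustrates that step with placeholder numbers and an assumed low-order polynomial fit (the record's actual analytical expression, obtained in Origin, is not reproduced here):

    ```python
    import numpy as np

    # Hypothetical calibration points: mean pixel value read from the CR image
    # (e.g. with ImageJ) versus absorbed dose measured with the ionization chamber.
    pixel_value = np.array([1200., 1850., 2400., 3100., 3900., 4600.])
    dose_cgy    = np.array([ 25.,   50.,   75.,  100.,  125.,  150.])

    # A low-order polynomial is used here purely as a stand-in functional form.
    coeffs = np.polyfit(pixel_value, dose_cgy, deg=2)
    calibration = np.poly1d(coeffs)

    # Estimating the central-axis dose of a patient image from its pixel reading.
    patient_pixel = 2750.0
    estimated = calibration(patient_pixel)
    print(f"estimated dose: {estimated:.1f} cGy")

    # Percentage difference against the planned dose, as in the retrospective comparison.
    planned = 90.0
    print(f"difference vs. planned: {100 * (estimated - planned) / planned:.1f} %")
    ```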

  19. Automation of the CFD Process on Distributed Computing Systems

    Science.gov (United States)

    Tejnil, Ed; Gee, Ken; Rizk, Yehia M.

    2000-01-01

    A script system was developed to automate and streamline portions of the CFD process. The system was designed to facilitate the use of CFD flow solvers on supercomputer and workstation platforms within a parametric design event. Integrating solver pre- and postprocessing phases, the fully automated ADTT script system marshalled the required input data, submitted the jobs to available computational resources, and processed the resulting output data. A number of codes were incorporated into the script system, which itself was part of a larger integrated design environment software package. The IDE and scripts were used in a design event involving a wind tunnel test. This experience highlighted the need for efficient data and resource management in all parts of the CFD process. To facilitate the use of CFD methods to perform parametric design studies, the script system was developed using UNIX shell and Perl languages. The goal of the work was to minimize the user interaction required to generate the data necessary to fill a parametric design space. The scripts wrote out the required input files for the user-specified flow solver, transferred all necessary input files to the computational resource, submitted and tracked the jobs using the resource queuing structure, and retrieved and post-processed the resulting dataset. For computational resources that did not run queueing software, the script system established its own simple first-in-first-out queueing structure to manage the workload. A variety of flow solvers were incorporated in the script system, including INS2D, PMARC, TIGER and GASP. Adapting the script system to a new flow solver was made easier through the use of object-oriented programming methods. The script system was incorporated into an ADTT integrated design environment and evaluated as part of a wind tunnel experiment. The system successfully generated the data required to fill the desired parametric design space. This stressed the computational
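
    The original ADTT scripts were written in UNIX shell and Perl; purely as an illustration of the simple first-in-first-out queueing idea described above (the command lines, slot count and polling interval are placeholder assumptions), a dispatcher for hosts without batch queueing software could look like this:

    ```python
    import subprocess
    import time
    from collections import deque

    # Placeholder commands standing in for flow-solver runs, one per parametric
    # design point; the real system wrote the solver input files beforehand.
    jobs = deque([
        ["echo", "run case_001"],
        ["echo", "run case_002"],
        ["echo", "run case_003"],
    ])

    MAX_CONCURRENT = 2   # solver runs the workstation is allowed to host at once
    running = []

    # Minimal first-in-first-out dispatcher for machines without queueing software.
    while jobs or running:
        # Launch queued jobs while execution slots are free.
        while jobs and len(running) < MAX_CONCURRENT:
            running.append(subprocess.Popen(jobs.popleft()))
        # Keep only the jobs that are still running; finished ones free a slot.
        running = [p for p in running if p.poll() is None]
        time.sleep(1.0)
    ```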

  20. COMPUTER MODEL AND SIMULATION OF A GLOVE BOX PROCESS

    Energy Technology Data Exchange (ETDEWEB)

    C. FOSTER; ET AL

    2001-01-01

    The development of facilities to deal with the disposition of nuclear materials at an acceptable level of Occupational Radiation Exposure (ORE) is a significant issue facing the nuclear community. One solution is to minimize the worker's exposure through the use of automated systems. However, the adoption of automated systems for these tasks is hampered by the challenging requirements that these systems must meet in order to be cost-effective solutions in the hazardous nuclear materials processing environment. Retrofitting current glove box technologies with automation systems represents potential near-term technology that can be applied to reduce worker ORE associated with work in nuclear materials processing facilities. Successful deployment of automation systems for these applications requires the development of testing and deployment strategies to ensure the highest level of safety and effectiveness. Historically, safety tests are conducted with glove box mock-ups around the finished design. This late detection of problems leads to expensive redesigns and costly deployment delays. With widespread availability of computers and cost-effective simulation software it is possible to discover and fix problems early in the design stages. Computer simulators can easily create a complete model of the system allowing a safe medium for testing potential failures and design shortcomings. The majority of design specification is now done on computer and moving that information to a model is relatively straightforward. With a complete model and results from a Failure Mode Effect Analysis (FMEA), redesigns can be worked early. Additional issues such as user accessibility, component replacement, and alignment problems can be tackled early in the virtual environment provided by computer simulation. In this case, a commercial simulation package is used to simulate a lathe process operation at the Los Alamos National Laboratory (LANL). The Lathe process operation is

  1. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of a remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA) and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  2. Experimental determination of the segregation process using computer tomography

    Directory of Open Access Journals (Sweden)

    Konstantin Beckmann

    2016-07-01

    Full Text Available Modelling methods such as DEM and CFD are increasingly used for developing highly efficient combine cleaning systems. For this purpose it is necessary to verify the complex segregation and separation processes in the combine cleaning system. One way is to determine the segregation and separation function using 3D computer tomography (CT). This method makes it possible to visualize and analyse the movement behaviour of the components of the mixture during the segregation and separation process, as well as to derive descriptive process parameters. A mechanically excited miniature test rig was designed and built at the company CLAAS Selbstfahrende Erntemaschinen GmbH to achieve this aim. The investigations were carried out at the Fraunhofer Institute for Integrated Circuits IIS. Through the evaluation of the recorded images the segregation process is described visually, and a more detailed analysis enabled the development of a segregation and separation function based on the differing densities of grain and material other than grain.

  3. Computational Approaches for Modeling the Multiphysics in Pultrusion Process

    DEFF Research Database (Denmark)

    Carlone, P.; Baran, Ismet; Hattel, Jesper Henri;

    2013-01-01

    Pultrusion is a continuous manufacturing process used to produce high-strength composite profiles with constant cross section. The mutual interactions between heat transfer, resin flow and cure reaction, variation in the material properties, and stress/distortion evolutions strongly affect the process dynamics together with the mechanical properties and the geometrical precision of the final product. In the present work, pultrusion process simulations are performed for a unidirectional (UD) graphite/epoxy composite rod including several processing physics, such as fluid flow, heat transfer, chemical reaction, and solid mechanics. The pressure increase and the resin flow at the tapered inlet of the die are calculated by means of a computational fluid dynamics (CFD) finite volume model. Several models, based on different homogenization levels and solution schemes, are proposed and compared for...
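
    Of the coupled physics listed above, the cure reaction is the easiest to isolate in a few lines. The sketch below integrates a standard autocatalytic (Kamal-type) cure rate law along an assumed temperature history through the die; the kinetic constants, pull speed and temperature profile are placeholders, not the values used in the paper:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Placeholder kinetic constants for an epoxy-like resin (not the paper's values).
    A, E, R = 1.5e5, 6.0e4, 8.314   # pre-exponential (1/s), activation energy (J/mol), gas constant
    m, n = 0.5, 1.5                 # autocatalytic exponents

    def pull_temperature(t, pull_speed=0.005, die_length=0.9):
        """Rough temperature history seen by a material point moving through
        the heated die at constant pull speed (assumed profile, for illustration)."""
        x = min(pull_speed * t, die_length)
        return 300.0 + 120.0 * x / die_length      # K, linear ramp along the die

    def cure_rate(t, alpha):
        """Autocatalytic rate law: d(alpha)/dt = A exp(-E/RT) alpha^m (1-alpha)^n."""
        T = pull_temperature(t)
        a = np.clip(alpha[0], 1e-6, 1.0 - 1e-6)
        return [A * np.exp(-E / (R * T)) * a**m * (1.0 - a)**n]

    # Integrate over the residence time in the die (die length / pull speed = 180 s).
    sol = solve_ivp(cure_rate, (0.0, 180.0), [0.01], max_step=0.5)
    print(f"degree of cure at the die exit: {sol.y[0, -1]:.2f}")
    ```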

  4. Accelerating All-Atom Normal Mode Analysis with Graphics Processing Unit.

    Science.gov (United States)

    Liu, Li; Liu, Xiaofeng; Gong, Jiayu; Jiang, Hualiang; Li, Honglin

    2011-06-14

    All-atom normal mode analysis (NMA) is an efficient way to predict the collective motions in a given macromolecule, which is essential for the understanding of protein biological function and drug design. However, the calculations are limited in time scale mainly because the required diagonalization of the Hessian matrix by Householder-QR transformation is a computationally exhausting task. In this paper, we demonstrate the parallel computing power of the graphics processing unit (GPU) in NMA by mapping Householder-QR transformation onto GPU using Compute Unified Device Architecture (CUDA). The results revealed that the GPU-accelerated all-atom NMA could reduce the runtime of diagonalization significantly and achieved over 20× speedup over CPU-based NMA. In addition, we analyzed the influence of precision on both the performance and the accuracy of GPU. Although the performance of GPU with double precision is weaker than that with single precision in theory, more accurate results and an acceptable speedup of double precision were obtained in our approach by reducing the data transfer time to a minimum. Finally, the inherent drawbacks of GPU and the corresponding solution to deal with the limitation in computational scale are also discussed in this study. PMID:26596427
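
    The step the paper offloads to the GPU is the diagonalization of the mass-weighted Hessian via Householder reduction and QR iteration. As a CPU-side sketch of that linear-algebra step only (a random symmetric matrix stands in for a real all-atom Hessian, and the masses are arbitrary), the normal-mode frequencies and eigenvectors follow from a single symmetric eigendecomposition:

    ```python
    import numpy as np

    # Stand-in for a real all-atom Hessian (second derivatives of the potential);
    # 3N x 3N, symmetric. A random symmetric matrix is used purely for illustration.
    n_atoms = 200
    dim = 3 * n_atoms
    rng = np.random.default_rng(2)
    H = rng.standard_normal((dim, dim))
    H = 0.5 * (H + H.T)

    # Mass-weighting: F = M^(-1/2) H M^(-1/2) with a diagonal mass matrix.
    masses = np.repeat(rng.uniform(1.0, 16.0, n_atoms), 3)
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    F = H * np.outer(inv_sqrt_m, inv_sqrt_m)

    # This diagonalization is the step accelerated on the GPU in the paper
    # (Householder reduction + QR); numpy.linalg.eigh plays the same role here.
    eigvals, eigvecs = np.linalg.eigh(F)

    # For a true minimum-energy Hessian the eigenvalues are non-negative and the
    # six smallest correspond to rigid-body translations and rotations.
    print("lowest 10 eigenvalues:", eigvals[:10])
    ```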

  5. GAMER: A Graphic Processing Unit Accelerated Adaptive-Mesh-Refinement Code for Astrophysics

    Science.gov (United States)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  6. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  7. 2012 Groundwater Monitoring Report Central Nevada Test Area, Subsurface Corrective Action Unit 443

    Energy Technology Data Exchange (ETDEWEB)

    None

    2013-04-01

    The Central Nevada Test Area was the site of a 0.2- to 1-megaton underground nuclear test in 1968. The surface of the site has been closed, but the subsurface is still in the corrective action process. The corrective action alternative selected for the site was monitoring with institutional controls. Annual sampling and hydraulic head monitoring are conducted as part of the subsurface corrective action strategy. The site is currently in the fourth year of the 5-year proof-of-concept period that is intended to validate the compliance boundary. Analytical results from the 2012 monitoring are consistent with those of previous years. Tritium remains at levels below the laboratory minimum detectable concentration in all wells in the monitoring network. Samples collected from reentry well UC-1-P-2SR, which is not in the monitoring network but was sampled as part of supplemental activities conducted during the 2012 monitoring, indicate concentrations of tritium that are consistent with previous sampling results. This well was drilled into the chimney shortly after the detonation, and water levels continue to rise, demonstrating the very low permeability of the volcanic rocks. Water level data from new wells MV-4 and MV-5 and recompleted well HTH-1RC indicate that hydraulic heads are still recovering from installation and testing. Data from wells MV-4 and MV-5 also indicate that head levels have not yet recovered from the 2011 sampling event during which several thousand gallons of water were purged. It has been recommended that a low-flow sampling method be adopted for these wells to allow head levels to recover to steady-state conditions. Despite the lack of steady-state groundwater conditions, hydraulic head data collected from alluvial wells installed in 2009 continue to support the conceptual model that the southeast-bounding graben fault acts as a barrier to groundwater flow at the site.

  8. Water-chemical process in reactor units of nuclear icebreakers and floating power units

    International Nuclear Information System (INIS)

    The design-specific features and operational experience of the reactor plants used in Russian nuclear ships and icebreakers are discussed. The role of different factors affecting the water-chemical characteristics of the primary coolant circuit is considered. The primary circuit of ship propulsion reactors is closed and has a relatively small volume; after washing and first filling, the coolant is exchanged no more than 3-4 times over the whole operating period. A general approach to water chemistry for the primary coolant circuits of ship propulsion reactor plants is suggested, based on generalization of the large volume of information gained from laboratory investigations into water-chemical processes, from benchmark tests, and from long-term operation, which is the most important source of information. It is shown that the ammonium water chemistry applied in ship propulsion plants is stable and easily established and maintained, and that the volume of liquid radioactive waste produced is small. Use of these conditions in reactor plants of new small-power NPP and floating power unit projects is considered reasonable and sufficiently supported by data. Operational experience shows that improvements may be directed at changes in the regime of cleaning system utilization and at increasing the service lifetime of the ion-exchange resin

  9. Evapotranspiration Units for the Diamond Valley Flow System Groundwater Discharge Area, Central Nevada, 2010

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data were created as part of a hydrologic study to characterize groundwater budgets and water quality in the Diamond Valley Flow System (DVFS), central...

  10. Lossy hyperspectral image compression on a graphics processing unit: parallelization strategy and performance evaluation

    Science.gov (United States)

    Santos, Lucana; Magli, Enrico; Vitulli, Raffaele; Núñez, Antonio; López, José F.; Sarmiento, Roberto

    2013-01-01

    There is a pressing need for new hardware architectures for the implementation of algorithms for hyperspectral image compression on board satellites. Graphics processing units (GPUs) represent a very attractive opportunity, offering the possibility to dramatically increase the computation speed in applications that are data and task parallel. An algorithm for the lossy compression of hyperspectral images is implemented on a GPU using the Nvidia compute unified device architecture (CUDA) parallel computing architecture. The parallelization strategy is explained, with emphasis on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary due to the existing data dependencies. Experimental results are obtained by comparing the performance of the GPU implementation with a single-threaded CPU implementation, showing high speedups of up to 15.41. A profiling of the algorithm is provided, demonstrating the high performance of the designed parallel entropy coding phase. The accuracy of the GPU implementation is presented, as well as the effect of the configuration parameters on performance. The convenience of using GPUs for on-board processing is demonstrated, and solutions are proposed to the potential difficulties that will be encountered when accelerating hyperspectral compression algorithms, should space-qualified GPUs become a reality in the near future.

  11. The ATLAS Fast Tracker Processing Units - input and output data preparation

    CERN Document Server

    Bolz, Arthur Eugen; The ATLAS collaboration

    2016-01-01

    The ATLAS Fast Tracker is a hardware processor built to reconstruct tracks at a rate of up to 100 kHz and provide them to the high-level trigger system. The Fast Tracker will allow the trigger to utilize tracking information from the entire detector at an earlier event selection stage than ever before, allowing for more efficient event rejection. The connections of the system to the detector read-outs and to the high-level trigger computing farms are made through custom boards implementing the Advanced Telecommunications Computing Architecture (ATCA) standard. The input is processed by the Input Mezzanines and Data Formatter boards, designed to receive and sort the data coming from the Pixel and Semiconductor Tracker. The Fast Tracker to Level-2 Interface Card connects the system to the computing farm. The Input Mezzanines are 128 boards, performing clustering, placed on the 32 Data Formatter mother boards that sort the information into the 64 logical regions required by the downstream processing units. This necessitat...

  12. Ultra-Fast Displaying Spectral Domain Optical Doppler Tomography System Using a Graphics Processing Unit

    Directory of Open Access Journals (Sweden)

    Jeong-Yeon Kim

    2012-05-01

    Full Text Available We demonstrate an ultrafast-display Spectral Domain Optical Doppler Tomography system using Graphics Processing Unit (GPU) computing. The calculation of the FFT and the Doppler frequency shift is accelerated by the GPU. Our system can display processed OCT and ODT images simultaneously in real time at 120 fps for 1,024 pixels × 512 lateral A-scans. The computing time for the Doppler information was dependent on the size of the moving average window, but with a window size of 32 pixels the ODT computation time is only 8.3 ms, which is comparable to the data acquisition time. The phase noise also decreases significantly with the window size. Real-time display performance for OCT/ODT is very important for clinical applications that need immediate diagnosis for screening or biopsy, and intraoperative surgery can benefit greatly from real-time display of flow-rate information. Moreover, the GPU is an attractive tool for clinical and commercial systems offering functional OCT features as well.
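
    The Doppler step that the GPU accelerates is, at its core, a phase difference between adjacent A-scans followed by a moving average. The sketch below reproduces that step on the CPU with NumPy (the frame size follows the 1,024 × 512 figures above, but the random data, the 32-sample window and the layout are illustrative assumptions):

    ```python
    import numpy as np

    DEPTH, ALINES = 1024, 512      # frame size quoted in the abstract
    WINDOW = 32                    # moving-average window, as discussed above

    # Complex OCT A-scans after the FFT of the spectral interferograms
    # (random placeholder data here).
    rng = np.random.default_rng(3)
    frame = rng.standard_normal((DEPTH, ALINES)) + 1j * rng.standard_normal((DEPTH, ALINES))

    # Phase-resolved Doppler: phase of the conjugate product of adjacent A-scans.
    conj_prod = frame[:, 1:] * np.conj(frame[:, :-1])

    # Averaging over neighbouring A-scans before taking the angle suppresses
    # phase noise (the abstract notes noise drops with window size).
    kernel = np.ones(WINDOW) / WINDOW
    smoothed = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, conj_prod)
    phase_shift = np.angle(smoothed)

    # The Doppler frequency is phase_shift / (2*pi*T) with T the A-line period;
    # axial velocity follows from the centre wavelength and refractive index.
    print("phase-shift image shape:", phase_shift.shape)
    ```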

  13. Children's Writing Processes when Using Computers: Insights Based on Combining Analyses of Product and Process

    Science.gov (United States)

    Gnach, Aleksandra; Wiesner, Esther; Bertschi-Kaufmann, Andrea; Perrin, Daniel

    2007-01-01

    Children and young people are increasingly performing a variety of writing tasks using computers, with word processing programs thus becoming their natural writing environment. The development of keystroke logging programs enables us to track the process of writing, without changing the writing environment for the writers. In the myMoment schools…

  14. Processing techniques for data from the Kuosheng Unit 1 shakedown safety-relief-valve tests

    International Nuclear Information System (INIS)

    This report describes techniques developed at the Lawrence Livermore National Laboratory, Livermore, CA, for processing original data from the Taiwan Power Company's Kuosheng MKIII Unit 1 Safety Relief Valve Shakedown Tests conducted in April/May 1981. The computer codes used, TPSORT, TPPLOT, and TPPSD, form a special evaluation system for taking the data from its original packed binary form to ordered, calibrated ASCII transducer files and then to the production of time-history plots, numerical output files, and spectral analyses. The data processing techniques described provide a convenient means of independently examining and analyzing a unique database for steam condensation phenomena in the MKIII wetwell. The techniques developed for handling these data are applicable to the treatment of similar, but perhaps differently structured, experiment data sets
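
    The TPPSD spectral-analysis step can be illustrated with a short stand-in (the sampling rate, the synthetic pressure trace and the Welch parameters below are assumptions for illustration only, not the original code or data):

    ```python
    import numpy as np
    from scipy.signal import welch

    FS = 2000.0                      # assumed sampling rate of the transducer channel (Hz)
    t = np.arange(0, 10.0, 1.0 / FS)

    # Placeholder calibrated pressure trace: a low-frequency tone plus broadband
    # noise, standing in for one ordered ASCII transducer file.
    trace = 0.3 * np.sin(2 * np.pi * 12.0 * t) + 0.05 * np.random.randn(t.size)

    # Power spectral density of the time history (the kind of output TPPSD produced).
    freqs, psd = welch(trace, fs=FS, nperseg=4096)
    print("dominant frequency: %.1f Hz" % freqs[np.argmax(psd)])
    ```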

  15. 78 FR 47011 - Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants

    Science.gov (United States)

    2013-08-02

    ... identification as Draft Regulatory Guide, DG-1208 on August 22, 2012 (77 FR 50722) for a 60-day public comment... COMMISSION Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants..., ``Software Unit Testing for Digital Computer Software Used in Safety Systems of Nuclear Power Plants.''...

  16. 24 CFR 290.21 - Computing annual number of units eligible for substitution of tenant-based assistance or...

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Computing annual number of units eligible for substitution of tenant-based assistance or alternative uses. 290.21 Section 290.21 Housing and... Multifamily Projects § 290.21 Computing annual number of units eligible for substitution of...

  17. A snow hydroclimatology of the central and southern Appalachian Mountains, United States of America

    Science.gov (United States)

    Graybeal, Daniel Y.

    Background. A significant vulnerability to snowmelt-related flooding in the Appalachians was demonstrated by massive events in March, 1936; January, 1996; and January, 1998. Yet, no quantitative estimate of this vulnerability has been published for these mountains. High elevations extending far southward confound the extrapolation of snow hydroclimatology from adjacent regions. Objectives. The principal objective was to develop a complete snow hydroclimatology of the central and southern Appalachians, considering the deposition, detention, and depletion phases of snow cover. A snowfall climatology addressed whether and how often sufficient snow falls to create a flood hazard, while a snow cover climatology addressed whether and how often snow is allowed to build to floodrisk proportions. A snowmelt hydroclimatology addressed whether and how often snowmelt contributes directly to large peakflows in a representative watershed. Approach. Monthly and daily temperature, precipitation, and snow data were obtained from approximately 1000 cooperative-network stations with >=10 seasons (Oct-May) of snow data. Mean, maximum, percentiles, and interseasonal and monthly variability were mapped. Time series were analyzed, and proportions of seasonal snowfall from significant events determined, at select stations. A spatially distributed, index snow cover model facilitated classification of Cheat River, WV, peakflows by generating process. Confidence intervals about fitted peakflow frequency curves were used to evaluate differences among processes. Results. Climates in which snow significantly affects floods have been discriminated in the literature by 150 cm mean seasonal snowfall, 30 days mean snow cover duration, or 50 cm mean seasonal maximum snow depth. In the Appalachian Mountains south to North Carolina, these criteria lie within 95% confidence intervals about the median or mean values of these parameters. At return periods of 10 and 20 years, these thresholds are usually
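
    The snow cover modelling referred to above relied on a spatially distributed index (temperature-index) model. A single-station sketch of the idea is shown below; the degree-day factor, thresholds and synthetic weather series are placeholder assumptions rather than the study's calibration:

    ```python
    import numpy as np

    def index_snow_model(temp_c, precip_mm, t_snow=0.0, t_melt=0.0, ddf=3.0):
        """Minimal temperature-index snow model.

        Precipitation falls as snow below t_snow (deg C); melt is
        ddf * (T - t_melt) mm per day when T exceeds t_melt. Returns the
        daily snow-water-equivalent series (mm)."""
        swe, out = 0.0, []
        for T, P in zip(temp_c, precip_mm):
            if T <= t_snow:
                swe += P                              # accumulate snowfall
            melt = max(0.0, ddf * (T - t_melt))       # potential melt
            swe = max(0.0, swe - melt)
            out.append(swe)
        return np.array(out)

    # Synthetic mid-winter warm-up: a cold snowy spell followed by a thaw.
    temps = np.array([-5, -3, -6, -2, -4, 2, 6, 8, 4, 1], dtype=float)
    precip = np.array([10, 5, 12, 0, 8, 15, 20, 5, 0, 0], dtype=float)
    print(index_snow_model(temps, precip))
    ```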

  18. Central limit theorems for smoothed extreme value estimates of Poisson point processes boundaries

    OpenAIRE

    Girard, Stéphane; Menneteau, Ludovic

    2011-01-01

    In this paper, we give sufficient conditions to establish central limit theorems for boundary estimates of Poisson point processes. The considered estimates are obtained by smoothing some bias-corrected extreme values of the point process. We show how the smoothing leads to Gaussian asymptotic distributions and therefore to pointwise confidence intervals. Some new unidimensional and multidimensional examples are provided.

  19. Central limit theorems for smoothed extreme value estimates of point processes boundaries

    OpenAIRE

    Girard, Stéphane; Menneteau, Ludovic

    2005-01-01

    In this paper, we give sufficient conditions to establish central limit theorems for boundary estimates of Poisson point processes. The considered estimates are obtained by smoothing some bias-corrected extreme values of the point process. We show how the smoothing leads to Gaussian asymptotic distributions and therefore to pointwise confidence intervals. Some new unidimensional and multidimensional examples are provided.

  20. MycoperonDB: a database of computationally identified operons and transcriptional units in Mycobacteria

    Directory of Open Access Journals (Sweden)

    Ranjan Akash

    2006-12-01

    Full Text Available Abstract Background A key post-genomics challenge is to identify how genes in an organism come together and perform physiological functions. An important first step in this direction is to identify transcriptional units, operons and regulons in a genome. Here we implement and report a strategy to computationally identify transcriptional units and operons of mycobacteria and construct a database, MycoperonDB. Description We have predicted transcriptional units and operons in mycobacteria and organized these predictions in the form of a relational database called MycoperonDB. At present, the MycoperonDB database consists of 18,053 genes organized into 8,256 predicted operons and transcriptional units from five closely related species of mycobacteria. The database further provides literature links for experimentally characterized operons. All known promoters and related information are collected, analysed and stored. A user-friendly interface allows web-based navigation of transcription units and operons, and the web interface provides search tools to locate transcription factor binding DNA motifs upstream of various genes. The reliability of the operon prediction has been assessed by comparing the predicted operons with a set of known operons. Conclusion MycoperonDB is a publicly available structured relational database which holds information about mycobacterial genes, transcriptional units and operons. We expect this database to assist molecular biologists and microbiologists in hypothesizing functional linkages between operonic genes of mycobacteria and in their experimental characterization and validation. The database is freely available from our website http://www.cdfd.org.in/mycoperondb/index.html.
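
    As an illustration of the kind of computational operon identification the database is built on (a common intergenic-distance baseline, not necessarily the MycoperonDB procedure; gene names, coordinates and the 50 bp threshold are placeholder assumptions), adjacent co-directional genes can be grouped as follows:

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Gene:
        name: str
        start: int
        end: int
        strand: str   # "+" or "-"

    def predict_operons(genes: List[Gene], max_gap: int = 50) -> List[List[str]]:
        """Group adjacent co-directional genes into candidate operons when the
        intergenic distance is below max_gap bp (a distance-based baseline)."""
        genes = sorted(genes, key=lambda g: g.start)
        operons, current = [], [genes[0]]
        for prev, nxt in zip(genes, genes[1:]):
            same_strand = prev.strand == nxt.strand
            gap = nxt.start - prev.end
            if same_strand and gap <= max_gap:
                current.append(nxt)
            else:
                operons.append([g.name for g in current])
                current = [nxt]
        operons.append([g.name for g in current])
        return operons

    # Toy gene coordinates (hypothetical, illustrative only).
    toy = [Gene("rv0001", 1, 900, "+"), Gene("rv0002", 930, 1800, "+"),
           Gene("rv0003", 1825, 2600, "+"), Gene("rv0004", 3400, 4100, "-")]
    print(predict_operons(toy))
    ```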