WorldWideScience

Sample records for unit cpu time

  1. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power models are fundamental in modern systems-on-chip (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  2. Design improvement of FPGA and CPU based digital circuit cards to solve timing issues

    International Nuclear Information System (INIS)

    Lee, Dongil; Lee, Jaeki; Lee, Kwang-Hyun

    2016-01-01

    The digital circuit cards installed at NPPs (Nuclear Power Plants) are mostly composed of a CPU (Central Processing Unit) and a PLD (Programmable Logic Device; this category includes FPGAs (Field Programmable Gate Arrays) and CPLDs (Complex Programmable Logic Devices)). This structure is typical and easy to maintain, and as an architecture it poses no major problems. However, signal delay causes many problems when various ICs (Integrated Circuits) and several circuit cards are connected to the backplane BUS. This paper suggests a structure to improve the BUS signal timing in a circuit card consisting of a CPU and an FPGA. Nowadays, as circuit cards have become more complex and large amounts of data are communicated at high speed through the BUS, data integrity is the most important issue. The conventional design does not consider signal delay and synchronicity, and this causes many problems in data processing. In order to solve these problems, it is important to isolate the BUS controller from the CPU and to keep the signal delay constant by using a PLD.

  3. Design improvement of FPGA and CPU based digital circuit cards to solve timing issues

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dongil; Lee, Jaeki; Lee, Kwang-Hyun [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The digital circuit cards installed at NPPs (Nuclear Power Plants) are mostly composed of a CPU (Central Processing Unit) and a PLD (Programmable Logic Device; this category includes FPGAs (Field Programmable Gate Arrays) and CPLDs (Complex Programmable Logic Devices)). This structure is typical and easy to maintain, and as an architecture it poses no major problems. However, signal delay causes many problems when various ICs (Integrated Circuits) and several circuit cards are connected to the backplane BUS. This paper suggests a structure to improve the BUS signal timing in a circuit card consisting of a CPU and an FPGA. Nowadays, as circuit cards have become more complex and large amounts of data are communicated at high speed through the BUS, data integrity is the most important issue. The conventional design does not consider signal delay and synchronicity, and this causes many problems in data processing. In order to solve these problems, it is important to isolate the BUS controller from the CPU and to keep the signal delay constant by using a PLD.

  4. CPU time reduction strategies for the Lambda modes calculation of a nuclear power reactor

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, V.; Garayoa, J.; Hernandez, V. [Universidad Politecnica de Valencia (Spain). Dept. de Sistemas Informaticos y Computacion; Navarro, J.; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Dept. de Ingenieria Quimica y Nuclear; Ginestar, D. [Universidad Politecnica de Valencia (Spain). Dept. de Matematica Aplicada

    1997-12-01

    In this paper, we present two strategies to reduce the CPU time spent in the lambda modes calculation for a realistic nuclear power reactor. The discretization of the multigroup neutron diffusion equation has been made using a nodal collocation method, solving the associated eigenvalue problem with two different techniques: the Subspace Iteration Method and Arnoldi's Method. CPU time reduction is based on a coarse grain parallelization approach together with a multistep algorithm to adequately initialize the solution. (author). 9 refs., 6 tabs.
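
    The abstract names two eigenvalue techniques but gives no code; as a rough, hedged illustration of the second one, the sketch below computes dominant lambda modes of a generalized eigenvalue problem L·phi = (1/lambda)·F·phi with SciPy's ARPACK-based Arnoldi solver. The sparse matrices, their sizes, and the number of requested modes are placeholders, not the nodal collocation discretization of the paper.

```python
# A minimal sketch (not the authors' code): dominant lambda-modes of a
# discretized diffusion problem, L*phi = (1/lambda)*F*phi, computed with
# ARPACK's implicitly restarted Arnoldi method via SciPy.  The matrices are
# random placeholders standing in for the nodal-collocation discretization.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000                                   # number of spatial unknowns (placeholder)
rng = np.random.default_rng(0)

# Placeholder "loss" operator L (diffusion + removal): diagonally dominant, sparse.
L = sp.diags([4.0 + rng.random(n)], [0]) + sp.random(n, n, density=1e-3, random_state=0)
# Placeholder "fission production" operator F.
F = sp.random(n, n, density=1e-3, random_state=1) + sp.eye(n) * 0.5

# Rewrite L*phi = (1/lambda)*F*phi as (L^-1 F) phi = lambda phi and hand the
# resulting operator to Arnoldi.  LU-factorize L once; every Arnoldi step then
# costs one sparse triangular solve.
lu = spla.splu(L.tocsc())
op = spla.LinearOperator((n, n), matvec=lambda v: lu.solve(F @ v), dtype=np.float64)

vals, vecs = spla.eigs(op, k=4, which="LM")   # 4 dominant lambda-modes
print("lambda modes:", np.sort(vals.real)[::-1])
```

    The paper's CPU-time reduction comes from a coarse-grain parallelization of work of this kind and from a multistep initialization of the solution, neither of which is shown in this sketch.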

  5. Enhanced round robin CPU scheduling with burst time based time quantum

    Science.gov (United States)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

    Process scheduling is a very important function of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm greatly depends on the chosen time quantum value. In this paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
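
    The abstract describes the idea but not the exact quantum rule; a minimal sketch follows, assuming the time quantum is recomputed each round as the mean remaining burst time of the processes currently in the ready queue (an illustrative choice, not necessarily the paper's ERRBTQ formula).

```python
from collections import deque

def round_robin_dynamic_quantum(burst_times):
    """Simulate RR scheduling where the quantum is recomputed each round from
    the remaining burst times of the ready queue (illustrative rule: the mean
    remaining burst time).  All processes arrive at t=0; returns finish times."""
    remaining = dict(enumerate(burst_times))
    ready = deque(remaining)            # process ids in FCFS order
    clock, finish = 0, {}
    while ready:
        # Recompute the quantum from the processes currently in the queue.
        quantum = max(1, round(sum(remaining[p] for p in ready) / len(ready)))
        for _ in range(len(ready)):     # one pass over this round's snapshot
            pid = ready.popleft()
            run = min(quantum, remaining[pid])
            clock += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                finish[pid] = clock     # turnaround time (arrival at t=0)
            else:
                ready.append(pid)
    return finish

print(round_robin_dynamic_quantum([24, 3, 3]))   # three example CPU bursts
```

    With the example burst times [24, 3, 3], the first-round quantum is 10, so the two short jobs finish in their first turn while the long job continues, illustrating how a burst-time-derived quantum reduces context switches compared with a small fixed quantum.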

  6. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    OpenAIRE

    Hiienkari, Markus; Teittinen, Jukka; Koskinen, Lauri; Turnquist, Matthew; Mäkipää, Jani; Rantala, Arto; Sopanen, Matti; Kaltiokallio, Mikko

    2015-01-01

    To minimize the energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable ...

  7. Improvement of CPU time of Linear Discriminant Function based on MNM criterion by IP

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2014-05-01

    Full Text Available Revised IP-OLDF (optimal linear discriminant function by integer programming) is a linear discriminant function that minimizes the number of misclassifications (NM) of training samples by integer programming (IP). However, IP requires a large computation (CPU) time. In this paper, we propose how to reduce CPU time by using linear programming (LP). In the first phase, Revised LP-OLDF is applied to all cases, and all cases are categorized into two groups: those that are classified correctly and those that are not classified by support vectors (SVs). In the second phase, Revised IP-OLDF is applied to the cases misclassified by SVs. This method is called Revised IPLP-OLDF. In this research, we evaluate whether the NM of Revised IPLP-OLDF is a good estimate of the minimum number of misclassifications (MNM) obtained by Revised IP-OLDF. Four kinds of real data (Iris data, Swiss bank note data, student data, and CPD data) are used as training samples. Four kinds of 20,000 re-sampling cases generated from these data are used as the evaluation samples. There are a total of 149 models of all combinations of independent variables from these data. The NMs and CPU times of the 149 models are compared between Revised IPLP-OLDF and Revised IP-OLDF. The following results are obtained: 1) Revised IPLP-OLDF significantly improves CPU time. 2) For the training samples, all 149 NMs of Revised IPLP-OLDF are equal to the MNM of Revised IP-OLDF. 3) For the evaluation samples, most NMs of Revised IPLP-OLDF are equal to the NM of Revised IP-OLDF. 4) The generalization abilities of both discriminant functions are concluded to be high, because the difference between the error rates of the training and evaluation samples is almost within 2%. Therefore, Revised IPLP-OLDF is recommended for the analysis of big data instead of Revised IP-OLDF. Next, Revised IPLP-OLDF is compared with LDF and logistic regression by 100-fold cross validation using 100 re-sampling samples. Means of error rates of
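
    As background for what Revised IP-OLDF minimizes, here is a hedged sketch of the standard minimum-number-of-misclassifications (MNM) formulation as a mixed-integer program in big-M form, written with the PuLP modelling library; the exact constraints of Revised IP-OLDF and its two-phase LP/IP split are not reproduced, and the data are toy placeholders.

```python
import pulp

# Toy two-class data (placeholder, not the Iris/CPD data sets); labels in {-1, +1}.
X = [[2.0, 1.0], [1.5, 2.0], [3.0, 3.5], [4.0, 3.0]]
y = [-1, -1, 1, 1]
n, d = len(X), len(X[0])
BIG_M = 100.0                       # big-M constant that relaxes violated margins

prob = pulp.LpProblem("minimum_number_of_misclassifications", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{j}") for j in range(d)]                # coefficients
b = pulp.LpVariable("b")                                        # intercept
e = [pulp.LpVariable(f"e{i}", cat="Binary") for i in range(n)]  # 1 = misclassified

prob += pulp.lpSum(e)               # objective: the number of misclassifications (NM)
for i in range(n):
    # A correctly classified case must satisfy the margin; e[i] = 1 switches it off.
    prob += y[i] * (pulp.lpSum(w[j] * X[i][j] for j in range(d)) + b) >= 1 - BIG_M * e[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("NM =", int(pulp.value(prob.objective)))
```

    In the paper, an integer program of this kind is applied only in the second phase, to the cases left ambiguous by the LP-based Revised LP-OLDF, which is where the reported CPU-time saving comes from.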

  8. An FPGA Based Multiprocessing CPU for Beam Synchronous Timing in CERN's SPS and LHC

    CERN Document Server

    Ballester, F J; Gras, J J; Lewis, J; Savioz, J J; Serrano, J

    2003-01-01

    The Beam Synchronous Timing system (BST) will be used around the LHC and its injector, the SPS, to broadcast timing messages and synchronize actions with the beam in different receivers. To achieve beam synchronization, the BST Master card encodes messages using the bunch clock, with a nominal value of 40.079 MHz for the LHC. These messages are produced by a set of tasks every revolution period, which is every 89 μs for the LHC and every 23 μs for the SPS, therefore imposing a hard real-time constraint on the system. To achieve determinism, the BST Master uses a dedicated CPU inside its main Field Programmable Gate Array (FPGA) featuring zero-delay hardware task switching and a reduced instruction set. This paper describes the BST Master card, stressing the main FPGA design, as well as the associated software, including the LynxOS driver and the tailor-made assembler.

  9. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    Directory of Open Access Journals (Sweden)

    Markus Hiienkari

    2015-04-01

    Full Text Available To minimize the energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable operation with minimal safety margins while maximizing performance and energy efficiency at a given operating point. Measurements show a minimum energy of 3.15 pJ/cyc at 400 mV, which corresponds to a 39% energy saving compared to operation based on static signoff timing.

  10. Using the CPU and GPU for real-time video enhancement on a mobile computer

    CSIR Research Space (South Africa)

    Bachoo, AK

    2010-09-01

    Full Text Available In this paper, the current advances in mobile CPU and GPU hardware are used to implement video enhancement algorithms in a new way on a mobile computer. Both the CPU and GPU are used effectively to achieve real-time performance for complex image enhancement...

  11. Interactive dose shaping - efficient strategies for CPU-based real-time treatment planning

    International Nuclear Information System (INIS)

    Ziegenhein, P; Kamerling, C P; Oelfke, U

    2014-01-01

    Conventional intensity modulated radiation therapy (IMRT) treatment planning is based on the traditional concept of iterative optimization using an objective function specified by dose volume histogram constraints for pre-segmented VOIs. This indirect approach suffers from unavoidable shortcomings: i) The control of local dose features is limited to segmented VOIs. ii) Any objective function is a mathematical measure of the plan quality, i.e., is not able to define the clinically optimal treatment plan. iii) Adapting an existing plan to changed patient anatomy as detected by IGRT procedures is difficult. To overcome these shortcomings, we introduce the method of Interactive Dose Shaping (IDS) as a new paradigm for IMRT treatment planning. IDS allows for a direct and interactive manipulation of local dose features in real-time. The key element driving the IDS process is a two-step Dose Modification and Recovery (DMR) strategy: A local dose modification is initiated by the user which translates into modified fluence patterns. This also affects existing desired dose features elsewhere which is compensated by a heuristic recovery process. The IDS paradigm was implemented together with a CPU-based ultra-fast dose calculation and a 3D GUI for dose manipulation and visualization. A local dose feature can be implemented via the DMR strategy within 1-2 seconds. By imposing a series of local dose features, equal plan qualities could be achieved compared to conventional planning for prostate and head and neck cases within 1-2 minutes. The idea of Interactive Dose Shaping for treatment planning has been introduced and first applications of this concept have been realized.

  12. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under
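
    The bi-fixed-step integrator is described only at a high level in the abstract; a minimal sketch follows, assuming a leaky integrate-and-fire neuron advanced with a coarse fixed step that temporarily switches to a finer fixed step when the membrane potential approaches threshold. The switching rule and parameters here are illustrative, not the authors' exact criterion.

```python
def lif_bi_fixed_step(I, t_end=0.5, dt_coarse=1e-3, dt_fine=1e-4,
                      tau=0.02, R=1.0, v_rest=0.0, v_th=1.0, guard=0.8):
    """Integrate dv/dt = (-(v - v_rest) + R*I) / tau with two fixed step sizes:
    dt_coarse far from threshold, dt_fine once v exceeds guard*v_th.
    Returns the list of spike times."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        dt = dt_fine if v > guard * v_th else dt_coarse   # bi-fixed-step switch
        v += dt * (-(v - v_rest) + R * I) / tau           # forward Euler update
        t += dt
        if v >= v_th:
            spikes.append(t)
            v = v_rest                                    # reset after the spike
    return spikes

print(len(lif_bi_fixed_step(I=1.5)), "spikes in 0.5 s")
```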

  13. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server. This means it effectively has two CPUs in it (e.g. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.

  14. Online real-time reconstruction of adaptive TSENSE with commodity CPU / GPU hardware

    DEFF Research Database (Denmark)

    Roujol, Sebastien; de Senneville, Baudouin; Vahala, E.

    2009-01-01

    A real-time reconstruction for adaptive TSENSE is presented that is optimized for MR-guidance of interventional procedures. The proposed method allows high frame-rate imaging with low image latencies, even when large coil arrays are employed and can be implemented on affordable commodity hardware....

  15. Online real-time reconstruction of adaptive TSENSE with commodity CPU / GPU hardware

    DEFF Research Database (Denmark)

    Roujol, Sebastien; de Senneville, Baudouin Denis; Vahalla, Erkki

    2009-01-01

    Adaptive temporal sensitivity encoding (TSENSE) has been suggested as a robust parallel imaging method suitable for MR guidance of interventional procedures. However, in practice, the reconstruction of adaptive TSENSE images obtained with large coil arrays leads to long reconstruction times...... image sizes used in interventional imaging (128 × 96, 16 channels, sensitivity encoding (SENSE) factor 2-4), the pipeline is able to reconstruct adaptive TSENSE images with image latencies below 90 ms at frame rates of up to 40 images/s, rendering the MR performance in practice limited...... by the constraints of the MR acquisition. Its performance is demonstrated by the online reconstruction of in vivo MR images for rapid temperature mapping of the kidney and for cardiac catheterization....

  16. SAFARI digital processing unit: performance analysis of the SpaceWire links in case of a LEON3-FT based CPU

    Science.gov (United States)

    Giusi, Giovanni; Liu, Scige J.; Di Giorgio, Anna M.; Galli, Emanuele; Pezzuto, Stefano; Farina, Maria; Spinoglio, Luigi

    2014-08-01

    SAFARI (SpicA FAR infrared Instrument) is a far-infrared imaging Fourier Transform Spectrometer for the SPICA mission. The Digital Processing Unit (DPU) of the instrument implements the functions of controlling the overall instrument and of performing the science data compression and packing. The DPU design is based on the use of a LEON family processor. In SAFARI, all instrument components are connected to the central DPU via SpaceWire links. On these links, science data, housekeeping and command flows are in some cases multiplexed, therefore the interface control shall be able to cope with variable throughput needs. The effective data transfer workload can be an issue for the overall system performance and becomes a critical parameter for the on-board software design, both at the application layer and at lower, more HW-related levels. To analyze the system behavior in the presence of the expected demanding SAFARI science data flow, we carried out a series of performance tests using the standard GR-CPCI-UT699 LEON3-FT Development Board, provided by Aeroflex/Gaisler, connected to the emulator of the SAFARI science data links in a point-to-point topology. Two different communication protocols have been used in the tests, the ECSS-E-ST-50-52C RMAP protocol and an internally defined one, the SAFARI internal data handling protocol. An incremental approach has been adopted to measure the system performance at different levels of communication protocol complexity. In all cases the performance has been evaluated by measuring the CPU workload and the bus latencies. The tests were executed initially in a custom low-level execution environment and finally using the Real-Time Executive for Multiprocessor Systems (RTEMS), which has been selected as the operating system to be used onboard SAFARI. The preliminary results of the performance analysis confirmed the possibility of using a LEON3 CPU processor in the SAFARI DPU, but pointed out, in agreement

  17. First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC

    CERN Document Server

    Halyo, V.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-04-07

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi...

  18. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    Science.gov (United States)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  19. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    Science.gov (United States)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.

  20. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU.

    Science.gov (United States)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25  s/excitation source. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  1. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    Science.gov (United States)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.

  2. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS

    Science.gov (United States)

    Arce, Pedro; Lagares, Juan Ignacio

    2018-02-01

    We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization has been done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  3. CPU and GPU (Cuda Template Matching Comparison

    Directory of Open Access Journals (Sweden)

    Evaldas Borcovas

    2014-05-01

    Full Text Available Image processing, computer vision and other complicated optical information processing algorithms require large resources. It is often desired to execute algorithms in real time, and it is hard to fulfill such requirements with a single CPU processor. The CUDA technology proposed by NVidia enables the programmer to use the GPU resources in the computer. The current research was made with an Intel Pentium Dual-Core T4500 2.3 GHz processor with 4 GB RAM DDR3 (CPU I), an NVidia GeForce GT320M CUDA-capable graphics card (GPU I), and an Intel Core i5-2500K 3.3 GHz processor with 4 GB RAM DDR3 (CPU II) with an NVidia GeForce GTX 560 CUDA-compatible graphics card (GPU II). Additional libraries, OpenCV 2.1 and the CUDA-capable OpenCV 2.4.0, were used for the testing. The main tests were made with the standard MatchTemplate function from the OpenCV libraries. The algorithm uses a main image and a template, and the influence of these factors was tested: the main image and template were resized, and the algorithm computing time and performance in Gtpix/s were measured. According to the information obtained from the research, GPU computing using the hardware mentioned earlier is up to 24 times faster when processing a large amount of information. When the images are small, the performance of the CPU and GPU is not significantly different. The choice of the template size influences the calculation time on the CPU. The difference in computing time between the GPUs can be explained by the number of cores they have.
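
    A minimal sketch of the kind of measurement the abstract describes, timing OpenCV's CPU matchTemplate for a few template sizes on a synthetic image; the GPU path and the exact hardware comparisons of the paper are not reproduced, and "Gtpix/s" is interpreted here simply as result pixels per second.

```python
import time
import numpy as np
import cv2

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)   # synthetic main image

for side in (16, 64, 256):                        # template sizes to compare
    templ = image[100:100 + side, 200:200 + side].copy()
    t0 = time.perf_counter()
    result = cv2.matchTemplate(image, templ, cv2.TM_CCOEFF_NORMED)
    dt = time.perf_counter() - t0
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    gtpix = result.size / dt / 1e9                # throughput of the result map
    print(f"template {side:3d}px  best match {max_loc}  score {max_val:.3f}  "
          f"{dt*1e3:6.1f} ms  {gtpix:.3f} Gtpix/s")
```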

  4. GeantV: from CPU to accelerators

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While the modern CPU architectures are being targeted first, resources such as GPGPUs, Intel Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been mainly engineered for CPUs having vector units, but we have foreseen from early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology specific backends currently supports this concept. This approach allows us not only to abstract out the basic types such as scalar/vector, but also to formalize generic computation kernels transparently using library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it comes with the insulation of the core application and algorithms from the technology layer. This allows our application to remain long-term maintainable and versatile to changes at the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.

  5. GeantV: from CPU to accelerators

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Arora, A; Apostolakis, J; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S; Lima, G; Duhem, L

    2016-01-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While the modern CPU architectures are being targeted first, resources such as GPGPUs, Intel Xeon Phi, Atom or ARM cannot be ignored anymore by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been mainly engineered for CPUs having vector units, but we have foreseen from early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology specific backends currently supports this concept. This approach allows us not only to abstract out the basic types such as scalar/vector, but also to formalize generic computation kernels transparently using library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it comes with the insulation of the core application and algorithms from the technology layer. This allows our application to remain long-term maintainable and versatile to changes at the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs. (paper)

  6. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.

    Directory of Open Access Journals (Sweden)

    Chun-Liang Lee

    Full Text Available The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection for various software platforms. Traditional approaches that only involve central processing units (CPUs are now considered inadequate in terms of inspection speed. Graphic processing units (GPUs have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.
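
    The paper's pre-filtering algorithm is not given in the abstract; below is a minimal sketch of the general idea only: a cheap CPU pre-filter flags packets containing any signature's first bytes, and only those are forwarded for full pattern matching (done on the GPU in the paper; plain substring search stands in for it here). The signatures, prefix length, and traffic are toy placeholders.

```python
SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"SELECT * FROM"]   # toy signature set
PREFIX_LEN = 4
prefixes = {sig[:PREFIX_LEN] for sig in SIGNATURES}

def prefilter(packet: bytes) -> bool:
    """Cheap CPU pass: does the payload contain any signature prefix?"""
    return any(packet[i:i + PREFIX_LEN] in prefixes
               for i in range(len(packet) - PREFIX_LEN + 1))

def full_match(packet: bytes) -> list[bytes]:
    """Expensive pass (GPU in the paper, plain search here): exact signature hits."""
    return [sig for sig in SIGNATURES if sig in packet]

packets = [b"GET /index.html HTTP/1.1", b"GET /../../etc/passwd HTTP/1.1"]
suspicious = [p for p in packets if prefilter(p)]   # most benign traffic stops here
print([full_match(p) for p in suspicious])
```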

  7. First evaluation of the CPU, GPGPU and MIC architectures for real time particle tracking based on Hough transform at the LHC

    International Nuclear Information System (INIS)

    Halyo, V; LeGresley, P; Lujan, P; Karpusenko, V; Vladimirov, A

    2014-01-01

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on multi-core Intel i7-3770 and Intel Xeon E5-2697v2 CPUs, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be presented

  8. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  9. ITCA: Inter-Task Conflict-Aware CPU accounting for CMP

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla Almeida, Francisco Javier; Gioiosa, Roberto; Valero Cortés, Mateo

    2010-01-01

    Chip-MultiProcessors (CMP) introduce complexities when accounting CPU utilization to processes because the progress done by a process during an interval of time highly depends on the activity of the other processes it is coscheduled with. We propose a new hardware CPU accounting mechanism to improve the accuracy when measuring the CPU utilization in CMPs and compare it with previous accounting mechanisms. Our results show that currently known mechanisms lead to a 16% average error when it com...

  10. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    International Nuclear Information System (INIS)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-01-01

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate the radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andrea Basal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulation, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality from the simulation was obtained starting from 10^8 histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is relatively the same.

  11. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)

    2016-03-11

    Monte Carlo (MC) is one of the powerful techniques for simulation in x-ray imaging. The MC method can simulate the radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andrea Basal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and the comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 cores and 2304 cores. In the GPU simulation, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulation times on the GPU were significantly shorter compared to the CPU. The simulations on the 2304-core GPU were performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU were performed about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality from the simulation was obtained starting from 10^8 histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is relatively the same.

  12. STEM image simulation with hybrid CPU/GPU programming

    International Nuclear Information System (INIS)

    Yao, Y.; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-01-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  13. STEM image simulation with hybrid CPU/GPU programming

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Y., E-mail: yaoyuan@iphy.ac.cn; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-07-15

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  14. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    Science.gov (United States)

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
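
    The exact FDFS distance and threshold used in CUDA-SWfr are not given in the abstract; the sketch below only illustrates the filtration idea, assuming residue-frequency vectors compared with an L1 distance before the expensive Smith-Waterman stage (run on the GPU in the paper, and omitted here).

```python
from collections import Counter

AMINO = "ACDEFGHIKLMNPQRSTVWY"

def freq_vector(seq: str) -> list[int]:
    """Count of each residue type: a cheap, length-independent signature."""
    c = Counter(seq)
    return [c.get(a, 0) for a in AMINO]

def freq_distance(a: str, b: str) -> int:
    """L1 distance between frequency vectors, used as an inexpensive filter."""
    return sum(abs(x - y) for x, y in zip(freq_vector(a), freq_vector(b)))

def fdfs_filter(query: str, database: list[str], threshold: int) -> list[str]:
    """Keep only database sequences close enough in composition to be worth
    aligning with Smith-Waterman."""
    return [s for s in database if freq_distance(query, s) <= threshold]

db = ["MKTAYIAKQR", "GGGGGGGGGG", "MKTAYLAKQK"]
print(fdfs_filter("MKTAYIAKQR", db, threshold=6))   # the all-G sequence is skipped
```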

  15. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.

  16. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially the GPU based methods. In the classical GPU based imaging algorithm, GPU is employed to accelerate image processing by massive parallel computing, and CPU is only used to perform the auxiliary work such as data input/output (IO). However, the computing capability of CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPU/GPU is proposed to achieve real-time SAR imaging. Through the proposed tasks partitioning and scheduling strategy, the whole image can be generated with deep collaborative multiple CPU/GPU computing. In the part of CPU parallel imaging, the advanced vector extension (AVX) method is firstly introduced into the multi-core CPU parallel method for higher efficiency. As for the GPU parallel imaging, not only the bottlenecks of memory limitation and frequent data transferring are broken, but also kinds of optimized strategies are applied, such as streaming, parallel pipeline and so on. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging on single-core CPU by 270 times and realizes the real-time imaging in that the imaging rate outperforms the raw data generation rate.

  17. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    Energy Technology Data Exchange (ETDEWEB)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo [Center for Molecular Imaging and Experimental Radiotherapy, Institut de Recherche Expérimentale et Clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve 1348 (Belgium); Sterpin, Edmond [Center for Molecular Imaging and Experimental Radiotherapy, Institut de Recherche Expérimentale et Clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and Department of Oncology, Katholieke Universiteit Leuven, O& N I Herestraat 49, 3000 Leuven (Belgium)

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  18. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    International Nuclear Information System (INIS)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-01-01

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  19. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    Science.gov (United States)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the gate/geant4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with gate/geant4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  20. Unit-time scheduling problems with time dependent resources

    NARCIS (Netherlands)

    Tautenhahn, T.; Woeginger, G.

    1997-01-01

    We investigate the computational complexity of scheduling problems, where the operations consume certain amounts of renewable resources which are available in time-dependent quantities. In particular, we consider unit-time open shop problems and unit-time scheduling problems with identical parallel

  1. ITCA: Inter-Task Conflict-Aware CPU accounting for CMPs

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla, Francisco; Gioiosa, Roberto; Buyuktosunoglu, Alper; Valero Cortés, Mateo

    2009-01-01

    Chip-MultiProcessor (CMP) architectures are becoming more and more popular as an alternative to the traditional processors that only extract instruction-level parallelism from an application. CMPs introduce complexities when accounting CPU utilization. This is due to the fact that the progress done by an application during an interval of time highly depends on the activity of the other applications it is co-scheduled with. In this paper, we identify how an inaccurate measurement of the CPU ut...

  2. Multiperiod Maximum Loss is time unit invariant.

    Science.gov (United States)

    Kovacevic, Raimund M; Breuer, Thomas

    2016-01-01

    Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
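    The invariance requirement described above can be stated compactly. The following LaTeX fragment is a schematic formalization based only on the abstract's wording; the symbols (a multiperiod measure rho_T, one-period measure rho_1, period losses L_t) are assumed notation, not taken from the paper.

```latex
% Time unit invariance (schematic): for a constant portfolio with i.i.d. period losses
% L_1,...,L_T, the multiperiod risk equals the one-period risk of the aggregated loss,
% for an appropriate choice of the measures' parameters.
\rho_{T}\bigl(L_1, L_2, \dots, L_T\bigr)
  \;=\;
\rho_{1}\!\left(\sum_{t=1}^{T} L_t\right),
\qquad L_1,\dots,L_T \ \text{i.i.d.}
```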

  3. A combined PLC and CPU approach to multiprocessor control

    International Nuclear Information System (INIS)

    Harris, J.J.; Broesch, J.D.; Coon, R.M.

    1995-10-01

    A sophisticated multiprocessor control system has been developed for use in the E-Power Supply System Integrated Control (EPSSIC) on the DIII-D tokamak. EPSSIC provides control and interlocks for the ohmic heating coil power supply and its associated systems. Of particular interest is the architecture of this system: both a Programmable Logic Controller (PLC) and a Central Processor Unit (CPU) have been combined on a standard VME bus. The PLC and CPU input and output signals are routed through signal conditioning modules, which provide the necessary voltage and ground isolation. Additionally these modules adapt the signal levels to that of the VME I/O boards. One set of I/O signals is shared between the two processors. The resulting multiprocessor system provides a number of advantages: redundant operation for mission critical situations, flexible communications using conventional TCP/IP protocols, the simplicity of ladder logic programming for the majority of the control code, and an easily maintained and expandable non-proprietary system

  4. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels of moving targets in HD video taken by a UAV are always in the minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents running it at these higher resolutions. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets from HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.

  5. Discrete Events as Units of Perceived Time

    Science.gov (United States)

    Liverence, Brandon M.; Scholl, Brian J.

    2012-01-01

    In visual images, we perceive both space (as a continuous visual medium) and objects (that inhabit space). Similarly, in dynamic visual experience, we perceive both continuous time and discrete events. What is the relationship between these units of experience? The most intuitive answer may be similar to the spatial case: time is perceived as an…

  6. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    …respiration phases in a free-breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm, based on the CUDA framework from Nvidia. Results: On an Intel Core 2 CPU at 2.4 GHz each … registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark…

  7. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Full Text Available Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.

  8. Time recording unit for a neutron time of flight spectrometer

    International Nuclear Information System (INIS)

    Puranik, Praful; Ajit Kiran, S.; Chandak, R.M.; Poudel, S.K.; Mukhopadhyay, R.

    2011-01-01

    Here the architecture and design of the time recording unit for a neutron time-of-flight spectrometer are described. The spectrometer has an array of 50 one-meter-long linear Position Sensitive Detectors (PSDs) placed vertically around the sample at a distance of 2000 mm. The sample receives a periodic pulsed neutron beam coming through a Fermi chopper. The time and zone of detection of a scattered neutron in a PSD give its flight time and path length, which are used to calculate its energy. A neutron event zone (position) and time detection module for each PSD provides a 2-bit position/zone code and an event timing pulse. The path length assigned to a neutron detected in a zone (Z1, Z2, etc.) of the PSD is the mean path length seen by the neutrons detected in that zone. The time recording unit described here receives the event zone code and timing pulse from all 50 detectors and tags a time window code to each event before streaming the data to a computer for calculation of the energy distribution of neutrons scattered from the sample.

  9. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Full Text Available Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI) plays an important role in network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability. Either the central processing unit (CPU) or the graphics processing unit (GPU) was involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.

  10. Real-time computation of parameter fitting and image reconstruction using graphical processing units

    Science.gov (United States)

    Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin

    2017-06-01

    In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify the parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow the applications to use a GPU to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single-GPU systems to show that real-time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedup of the GPU version was more than 40× compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.

  11. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    International Nuclear Information System (INIS)

    Yoon, Jong Seon; Choi, Hyoung Gwon; Jeon, Byoung Jin

    2017-01-01

    The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated when the total memory required for computing was larger than the cache memory for large problems. In contrast, the GPU performed better as the mesh size increased because of the latency hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that of a single CPU. Furthermore, the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver when parallel computing was conducted on the GPU.
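    To make the coloring idea concrete, the following sketch implements a red-black (two-color) Gauss-Seidel relaxation for a 2D Laplace-type heat conduction problem. It is an illustrative, unoptimized CPU version only; the key point is that grid points of one color have no mutual dependencies, which is what allows each half-sweep to be parallelized on a GPU.

```python
import numpy as np

def red_black_gauss_seidel(T, n_iter=200):
    """Red-black (two-color) Gauss-Seidel sweeps for the 2D Laplace equation.

    T is a 2D array whose boundary values are held fixed; interior points are
    relaxed in place. Points sharing a color have no data dependencies, so
    each half-sweep could be executed fully in parallel (e.g. on a GPU).
    """
    for _ in range(n_iter):
        for color in (0, 1):                      # 0 = "red", 1 = "black"
            for i in range(1, T.shape[0] - 1):
                for j in range(1, T.shape[1] - 1):
                    if (i + j) % 2 == color:
                        T[i, j] = 0.25 * (T[i - 1, j] + T[i + 1, j] +
                                          T[i, j - 1] + T[i, j + 1])
    return T

# Toy heat-conduction problem: top edge held at 100, other edges at 0.
grid = np.zeros((32, 32))
grid[0, :] = 100.0
solution = red_black_gauss_seidel(grid)
```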

  12. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jong Seon; Choi, Hyoung Gwon [Seoul Nat’l Univ. of Science and Technology, Seoul (Korea, Republic of); Jeon, Byoung Jin [Yonsei Univ., Seoul (Korea, Republic of)

    2017-02-15

    The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated when the total memory required for computing was larger than the cache memory for large problems. In contrast, the GPU performed better as the mesh size increased because of the latency hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that of a single CPU. Furthermore, the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver when parallel computing was conducted on the GPU.

  13. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2015-01-01

    Full Text Available The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only used the GPU capability to do the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
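    For readers unfamiliar with the underlying dynamic program, the sketch below computes the plain Smith-Waterman local-alignment score on the CPU. It is a minimal textbook version with a linear gap penalty, not the paper's CUDA-SWfr implementation; a pre-filter such as FDFS would simply skip calling it for database sequences judged too dissimilar.

```python
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Plain Smith-Waterman local-alignment score (linear gap penalty).

    This is the O(len(a)*len(b)) dynamic program that GPU implementations
    parallelize, either across database sequences (intertask) or across
    anti-diagonals of H (intratask).
    """
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,   # substitution
                          H[i - 1, j] + gap,     # deletion
                          H[i, j - 1] + gap)     # insertion
            best = max(best, H[i, j])
    return best

print(smith_waterman_score("HEAGAWGHEE", "PAWHEAE"))
```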

  14. Online performance evaluation of RAID 5 using CPU utilization

    Science.gov (United States)

    Jin, Hai; Yang, Hua; Zhang, Jiangling

    1998-09-01

    Redundant arrays of independent disks (RAID) technology is an efficient way to solve the bottleneck problem between CPU processing ability and the I/O subsystem. From the system point of view, the most important metric of online performance is the utilization of the CPU. This paper first calculates the CPU utilization of a system connected to a RAID level 5 subsystem using a statistical averaging method. The simulation results of the CPU utilization show that using multiple disks as an array to access data in parallel is an efficient way to enhance the online performance of a disk storage system. Using high-end disk drives to compose the disk array is the key to enhancing the online performance of the system.

  15. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Directory of Open Access Journals (Sweden)

    Michał Marks

    2012-01-01

    Full Text Available This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: Intel processor with NVIDIA graphics processing unit and AMD processor with AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large scale computation and data processing.

  16. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of the flows in capillary vessels, is presented. Generally, the time needed to process FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra with a frame rate of about 120 fps. The 3D imaging in the same mode, on volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads and the optimizations applied are described. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
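    The per-A-scan work that such a pipeline parallelizes is conceptually simple. The following numpy sketch is a schematic, offline CPU illustration under assumed data shapes, not the GPU software described above: an inverse FFT of each spectrum gives the complex A-scan, its magnitude the structural image, and the phase difference between consecutive A-scans a Doppler (flow) estimate.

```python
import numpy as np

def fdoct_process(spectra, dt=1.0):
    """Structural and Doppler processing of FdOCT spectra (CPU sketch).

    spectra: 2D array, one acquired spectrum per row (DC removed below).
    Returns the structural image |A-scan| and the phase difference between
    consecutive A-scans, which is proportional to axial flow velocity.
    """
    centered = spectra - spectra.mean(axis=1, keepdims=True)
    ascans = np.fft.ifft(centered, axis=1)            # complex A-scans
    structural = np.abs(ascans)                       # structural image
    doppler_phase = np.angle(ascans[1:] * np.conj(ascans[:-1])) / dt
    return structural, doppler_phase

spectra = np.random.rand(2000, 2048)   # stand-in: 2000 A-scans, 2048-pixel spectra
structural, doppler = fdoct_process(spectra)
```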

  17. Real-time autocorrelator for fluorescence correlation spectroscopy based on graphical-processor-unit architecture: method, implementation, and comparative studies

    Science.gov (United States)

    Laracuente, Nicholas; Grossman, Carl

    2013-03-01

    We developed an algorithm and software to calculate autocorrelation functions from real-time photon-counting data using the fast, parallel capabilities of graphics processing units (GPUs). Recent developments in hardware and software have allowed for general purpose computing with inexpensive GPU hardware. These devices are better suited to emulating hardware autocorrelators than traditional CPU-based software applications because they emphasize parallel throughput over sequential speed. Incoming data are binned in a standard multi-tau scheme with configurable points-per-bin size and are mapped into a GPU memory pattern to reduce time-expensive memory access. Applications include dynamic light scattering (DLS) and fluorescence correlation spectroscopy (FCS) experiments. We ran the software on a 64-core graphics PCI card in a computer with a 3.2 GHz Intel i5 CPU running Linux. FCS measurements were made on Alexa-546 and Texas Red dyes in a standard buffer (PBS). Software correlations were compared to hardware correlator measurements on the same signals. Supported by HHMI and Swarthmore College.
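    As a point of reference for what such a correlator computes, the sketch below evaluates the normalized intensity autocorrelation g2(tau) - 1 of a photon-count trace on a quasi-logarithmic lag grid. It is an offline numpy illustration only: it does not implement the streaming multi-tau binning or the GPU memory mapping described above, and the Poisson trace is a stand-in for real photon-counting data.

```python
import numpy as np

def autocorrelation(counts, lags):
    """Normalized intensity autocorrelation g2(tau) - 1 at the given integer lags."""
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    g = []
    for lag in lags:
        x, y = counts[:-lag], counts[lag:]
        g.append(np.mean(x * y) / mean**2 - 1.0)   # <I(t) I(t+tau)> / <I>^2 - 1
    return np.array(g)

# Quasi-logarithmic lag grid, similar in spirit to a multi-tau scheme.
lags = np.unique(np.logspace(0, 4, 40).astype(int))
signal = np.random.poisson(5.0, size=100_000)       # stand-in for photon counts
g2_minus_1 = autocorrelation(signal, lags)
```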

  18. Thermoelectric mini cooler coupled with micro thermosiphon for CPU cooling system

    International Nuclear Information System (INIS)

    Liu, Di; Zhao, Fu-Yun; Yang, Hong-Xing; Tang, Guang-Fa

    2015-01-01

    In the present study, a thermoelectric mini cooler coupled with a micro thermosiphon cooling system has been proposed for CPU cooling. A mathematical heat transfer model, based on a one-dimensional treatment of the thermal and electric power, is first established for the thermoelectric module. Analytical results demonstrate the relationship between the maximal COP (Coefficient of Performance) and Q_c and the figure of merit. Full-scale experiments have been conducted to investigate the effect of the thermoelectric operating voltage, the power input of the heat source, and the number of thermoelectric modules on the performance of the cooling system. Experimental results indicate that the cooling production increases as the thermoelectric operating voltage rises. The surface temperature of the CPU heat source increases linearly with the power input, and its maximum value reached 70 °C when the prototype CPU power input was equivalent to 84 W. Insulation between the air and the heat source surface can prevent condensation caused by the low surface temperature. In addition, the thermal performance of this cooling system can be enhanced when the total dimension of the thermoelectric module matches the dimension of the CPU well. This research could benefit the design of thermal dissipation for electronic chips and CPU units. - Highlights: • A cooling system coupled with a thermoelectric module and a loop thermosiphon is developed. • A thermoelectric module coupled with a loop thermosiphon can achieve high heat-transfer efficiency. • A mathematical model of thermoelectric cooling is built. • An analysis of modeling results for design and experimental data is presented. • The influence of power input and operating voltage on the cooling system is investigated.
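    The one-dimensional thermoelectric relations behind such a model are standard. The sketch below evaluates the heat pumped from the cold side and the resulting COP for a single-stage module; the equations are the textbook form, while the Seebeck coefficient, resistance, conductance and operating point used here are purely illustrative and are not taken from the paper.

```python
def tec_performance(I, Tc, Th, alpha=0.05, R=1.5, K=0.5):
    """One-dimensional model of a single-stage thermoelectric cooler.

    Qc  = alpha*I*Tc - 0.5*I**2*R - K*(Th - Tc)   # heat pumped from cold side [W]
    Pin = alpha*I*(Th - Tc) + I**2*R              # electrical power input [W]
    COP = Qc / Pin
    alpha [V/K], R [ohm] and K [W/K] are illustrative module parameters.
    """
    Qc = alpha * I * Tc - 0.5 * I**2 * R - K * (Th - Tc)
    Pin = alpha * I * (Th - Tc) + I**2 * R
    return Qc, Qc / Pin

Qc, cop = tec_performance(I=3.0, Tc=300.0, Th=320.0)
print(f"Qc = {Qc:.2f} W, COP = {cop:.2f}")
```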

  19. A high performance image processing platform based on CPU-GPU heterogeneous cluster with parallel image reconstroctions for micro-CT

    International Nuclear Information System (INIS)

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment using the message-passing interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU provides computation about 194 times faster than a single CPU, and the CPU-GPU cluster provides computation about 46 times faster than the CPU cluster. These meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)

  20. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Improving the Performance of CPU Architectures by Reducing the Operating System Overhead (Extended Version

    Directory of Open Access Journals (Sweden)

    Zagan Ionel

    2016-07-01

    Full Text Available Predictable CPU architectures that run hard real-time tasks must execute those tasks in isolation in order to provide a timing-analyzable execution for real-time systems. The major problems for real-time operating systems are caused by excessive jitter, introduced mainly through task switching. This can alter deadline requirements and, consequently, the predictability of hard real-time tasks. New requirements also arise for a real-time operating system used in mixed-criticality systems, when the execution of hard real-time applications requires timing predictability. The present article discusses several solutions to improve the performance of CPU architectures and eventually overcome the overhead introduced by the operating system. This paper focuses on the innovative CPU implementation named nMPRA-MT, designed for small real-time applications. This implementation uses replication and remapping techniques for the program counter, general purpose registers and pipeline registers, enabling multiple threads to share a single pipeline assembly line. In order to increase predictability, the proposed architecture partially removes the hazard situation at the expense of larger execution latency per instruction.

  2. "Units of Comparison" across Languages, across Time

    Science.gov (United States)

    Thomas, Margaret

    2009-01-01

    Lardiere's keynote article adverts to a succession of "units of comparison" that have been employed in the study of cross-linguistic differences, including mid-twentieth-century structural patterns, generative grammar's parameters, and (within contemporary Minimalism) features. This commentary expands on the idea of units of cross-linguistic…

  3. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). Then, we designed an imaging-point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double-buffering scheme for multiple streams to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  4. The PAMELA storage and control unit

    Energy Technology Data Exchange (ETDEWEB)

    Casolino, M. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy)]. E-mail: Marco.Casolino@roma2.infn.it; Altamura, F. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); Basili, A. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); De Pascale, M.P. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); Minori, M. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); Nagni, M. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); Picozza, P. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); Sparvoli, R. [INFN, Structure of Rome II, Physics Department, University of Rome II ' Tor Vergata' , I-00133 Rome (Italy); Adriani, O. [INFN, Structure of Florence, Physics Department, University of Florence, I-50019 Sesto Fiorentino (Italy); Papini, P. [INFN, Structure of Florence, Physics Department, University of Florence, I-50019 Sesto Fiorentino (Italy); Spillantini, P. [INFN, Structure of Florence, Physics Department, University of Florence, I-50019 Sesto Fiorentino (Italy); Castellini, G. [CNR-Istituto di Fisica Applicata ' Nello Carrara' , I-50127 Florence (Italy); Boezio, M. [INFN, Structure of Trieste, Physics Department, University of Trieste, I-34147 Trieste (Italy)

    2007-03-01

    The PAMELA Storage and Control Unit (PSCU) comprises a Central Processing Unit (CPU) and a Mass Memory (MM). The CPU of the experiment is based on an ERC-32 architecture (a SPARC v7 implementation) running a real-time operating system (RTEMS). The main purpose of the CPU is to handle slow control and acquisition and to store data on a 2 GB MM. Communications between PAMELA and the satellite are done via a 1553B bus. Data acquisition from the sub-detectors is performed via a 2 MB/s interface. Download from the PAMELA MM towards the satellite main storage unit is handled by a 16 MB/s bus. The maximum daily amount of data transmitted to ground is about 20 GB.

  5. The PAMELA storage and control unit

    International Nuclear Information System (INIS)

    Casolino, M.; Altamura, F.; Basili, A.; De Pascale, M.P.; Minori, M.; Nagni, M.; Picozza, P.; Sparvoli, R.; Adriani, O.; Papini, P.; Spillantini, P.; Castellini, G.; Boezio, M.

    2007-01-01

    The PAMELA Storage and Control Unit (PSCU) comprises a Central Processing Unit (CPU) and a Mass Memory (MM). The CPU of the experiment is based on an ERC-32 architecture (a SPARC v7 implementation) running a real-time operating system (RTEMS). The main purpose of the CPU is to handle slow control and acquisition and to store data on a 2 GB MM. Communications between PAMELA and the satellite are done via a 1553B bus. Data acquisition from the sub-detectors is performed via a 2 MB/s interface. Download from the PAMELA MM towards the satellite main storage unit is handled by a 16 MB/s bus. The maximum daily amount of data transmitted to ground is about 20 GB

  6. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores

    Directory of Open Access Journals (Sweden)

    Wang Kai

    2011-05-01

    Full Text Available Abstract Background: Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have recently been used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Findings: Here we present a novel software package GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes (1) the interaction of SNPs within it in parallel, and (2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. Conclusions: GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.

  7. A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU

    Science.gov (United States)

    Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha

    2018-03-01

    Since the Graphics Processing Unit (GPU) offers strong floating-point computation ability and memory bandwidth for data parallelism, it has been widely used in areas of general computing such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces programming complexity, brings great opportunities to CFD. There are three different modes for the parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite. To make full use of both GPUs and CPUs, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, which demonstrates that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow but can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.

  8. The Effect of NUMA Tunings on CPU Performance

    Science.gov (United States)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software.
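    A small illustration of manual locality tuning: the Linux-only sketch below pins the current process to the CPUs of a single NUMA node, which is roughly the CPU-binding half of what `numactl --cpunodebind` does (memory binding would additionally require libnuma or numactl). The sysfs path and node number are assumptions about a typical Linux system, and this is in no way the benchmarking setup of the paper.

```python
import os

def pin_to_numa_node(node=0):
    """Restrict the current process to the CPUs of one NUMA node (Linux only)."""
    path = f"/sys/devices/system/node/node{node}/cpulist"
    with open(path) as f:
        spec = f.read().strip()                 # e.g. "0-7,16-23"
    cpus = set()
    for part in spec.split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    os.sched_setaffinity(0, cpus)               # 0 = this process
    return cpus

print("pinned to CPUs:", sorted(pin_to_numa_node(0)))
```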

  9. The Effect of NUMA Tunings on CPU Performance

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-01-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software. (paper)

  10. CPU and cache efficient management of memory-resident databases

    NARCIS (Netherlands)

    Pirk, H.; Funke, F.; Grund, M.; Neumann, T.; Leser, U.; Manegold, S.; Kemper, A.; Kersten, M.L.

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementations,

  11. CPU and Cache Efficient Management of Memory-Resident Databases

    NARCIS (Netherlands)

    H. Pirk (Holger); F. Funke; M. Grund; T. Neumann (Thomas); U. Leser; S. Manegold (Stefan); A. Kemper (Alfons); M.L. Kersten (Martin)

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current

  12. Promise of a low power mobile CPU based embedded system in artificial leg control.

    Science.gov (United States)

    Hernandez, Robert; Zhang, Fan; Zhang, Xiaorong; Huang, He; Yang, Qing

    2012-01-01

    This paper presents the design and implementation of a low power embedded system using mobile processor technology (Intel Atom™ Z530 Processor) specifically tailored for a neural-machine interface (NMI) for artificial limbs. This embedded system effectively performs our previously developed NMI algorithm based on neuromuscular-mechanical fusion and phase-dependent pattern classification. The analysis shows that the NMI embedded system can meet real-time constraints with high accuracy in recognizing the user's locomotion mode. Our implementation utilizes the mobile processor efficiently to allow a power consumption of 2.2 watts and low CPU utilization (less than 4.3%) while executing the complex NMI algorithm. Our experiments have shown that the highly optimized C implementation on the embedded system has superb advantages over existing PC implementations in MATLAB. The study results suggest that a mobile-CPU-based embedded system is promising for implementing advanced control for powered lower limb prostheses.

  13. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after the degradation of service performance and therefore usually lag. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The new prediction model of CPU utilization provides a reference for the VM dynamic deployment process, which can then finish the deployment before the service performance degrades. In this way it not only ensures the quality of services but also improves server performance and resource utilization. The new prediction method of CPU utilization based on the ARIMA-BP neural network mainly includes four parts: preprocess the collected data, build the ARIMA-BP neural network prediction model, correct the nonlinear residuals of the time series with the BP prediction algorithm, and obtain the prediction results by analyzing the above data comprehensively.
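    To give a feel for the linear part of such a forecaster, the sketch below fits a plain autoregressive AR(p) model to a CPU-utilization trace by least squares and produces a one-step-ahead forecast. It covers only the ARIMA-like linear component; the paper additionally corrects the residuals with a BP neural network, which is not shown here, and the utilization values are made up for illustration.

```python
import numpy as np

def fit_ar(series, p=3):
    """Least-squares fit of an AR(p) model and a one-step-ahead forecast."""
    # Each row holds p consecutive past values; the target is the next value.
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    A = np.column_stack([X, np.ones(len(y))])          # add an intercept term
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    forecast = np.dot(series[-p:], coeffs[:p]) + coeffs[p]
    return coeffs, forecast

cpu_util = np.array([42, 45, 50, 48, 52, 55, 53, 58, 60, 57], dtype=float)
_, next_value = fit_ar(cpu_util)
print(f"predicted next CPU utilization: {next_value:.1f}%")
```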

  14. Enhancing Leakage Power in CPU Cache Using Inverted Architecture

    OpenAIRE

    Bilal A. Shehada; Ahmed M. Serdah; Aiman Abu Samra

    2013-01-01

    Power consumption is an increasingly pressing problem in modern processor design. Since the on-chip caches usually consume a significant amount of power, power and energy consumption parameters have become some of the most important design constraints. The cache is one of the most attractive targets for power reduction. This paper presents an approach to improve the dynamic power consumption of the CPU cache using an inverted cache architecture. Our approach tries to reduce dynamic write power dissipatio...

  15. Design of a memory-access controller with 3.71-times-enhanced energy efficiency for Internet-of-Things-oriented nonvolatile microcontroller unit

    Science.gov (United States)

    Natsui, Masanori; Hanyu, Takahiro

    2018-04-01

    In realizing a nonvolatile microcontroller unit (MCU) for sensor nodes in Internet-of-Things (IoT) applications, it is important to solve the data-transfer bottleneck between the central processing unit (CPU) and the nonvolatile memory constituting the MCU. As one circuit-oriented approach to solving this problem, we propose a memory access minimization technique for magnetoresistive-random-access-memory (MRAM)-embedded nonvolatile MCUs. In addition to multiplexing and prefetching of memory access, the proposed technique realizes efficient instruction fetch by eliminating redundant memory access while considering the code length of the instruction to be fetched and the transition of the memory address to be accessed. As a result, the performance of the MCU can be improved while relaxing the performance requirement for the embedded MRAM, and compact and low-power implementation can be performed as compared with the conventional cache-based one. Through the evaluation using a system consisting of a general purpose 32-bit CPU and embedded MRAM, it is demonstrated that the proposed technique increases the peak efficiency of the system up to 3.71 times, while a 2.29-fold area reduction is achieved compared with the cache-based one.

  16. Use of general purpose graphics processing units with MODFLOW

    Science.gov (United States)

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the central processing unit (CPU) and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
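    The core loop of a Jacobi-preconditioned conjugate gradient solver consists only of matrix-vector products, dot products and vector updates, which is what makes a UPCG-style solver a good fit for a GPGPU. The sketch below is a minimal dense-numpy CPU version for a tiny symmetric positive-definite system; it is a generic textbook PCG, not the MODFLOW UPCG implementation, which works on compressed sparse row matrices.

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=500):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
    Minv = 1.0 / np.diag(A)                 # Jacobi preconditioner M^-1
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive-definite test system.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi_pcg(A, b))
```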

  17. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    Science.gov (United States)

    Taft, James R.

    2000-01-01

    …aircraft are routinely undertaken. Typical large problems might require hundreds of Cray C90 CPU hours to complete. The dramatic performance gains with the 256-CPU Steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state-of-the-art plot of OVERFLOW-MLP performance on the 512-CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35-million-point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512-CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16-CPU C90 result and strongly demonstrates the viability of the new parallel systems in rapidly solving very large simulations in a production environment.

  18. Acceleration of stereo-matching on multi-core CPU and GPU

    OpenAIRE

    Tian, Xu; Cockshott, Paul; Oehler, Susanne

    2014-01-01

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism-enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa 1 research project. This research project focuses on the conception of a new clothes folding robot with real-time and high resolution requirements for the vision system. The performance analysis shows th...

  19. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
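    For orientation, the kernel that both the SSE+OpenMP and the GPU versions of such a code spend their time in is the leap-frog E/H update. The sketch below is a bare-bones 1D free-space FDTD loop with a Gaussian hard source; it deliberately omits the absorbing boundaries, materials and 2D/3D grids of a grating simulation, and the cell counts and source position are arbitrary.

```python
import numpy as np

def fdtd_1d(n_cells=400, n_steps=800, src=100):
    """Minimal 1D FDTD update (free space, Courant factor 0.5, hard source)."""
    ez = np.zeros(n_cells)   # electric field
    hy = np.zeros(n_cells)   # magnetic field
    for t in range(n_steps):
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])                # update H from E
        ez[1:]  += 0.5 * (hy[1:] - hy[:-1])                # update E from H
        ez[src] += np.exp(-0.5 * ((t - 30) / 10) ** 2)     # Gaussian pulse source
    return ez

field = fdtd_1d()
```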

  20. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU

    International Nuclear Information System (INIS)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A.; Vega C, H. R.; Alonso M, O. E.

    2016-10-01

    The computing power of personal computers has been increasing; computers now have several cores in the CPU and, in addition, multiple CUDA cores in the graphics processing unit (GPU). Both systems can be used individually or combined to perform scientific computation without resorting to processor arrays or supercomputers. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and for obtaining the associated spectrum. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). Thus, the counting rates obtained with each sphere and the neutron spectrum are related through the Fredholm equation in its discrete version. For the reconstruction of the spectrum one has a system of poorly conditioned equations with an infinite number of solutions, and to find an appropriate solution the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)
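    To illustrate the discrete Fredholm problem itself, the toy sketch below unfolds a few-channel measurement with non-negative least squares: counts = R @ phi is inverted subject to phi >= 0. NNLS is used here only as a simple stand-in for the artificial neural network of the paper, and the response matrix and fluxes are invented numbers, not Bonner-sphere data.

```python
import numpy as np
from scipy.optimize import nnls

def unfold_spectrum(R, counts):
    """Solve the discrete Fredholm equation counts = R @ phi with phi >= 0.

    R: (n_spheres x n_energy_bins) response matrix; counts: measured rates.
    The problem is ill-conditioned, which is why regularized or learning-based
    methods (such as the ANN in the paper) are preferred in practice.
    """
    phi, residual = nnls(R, counts)
    return phi, residual

# Illustrative 3-sphere / 4-bin toy problem (numbers are made up).
R = np.array([[0.9, 0.5, 0.2, 0.1],
              [0.4, 0.8, 0.6, 0.3],
              [0.1, 0.3, 0.7, 0.9]])
true_phi = np.array([1.0, 0.5, 0.2, 0.1])
phi_est, _ = unfold_spectrum(R, R @ true_phi)
print(phi_est)
```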

  1. Liquid Cooling System for CPU by Electroconjugate Fluid

    Directory of Open Access Journals (Sweden)

    Yasuo Sakurai

    2014-06-01

    Full Text Available The power dissipated by the CPU of a personal computer has increased as the performance of personal computers has become higher. Therefore, liquid cooling systems have been employed in some personal computers in order to improve their cooling performance. Electroconjugate fluid (ECF) is one of the functional fluids. ECF has the remarkable property that a strong jet flow is generated between electrodes when a high voltage is applied to the ECF through the electrodes. By using this strong jet flow, an ECF-pump with a simple structure, no sliding parts, no noise, and no vibration can be developed, and with such an ECF-pump a new liquid cooling system based on ECF becomes feasible. In this study, to realize this system, an ECF-pump is proposed and fabricated, and its basic characteristics are investigated experimentally. Next, by utilizing the ECF-pump, a model of a liquid cooling system based on ECF is manufactured and experiments are carried out to investigate the performance of this system. As a result, by using this system, the temperature of a 50 W heat source is kept at 60°C or less. In general, a CPU is usually used at this temperature or less.

  2. Critical values for unit root tests in seasonal time series

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); B. Hobijn (Bart)

    1997-01-01

    textabstractIn this paper, we present tables with critical values for a variety of tests for seasonal and non-seasonal unit roots in seasonal time series. We consider (extensions of) the Hylleberg et al. and Osborn et al. test procedures. These extensions concern time series with increasing seasonal

  3. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and the wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.

  4. Designing of Vague Logic Based 2-Layered Framework for CPU Scheduler

    Directory of Open Access Journals (Sweden)

    Supriya Raheja

    2016-01-01

    Full Text Available Fuzzy-based CPU schedulers have attracted great interest from operating system designers because of their ability to handle imprecise information associated with tasks. This paper extends the fuzzy-based round robin scheduler to a Vague Logic Based Round Robin (VBRR) scheduler. The VBRR scheduler works in a 2-layered framework. At the first layer, the scheduler has a vague inference system which can handle the impreciseness of tasks using vague logic. At the second layer, the Vague Logic Based Round Robin (VBRR) scheduling algorithm schedules the tasks. The VBRR scheduler has a learning capability based on which it intelligently adapts an optimum length for the time quantum. An optimum time quantum reduces the overhead on the scheduler by cutting down unnecessary context switches, which leads to improved overall system performance. The work is simulated using MATLAB and compared with the conventional round robin scheduler and two other fuzzy-based approaches to CPU scheduling. The simulation analysis and results prove the effectiveness and efficiency of the VBRR scheduler.
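    The sketch below simulates round-robin scheduling and, when no fixed quantum is given, picks the quantum as the median of the remaining burst times. The median rule is only a crude, hypothetical stand-in for the vague-logic layer of VBRR that selects an "optimum" quantum; it merely demonstrates how an adaptive quantum can reduce context switches compared with a fixed one.

```python
from collections import deque

def round_robin(burst_times, quantum=None):
    """Round-robin simulation; quantum=None enables a simple adaptive quantum."""
    remaining = dict(enumerate(burst_times))
    queue, clock, switches, completion = deque(remaining), 0, 0, {}
    while queue:
        if quantum is None:
            left = sorted(remaining[p] for p in queue)
            q = left[len(left) // 2]          # median of remaining bursts
        else:
            q = quantum
        pid = queue.popleft()
        run = min(q, remaining[pid])          # run the task for one quantum
        clock += run
        remaining[pid] -= run
        switches += 1
        if remaining[pid] > 0:
            queue.append(pid)                 # not finished: back of the queue
        else:
            completion[pid] = clock
    return completion, switches

print(round_robin([24, 3, 3]))             # adaptive quantum
print(round_robin([24, 3, 3], quantum=4))  # fixed quantum for comparison
```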

  5. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation and high resolution clinical plans can be calculated.

  6. A real-time GNSS-R system based on software-defined radio and graphics processing units

    Science.gov (United States)

    Hobiger, Thomas; Amagai, Jun; Aida, Masanori; Narita, Hideki

    2012-04-01

    Reflected signals of the Global Navigation Satellite System (GNSS) from the sea or land surface can be utilized to deduce and monitor physical and geophysical parameters of the reflecting area. Unlike most other remote sensing techniques, GNSS-Reflectometry (GNSS-R) operates as a passive radar that takes advantage of the increasing number of navigation satellites that broadcast their L-band signals. To date, most GNSS-R receiver architectures have been based on dedicated hardware solutions. Software-defined radio (SDR) technology has advanced in recent years and enables signal processing in real time, which makes it an ideal candidate for the realization of a flexible GNSS-R system. Additionally, modern commodity graphics cards, which offer massive parallel computing performance, make it possible to handle the whole signal processing chain without interfering with the PC's CPU. Thus, this paper describes a GNSS-R system which has been developed on the principles of software-defined radio supported by General Purpose Graphics Processing Units (GPGPUs), and presents results from initial field tests which confirm the anticipated capability of the system.

  7. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    Science.gov (United States)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer acquires a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful for color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing, which was further applied to the optimization process for the HyperSpectral Imager of the Chinese `HJ-1' satellite. The results show that the method based on multi-core parallel computing makes effective use of the multi-core CPU hardware resources and significantly improves the efficiency of spectrum reconstruction processing. If the technique is applied to workstations with more cores for parallel computing, it will be possible to complete Fourier transform imaging spectrometer data processing in real time on a single computer.

  8. Porting AMG2013 to Heterogeneous CPU+GPU Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Samfass, Philipp [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-26

    LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is quite straightforward: While GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for rather sequential operations. Yet, making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor for ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, which is a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook for future work (Section 5).

  9. Obesity, diabetes, and length of time in the United States

    OpenAIRE

    Tsujimoto, Tetsuro; Kajio, Hiroshi; Sugiyama, Takehiro

    2016-01-01

    Obesity prevalence remains high in the United States (US), and is rising in most other countries. This is a repeated cross-sectional study using a nationally representative sample of the National Health and Nutrition Examination Survey 1999 to 2012. Multivariate logistic regression analyses were separately performed for adults (n = 37,639) and children/adolescents (n = 28,282) to assess the associations between the length of time in the US, and the prevalences of obesity and diabetes...

  10. Adaptive real-time methodology for optimizing energy-efficient computing

    Science.gov (United States)

    Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA

    2011-06-28

    Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
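
    A rough sketch of the idea (an illustration under simple assumptions, not the patented method): choose the lowest CPU frequency whose predicted slowdown stays within a tolerance, given an estimate of how sensitive the workload's run time is to frequency. The frequency table and sensitivity value below are hypothetical:

    ```python
    def pick_frequency(freqs_ghz, cpu_sensitivity, slowdown_limit=0.05):
        """Pick the lowest CPU frequency whose predicted slowdown is acceptable.

        cpu_sensitivity: fraction of the workload's run time that scales with
            CPU frequency; the rest (memory/I/O bound) is unaffected by DVFS.
        """
        f_max = max(freqs_ghz)
        for f in sorted(freqs_ghz):                      # try lowest first
            slowdown = cpu_sensitivity * (f_max / f - 1.0)
            if slowdown <= slowdown_limit:
                return f
        return f_max

    # A heavily memory-bound phase (10% CPU-sensitive) can drop from 2.6 to 2.0 GHz.
    print(pick_frequency([2.6, 2.0, 1.6, 1.2], cpu_sensitivity=0.1))
    ```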

  11. Deployment of IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.

    2017-10-01

    The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.

  12. A Block-Asynchronous Relaxation Method for Graphics Processing Units

    OpenAIRE

    Anzt, H.; Dongarra, J.; Heuveline, Vincent; Tomov, S.

    2011-01-01

    In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection we monitor the convergence behavior, the average iteration time and the total time-to-solution time. Analyzing the r...
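
    A minimal CPU-side sketch of the block-asynchronous idea (not the authors' CUDA implementation): each block performs a few Jacobi-style sweeps on its own unknowns using whatever values the other blocks last published, so blocks may read slightly stale data. The diagonally dominant test system is hypothetical:

    ```python
    import numpy as np

    def block_async_jacobi(A, b, n_blocks=4, outer=50, local_sweeps=3):
        """Block relaxation for A x = b in which each block iterates on a
        snapshot of x, mimicking asynchronous blocks that see stale data."""
        n = len(b)
        x = np.zeros(n)
        D = np.diag(A)
        blocks = np.array_split(np.arange(n), n_blocks)
        for _ in range(outer):
            snapshot = x.copy()               # values last "published" by all blocks
            for idx in blocks:                # conceptually runs in parallel
                x_loc = snapshot.copy()
                for _ in range(local_sweeps):  # local Jacobi sweeps on this block
                    r = b[idx] - A[idx] @ x_loc
                    x_loc[idx] += r / D[idx]
                x[idx] = x_loc[idx]           # publish the block's result
        return x

    # Hypothetical diagonally dominant test system.
    rng = np.random.default_rng(0)
    A = rng.random((20, 20)) + 20 * np.eye(20)
    b = rng.random(20)
    print(np.linalg.norm(A @ block_async_jacobi(A, b) - b))
    ```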

  13. Centre-containing spiral-geometric structure of the space-time and nonrelativistic relativity of the unit time

    International Nuclear Information System (INIS)

    Shakhazizyan, S.R.

    1987-01-01

    The problem of the nonrelativistic dependence of unit length and unit time on position in space is considered on the basis of a centre-containing spiral-geometric structure of space-time. Experimental results on the variation of the unit time are analyzed, and they agree well with the requirements of the proposed model. 13 refs.; 12 figs

  14. The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration

    Science.gov (United States)

    Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.

    2017-03-01

    In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and to study the key technology for the planned QiTai radio Telescope (QTT), the receiver group of XAO studied a GPU (Graphics Processing Unit) based real-time FRB search algorithm, developed from the original CPU (Central Processing Unit) based FRB search algorithm, and built an FRB real-time search system. The comparison of the GPU system and the CPU system shows that, while preserving the accuracy of the search, the GPU-accelerated algorithm is 35-45 times faster than the CPU algorithm.

  15. Pseudo-random number generators for Monte Carlo simulations on ATI Graphics Processing Units

    Science.gov (United States)

    Demchik, Vadim

    2011-03-01

    Basic uniform pseudo-random number generators are implemented on ATI Graphics Processing Units (GPU). The performance results of the implemented generators (multiplicative linear congruential (GGL), XOR-shift (XOR128), RANECU, RANMAR, RANLUX and Mersenne Twister (MT19937)) on CPU and GPU are discussed. The speed-up factor obtained is in the hundreds compared with the CPU. The RANLUX generator is found to be the most appropriate for use on GPUs in Monte Carlo simulations. A brief review of the pseudo-random number generators used in modern software packages for Monte Carlo simulations in high-energy physics is presented.
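
    For reference, the XOR128 generator mentioned above is Marsaglia's xorshift128; a plain CPU-side sketch of its update rule looks like the following (the GPU versions run many independently seeded streams of this kind, one per thread):

    ```python
    MASK32 = 0xFFFFFFFF

    class Xorshift128:
        """Marsaglia's xorshift128 ("XOR128") pseudo-random number generator."""

        def __init__(self, seed=(123456789, 362436069, 521288629, 88675123)):
            self.x, self.y, self.z, self.w = seed   # non-zero 32-bit state words

        def next_u32(self):
            t = (self.x ^ (self.x << 11)) & MASK32
            self.x, self.y, self.z = self.y, self.z, self.w
            self.w = (self.w ^ (self.w >> 19) ^ t ^ (t >> 8)) & MASK32
            return self.w

        def uniform(self):
            return self.next_u32() / 2**32          # map to [0, 1) for Monte Carlo

    rng = Xorshift128()
    print([round(rng.uniform(), 6) for _ in range(3)])
    ```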

  16. Discrepancy Between Clinician and Research Assistant in TIMI Score Calculation (TRIAGED CPU)

    Directory of Open Access Journals (Sweden)

    Taylor, Brian T.

    2014-11-01

    Introduction: Several studies have attempted to demonstrate that the Thrombolysis in Myocardial Infarction (TIMI) risk score has the ability to risk stratify emergency department (ED) patients with potential acute coronary syndromes (ACS). Most of the studies we reviewed relied on trained research investigators to determine TIMI risk scores rather than ED providers functioning in their normal work capacity. We assessed whether TIMI risk scores obtained by ED providers in the setting of a busy ED differed from those obtained by trained research investigators. Methods: This was an ED-based prospective observational cohort study comparing TIMI scores obtained by 49 ED providers admitting patients to an ED chest pain unit (CPU) to scores generated by a team of trained research investigators. We examined provider type, patient gender, and TIMI elements for their effects on TIMI risk score discrepancy. Results: Of the 501 adult patients enrolled in the study, 29.3% of TIMI risk scores determined by ED providers and trained research investigators were generated using identical TIMI risk score variables. In our low-risk population the majority of TIMI risk score differences were small; however, 12% of TIMI risk scores differed by two or more points. Conclusion: TIMI risk scores determined by ED providers in the setting of a busy ED frequently differ from scores generated by trained research investigators who complete them while not under the same pressure as an ED provider. [West J Emerg Med. 2015;16(1):24–33.]

  17. A Programming Framework for Scientific Applications on CPU-GPU Systems

    Energy Technology Data Exchange (ETDEWEB)

    Owens, John

    2013-03-24

    At a high level, my research interests center around designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been a move to commodity parallel architectures. This sea change is motivated by the industry’s inability to continue to profitably increase performance on a single processor and instead to move to multiple parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver superior performance on a broad range of problems than their CPU counterparts, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.

  18. Discovering the impact of preceding units' characteristics on the wait time of cardiac surgery unit from statistic data.

    Directory of Open Access Journals (Sweden)

    Jiming Liu

    INTRODUCTION: Prior research shows that clinical demand and supplier capacity significantly affect the throughput and the wait time within an isolated unit. However, it is doubtful whether characteristics (i.e., demand, capacity, throughput, and wait time) of one unit would affect the wait time of subsequent units in the patient flow process. Focusing on cardiac care, this paper aims to examine the impact of characteristics of the catheterization unit (CU) on the wait time of the cardiac surgery unit (SU). METHODS: This study integrates published data from several sources on characteristics of the CU and SU units in 11 hospitals in Ontario, Canada between 2005 and 2008. It proposes a two-layer wait time model (with each layer representing one unit) to examine the impact of the CU's characteristics on the wait time of the SU and tests the hypotheses using the Partial Least Squares-based Structural Equation Modeling analysis tool. RESULTS: Results show that: (i) wait time of CU has a direct positive impact on wait time of SU (β = 0.330, p < 0.01); (ii) capacity of CU has a direct positive impact on demand of SU (β = 0.644, p < 0.01); (iii) within each unit, there exist significant relationships among different characteristics (except for the effect of throughput on wait time in SU). CONCLUSION: Characteristics of CU have direct and indirect impacts on wait time of SU. Specifically, demand and wait time of the preceding unit are good predictors of wait time in subsequent units. This suggests that considering such cross-unit effects is necessary when alleviating wait time in a health care system. Further, different patient risk profiles may affect wait time in different ways (e.g., positive or negative effects) within the SU. This implies that wait time management should carefully consider the relationship between priority triage and risk stratification, especially for cardiac surgery.

  19. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU; Reconstruccion del espectro de neutrones usando una red neuronal artificial (RNA) en CPU y GPU

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A. [Universidad de Cordoba, 14002 Cordoba (Spain); Vega C, H. R.; Alonso M, O. E., E-mail: vic.mc68010@gmail.com [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    The computing power of personal computers has been steadily increasing; computers now have several processors in the CPU and, in addition, multiple CUDA cores in the graphics processing unit (GPU); both systems can be used individually or combined to perform scientific computation without resorting to processor clusters or supercomputing facilities. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and the determination of the associated spectrum. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). Thus, the counting rates obtained with each sphere and the neutron spectrum are related by the Fredholm equation in its discrete version. The reconstruction of the spectrum leads to a system of poorly conditioned equations with an infinite number of solutions, and to find the appropriate solution the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)
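
    The discrete relation the unfolding has to invert can be written as C = Rφ, where C are the counting rates, R the response matrix and φ the binned spectrum. The following small sketch (with a hypothetical 7-sphere, 10-bin response matrix) illustrates why the system is ill-conditioned and admits infinitely many solutions, which motivates a trained unfolding method such as the ANN of this work:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_spheres, n_bins = 7, 10
    R = rng.random((n_spheres, n_bins))      # hypothetical response matrix
    phi_true = rng.random(n_bins)            # hypothetical "true" spectrum
    C = R @ phi_true                         # discrete Fredholm relation: C = R phi

    # Fewer equations than unknowns: the pseudo-inverse reproduces the counts
    # but not the spectrum, which is why a trained unfolding method is needed.
    phi_min_norm = np.linalg.pinv(R) @ C
    print(np.linalg.norm(R @ phi_min_norm - C))     # ~0: counts are matched
    print(np.linalg.norm(phi_min_norm - phi_true))  # non-zero: spectrum is not
    ```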

  20. Discovering the impact of preceding units' characteristics on the wait time of cardiac surgery unit from statistic data.

    Science.gov (United States)

    Liu, Jiming; Tao, Li; Xiao, Bo

    2011-01-01

    Prior research shows that clinical demand and supplier capacity significantly affect the throughput and the wait time within an isolated unit. However, it is doubtful whether characteristics (i.e., demand, capacity, throughput, and wait time) of one unit would affect the wait time of subsequent units in the patient flow process. Focusing on cardiac care, this paper aims to examine the impact of characteristics of the catheterization unit (CU) on the wait time of the cardiac surgery unit (SU). This study integrates published data from several sources on characteristics of the CU and SU units in 11 hospitals in Ontario, Canada between 2005 and 2008. It proposes a two-layer wait time model (with each layer representing one unit) to examine the impact of the CU's characteristics on the wait time of the SU and tests the hypotheses using the Partial Least Squares-based Structural Equation Modeling analysis tool. Results show that: (i) wait time of CU has a direct positive impact on wait time of SU (β = 0.330, p < 0.01); (ii) capacity of CU has a direct positive impact on demand of SU (β = 0.644, p < 0.01); (iii) within each unit, there exist significant relationships among different characteristics (except for the effect of throughput on wait time in SU). Characteristics of CU have direct and indirect impacts on wait time of SU. Specifically, demand and wait time of the preceding unit are good predictors of wait time in subsequent units. This suggests that considering such cross-unit effects is necessary when alleviating wait time in a health care system. Further, different patient risk profiles may affect wait time in different ways (e.g., positive or negative effects) within the SU. This implies that wait time management should carefully consider the relationship between priority triage and risk stratification, especially for cardiac surgery.

  1. [Real-time safety audits in a neonatal unit].

    Science.gov (United States)

    Bergon-Sendin, Elena; Perez-Grande, María Del Carmen; Lora-Pablos, David; Melgar-Bonis, Ana; Ureta-Velasco, Noelia; Moral-Pumarega, María Teresa; Pallas-Alonso, Carmen Rosa

    2017-09-01

    Random audits are a safety tool to help in the prevention of adverse events, but they have not been widely used in hospitals. The aim of the study was to determine, through random safety audits, whether the information and material required for resuscitation were available for each patient in a neonatal intensive care unit, and to determine whether factors related to the patient, time or location affect the implementation of the recommendations. Prospective observational study conducted in a level III-C neonatal intensive care unit during the year 2012. The evaluation of the written information on the endotracheal tube, the mask and ambu bag prepared for each patient, and the laryngoscopes of the emergency trolley was included within a broader audit of technological resources and study procedures. The technological resources and procedures were randomly selected twice a week for audit. Appropriate overall use was defined when all evaluated variables were correctly programmed in the same procedure. A total of 296 audits were performed. The kappa coefficient of inter-observer agreement was 0.93. The rate of appropriate overall use of the written information and material required for resuscitation was 62.50% (185/296). A mask and ambu bag prepared for each patient was the variable with the best compliance (97.3%, P=.001). Significant differences were found, with improved usage during weekends versus working days (73.97 vs. 58.74%, P=.01), and during the rest of the year versus the 3rd quarter (66.06 vs. 52%, P=.02). Only in 62.5% of cases were the information and material needed to respond urgently to a critical situation readily available. Opportunities for improvement were identified through the audits. Copyright © 2016 Asociación Española de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.

  2. United States Forest Disturbance Trends Observed Using Landsat Time Series

    Science.gov (United States)

    Masek, Jeffrey G.; Goward, Samuel N.; Kennedy, Robert E.; Cohen, Warren B.; Moisen, Gretchen G.; Schleeweis, Karen; Huang, Chengquan

    2013-01-01

    Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing U.S. land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest disturbance across the conterminous United States for 1985-2005. The geographic sample design used a probability-based scheme to encompass major forest types and maximize geographic dispersion. For each sample location disturbance was identified in the Landsat series using the Vegetation Change Tracker (VCT) algorithm. The NAFD analysis indicates that, on average, 2.77 Mha/yr of forests were disturbed annually, representing 1.09%/yr of US forestland. These satellite-based national disturbance rate estimates tend to be lower than those derived from land management inventories, reflecting both methodological and definitional differences. In particular the VCT approach used with a biennial time step has limited sensitivity to low-intensity disturbances. Unlike prior satellite studies, our biennial forest disturbance rates vary by nearly a factor of two between high and low years. High western US disturbance rates were associated with active fire years and insect activity, while variability in the east is more strongly related to harvest rates in managed forests. We note that generating a geographic sample based on representing forest type and variability may be problematic since the spatial pattern of disturbance does not necessarily correlate with forest type. We also find that the prevalence of diffuse, non-stand clearing disturbance in US forests makes the application of a biennial geographic sample problematic. Future satellite-based studies of disturbance at regional and national scales should focus on wall-to-wall analyses with an annual time step for improved accuracy.

  3. Inhibition of CPU0213, a Dual Endothelin Receptor Antagonist, on Apoptosis via Nox4-Dependent ROS in HK-2 Cells

    Directory of Open Access Journals (Sweden)

    Qing Li

    2016-06-01

    Background/Aims: Our previous studies have indicated that the novel endothelin receptor antagonist CPU0213 effectively normalized renal function in diabetic nephropathy. However, the molecular mechanisms mediating the nephroprotective role of CPU0213 remain unknown. Methods and Results: In the present study, we first examined the effect of CPU0213 on apoptosis in human renal tubular epithelial cells (HK-2). It was shown that high glucose significantly increased the protein expression of Bax and decreased Bcl-2 protein in HK-2 cells, which was reversed by CPU0213. The percentage of HK-2 cells that showed Annexin V-FITC binding was markedly suppressed by CPU0213, which confirmed the inhibitory role of CPU0213 on apoptosis. Given the regulatory effect of the endothelin (ET) system on oxidative stress, we determined the role of redox signaling in the regulation of apoptosis by CPU0213. It was demonstrated that the production of superoxide (O2−) was substantially attenuated by CPU0213 treatment in HK-2 cells. We further found that CPU0213 dramatically inhibited the expression of Nox4 protein, whose gene silencing mimicked the role of CPU0213 on apoptosis under high glucose stimulation. We finally examined the effect of CPU0213 on ET-1 receptors and found that high glucose-induced protein expression of endothelin A and B receptors was dramatically inhibited by CPU0213. Conclusion: Taken together, these results suggest that Nox4-dependent O2− production is critical for the apoptosis of HK-2 cells under high glucose. The endothelin receptor antagonist CPU0213 has an anti-apoptotic role through Nox4-dependent O2− production, which underlies the nephroprotective role of CPU0213 in diabetic nephropathy.

  4. Efficiency of performing pulmonary procedures in a shared endoscopy unit: procedure time, turnaround time, delays, and procedure waiting time.

    Science.gov (United States)

    Verma, Akash; Lee, Mui Yok; Wang, Chunhong; Hussein, Nurmalah B M; Selvi, Kalai; Tee, Augustine

    2014-04-01

    The purpose of this study was to assess the efficiency of performing pulmonary procedures in the endoscopy unit of a large teaching hospital. A prospective study from May 20 to July 19, 2013, was designed. The main outcome measures were procedure delays and their reasons, duration of procedural steps starting from the patient's arrival at the endoscopy unit, turnaround time, total case durations, and procedure wait time. A total of 65 procedures were observed. The most common procedure was BAL (61%) followed by TBLB (31%). Overall, procedures for 35 (53.8%) of 65 patients were delayed by ≥ 30 minutes, 21/35 (60%) because of "spillover" of the gastrointestinal and surgical cases into the time block of the pulmonary procedure. The time elapsed between the end of a pulmonary procedure and the start of the next procedure was ≥ 30 minutes in 8/51 (16%) of cases. In 18/51 (35%) patients there was no next case in the room after completion of the pulmonary procedure. The average idle time of the room, measured from the end of the pulmonary procedure to the start of the next case (or to the end of the shift at 5:00 PM if there was no next case), was 58 ± 53 minutes. In 17/51 (33%) patients the room's idle time was >60 minutes. A total of 52.3% of patients had a wait time >2 days and 11% had it ≥ 6 days, the reason in 15/21 (71%) being unavailability of a slot. Most pulmonary procedures were delayed due to spillover of the gastrointestinal and surgical cases into the block time allocated to pulmonary procedures. The most common reason for difficulty encountered in scheduling the pulmonary procedure was slot unavailability. This caused increased procedure waiting time. Strategies to reduce procedure delays and turnaround times, along with improved scheduling methods, may have a favorable impact on the volume of procedures performed in the unit, thereby optimizing the existing resources.

  5. A heterogeneous CPU+GPU Poisson solver for space charge calculations in beam dynamics studies

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Dawei; Rienen, Ursula van [University of Rostock, Institute of General Electrical Engineering (Germany)

    2016-07-01

    In beam dynamics studies in accelerator physics, space charge plays a central role in the low energy regime of an accelerator. Numerical space charge calculations are required, both, in the design phase and in the operation of the machines as well. Due to its efficiency, mostly the Particle-In-Cell (PIC) method is chosen for the space charge calculation. Then, the solution of Poisson's equation for the charge distribution in the rest frame is the most prominent part within the solution process. The Poisson solver directly affects the accuracy of the self-field applied on the charged particles when the equation of motion is solved in the laboratory frame. As the Poisson solver consumes the major part of the computing time in most simulations it has to be as fast as possible since it has to be carried out once per time step. In this work, we demonstrate a novel heterogeneous CPU+GPU routine for the Poisson solver. The novel solver also benefits from our new research results on the utilization of a discrete cosine transform within the classical Hockney and Eastwood's convolution routine.

  6. Comparing GPU and CPU in OLAP Cubes Creation

    Science.gov (United States)

    Kaczmarski, Krzysztof

    GPGPU (General Purpose Graphical Processing Unit) programming is receiving more attention recently because of enormous computations speed up offered by this technology. GPGPU is applied in many branches of science and industry not excluding databases, even if this is not the primary field of expected benefits.

  7. Functions and requirements for a cesium demonstration unit

    International Nuclear Information System (INIS)

    Howden, G.F.

    1994-04-01

    Westinghouse Hanford Company is investigating alternative means to pretreat the wastes in the Hanford radioactive waste storage tanks. Alternatives include (but are not limited to) in-tank pretreatment, use of above ground transportable compact processing units (CPU) located adjacent to a tank farm, and fixed processing facilities. This document provides the functions and requirements for a CPU to remove cesium from tank waste as a demonstration of the CPU concept. It is therefore identified as the Cesium Demonstration Unit CDU

  8. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment with a variety of novel architectures and frameworks, without any clarity about which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, the linear combination of vectors, and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
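
    As an illustration of the operational approach described above (a sketch using SciPy, not the authors' code), an explicit time step can be expressed entirely in terms of the three named primitives; the 1-D diffusion operator and step size are hypothetical stand-ins for the CFD operators:

    ```python
    import numpy as np
    from scipy.sparse import diags

    def spmv(A, x):        # sparse matrix-vector product
        return A @ x

    def axpy(a, x, y):     # linear combination of vectors: a*x + y
        return a * x + y

    def dot(x, y):         # dot product
        return float(x @ y)

    # Explicit time integration of du/dt = -A u (1-D diffusion stand-in),
    # written only in terms of the three primitives above.
    n, dt = 100, 2e-5
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2
    u = np.ones(n)
    for _ in range(1000):
        u = axpy(-dt, spmv(A, u), u)   # u <- u - dt * A u
    print(dot(u, u))
    ```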

  9. Timing comparison of two-dimensional discrete-ordinates codes for criticality calculations

    International Nuclear Information System (INIS)

    Miller, W.F. Jr.; Alcouffe, R.E.; Bosler, G.E.; Brinkley, F.W. Jr.; O'dell, R.D.

    1979-01-01

    The authors compare two-dimensional discrete-ordinates neutron transport computer codes to solve reactor criticality problems. The fundamental interest is in determining which code requires the minimum Central Processing Unit (CPU) time for a given numerical model of a reasonably realistic fast reactor core and peripherals. The computer codes considered are the most advanced available and, in three cases, are not officially released. The conclusion, based on the study of four fast reactor core models, is that for this class of problems the diffusion synthetic accelerated version of TWOTRAN, labeled TWOTRAN-DA, is superior to the other codes in terms of CPU requirements

  10. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. This research was supported in part by NSF under Grants CCF
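
    A host-side sketch of the buffering idea (the pipelining across streams and the GPU kernel itself are omitted, and the deposition records are hypothetical): dose records are first appended to a buffer, which avoids scattered atomic writes, and the dose volume is accumulated from the buffer afterwards:

    ```python
    import numpy as np

    def deposit_buffered(voxel_ids, doses, n_voxels):
        """Two-step dose deposition: write records to a buffer, then accumulate."""
        # Step 1 (GPU role in the abstract): append (voxel, dose) records to a
        # buffer -- sequential, coalesced writes, no atomic operations needed.
        buffer_ids = np.asarray(voxel_ids, dtype=np.int64)
        buffer_dose = np.asarray(doses, dtype=np.float64)
        # Step 2 (CPU role): build the dose volume from the buffered records.
        volume = np.zeros(n_voxels)
        np.add.at(volume, buffer_ids, buffer_dose)   # handles repeated voxels
        return volume

    # Hypothetical deposition records produced by a few rays.
    ids = [3, 3, 7, 1, 3]
    d = [0.2, 0.1, 0.5, 0.3, 0.4]
    print(deposit_buffered(ids, d, n_voxels=10))
    ```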

  11. Application of total care time and payment per unit time model for physician reimbursement for common general surgery operations.

    Science.gov (United States)

    Chatterjee, Abhishek; Holubar, Stefan D; Figy, Sean; Chen, Lilian; Montagne, Shirley A; Rosen, Joseph M; Desimone, Joseph P

    2012-06-01

    The relative value unit system relies on subjective measures of physician input in the care of patients. A payment per unit time model relates surgeon reimbursement to the total care time spent in the operating room, in postoperative in-house care, and in the clinic to define a payment per unit time. We aimed to compare common general surgery operations by using the total care time and payment per unit time method in order to demonstrate a more objective measurement for physician reimbursement. Average total physician payment per case was obtained for 5 outpatient operations and 4 inpatient operations in general surgery. Total care time was defined as the sum of operative time, 30 minutes per hospital day, and 30 minutes per office visit for each operation. Payment per unit time was calculated by dividing the physician reimbursement per case by the total care time. Total care time, physician payment per case, and payment per unit time for each type of operation showed that the average payment per unit time for inpatient operations was $455.73 and slightly more, at $467.51, for outpatient operations. Partial colectomy with primary anastomosis had the longest total care time (8.98 hours) and the least payment per unit time ($188.52). Laparoscopic gastric bypass had the highest payment per time ($707.30). The total care time and payment per unit time method can be used as an adjunct to compare reimbursement among different operations on an institutional level as well as on a national level. Although many operations have similar payment trends based on time spent by the surgeon, payment differences using this methodology are seen and may be in need of further review. Copyright © 2012 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
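
    A small worked example of the definition given in this abstract (the payment, operative time, length of stay and visit counts below are hypothetical, not figures from the study):

    ```python
    def payment_per_unit_time(payment_usd, operative_h, hospital_days, office_visits):
        """Total care time = operative time + 0.5 h per hospital day
        + 0.5 h per office visit; rate = payment / total care time."""
        total_care_h = operative_h + 0.5 * hospital_days + 0.5 * office_visits
        return total_care_h, payment_usd / total_care_h

    # Hypothetical inpatient operation: $1,200 payment, 3 h operative time,
    # 5 hospital days, 2 office visits.
    hours, rate = payment_per_unit_time(1200, 3.0, 5, 2)
    print(f"total care time = {hours:.1f} h, payment per unit time = ${rate:.2f}/h")
    ```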

  12. Accelerating Molecular Dynamic Simulation on Graphics Processing Units

    Science.gov (United States)

    Friedrichs, Mark S.; Eastman, Peter; Vaidyanathan, Vishal; Houston, Mike; Legrand, Scott; Beberg, Adam L.; Ensign, Daniel L.; Bruns, Christopher M.; Pande, Vijay S.

    2009-01-01

    We describe a complete implementation of all-atom protein molecular dynamics running entirely on a graphics processing unit (GPU), including all standard force field terms, integration, constraints, and implicit solvent. We discuss the design of our algorithms and important optimizations needed to fully take advantage of a GPU. We evaluate its performance, and show that it can be more than 700 times faster than a conventional implementation running on a single CPU core. PMID:19191337

  13. Finite difference numerical method for the superlattice Boltzmann transport equation and case comparison of CPU(C) and GPU(CUDA) implementations

    International Nuclear Information System (INIS)

    Priimak, Dmitri

    2014-01-01

    We present a finite difference numerical algorithm for solving two dimensional spatially homogeneous Boltzmann transport equation which describes electron transport in a semiconductor superlattice subject to crossed time dependent electric and constant magnetic fields. The algorithm is implemented both in C language targeted to CPU and in CUDA C language targeted to commodity NVidia GPU. We compare performances and merits of one implementation versus another and discuss various software optimisation techniques

  14. Finite difference numerical method for the superlattice Boltzmann transport equation and case comparison of CPU(C) and GPU(CUDA) implementations

    Energy Technology Data Exchange (ETDEWEB)

    Priimak, Dmitri

    2014-12-01

    We present a finite difference numerical algorithm for solving two dimensional spatially homogeneous Boltzmann transport equation which describes electron transport in a semiconductor superlattice subject to crossed time dependent electric and constant magnetic fields. The algorithm is implemented both in C language targeted to CPU and in CUDA C language targeted to commodity NVidia GPU. We compare performances and merits of one implementation versus another and discuss various software optimisation techniques.

  15. A PC based multi-CPU severe accident simulation trainer

    International Nuclear Information System (INIS)

    Jankowski, M.W.; Bienarz, P.P.; Sartmadjiev, A.D.

    2004-01-01

    MELSIM Severe Accident Simulation Trainer is a personal computer based system being developed by the International Atomic Energy Agency and Risk Management Associates, Inc. for the purpose of training the operators of nuclear power stations. It also serves for evaluating accident management strategies as well as assessing complex interfaces between emergency operating procedures and accident management guidelines. The system is being developed for the Soviet designed WWER-440/Model 213 reactor and it is plant specific. The Bohunice V2 power station in the Slovak Republic has been selected for trial operation of the system. The trainer utilizes several CPUs working simultaneously on different areas of simulation. Detailed plant operation displays are provided on colour monitor mimic screens which show changing plant conditions in approximate real-time. Up to 28 000 curves can be plotted on a separate monitor as the MELSIM program proceeds. These plots proceed concurrently with the program, and time specific segments can be recalled for review. A benchmarking (limited in scope) against well validated thermal-hydraulic codes and selected plant accident data (WWER-440/213 Rovno NPP, Ukraine) has been initiated. Preliminary results are presented and discussed. (author)

  16. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  17. High performance technique for database applications using a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm.

  18. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and actual performances of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box, to test and compare several machines in terms of CPU performance and report with the wanted level of detail the different benchmarking scores (e.g. by processing step) and results. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  19. DSM vs. NSM: CPU Performance Tradeoffs in Block-Oriented Query Processing

    NARCIS (Netherlands)

    M. Zukowski (Marcin); N.J. Nes (Niels); P.A. Boncz (Peter)

    2008-01-01

    textabstractComparisons between the merits of row-wise storage (NSM) and columnar storage (DSM) are typically made with respect to the persistent storage layer of database systems. In this paper, however, we focus on the CPU efficiency tradeoffs of tuple representations inside the query

  20. Conserved-peptide upstream open reading frames (CPuORFs) are associated with regulatory genes in angiosperms

    Directory of Open Access Journals (Sweden)

    Richard A Jorgensen

    2012-08-01

    Upstream open reading frames (uORFs) are common in eukaryotic transcripts, but those that encode conserved peptides (CPuORFs) occur in less than 1% of transcripts. The peptides encoded by three plant CPuORF families are known to control translation of the downstream ORF in response to a small signal molecule (sucrose, polyamines, and phosphocholine). In flowering plants, transcription factors are statistically over-represented among genes that possess CPuORFs, and in general it appeared that many CPuORF genes also had other regulatory functions, though the significance of this suggestion was uncertain (Hayden and Jorgensen, 2007). Five years later the literature provides much more information on the functions of many CPuORF genes. Here we reassess the functions of 27 known CPuORF gene families and find that 22 of these families play a variety of different regulatory roles, from transcriptional control to protein turnover, and from small signal molecules to signal transduction kinases. Clearly then, there is indeed a strong association of CPuORFs with regulatory genes. In addition, 16 of these families play key roles in a variety of different biological processes. Most strikingly, the core sucrose response network includes three different CPuORFs, creating the potential for sophisticated balancing of the network in response to three different molecular inputs. We propose that the function of most CPuORFs is to modulate translation of a downstream major ORF (mORF) in response to a signal molecule recognized by the conserved peptide, and that, because the mORFs of CPuORF genes generally encode regulatory proteins, many of them centrally important in the biology of plants, CPuORFs play key roles in balancing such regulatory networks.

  1. Siblings and children's time use in the United States

    Directory of Open Access Journals (Sweden)

    Rachel Dunifon

    2017-11-01

    Background: Eighty-two percent of children under age 18 live with at least one sibling, and the sibling relationship is typically the longest-lasting family relationship in an individual's life. Nevertheless, siblings remain understudied in the family demography literature. Objective: We ask how having a sibling structures children's time spent with others and in specific activities, and how children's time and activities with siblings vary by social class, gender, and age. Methods: We use time diary data from the US Panel Study of Income Dynamics' Child Development Supplement (PSID-CDS), comparing the time use of children with and without siblings and presenting regression-adjusted descriptive statistics on patterns among those with siblings. Results: Children with siblings spend about half of their discretionary time engaged with siblings. They spend less time alone with parents and more time in unstructured play than those without siblings. Brothers and more closely spaced siblings spend more time together and more time in unstructured play. For example, boys with at least one brother spend five more hours per week with their siblings and over three more hours per week in unstructured play than boys with no brothers. Conclusions: The presence and characteristics of siblings shape children's time use in ways that may have implications for child development. Contribution: This is the first study to use children's time diary data to examine how the presence and characteristics of siblings structure ways in which children spend their time. This contributes to our broader understanding of sibling relationships and family dynamics.

  2. The relationship among CPU utilization, temperature, and thermal power for waste heat utilization

    International Nuclear Information System (INIS)

    Haywood, Anna M.; Sherbeck, Jon; Phelan, Patrick; Varsamopoulos, Georgios; Gupta, Sandeep K.S.

    2015-01-01

    Highlights: • This work graphs a triad relationship among CPU utilization, temperature and power. • Using a custom-built cold plate, we were able to capture CPU-generated high-quality heat. • The work undertakes a radical approach using mineral oil to directly cool CPUs. • We found that it is possible to use CPU waste energy to power an absorption chiller. - Abstract: This work addresses significant datacenter issues of growth in numbers of computer servers and subsequent electricity expenditure by proposing, analyzing and testing a unique idea of recycling the highest quality waste heat generated by datacenter servers. The aim was to provide a renewable and sustainable energy source for use in cooling the datacenter. The work incorporates novel approaches in waste heat usage, graphing CPU temperature, power and utilization simultaneously, and a mineral oil experimental design and implementation. The work presented investigates and illustrates the quantity and quality of heat that can be captured from a variably tasked liquid-cooled microprocessor on a datacenter server blade. It undertakes a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Results indicate that 123 servers encapsulated in mineral oil can power a 10-ton chiller with a design point of 50.2 kWth. Compared with water-cooling experiments, the mineral oil experiment mitigated the temperature drop between the heat source and discharge line by up to 81%. In addition, due to this reduction in temperature drop, the heat quality in the oil discharge line was up to 12.3 °C higher on average than for water-cooled experiments. Furthermore, mineral oil cooling holds the potential to eliminate the 50% cooling expenditure which initially motivated this project.

  3. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    Science.gov (United States)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times more than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  4. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease.

    Science.gov (United States)

    Shamonin, Denis P; Bron, Esther E; Lelieveldt, Boudewijn P F; Smits, Marion; Klein, Stefan; Staring, Marius

    2013-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.

  5. Fast Parallel Image Registration on CPU and GPU for Diagnostic Classification of Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Denis P Shamonin

    2014-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e. for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU, building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of ~2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.

  6. Leveraging the checkpoint-restart technique for optimizing CPU efficiency of ATLAS production applications on opportunistic platforms

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2017-01-01

    Data processing applications of the ATLAS experiment, such as event simulation and reconstruction, spend a considerable amount of time in the initialization phase. This phase includes loading a large number of shared libraries, reading detector geometry and condition data from external databases, building a transient representation of the detector geometry and initializing various algorithms and services. In some cases the initialization step can take as long as 10-15 minutes. Such slow initialization, being inherently serial, has a significant negative impact on the overall CPU efficiency of the production job, especially when the job is executed on opportunistic, often short-lived, resources such as commercial clouds or volunteer computing. In order to improve this situation, we can take advantage of the fact that ATLAS runs large numbers of production jobs with similar configuration parameters (e.g. jobs within the same production task). This allows us to checkpoint one job at the end of its configuration step a...

  7. United States forest disturbance trends observed with landsat time series

    Science.gov (United States)

    Jeffrey G. Masek; Samuel N. Goward; Robert E. Kennedy; Warren B. Cohen; Gretchen G. Moisen; Karen Schleweiss; Chengquan. Huang

    2013-01-01

    Disturbance events strongly affect the composition, structure, and function of forest ecosystems; however, existing US land management inventories were not designed to monitor disturbance. To begin addressing this gap, the North American Forest Dynamics (NAFD) project has examined a geographic sample of 50 Landsat satellite image time series to assess trends in forest...

  8. Der ATLAS LVL2-Trigger mit FPGA-Prozessoren : Entwicklung, Aufbau und Funktionsnachweis des hybriden FPGA/CPU-basierten Prozessorsystems ATLANTIS

    CERN Document Server

    Singpiel, Holger

    2000-01-01

    This thesis describes the conception and implementation of the hybrid FPGA/CPU-based processing system ATLANTIS as a trigger processor for the proposed ATLAS experiment at CERN. CompactPCI provides the close coupling of a multi-FPGA system and a standard CPU. The system is scalable in computing power and flexible in use due to its partitioning into dedicated FPGA boards for computation, I/O tasks and private communication. The research activities based on the ATLANTIS system focus on two areas in the second level trigger (LVL2). First, the acceleration of time-critical B physics trigger algorithms is the major aim. The execution of the full-scan TRT algorithm on ATLANTIS, which has been used as a demonstrator, results in a speedup of 5.6 compared to a standard CPU. Next, the ATLANTIS system is used as a hardware platform for research work in conjunction with the ATLAS readout systems. For further studies a permanent installation of the ATLANTIS system in the LVL2 application testbed is f...

  9. Real-time bias-adjusted O3 and PM2.5 air quality index forecasts and their performance evaluations over the continental United States

    Science.gov (United States)

    Kang, Daiwen; Mathur, Rohit; Trivikrama Rao, S.

    2010-06-01

    The National Air Quality Forecast Capacity (NAQFC) system, which links NOAA's North American Mesoscale (NAM) meteorological model with EPA's Community Multiscale Air Quality (CMAQ) model, provided operational ozone (O3) and experimental fine particulate matter (PM2.5) forecasts over the continental United States (CONUS) during 2008. This paper describes the implementation of a real-time Kalman Filter (KF) bias-adjustment technique to improve the accuracy of O3 and PM2.5 forecasts at discrete monitoring locations. The operational surface-level O3 and PM2.5 forecasts from the NAQFC system were post-processed by the KF bias-adjustment technique using near real-time hourly O3 and PM2.5 observations obtained from EPA's AIRNow measurement network. The KF bias-adjusted forecasts were created daily, providing 24-h hourly bias-adjusted forecasts for O3 and PM2.5 at all AIRNow monitoring sites within the CONUS domain. The bias-adjustment post-processing implemented in this study requires minimal computational cost, requiring less than 10 min of CPU time on a single-processor Linux machine to generate 24-h hourly bias-adjusted forecasts over the entire CONUS domain. The results show that the real-time KF bias-adjusted forecasts for both O3 and PM2.5 performed as well as or even better than in previous studies in which the same technique was applied to historical O3 and PM2.5 time series from archived AQF in earlier years. Compared to the raw forecasts, the KF forecasts displayed significant improvement in the daily maximum 8-h O3 and daily mean PM2.5 forecasts in terms of both discrete (i.e., reduced errors, increased correlation coefficients, and index of agreement) and categorical (increased hit rate and decreased false alarm ratio) evaluation metrics at almost all locations during the study period in 2008.
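
    The KF step described above lends itself to a compact illustration. The sketch below is a minimal scalar Kalman-filter bias correction for one monitoring site, assuming a random-walk bias model; the variable names and variances are illustrative placeholders, not settings from the NAQFC system.

        # Minimal sketch of a Kalman-filter bias adjustment for hourly forecasts.
        # Assumes a random-walk bias model; all parameters are illustrative only.
        def kf_bias_estimate(past_forecasts, past_observations,
                             process_var=1.0, obs_var=4.0):
            """Estimate the current forecast bias from paired past values."""
            bias, p = 0.0, 1.0              # initial bias estimate and its error variance
            for fcst, obs in zip(past_forecasts, past_observations):
                p += process_var            # predict: bias persists, uncertainty grows
                innovation = (fcst - obs) - bias
                gain = p / (p + obs_var)    # update: weigh the new forecast error
                bias += gain * innovation
                p *= (1.0 - gain)
            return bias

        # usage: adjusted_forecast = next_raw_forecast - kf_bias_estimate(f_hist, o_hist)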

  10. Real time material accountability in a chemical reprocessing unit

    International Nuclear Information System (INIS)

    Morrison, G.W.; Blakeman, E.D.

    1979-01-01

    Real time material accountability for a pulse column in a chemical reprocessing plant has been investigated using a simple two-state Kalman Filter. Operation of the pulse column was simulated by the SEPHIS-MOD4 code. Noisy measurements of the column inventory were obtained from two neutron detectors with various simulated counting errors. Various loss scenarios were simulated and analyzed by the Kalman Filter. In all cases considered, the Kalman Filter was a superior estimator of material loss.

  11. Graphics processing unit (GPU) real-time infrared scene generation

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  12. High performance technique for database applications using a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-28

    Many database applications, such as sequence comparing, sequence searching, and sequence matching, process large database sequences. We introduce a novel and efficient technique to improve the performance of database applications by using a Hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm. The experimental results show that our Hybrid GPU/CPU technique improves the average performance by a factor of 2.2, and improves the peak performance by a factor of 2.8 when compared to earlier implementations. Copyright © 2011 by ASME.
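
    The core idea, dispatching work by sequence length so that short sequences stay on the CPU while long ones go to the GPU, can be sketched as follows. The threshold and the cpu_align/gpu_align callables are hypothetical placeholders, not the authors' implementation.

        # Schematic dispatch for a hybrid GPU/CPU database scan: short sequences,
        # which run inefficiently on the GPU, are aligned on the CPU; long ones on
        # the GPU. Threshold and align functions are illustrative assumptions.
        from concurrent.futures import ThreadPoolExecutor

        LENGTH_THRESHOLD = 512  # assumed cut-off, tuned per platform in practice

        def hybrid_scan(query, database, cpu_align, gpu_align):
            short = [s for s in database if len(s) < LENGTH_THRESHOLD]
            long_ = [s for s in database if len(s) >= LENGTH_THRESHOLD]
            with ThreadPoolExecutor(max_workers=2) as pool:
                cpu_job = pool.submit(lambda: [cpu_align(query, s) for s in short])
                gpu_job = pool.submit(lambda: [gpu_align(query, s) for s in long_])
                return cpu_job.result() + gpu_job.result()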

  13. Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms

    Directory of Open Access Journals (Sweden)

    Valeria Cardellini

    2014-01-01

    Full Text Available We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.

  14. Turbo Charge CPU Utilization in Fork/Join Using the ManagedBlocker

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Fork/Join is a framework for parallelizing calculations using recursive decomposition, also called divide and conquer. These algorithms occasionally end up duplicating work, especially at the beginning of the run. We can reduce wasted CPU cycles by implementing a reserved caching scheme. Before a task starts its calculation, it tries to reserve an entry in the shared map. If it is successful, it immediately begins. If not, it blocks until the other thread has finished its calculation. Unfortunately this might result in a significant number of blocked threads, decreasing CPU utilization. In this talk we will demonstrate this issue and offer a solution in the form of the ManagedBlocker. Combined with the Fork/Join, it can keep parallelism at the desired level.
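
    The reserved-caching scheme the talk describes can be illustrated outside Java as well. The sketch below uses Python threading to show the idea only (the first task to request a key reserves it and computes; later tasks block until the result is published); the talk itself concerns Java's Fork/Join ManagedBlocker, which is not modelled here.

        # Conceptual sketch of reserved caching: one thread computes per key,
        # the others block until the result is available. Illustration only;
        # not the Fork/Join ManagedBlocker mechanism itself.
        import threading

        _cache = {}            # key -> (event, one-element result holder)
        _lock = threading.Lock()

        def compute_cached(key, compute):
            with _lock:
                entry = _cache.get(key)
                owner = entry is None
                if owner:                            # reservation succeeds
                    entry = (threading.Event(), [None])
                    _cache[key] = entry
            event, holder = entry
            if owner:
                holder[0] = compute(key)             # do the work exactly once
                event.set()                          # wake any blocked threads
            else:
                event.wait()                         # block until the owner finishes
            return holder[0]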

  15. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility, however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  16. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters,…). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  17. Signs of Time - Metamorphoses of Historical Former Barrack Units

    Science.gov (United States)

    Gawryluk, Dorota

    2017-10-01

    The article analyzes the aesthetic changes introduced to historic barracks in Polish cities from 1918 to the present day. The purpose of the analysis was to identify distinct historical periods and to assign to each of them the characteristic forms of intervention applied to post-military buildings. The results of the research served as the foundation for establishing three periods: 1) 1918-1939, 2) 1945-1989, 3) after 1989, each defined by a typical approach towards the modernization of barrack buildings and conditioned by Poland’s political and economic situation. Consequently, the aesthetics of modernization characteristic of each time frame were identified: the “signs of time” readable in the architecture. They refer to the periods as follows: 1) symbols of Polishness associated with regained independence: sculptures, statues, and new buildings contrasting with the barracks remaining after the partitions; 2) socialist economy: use of barracks as a construction resource and a utilitarian approach of adaptation to new civil functions, often dictated by production technology, combining historical and industrial forms; such initiatives on many occasions led to the moral degradation of former military districts; 3) market economy: construction of a new, positive identity and function for barracks buildings, confirmed by tactical changes in their architectural form.

  18. Time-Temperature Profiling of United Kingdom Consumers' Domestic Refrigerators.

    Science.gov (United States)

    Evans, Ellen W; Redmond, Elizabeth C

    2016-12-01

    Increased consumer demand for convenience and ready-to-eat food, along with changes to consumer food purchase and storage practices, have resulted in an increased reliance on refrigeration to maximize food safety. Previous research suggests that many domestic refrigerators operate at temperatures exceeding recommendations; however, the results of several studies were determined by means of one temperature data point, which, given temperature fluctuation, may not be a true indicator of actual continual operating temperatures. Data detailing actual operating temperatures and the effects of consumer practices on temperatures are limited. This study has collated the time-temperature profiles of domestic refrigerators in consumer kitchens (n = 43) over 6.5 days with concurrent self-reported refrigerator usage. Overall, the findings established a significant difference (P < 0.05) between one-off temperature (the recording of one temperature data point) and mean operating temperature. No refrigerator operated at ≤5.0°C for the entire duration of the study. Mean temperatures exceeding 5.0°C were recorded in the majority (91%) of refrigerators. No significant associations or differences were determined for temperature profiles and demographics, including household size, or refrigerator characteristics (age, type, loading, and location). A positive correlation (P < 0.05) between room temperature and refrigerator temperature was determined. Reported door opening frequency correlated with temperature fluctuation (P < 0.05). Thermometer usage was determined to be infrequent. Cumulatively, research findings have established that the majority of domestic refrigerators in consumer homes operate at potentially unsafe temperatures and that this is influenced by consumer usage. The findings from this study may be utilized to inform the development of shelf-life testing based on realistic domestic storage conditions. Furthermore, the data can inform the development of future

  19. Billing the CPU Time Used by System Components on Behalf of VMs

    OpenAIRE

    Djomgwe Teabe, Boris; Tchana, Alain-Bouzaïde; Hagimont, Daniel

    2016-01-01

    International audience; Nowadays, virtualization is present in almost all cloud infrastructures. In virtualized cloud, virtual machines (VMs) are the basis for allocating resources. A VM is launched with a fixed allocated computing capacity that should be strictly provided by the hosting system scheduler. Unfortunately, this allocated capacity is not always respected, due to mechanisms provided by the virtual machine monitoring system (also known as hypervisor). For instance, we observe that ...

  20. Billing the CPU Time Used by System Components on Behalf of VMs

    OpenAIRE

    Djomgwe Teabe, Boris; Tchana, Alain-Bouzaïde; Hagimont, Daniel

    2016-01-01

    Nowadays, virtualization is present in almost all cloud infrastructures. In virtualized cloud, virtual machines (VMs) are the basis for allocating resources. A VM is launched with a fixed allocated computing capacity that should be strictly provided by the hosting system scheduler. Unfortunately, this allocated capacity is not always respected, due to mechanisms provided by the virtual machine monitoring system (also known as hypervisor). For instance, we observe that a significant amount of ...

  1. Kozloduy NPP units 3 and 4 rest life time program execution

    International Nuclear Information System (INIS)

    Genov, S.

    2005-01-01

    In this paper the following tasks are considered: evaluation of the remaining lifetime of Kozloduy NPP units 3 and 4; the programme for assuring the units' lifetime; and renewals of units 3 and 4. The main activities of the programme implementation are described and the obtained results are presented. In conclusion, the executed activities of the programme for assuring the lifetime of units 3 and 4 of Kozloduy NPP convincingly prove that the lifetime of structures, systems and components is assured in due time, and that those structures, systems and components will remain in service safely, economically and reliably until the end of the 30-year design lifetime. For some of them this has been proved even for 35 and 40 years. Programme activities continue during 2005, although an early shutdown of units 3 and 4 is possible.

  2. Decreasing laboratory turnaround time and patient wait time by implementing process improvement methodologies in an outpatient oncology infusion unit.

    Science.gov (United States)

    Gjolaj, Lauren N; Gari, Gloria A; Olier-Pino, Angela I; Garcia, Juan D; Fernandez, Gustavo L

    2014-11-01

    Prolonged patient wait times in the outpatient oncology infusion unit indicated a need to streamline phlebotomy processes by using existing resources to decrease laboratory turnaround time and improve patient wait time. Using the DMAIC (define, measure, analyze, improve, control) method, a project to streamline phlebotomy processes within the outpatient oncology infusion unit in an academic Comprehensive Cancer Center known as the Comprehensive Treatment Unit (CTU) was completed. Laboratory turnaround time for patients who needed same-day lab and CTU services and wait time for all CTU patients was tracked for 9 weeks. During the pilot, the wait time from arrival to CTU to sitting in treatment area decreased by 17% for all patients treated in the CTU during the pilot. A total of 528 patients were seen at the CTU phlebotomy location, representing 16% of the total patients who received treatment in the CTU, with a mean turnaround time of 24 minutes compared with a baseline turnaround time of 51 minutes. Streamlining workflows and placing a phlebotomy station inside of the CTU decreased laboratory turnaround times by 53% for patients requiring same day lab and CTU services. The success of the pilot project prompted the team to make the station a permanent fixture. Copyright © 2014 by American Society of Clinical Oncology.

  3. The Magnitude and Time Course of Muscle Cross-section Decrease in Intensive Care Unit Patients

    NARCIS (Netherlands)

    Haaf, D. Ten; Hemmen, B.; Meent, H. van de; Bovend'Eerdt, T.J.H.

    2017-01-01

    OBJECTIVE: Bedriddenness and immobilization of patients at an intensive care unit may result in muscle atrophy and devaluation in quality of life. The exact effect of immobilization on intensive care unit patients is not known. The aim of this study was to investigate the magnitude and time course

  4. Changes in time and frequency related aspects of motor unit action potentials during fatigue

    NARCIS (Netherlands)

    Wallinga, W.; Bouwens, Jeroen S.; Baten, Christian T.M.

    1996-01-01

    During fatigue the shape of motor unit action potentials (MUAPs) changes. The MUAP characteristics described before concern several time-related aspects. No attention has been given to changes in the frequency spectrum of MUAPs. The median frequency of MUAPs has now been determined for motor units

  5. VMware vSphere performance designing CPU, memory, storage, and networking for performance-intensive workloads

    CERN Document Server

    Liebowitz, Matt; Spies, Rynardt

    2014-01-01

    Covering the latest VMware vSphere software, an essential book aimed at solving vSphere performance problems before they happen VMware vSphere is the industry's most widely deployed virtualization solution. However, if you improperly deploy vSphere, performance problems occur. Aimed at VMware administrators and engineers and written by a team of VMware experts, this resource provides guidance on common CPU, memory, storage, and network-related problems. Plus, step-by-step instructions walk you through techniques for solving problems and shed light on possible causes behind the problems. Divu

  6. Simulation of small-angle scattering patterns using a CPU-efficient algorithm

    Science.gov (United States)

    Anitas, E. M.

    2017-12-01

    Small-angle scattering (of neutrons, X-rays or light; SAS) is a well-established experimental technique for structural analysis of disordered systems at nano and micro scales. For complex systems, such as super-molecular assemblies or protein molecules, analytic solutions of the SAS intensity are generally not available. Thus, a frequent approach to simulating the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and pentaflakes, respectively, obtained from chaos game representation.
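
    For reference, the Debye formula behind such simulations is I(q) = Σ_i Σ_j f_i f_j sin(q·r_ij)/(q·r_ij), summed over all pairs of scatterers. The sketch below is a direct, unoptimized evaluation in Python/numpy with unit scattering lengths assumed; the DALAI algorithm cited above, and the paper's Mathematica implementation, aim precisely at reducing the cost of this pairwise sum.

        # Direct (unoptimized) Debye sum for point scatterers with unit scattering
        # lengths; illustrative only, not the DALAI algorithm itself.
        import numpy as np

        def debye_intensity(points, q_values):
            points = np.asarray(points, dtype=float)          # shape (N, dim)
            diffs = points[:, None, :] - points[None, :, :]
            r = np.linalg.norm(diffs, axis=-1)                # pairwise distances
            intensities = []
            for q in q_values:
                x = q * r
                # np.sinc(x/pi) = sin(x)/x and handles the r == 0 diagonal terms
                intensities.append(np.sinc(x / np.pi).sum())
            return np.array(intensities)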

  7. MASSIVELY PARALLEL LATENT SEMANTIC ANALYSES USING A GRAPHICS PROCESSING UNIT

    Energy Technology Data Exchange (ETDEWEB)

    Cavanagh, J.; Cui, S.

    2009-01-01

    Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using Singular Value Decomposition. However, with the ever-expanding size of datasets, current implementations are not fast enough to quickly and easily compute the results on a standard PC. A graphics processing unit (GPU) can solve some highly parallel problems much faster than a traditional sequential processor or central processing unit (CPU). Thus, a deployable system using a GPU to speed up large-scale LSA processes would be a much more effective choice (in terms of cost/performance ratio) than using a PC cluster. Due to the GPU’s application-specific architecture, harnessing the GPU’s computational prowess for LSA is a great challenge. We presented a parallel LSA implementation on the GPU, using NVIDIA® Compute Unified Device Architecture and Compute Unified Basic Linear Algebra Subprograms software. The performance of this implementation is compared to traditional LSA implementation on a CPU using an optimized Basic Linear Algebra Subprograms library. After implementation, we discovered that the GPU version of the algorithm was twice as fast for large matrices (1,000x1,000 and above) that had dimensions not divisible by 16. For large matrices that did have dimensions divisible by 16, the GPU algorithm ran five to six times faster than the CPU version. The large variation is due to architectural benefits of the GPU for matrices divisible by 16. It should be noted that the overall speeds for the CPU version did not vary from relative normal when the matrix dimensions were divisible by 16. Further research is needed in order to produce a fully implementable version of LSA. With that in mind, the research we presented shows that the GPU is a viable option for increasing the speed of LSA, in terms of cost/performance ratio.
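
    The computation being accelerated is a truncated SVD of the term-document matrix. A minimal CPU sketch, with scipy standing in for the GPU CUDA/CUBLAS kernels used in the study, might look like the following; the rank k and the tf-idf weighting are assumptions for illustration.

        # Minimal CPU sketch of the LSA step: truncated SVD of a term-document
        # matrix. The study performs the underlying linear algebra on the GPU.
        import numpy as np
        from scipy.sparse.linalg import svds

        def lsa(term_doc_matrix, k=100):
            # term_doc_matrix: (terms x documents), e.g. tf-idf weighted; k < min(shape)
            u, s, vt = svds(term_doc_matrix.astype(float), k=k)
            doc_vectors = (np.diag(s) @ vt).T   # documents in the k-dim LSA space
            return u, s, doc_vectors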

  8. Energy consumption optimization of the total-FETI solver by changing the CPU frequency

    Science.gov (United States)

    Horak, David; Riha, Lubomir; Sojka, Radim; Kruzik, Jakub; Beseda, Martin; Cermak, Martin; Schuchart, Joseph

    2017-07-01

    The energy consumption of supercomputers is one of the critical problems for the upcoming Exascale supercomputing era. Awareness of power and energy consumption is required on both the software and hardware side. This paper deals with the energy consumption evaluation of the Finite Element Tearing and Interconnect (FETI) based solvers of linear systems, which is an established method for solving real-world engineering problems. We have evaluated the effect of the CPU frequency on the energy consumption of the FETI solver using a linear elasticity 3D cube synthetic benchmark. In this problem, we have evaluated the effect of frequency tuning on the energy consumption of the essential processing kernels of the FETI method. The paper provides results for two types of frequency tuning: (1) static tuning and (2) dynamic tuning. For static tuning experiments, the frequency is set before execution and kept constant during the runtime. For dynamic tuning, the frequency is changed during the program execution to adapt the system to the actual needs of the application. The paper shows that static tuning brings up to 12% energy savings when compared to default CPU settings (the highest clock rate). Dynamic tuning improves this further by up to 3%.

  9. The “Chimera”: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform

    Directory of Open Access Journals (Sweden)

    Ra Inta

    2012-01-01

    Full Text Available The nature of modern astronomy means that a number of interesting problems exhibit a substantial computational bound and this situation is gradually worsening. Scientists, increasingly fighting for valuable resources on conventional high-performance computing (HPC) facilities—often with a limited customizable user environment—are increasingly looking to hardware acceleration solutions. We describe here a heterogeneous CPU/GPGPU/FPGA desktop computing system (the “Chimera”), built with commercial-off-the-shelf components. We show that this platform may be a viable alternative solution to many common computationally bound problems found in astronomy, although not without significant challenges. The most significant bottleneck in pipelines involving real data is most likely to be the interconnect (in this case the PCI Express bus residing on the CPU motherboard). Finally, we speculate on the merits of our Chimera system on the entire landscape of parallel computing, through the analysis of representative problems from UC Berkeley’s “Thirteen Dwarves.”

  10. Health care aides use of time in a residential long-term care unit: a time and motion study.

    Science.gov (United States)

    Mallidou, Anastasia A; Cummings, Greta G; Schalm, Corinne; Estabrooks, Carole A

    2013-09-01

    Organizational resources such as caregiver time use with older adults in residential long-term care facilities (nursing homes) have not been extensively studied, while levels of nurse staffing and staffing-mix are the focus of many publications on all types of healthcare organizations. Evidence shows that front-line caregivers' sufficient working time with residents is associated with performance, excellence, comprehensive care, quality of outcomes (e.g., reductions in pressure ulcers, urinary tract infections, and falls), quality of life, cost savings, and may be affiliated with transformation of organizational culture. To explore organizational resources in a long-term care unit within a multilevel residential facility, to measure healthcare aides' use of time with residents, and to describe working environment and unit culture. An observational pilot study was conducted in a Canadian urban 52-bed long-term care unit within a faith-based residential multilevel care facility. A convenience sample of seven healthcare aides consented to participate. To collect the data, we used an observational sheet (to monitor caregiver time use on certain activities such as personal care, assisting with eating, socializing, helping residents to be involved in therapeutic activities, paperwork, networking, personal time, and others), semi-structured interview (to assess caregiver perceptions of their working environment), and field notes (to illustrate the unit culture). Three hundred and eighty seven hours of observation were completed. The findings indicate that healthcare aides spent most of their working time (on an eight-hour day-shift) in "personal care" (52%) and in "other" activities (23%). One-to-three minute activities consumed about 35% of the time spent in personal care and 20% of time spent in assisting with eating. Overall, caregivers' time spent socializing was less than 1%, about 6% in networking, and less than 4% in paperwork. Re-organizing healthcare aides

  11. Discrete-Event Execution Alternatives on General Purpose Graphical Processing Units

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.

    2006-01-01

    Graphics cards, traditionally designed as accelerators for computer graphics, have evolved to support more general-purpose computation. General Purpose Graphical Processing Units (GPGPUs) are now being used as highly efficient, cost-effective platforms for executing certain simulation applications. While most of these applications belong to the category of time-stepped simulations, little is known about the applicability of GPGPUs to discrete event simulation (DES). Here, we identify some of the issues and challenges that the GPGPU stream-based interface raises for DES, and present some possible approaches to moving DES to GPGPUs. Initial performance results on simulation of a diffusion process show that DES-style execution on GPGPU runs faster than DES on CPU and also significantly faster than time-stepped simulations on either CPU or GPGPU.

  12. Classification of hyperspectral imagery using MapReduce on a NVIDIA graphics processing unit (Conference Presentation)

    Science.gov (United States)

    Ramirez, Andres; Rahnemoonfar, Maryam

    2017-04-01

    A hyperspectral image provides a multidimensional, data-rich representation consisting of hundreds of spectral dimensions. Analyzing the spectral and spatial information of such an image with linear and non-linear algorithms results in high computational time. To overcome this problem, this research presents a system using a MapReduce-Graphics Processing Unit (GPU) model that can help analyze a hyperspectral image through the use of parallel hardware and a parallel programming model, which is simpler to handle than other low-level parallel programming models. Additionally, Hadoop was used as an open-source version of the MapReduce parallel programming model. This research compared classification accuracy and timing results between the Hadoop and GPU systems and tested them against the following test cases: a combined CPU and GPU test case, a CPU-only test case, and a test case where no dimensionality reduction was applied.

  13. Study of Track Irregularity Time Series Calibration and Variation Pattern at Unit Section

    Directory of Open Access Journals (Sweden)

    Chaolong Jia

    2014-01-01

    Full Text Available Focusing on problems existing in track irregularity time series data quality, this paper first presents algorithms for abnormal data identification, data offset correction, local outlier identification, and noise cancellation. It then proposes track irregularity time series decomposition and reconstruction through a wavelet decomposition and reconstruction approach. Finally, the patterns and features of the track irregularity standard deviation data sequence in unit sections are studied, and the changing trend of the track irregularity time series is identified and described.
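
    The wavelet decomposition and reconstruction step can be sketched briefly. The example below uses PyWavelets; the wavelet family, decomposition level, and soft-thresholding rule are assumptions for illustration, not the authors' exact settings.

        # Sketch: decompose a track-irregularity series, suppress detail-band noise,
        # and reconstruct. Wavelet, level, and threshold are illustrative choices.
        import pywt

        def denoise_track_series(series, wavelet="db4", level=4, threshold=0.1):
            coeffs = pywt.wavedec(series, wavelet, level=level)
            # keep the approximation band, soft-threshold the detail bands
            coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft")
                                    for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)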

  14. Different contexts, different effects? Work time and mental health in the United States and Germany.

    Science.gov (United States)

    Kleiner, Sibyl; Schunck, Reinhard; Schömann, Klaus

    2015-03-01

    This paper takes a comparative approach to the topic of work time and health, asking whether weekly work hours matter for mental health. We hypothesize that these relationships differ between the United States and Germany, given the more regulated work time environment in Germany and the greater incentives to work long hours in the United States. We further hypothesize that German women will experience the greatest penalties to long hours. We use data from the German Socioeconomic Panel and the National Longitudinal Survey of Youth to examine the effects of work hours on mental health scores at midlife. The results support our initial hypothesis. In Germany, longer work time is associated with worse mental health, while in the United States, as seen in previous research, the associations are more complex. Our results do not show greater mental health penalties for German women and suggest instead a selection effect into work hours operating by gender. © American Sociological Association 2015.

  15. Comparison of the CPU and memory performance of StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA)

    International Nuclear Information System (INIS)

    Palombo, G.

    2012-01-01

    High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA). We focus on how CPU time and memory usage of the learning process scale versus data set size. As classifiers, we consider Random Forests, Boosted Decision Trees and Neural Networks only, each with specific settings. For our tests, we employ a data set widely used in the machine learning community, “Threenorm” data set, as well as data tailored for testing various edge cases. For each data set, we constantly increase its size and check CPU time and memory needed to build the classifiers implemented in SPR and TMVA. We show that SPR is often significantly faster and consumes significantly less memory. For example, the SPR implementation of Random Forest is by an order of magnitude faster and consumes an order of magnitude less memory than TMVA on Threenorm data.

  16. Deployment of 464XLAT (RFC6877) alongside IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Froy, T. S.; Traynor, D. P.; Walker, C. J.

    2017-10-01

    IPv4 is now officially deprecated by the IETF. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group on testing dual-stacked hosts and IPv6-only CPU resources. Dual-stack adds complexity and administrative overhead to sites that may already be starved of resource. This has resulted in a very slow uptake of IPv6 from WLCG sites. 464XLAT (RFC6877) is intended for IPv6 single-stack environments that require the ability to communicate with IPv4-only endpoints. This paper will present a deployment strategy for 464XLAT, operational experiences of using 464XLAT in production at a WLCG site and important information to consider prior to deploying 464XLAT.

  17. A Bit String Content Aware Chunking Strategy for Reduced CPU Energy on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    2015-01-01

    Full Text Available In order to achieve energy savings and reduce the total cost of ownership, green storage has become the first priority for the data center. Detecting and deleting redundant data are the key factors in reducing the energy consumption of the CPU, while a high-performance, stable chunking strategy provides the groundwork for detecting redundant data. Existing chunking algorithms greatly reduce system performance when confronted with big data and waste a lot of energy. Factors affecting chunking performance are analyzed and discussed in the paper, and a new fingerprint signature calculation is implemented. Furthermore, a Bit String Content Aware Chunking Strategy (BCCS) is put forward. This strategy reduces the cost of signature computation in the chunking process to improve system performance and cuts down the energy consumption of the cloud storage data center. On the basis of the relevant test scenarios and test data of this paper, the advantages of the chunking strategy are verified.
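
    The role such a chunking strategy plays can be seen in a generic content-defined chunking sketch: a running fingerprint over the byte stream declares a chunk boundary whenever its low bits are zero, so boundaries follow content rather than fixed offsets and duplicate chunks can then be detected by signature. The fingerprint below is deliberately simple and is not the BCCS scheme; the mask and chunk-size limits are illustrative.

        # Generic content-defined chunking sketch (not the paper's BCCS scheme).
        MASK = (1 << 13) - 1                 # expected average chunk size ~8 KiB
        MIN_CHUNK, MAX_CHUNK = 2048, 65536   # illustrative size limits

        def chunk_boundaries(data: bytes):
            boundaries, start, h = [], 0, 0
            for i, byte in enumerate(data):
                h = ((h << 1) + byte) & 0xFFFFFFFF    # cheap running fingerprint
                size = i - start + 1
                if size < MIN_CHUNK:
                    continue
                if (h & MASK) == 0 or size >= MAX_CHUNK:
                    boundaries.append(i + 1)          # chunk end (exclusive)
                    start, h = i + 1, 0
            if start < len(data):
                boundaries.append(len(data))
            return boundaries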

  18. Working at the Weekend: Fathers' Time with Family in the United Kingdom.

    Science.gov (United States)

    Hook, Jennifer L

    2012-08-01

    Whereas most resident fathers are able to spend more time with their children on weekends than on weekdays, many fathers work on the weekends spending less time with their children on these days. There are conflicting findings about whether fathers are able to make up for lost weekend time on weekdays. Using unique features of the United Kingdom's National Survey of Time Use 2000 (UKTUS) I examine the impact of fathers' weekend work on the time fathers spend with their children, family, and partners (N = 595 fathers). I find that weekend work is common among fathers and is associated with less time with children, families, and partners. Fathers do not recover lost time with children on weekdays, largely because weekend work is a symptom of overwork. Findings also reveal that even if fathers had compensatory time, they are unlikely to recover lost time spent as a family or couple.

  19. Detection and attribution of streamflow timing changes to climate change in the Western United States

    Science.gov (United States)

    Hidalgo, H.G.; Das, T.; Dettinger, M.D.; Cayan, D.R.; Pierce, D.W.; Barnett, T.P.; Bala, G.; Mirin, A.; Wood, A.W.; Bonfils, Celine; Santer, B.D.; Nozawa, T.

    2009-01-01

    This article applies formal detection and attribution techniques to investigate the nature of observed shifts in the timing of streamflow in the western United States. Previous studies have shown that the snow hydrology of the western United States has changed in the second half of the twentieth century. Such changes manifest themselves in the form of more rain and less snow, in reductions in the snow water contents, and in earlier snowmelt and associated advances in streamflow "center" timing (the day in the "water-year" on average when half the water-year flow at a point has passed). However, with one exception over a more limited domain, no other study has attempted to formally attribute these changes to anthropogenic increases of greenhouse gases in the atmosphere. Using the observations together with a set of global climate model simulations and a hydrologic model (applied to three major hydrological regions of the western United States: the California region, the upper Colorado River basin, and the Columbia River basin), it is found that the observed trends toward earlier "center" timing of snowmelt-driven streamflows in the western United States since 1950 are detectably different from natural variability (significant at the p analysis, and it is the only basin that showed a detectable signal when the analysis was performed on individual basins. It should be noted that although climate change is an important signal, other climatic processes have also contributed to the hydrologic variability of large basins in the western United States. © 2009 American Meteorological Society.
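
    The "center" timing statistic used above is simple to compute from a daily flow record: it is the day of the water year by which half of the total water-year flow has passed. A minimal sketch, assuming daily flows ordered from the start of the water year:

        # Day of the water year (1-indexed) by which half the annual flow has passed.
        import numpy as np

        def center_timing(daily_flows):
            flows = np.asarray(daily_flows, dtype=float)
            cumulative = np.cumsum(flows)
            half = 0.5 * cumulative[-1]
            return int(np.searchsorted(cumulative, half)) + 1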

  20. Decreasing Postanesthesia Care Unit to Floor Transfer Times to Facilitate Short Stay Total Joint Replacements.

    Science.gov (United States)

    Sibia, Udai S; Grover, Jennifer; Turcotte, Justin J; Seanger, Michelle L; England, Kimberly A; King, Jennifer L; King, Paul J

    2018-04-01

    We describe a process for studying and improving baseline postanesthesia care unit (PACU)-to-floor transfer times after total joint replacements. Quality improvement project using lean methodology. Phase I of the investigational process involved collection of baseline data. Phase II involved developing targeted solutions to improve throughput. Phase III involved measuring project sustainability. Phase I investigations revealed that patients spent an additional 62 minutes waiting in the PACU after being designated ready for transfer. Five to 16 telephone calls were needed between the PACU and the unit to facilitate each patient transfer. The most common reason for delay was unavailability of the unit nurse who was attending to another patient (58%). Phase II interventions resulted in transfer times decreasing to 13 minutes (79% reduction, P care at other institutions. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  1. The Measurement of Time: Children's Construction of Transitivity, Unit Iteration, and Conservation of Speed.

    Science.gov (United States)

    Long, Kathy; Kamii, Constance

    2001-01-01

    Interviews 120 children in kindergarten and grades 2, 4, and 6 with five Piagetian tasks to determine the grade level at which most have constructed transitive reasoning, unit iteration, and conservation of speed. Indicates that construction of the logic necessary to make sense of the measurement of time is generally not complete before sixth…

  2. Economic Conditions and the Divorce Rate: A Time-Series Analysis of the Postwar United States.

    Science.gov (United States)

    South, Scott J.

    1985-01-01

    Challenges the belief that the divorce rate rises during prosperity and falls during economic recessions. Time-series regression analysis of postwar United States reveals small but positive effects of unemployment on divorce rate. Stronger influences on divorce rates are changes in age structure and labor-force participation rate of women.…

  3. Creating Deep Time Diaries: An English/Earth Science Unit for Middle School Students

    Science.gov (United States)

    Jordan, Vicky; Barnes, Mark

    2006-01-01

    Students love a good story. That is why incorporating literary fiction that parallels teaching goals and standards can be effective. In the interdisciplinary, thematic six-week unit described in this article, the authors use the fictional book "The Deep Time Diaries," by Gary Raham, to explore topics in paleontology, Earth science, and creative…

  4. Rest life time management of Kozloduy NPP Units 3 and 4

    International Nuclear Information System (INIS)

    Vodenicharov, St.

    2002-01-01

    The radiation lifetime of the reactor pressure vessel (RPV) is the most important factor limiting the service life of the whole power unit. The main degradation mechanism of the RPV metal is neutron-induced embrittlement. Radiation ageing processes in the RPV metal lead to decreased fracture toughness and an increased probability of brittle fracture of the vessel under thermal shock. This explains the importance of RPV integrity assessment and rest life time management

  5. FAST CALCULATION OF THE LOMB-SCARGLE PERIODOGRAM USING GRAPHICS PROCESSING UNITS

    International Nuclear Information System (INIS)

    Townsend, R. H. D.

    2010-01-01

    I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced; running on a low-end GPU, the code can match eight CPU cores, and on a high-end GPU it is faster by a factor approaching 30. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte Carlo simulation of periodogram statistical properties.
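
    As a CPU reference for the quantity being accelerated, the Lomb-Scargle periodogram of an unevenly sampled series can be computed with scipy; the frequency grid below is an assumption for illustration (scipy expects angular frequencies), and this is of course not the GPU code described in the paper.

        # CPU reference: Lomb-Scargle periodogram of an unevenly sampled series.
        import numpy as np
        from scipy.signal import lombscargle

        def periodogram(times, values, n_freqs=1000):
            times = np.asarray(times, dtype=float)
            values = np.asarray(values, dtype=float)
            values = values - values.mean()
            duration = times.max() - times.min()
            # angular frequencies from ~1 cycle per record up to a rough pseudo-Nyquist
            freqs = np.linspace(2 * np.pi / duration,
                                np.pi * len(times) / duration, n_freqs)
            return freqs, lombscargle(times, values, freqs)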

  6. Breastfeeding protection, promotion, and support in the United States: a time to nudge, a time to measure.

    Science.gov (United States)

    Pérez-Escamilla, Rafael; Chapman, Donna J

    2012-05-01

    Strong evidence-based advocacy efforts have now translated into high-level political support and concrete goals for improving breastfeeding outcomes among women in the United States. In spite of this, major challenges remain for promoting, supporting, and especially protecting breastfeeding in the country. The goals of this commentary are to argue in favor of: A) changes in the default social and environmental systems that would allow women to implement their right to breastfeed their infants, and B) a multi-level and comprehensive monitoring system to measure process and outcome indicators in the country. Evidence-based commentary. Breastfeeding rates in the United States can improve based on a well-coordinated social marketing framework. This approach calls for innovative promotion through mass media, appropriate facility-based and community-based support (e.g., the Baby Friendly Hospital Initiative, WIC-coordinated community-based peer counseling), and adequate protection for working women (e.g., longer paid maternity leave, breastfeeding or breast milk extraction breaks during the working day) and women at large by adhering to and enforcing the WHO ethics Code for the Marketing of Breast Milk Substitutes. Sound infant feeding practices monitoring systems, which include WIC administrative food package data, are needed. Given the current high level of political support to improve breastfeeding in the United States, a window of opportunity has been opened. Establishing breastfeeding as the social norm in the USA will take time, but the global experience indicates that it can be done.

  7. GPScheDVS: A New Paradigm of the Autonomous CPU Speed Control for Commodity-OS-based General-Purpose Mobile Computers with a DVS-friendly Task Scheduling

    OpenAIRE

    Kim, Sookyoung

    2008-01-01

    This dissertation studies the problem of increasing battery life-time and reducing CPU heat dissipation without degrading system performance in commodity-OS-based general-purpose (GP) mobile computers using the dynamic voltage scaling (DVS) function of modern CPUs. The dissertation especially focuses on the impact of task scheduling on the effectiveness of DVS in achieving this goal. The task scheduling mechanism used in most contemporary general-purpose operating systems (GPOS) prioritizes t...

  8. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased-array radar on NVIDIA GPUs (Graphical Processing Units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from the CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through the analysis, it is demonstrated that GPGPU (General Purpose GPU) real-time processing of the array radar data is possible with relatively low-cost commercial GPUs.

  9. Design of a Message Passing Model for Use in a Heterogeneous CPU-NFP Framework for Network Analytics

    CSIR Research Space (South Africa)

    Pennefather, S

    2017-09-01

    Full Text Available of applications written in the Go programming language to be executed on a Network Flow Processor (NFP) for enhanced performance. This paper explores the need and feasibility of implementing a message passing model for data transmission between the NFP and CPU...

  10. Overtaking CPU DBMSes with a GPU in whole-query analytic processing with parallelism-friendly execution plan optimization

    NARCIS (Netherlands)

    A. Agbaria (Adnan); D. Minor (David); N. Peterfreund (Natan); E. Rozenberg (Eyal); O. Rosenberg (Ofer); Huawei Research

    2016-01-01

    Existing work on accelerating analytic DB query processing with (discrete) GPUs fails to fully realize their potential for speedup through parallelism: Published results do not achieve significant speedup over more performant CPU-only DBMSes when processing complete queries. This

  11. Fall risk as a function of time after admission to sub-acute geriatric hospital units.

    Science.gov (United States)

    Rapp, Kilian; Ravindren, Johannes; Becker, Clemens; Lindemann, Ulrich; Jaensch, Andrea; Klenk, Jochen

    2016-10-07

    There is evidence about time-dependent fracture rates in different settings and situations. Data about the underlying time-dependent fall risk patterns are lacking. The objective of the study was to analyse fall rates as a function of time after admission to sub-acute hospital units and to evaluate the time-dependent impact of clinical factors at baseline on fall risk. This retrospective cohort study used data of 5,255 patients admitted to sub-acute units in a geriatric rehabilitation clinic in Germany between 2010 and 2014. Falls, personal characteristics and functional status at admission were extracted from the hospital information system. The rehabilitation stay was divided into 3-day time intervals. The fall rate was calculated for each time interval in all patients combined and in subgroups of patients. To analyse the influence of covariates on fall risk over time, multivariate negative binomial regression models were applied for each of the 5 time intervals. The overall fall rate was 10.2 falls/1,000 person-days, with the highest fall risks during the first week and decreasing risks within the following weeks. A particularly pronounced risk pattern with high fall risks during the first days and decreasing risks thereafter was observed in men, disoriented people, and people with a low functional status or impaired cognition. In disoriented patients, for example, the fall rate decreased from 24.6 falls/1,000 person-days on days 2-4 to about 13 falls/1,000 person-days 2 weeks later. The incidence rate ratios of the baseline characteristics also changed over time. Fall risk differs considerably over time during sub-acute hospitalisation. The strongest association between time and fall risk was observed in functionally limited patients, with high risks during the first days after admission and declining risks thereafter. This should be considered in the planning and application of fall prevention measures.
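
    The per-interval modelling described above (fall counts with person-days at risk, one model per time interval after admission) can be sketched with statsmodels; the column names and the default negative binomial dispersion are illustrative assumptions, not the authors' specification.

        # Sketch: negative binomial model of fall counts per time interval, with
        # person-days as exposure. Column names are illustrative placeholders.
        import statsmodels.api as sm

        def fit_interval_models(df, predictors, interval_col="interval",
                                falls_col="falls", exposure_col="person_days"):
            results = {}
            for interval, sub in df.groupby(interval_col):
                X = sm.add_constant(sub[predictors])
                model = sm.GLM(sub[falls_col], X,
                               family=sm.families.NegativeBinomial(),
                               exposure=sub[exposure_col])
                results[interval] = model.fit()   # exp(params) gives rate ratios
            return results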

  12. Timing of union formation and partner choice in immigrant societies: The United States and Germany.

    Science.gov (United States)

    Soehl, Thomas; Yahirun, Jenjira

    2011-12-01

    As Gordon noted in his 1964 treatise on assimilation, marriage across ethnic boundaries and in particular, marriage into the mainstream is a key indicator as well as a mechanism of immigrant assimilation. Since then research has investigated numerous micro- and macro level correlates of exogamy. In this paper we focus on a topic that has received less attention thus far - how the timing of marriage is associated with partner choice. We compare the United States and Germany as two countries with significant immigrant and second-generation populations but where mainstream patterns of union formation differ. In both contexts we show that unions that cross ethnic boundaries happen later in life than those that stay within. Comparing across countries we argue that in Germany differences in the timing of union formation between the second generation and the mainstream, may pose additional barriers to intermarriage that do not exist in the United States.

  13. Development of a processor embedded timing unit for the synchronized operation in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woongryol, E-mail: wrlee@nfri.re.kr; Lee, Taegu; Hong, Jaesic

    2016-11-15

    Highlights: • Timing board for the synchronized tokamak operation. • Processor embedded distributed control system. • Single clock source and multiple trigger signal for the plasma diagnostics. • Delay compensation among the distributed timing boards. - Abstract: The Local Timing Unit (LTU) in KSTAR provides a single clock source and multiple trigger signals with flexible configuration. Over the past seven years, the LTU had a mechanical redesign and several firmware updates for the purpose of provision of a robust operation and precision timing signal. Now we have developed a third version of a local timing unit which has a standalone operation capability. The LTU is built in a cabinet mountable 1U PIZZA box and provides twelve signal output ports, a packet mirroring interface, and an LCD interface panel. The core functions of the LTU are implemented in a Field Programmable Gate Array (FPGA) which has an internal hardcore processor. The internal processor allows the use of Linux Operating System (OS) and the Experimental Physics and Industrial Control System (EPICS). All user level application functions are controllable through the EPICS, however the time critical internal functions are performed by the FPGA logic blocks same as the previous version. The new LTU provides pluggable output module so that we can easily extend the signal output port. The easy installation and effective replacement reduce the efforts of maintenance. This paper describes design, development, and commissioning results of the new KSTAR LTU.

  14. Development of a processor embedded timing unit for the synchronized operation in KSTAR

    International Nuclear Information System (INIS)

    Lee, Woongryol; Lee, Taegu; Hong, Jaesic

    2016-01-01

    Highlights: • Timing board for the synchronized tokamak operation. • Processor embedded distributed control system. • Single clock source and multiple trigger signal for the plasma diagnostics. • Delay compensation among the distributed timing boards. - Abstract: The Local Timing Unit (LTU) in KSTAR provides a single clock source and multiple trigger signals with flexible configuration. Over the past seven years, the LTU had a mechanical redesign and several firmware updates for the purpose of provision of a robust operation and precision timing signal. Now we have developed a third version of a local timing unit which has a standalone operation capability. The LTU is built in a cabinet mountable 1U PIZZA box and provides twelve signal output ports, a packet mirroring interface, and an LCD interface panel. The core functions of the LTU are implemented in a Field Programmable Gate Array (FPGA) which has an internal hardcore processor. The internal processor allows the use of Linux Operating System (OS) and the Experimental Physics and Industrial Control System (EPICS). All user level application functions are controllable through the EPICS, however the time critical internal functions are performed by the FPGA logic blocks same as the previous version. The new LTU provides pluggable output module so that we can easily extend the signal output port. The easy installation and effective replacement reduce the efforts of maintenance. This paper describes design, development, and commissioning results of the new KSTAR LTU.

  15. The influence of time units on the flexibility of the spatial numerical association of response codes effect.

    Science.gov (United States)

    Zhao, Tingting; He, Xianyou; Zhao, Xueru; Huang, Jianrui; Zhang, Wei; Wu, Shuang; Chen, Qi

    2018-05-01

    The Spatial Numerical/Temporal Association of Response Codes (SNARC/STEARC) effects are considered evidence of the association between number or time and space, respectively. Since the SNARC effect was proposed by Dehaene, Bossini, and Giraux in 1993, several studies have suggested that different tasks and cultural factors can affect the flexibility of the SNARC effect. This study explored the influence of time units on the flexibility of the SNARC effect via materials with Arabic numbers, which were suffixed with time units and subjected to magnitude comparison tasks. Experiment 1 replicated the SNARC effect for numbers and the STEARC effect for time units. Experiment 2 explored the flexibility of the SNARC effect when numbers were attached to time units, which either conflicted with the numerical magnitude or in which the time units were the same or different. Experiment 3 explored whether the SNARC effect of numbers was stable when numbers were near the transition of two adjacent time units. The results indicate that the SNARC effect was flexible when the numbers were suffixed with time units: Time units influenced the direction of the SNARC effect in a way which could not be accounted for by the mathematical differences between the time units and numbers. This suggests that the SNARC effect is not obligatory and can be easily adapted or inhibited based on the current context. © 2017 The Authors. British Journal of Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.

  16. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    Directory of Open Access Journals (Sweden)

    Yu Wang

    Full Text Available Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities that are used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between any pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graphic properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network cost 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states.
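
    On a toy scale, the per-subject pipeline described above (pairwise Pearson correlations between voxel time series, thresholding to a sparse binary graph, then graph measures) can be sketched as follows; at 58 k nodes this is exactly the computation the CPU-GPU framework accelerates, and networkx here only illustrates the steps.

        # Toy-scale sketch of the connectome pipeline: correlate, threshold, measure.
        import numpy as np
        import networkx as nx

        def build_network(time_series, threshold):
            # time_series: (n_voxels, n_timepoints)
            corr = np.corrcoef(time_series)
            np.fill_diagonal(corr, 0.0)
            return nx.from_numpy_array((np.abs(corr) >= threshold).astype(int))

        def graph_metrics(graph):
            degrees = [d for _, d in graph.degree()]
            return {"clustering": nx.average_clustering(graph),
                    "mean_degree": float(np.mean(degrees)),
                    "n_edges": graph.number_of_edges()}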

  17. Time use and physical activity in a specialised brain injury rehabilitation unit: an observational study.

    Science.gov (United States)

    Hassett, Leanne; Wong, Siobhan; Sheaves, Emma; Daher, Maysaa; Grady, Andrew; Egan, Cara; Seeto, Carol; Hosking, Talia; Moseley, Anne

    2018-04-18

    To determine the use of time and physical activity of people undertaking inpatient rehabilitation in a specialised brain injury unit, and to determine participants' level of independence related to the use of time and physical activity. Design: Cross-sectional observation study. Fourteen people [mean (SD) age 40 (15) years] with brain injuries undertaking inpatient rehabilitation. Participants were observed every 12 minutes over 5 days (Monday to Friday from 7:30 am until 7:30 pm) using a behaviour mapping tool. Observation of location, people present, body position and activity engaged in (both therapeutic and non-therapeutic). Functional Independence Measure (FIM) scores were determined for each participant. Participants spent a large part of their time alone (34%) in sedentary positions (83%) and in their bedrooms (48%) doing non-therapeutic activities (78%). There was a positive relationship between a higher level of independence (higher FIM score) and being observed in active body positions (r=0.60; p=0.03) and participating in physically active therapeutic activities (r=0.53; p=0.05). Similar to stroke units, inpatients in a specialised brain injury unit spend large parts of the day sedentary, alone and doing non-therapeutic activities. Strategies need to be evaluated to address this problem, particularly for people with greater physical dependence.

  18. Massively parallel signal processing using the graphics processing unit for real-time brain-computer interface feature extraction

    Directory of Open Access Journals (Sweden)

    J. Adam Wilson

    2009-07-01

    Full Text Available The clock speeds of modern computer processors have nearly plateaued in the past five years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card (GPU) was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally-intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a CPU-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
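
    The two offloaded stages can be sketched in plain NumPy as a serial reference (a minimal sketch under assumed array shapes, a generic Yule-Walker AR estimator, and an assumed sampling rate; these are not the CUDA kernels used in the study):

        import numpy as np

        def spatial_filter(data, weights):
            """Apply a spatial filter: each output channel is a weighted sum of inputs.
            data: (n_channels, n_samples); weights: (n_outputs, n_channels)."""
            return weights @ data

        def ar_psd(x, order=16, n_freqs=128, fs=1200.0):
            """Autoregressive (Yule-Walker) power spectral density of one channel."""
            x = x - x.mean()
            # Biased autocorrelation estimates r[0..order]
            r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
            # Solve the Yule-Walker equations R a = -r[1:]
            R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
            a = np.linalg.solve(R, -r[1:])
            sigma2 = r[0] + np.dot(a, r[1:])            # driving-noise variance
            # Evaluate the AR spectrum on a frequency grid
            freqs = np.linspace(0, fs / 2, n_freqs)
            z = np.exp(-2j * np.pi * freqs / fs)
            denom = np.abs(1 + sum(a[k] * z ** (k + 1) for k in range(order))) ** 2
            return freqs, sigma2 / denom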

  19. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    Science.gov (United States)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  20. The impact of financial and nonfinancial incentives on business-unit outcomes over time.

    Science.gov (United States)

    Peterson, Suzanne J; Luthans, Fred

    2006-01-01

    Unlike previous behavior management research, this study used a quasi-experimental, control group design to examine the impact of financial and nonfinancial incentives on business-unit (21 stores in a fast-food franchise corporation) outcomes (profit, customer service, and employee turnover) over time. The results showed that both types of incentives had a significant impact on all measured outcomes. The financial incentive initially had a greater effect on all 3 outcomes, but over time, the financial and nonfinancial incentives had an equally significant impact except in terms of employee turnover. (c) 2006 APA, all rights reserved.

  1. hybridMANTIS: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators

    Science.gov (United States)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-01

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result
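
    The load-balancing idea, a shared work queue drained by one GPU worker and several CPU workers so that the faster device naturally takes more optical showers, can be sketched as follows (illustrative Python threads with placeholder transport callables; hybridMANTIS itself implements this in compiled C/CUDA code):

        import queue
        import threading

        def balance_showers(showers, gpu_transport, cpu_transport, n_cpu_workers=4):
            """Dynamically hand batches of optical showers to whichever device is free.

            gpu_transport / cpu_transport are callables that run the optical transport
            for one batch (placeholders for the GPU kernel and the serial CPU code).
            """
            work = queue.Queue()
            for batch in showers:
                work.put(batch)

            def worker(transport):
                while True:
                    try:
                        batch = work.get_nowait()
                    except queue.Empty:
                        return
                    transport(batch)   # the faster device naturally drains more batches
                    work.task_done()

            threads = [threading.Thread(target=worker, args=(gpu_transport,))]
            threads += [threading.Thread(target=worker, args=(cpu_transport,))
                        for _ in range(n_cpu_workers)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()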

  2. Dynamic Agricultural Land Unit Profile Database Generation using Landsat Time Series Images

    Science.gov (United States)

    Torres-Rua, A. F.; McKee, M.

    2012-12-01

    Agriculture requires continuous supply of inputs to production, while providing final or intermediate outputs or products (food, forage, industrial uses, etc.). Government and other economic agents are interested in the continuity of this process and make decisions based on the available information about current conditions within the agriculture area. From a government point of view, it is important that the input-output chain in agriculture for a given area be enhanced in time, while any possible abrupt disruption be minimized or be constrained within the variation tolerance of the input-output chain. The stability of the exchange of inputs and outputs becomes even more important in disaster-affected zones, where government programs will look to restore the area to equal or enhanced social and economical conditions before the occurrence of the disaster. From an economical perspective, potential and existing input providers require up-to-date, precise information of the agriculture area to determine present and future inputs and stock amounts. From another perspective, agriculture output acquirers might want to apply their own criteria to sort out present and future providers (farmers or irrigators) based on the management done during the irrigation season. In the last 20 years geospatial information has become available for large areas in the globe, providing accurate, unbiased historical records of actual agriculture conditions at individual land units for small and large agricultural areas. This data, adequately processed and stored in any database format, can provide invaluable information for government and economic interests. Despite the availability of the geospatial imagery records, limited or no geospatial-based information about past and current farming conditions at the level of individual land units exists for many agricultural areas in the world. The absence of this information challenges the work of policy makers to evaluate previous or current

  3. 76 FR 45508 - Polyethylene Terephthalate Film, Sheet and Strip From the United Arab Emirates: Extension of Time...

    Science.gov (United States)

    2011-07-29

    ... Film, Sheet and Strip From the United Arab Emirates: Extension of Time Limit for Preliminary Results of... polyethylene terephthalate film, sheet and strip from the United Arab Emirates (UAE) for the period November 01... producer and/or exporter of the subject merchandise to the United States: JBF RAK LLC (JBF). Extension of...

  4. Containment closure time following loss of cooling under shutdown conditions of YGN units 3 and 4

    Energy Technology Data Exchange (ETDEWEB)

    Seul, Kwang Won; Bang, Young Seok; Kim, Se Won; Kim, Hho Jung [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    1998-12-31

    The YGN Units 3 and 4 plant conditions during shutdown operation were reviewed to identify the possible event scenarios following the loss of shutdown cooling. The thermal hydraulic analyses were performed for the five cases of RCS configurations under the worst event scenario, unavailable secondary cooling and no RCS inventory makeup, using the RELAP5/MOD3.2 code to investigate the plant behavior. From the analyses results, times to boil, times to core uncovery and times to core heat up were estimated to determine the containment closure time to prevent the uncontrolled release of fission products to atmosphere. These data provide useful information to the abnormal procedure to cope with the event. 6 refs., 7 figs., 2 tabs. (Author)

  5. Containment closure time following loss of cooling under shutdown conditions of YGN units 3 and 4

    International Nuclear Information System (INIS)

    Seul, Kwang Won; Bang, Young Seok; Kim, Se Won; Kim, Hho Jung

    1998-01-01

    The YGN Units 3 and 4 plant conditions during shutdown operation were reviewed to identify the possible event scenarios following the loss of shutdown cooling. The thermal hydraulic analyses were performed for the five cases of RCS configurations under the worst event scenario, unavailable secondary cooling and no RCS inventory makeup, using the RELAP5/MOD3.2 code to investigate the plant behavior. From the analyses results, times to boil, times to core uncovery and times to core heat up were estimated to determine the containment closure time to prevent the uncontrolled release of fission products to atmosphere. These data provide useful information to the abnormal procedure to cope with the event

  6. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of hybrid GPU/central processing unit (CPU) and full GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
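
    A minimal dense-matrix sketch of one common SP2 variant is shown below (serial Python/NumPy; the GPU versions described above replace the matrix products with CUBLAS DGEMM/SGEMM calls, and the exact branching rule and stopping criterion may differ from this sketch):

        import numpy as np

        def sp2_density_matrix(H, n_occ, eps_min, eps_max, tol=1e-6, max_iter=100):
            """Second-order spectral projection (SP2) purification (one common variant).

            H: symmetric Hamiltonian (or Fock) matrix; n_occ: number of occupied
            orbitals (trace target); eps_min/eps_max: bounds on the spectrum of H.
            """
            n = H.shape[0]
            # Map the spectrum of H into [0, 1] with occupied states near 1
            X = (eps_max * np.eye(n) - H) / (eps_max - eps_min)
            for _ in range(max_iter):
                X2 = X @ X                          # generalized matrix-matrix multiply
                # Choose the projection that drives Tr(X) toward the occupation count
                if abs(np.trace(X2) - n_occ) < abs(np.trace(2 * X - X2) - n_occ):
                    X_new = X2
                else:
                    X_new = 2 * X - X2
                if abs(np.trace(X_new) - n_occ) < tol:
                    return X_new
                X = X_new
            return X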

  7. Development of a Real-time Personal Dosimeter System and its Application to Hanul Unit-4

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Kidoo; Cho, Moonhyung; Son, Jungkwon [Korea Hydro Nuclear Power Co., Seoul (Korea, Republic of)

    2013-10-15

    The main reasons to adopt the system are to minimize unnecessary exposure, to calculate individual doses faster, and to provide a possible alternative to personnel such as the radiation safety manager. KHNP's Remote Radiation Monitoring System (KRMS) is integrated, less bulky and lighter compared to existing instruments, while combining the functions of real-time dosimetry and voice communication. After laboratory tests at the Central Research Institute (CRI) and field tests at Hanbit units 3 and 4, KRMS was applied to the main radiation work in Hanul unit-4. KHNP-CRI has developed a real-time personal dose monitoring system and applied it to the Hanul overhaul, which included a steam generator replacement. It took 5 days to install the system in the reactor building, and the optimal locations for the repeaters were 3 points at 122 ft and 3 points at 100 ft. Owing to the optimization of the repeaters and a high-sensitivity antenna, there was no shaded area in the wireless network and no loss of dose data in spite of workers wearing lead jackets. The average deviation between the personal dose recorded by KRMS and by the existing ADR is about 2%, indicating good agreement. The lessons learned at Hanul unit-4 are that the operating system needs to be simplified and that a function to check the battery level remotely is required.

  8. Emergency department boarding times for patients admitted to intensive care unit: Patient and organizational influences.

    Science.gov (United States)

    Montgomery, Phyllis; Godfrey, Michelle; Mossey, Sharolyn; Conlon, Michael; Bailey, Patricia

    2014-04-01

    Critically ill patients can be subject to prolonged stays in the emergency department following receipt of an order to admit to an intensive care unit. The purpose of this study was to explore patient and organizational influences on the duration of boarding times for intensive care bound patients. This exploratory descriptive study was situated in a Canadian hospital in northern Ontario. Through a six-month retrospective review of three data sources, information was collected pertaining to 16 patient and organizational variables detailing the emergency department boarding time of adults awaiting transfer to the intensive care unit. Data analysis involved descriptive and non-parametric methods. The majority of the 122 critically ill patients boarded in the ED were male, 55 years of age or older, arriving by ground ambulance on a weekday, and had an admitting diagnosis of trauma. The median boarding time was 34 min, with a range of 0-1549 min. Patients designated as most acute, intubated, and undergoing multiple diagnostic procedures had statistically significantly shorter boarding times. The study results provide a profile that may assist clinicians in understanding the complex and site-specific interplay of variables contributing to boarding of critically ill patients. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Microcontroller based resonance tracking unit for time resolved continuous wave cavity-ringdown spectroscopy measurements.

    Science.gov (United States)

    Votava, Ondrej; Mašát, Milan; Parker, Alexander E; Jain, Chaithania; Fittschen, Christa

    2012-04-01

    We present in this work new tracking servoloop electronics for continuous wave cavity-ringdown absorption spectroscopy (cw-CRDS) and its application to time resolved cw-CRDS measurements by coupling the system with a pulsed laser photolysis set-up. The tracking unit significantly increases the repetition rate of the CRDS events and thus improves the effective time resolution (and/or the signal-to-noise ratio) in kinetics studies with cw-CRDS for a given data acquisition time. The tracking servoloop uses a novel strategy to track the cavity resonances that results in fast relocking (a few ms) after the loss of tracking due to an external disturbance. The microcontroller based design is highly flexible and thus advanced tracking strategies are easy to implement by firmware modification without the need to modify the hardware. We believe that the performance of many existing cw-CRDS experiments, not only time-resolved, can be improved with such a tracking unit without any additional modification to the experiment. © 2012 American Institute of Physics.

  10. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes WCNS and HDCS that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations
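
    The CPU/GPU load-balancing idea can be illustrated with a simple block-partitioning sketch (an illustrative Python helper with hypothetical names, not the scheme implemented in HOSTA): grid blocks go to the GPU up to a share proportional to its measured throughput, capped by its smaller memory, and the remainder stays on the CPU.

        def split_blocks(block_cells, gpu_rate, cpu_rate, gpu_mem_cells):
            """Assign grid blocks to GPU or CPU (illustrative only).

            block_cells: cell count of each block; gpu_rate/cpu_rate: measured
            throughput in cells per second; gpu_mem_cells: cells that fit on the GPU.
            """
            total = sum(block_cells)
            # GPU share proportional to its relative speed, limited by its memory
            target_gpu = min(total * gpu_rate / (gpu_rate + cpu_rate), gpu_mem_cells)
            gpu_ids, cpu_ids, on_gpu = [], [], 0
            # Largest blocks first so the GPU share is reached with few transfers
            for i in sorted(range(len(block_cells)), key=lambda j: -block_cells[j]):
                if on_gpu + block_cells[i] <= target_gpu:
                    gpu_ids.append(i)
                    on_gpu += block_cells[i]
                else:
                    cpu_ids.append(i)
            return gpu_ids, cpu_ids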

  11. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes WCNS and HDCS that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  12. Peak capacity and peak capacity per unit time in capillary and microchip zone electrophoresis.

    Science.gov (United States)

    Foley, Joe P; Blackney, Donna M; Ennis, Erin J

    2017-11-10

    The origins of the peak capacity concept are described and the important contributions to the development of that concept in chromatography and electrophoresis are reviewed. Whereas numerous quantitative expressions have been reported for one- and two-dimensional separations, most are focused on chromatographic separations and few, if any, quantitative unbiased expressions have been developed for capillary or microchip zone electrophoresis. Making the common assumption that longitudinal diffusion is the predominant source of zone broadening in capillary electrophoresis, analytical expressions for the peak capacity are derived, first in terms of migration time, diffusion coefficient, migration distance, and desired resolution, and then in terms of the remaining underlying fundamental parameters (electric field, electroosmotic and electrophoretic mobilities) that determine the migration time. The latter expressions clearly illustrate the direct square root dependence of peak capacity on electric field and migration distance and the inverse square root dependence on solute diffusion coefficient. Conditions that result in a high peak capacity will result in a low peak capacity per unit time and vice-versa. For a given symmetrical range of relative electrophoretic mobilities for co- and counter-electroosmotic species (cations and anions), the peak capacity increases with the square root of the electric field even as the temporal window narrows considerably, resulting in a significant reduction in analysis time. Over a broad relative electrophoretic mobility interval [-0.9, 0.9], an approximately two-fold greater amount of peak capacity can be generated for counter-electroosmotic species although it takes about five-fold longer to do so, consistent with the well-known bias in migration time and resolving power for co- and counter-electroosmotic species. The optimum lower bound of the relative electrophoretic mobility interval [μr,Z, μr,A] that provides the maximum
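
    For orientation, the square-root dependencies summarized above already follow from the textbook diffusion-limited plate count for a single zone (an illustrative relation, not necessarily the exact expressions derived in the paper; here μ_app is the apparent mobility, L_d the migration distance to the detector, and R_s the target resolution):

        N = \frac{\mu_{app} E L_d}{2D},
        \qquad
        n_c \approx 1 + \frac{\sqrt{N}}{4 R_s}
            = 1 + \frac{1}{4 R_s} \sqrt{\frac{\mu_{app} E L_d}{2D}}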

  13. Multiple time scales in modeling the incidence of infections acquired in intensive care units

    Directory of Open Access Journals (Sweden)

    Martin Wolkewitz

    2016-09-01

    Full Text Available Abstract Background When patients are admitted to an intensive care unit (ICU), their risk of getting an infection will highly depend on the length of stay at risk in the ICU. In addition, risk of infection is likely to vary over calendar time as a result of fluctuations in the prevalence of the pathogen on the ward. Hence risk of infection is expected to depend on two time scales (time in ICU and calendar time) as well as on competing events (discharge or death) and their spatial location. The purpose of this paper is to develop and apply appropriate statistical models for the risk of ICU-acquired infection accounting for multiple time scales, competing risks and the spatial clustering of the data. Methods A multi-center database from a Spanish surveillance network was used to study the occurrence of an infection due to Methicillin-resistant Staphylococcus aureus (MRSA). The analysis included 84,843 patient admissions between January 2006 and December 2011 from 81 ICUs. Stratified Cox models were used to study multiple time scales while accounting for spatial clustering of the data (patients within ICUs) and for death or discharge as competing events for MRSA infection. Results Both time scales, time in ICU and calendar time, are highly associated with the MRSA hazard rate and cumulative risk. When using only one basic time scale, the interpretation and magnitude of several patient-individual risk factors differed. Risk factors concerning the severity of illness were more pronounced when using only calendar time. These differences disappeared when using both time scales simultaneously. Conclusions The time-dependent dynamics of infections is complex and should be studied with models allowing for multiple time scales. For patient individual risk-factors we recommend stratified Cox regression models for competing events with ICU time as the basic time scale and calendar time as a covariate. The inclusion of calendar time and stratification by ICU
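
    A minimal sketch of the recommended model, a Cox regression with ICU time as the basic time scale, calendar time as a covariate, stratification by ICU, and discharge/death handled as censoring for the cause-specific MRSA hazard, could look as follows in Python with the lifelines package (file and column names are hypothetical assumptions):

        import pandas as pd
        from lifelines import CoxPHFitter

        # One row per ICU admission (hypothetical columns):
        #   time_in_icu   - days from admission to MRSA infection, discharge or death
        #   mrsa          - 1 if MRSA infection observed, 0 otherwise (discharge and
        #                   death act as censoring for this cause-specific hazard)
        #   calendar_time - calendar time of admission, e.g. days since study start
        #   severity, age - patient-level covariates
        #   icu_id        - ICU identifier used as stratum (spatial clustering)
        df = pd.read_csv("icu_admissions.csv")

        cph = CoxPHFitter()
        cph.fit(df[["time_in_icu", "mrsa", "calendar_time", "severity", "age", "icu_id"]],
                duration_col="time_in_icu", event_col="mrsa", strata=["icu_id"])
        cph.print_summary()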

  14. Patient-care time allocation by nurse practitioners and physician assistants in the intensive care unit.

    Science.gov (United States)

    Carpenter, David L; Gregg, Sara R; Owens, Daniel S; Buchman, Timothy G; Coopersmith, Craig M

    2012-02-15

    Use of nurse practitioners and physician assistants ("affiliates") is increasing significantly in the intensive care unit (ICU). Despite this, few data exist on how affiliates allocate their time in the ICU. The purpose of this study was to understand the allocation of affiliate time into patient-care and non-patient-care activity, further dividing the time devoted to patient care into billable service and equally important but nonbillable care. We conducted a quasi experimental study in seven ICUs in an academic hospital and a hybrid academic/community hospital. After a period of self-reporting, a one-time monetary incentive of $2,500 was offered to 39 affiliates in each ICU in which every affiliate documented greater than 75% of their time devoted to patient care over a 6-month period in an effort to understand how affiliates allocated their time throughout a shift. Documentation included billable time (critical care, evaluation and management, procedures) and a new category ("zero charge time"), which facilitated record keeping of other patient-care activities. At baseline, no ICUs had documentation of 75% patient-care time by all of its affiliates. In the 6 months in which reporting was tied to a group incentive, six of seven ICUs had every affiliate document greater than 75% of their time. Individual time documentation increased from 53% to 84%. Zero-charge time accounted for an average of 21% of each shift. The most common reason was rounding, which accounted for nearly half of all zero-charge time. Sign out, chart review, and teaching were the next most common zero-charge activities. Documentation of time spent on billable activities also increased from 53% of an affiliate's shift to 63%. Time documentation was similar regardless of during which shift an affiliate worked. Approximately two thirds of an affiliate's shift is spent providing billable services to patients. Greater than 20% of each shift is spent providing equally important but not reimbursable

  15. Plant Outage Time Savings Provided by Subcritical Physics Testing at Vogtle Unit 2

    International Nuclear Information System (INIS)

    Cupp, Philip; Heibel, M.D.

    2006-01-01

    The most recent core reload design verification physics testing done at Southern Nuclear Company's (SNC) Vogtle Unit 2, performed prior to initial power operations in operating cycle 12, was successfully completed while the reactor was at least 1% ΔK/K subcritical. The testing program used was the first application of the Subcritical Physics Testing (SPT) program developed by the Westinghouse Electric Company LLC. The SPT program centers on the application of the Westinghouse Subcritical Rod Worth Measurement (SRWM) methodology that was developed in cooperation with the Vogtle Reactor Engineering staff. The SRWM methodology received U. S. Nuclear Regulatory Commission (NRC) approval in August of 2005. The first application of the SPT program occurred at Vogtle Unit 2 in October of 2005. The results of the core design verification measurements obtained during the SPT program demonstrated excellent agreement with prediction, demonstrating that the predicted core characteristics were in excellent agreement with the actual operating characteristics of the core. This paper presents an overview of the SPT Program used at Vogtle Unit 2 during operating cycle 12, and a discussion of the critical path outage time savings the SPT program is capable of providing. (authors)

  16. EMG analysis tuned for determining the timing and level of activation in different motor units.

    Science.gov (United States)

    Lee, Sabrina S M; Miara, Maria de Boef; Arnold, Allison S; Biewener, Andrew A; Wakeling, James M

    2011-08-01

    Recruitment patterns and activation dynamics of different motor units greatly influence the temporal pattern and magnitude of muscle force development, yet these features are not often considered in muscle models. The purpose of this study was to characterize the recruitment and activation dynamics of slow and fast motor units from electromyographic (EMG) recordings and twitch force profiles recorded directly from animal muscles. EMG and force data from the gastrocnemius muscles of seven goats were recorded during in vivo tendon-tap reflex and in situ nerve stimulation experiments. These experiments elicited EMG signals with significant differences in frequency content (p<0.001). The frequency content was characterized using wavelet and principal components analysis, and optimized wavelets with centre frequencies, 149.94 Hz and 323.13 Hz, were obtained. The optimized wavelets were used to calculate the EMG intensities and, with the reconstructed twitch force profiles, to derive transfer functions for slow and fast motor units that estimate the activation state of the muscle from the EMG signal. The resulting activation-deactivation time constants gave r values of 0.98-0.99 between the activation state and the force profiles. This work establishes a framework for developing improved muscle models that consider the intrinsic properties of slow and fast fibres within a mixed muscle, and that can more accurately predict muscle force output from EMG. Copyright © 2011 Elsevier Ltd. All rights reserved.
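
    A rough band-pass surrogate for the wavelet-based intensity computation might look like this in Python/SciPy (illustrative only; the study used optimized wavelets rather than Butterworth bands, and the sampling rate, bandwidths and smoothing cut-off below are assumptions):

        import numpy as np
        from scipy.signal import butter, filtfilt

        def band_intensity(emg, fs, centre_hz, rel_bandwidth=0.5, smooth_hz=10.0):
            """EMG intensity around one centre frequency (band-pass surrogate for a
            wavelet intensity; the study reports centres near 150 and 323 Hz)."""
            lo = centre_hz * (1 - rel_bandwidth / 2)
            hi = centre_hz * (1 + rel_bandwidth / 2)
            b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
            band = filtfilt(b, a, emg)
            power = band ** 2
            # Low-pass the instantaneous power to get a smooth intensity envelope
            b2, a2 = butter(2, smooth_hz, btype="lowpass", fs=fs)
            return filtfilt(b2, a2, power)

        # slow_intensity = band_intensity(emg, fs=2000.0, centre_hz=150.0)
        # fast_intensity = band_intensity(emg, fs=2000.0, centre_hz=323.0)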

  17. Containment closure time following the loss of shutdown cooling event of YGN Units 3 and 4

    International Nuclear Information System (INIS)

    Seul, Kwang Won; Bang, Young Seok; Kim, Hho Jung

    1999-01-01

    The YGN Units 3 and 4 plant conditions during shutdown operation were reviewed to identify the possible event scenarios following the loss of shutdown cooling (SDC) event. For the five cases of typical reactor coolant system (RCS) configurations under the worst event sequence, such as unavailable secondary cooling and no RCS inventory makeup, the thermal hydraulic analyses were performed using the RELAP5/MOD3.2 code to investigate the plant behavior following the event. The thermal hydraulic analyses include the estimation of time to boil, time to core uncovery, and time to core heat up to determine the containment closure time to prevent the uncontrolled release of fission products to atmosphere. The result indicates that the containment closure is recommended to be achieved within 42 minutes after the loss of SDC for the steam generator (SG) inlet plenum manway open case or the large cold leg open case under the worst event sequence. The containment closure time is significantly dependent on the elevation and size of the opening and the SG secondary water level condition. It is also found that the containment closure needs to be initiated before the boiling time to ensure the survivability of the workers in the containment. These results will provide useful information to operators to cope with the loss of SDC event. (Author). 15 refs., 3 tabs., 7 figs

  18. CPU/GPU Computing for an Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    Science.gov (United States)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.
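
    The "one-thread-one-line" mapping assigns each implicit sweep along a grid line to a single thread; the core of such a sweep is a tridiagonal (Thomas) solve, sketched here serially in Python for illustration (the paper's solver works on the coupled Navier-Stokes block system in CUDA, not on this scalar toy):

        import numpy as np

        def thomas_solve(a, b, c, d):
            """Solve a tridiagonal system along one grid line (Thomas algorithm).

            a, b, c: sub-, main- and super-diagonals (a[0] and c[-1] unused); d: RHS.
            In the one-thread-one-line mapping, each GPU thread would perform one
            such sweep for a single grid line of the ADI step.
            """
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            # Forward elimination
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            # Back substitution
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x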

  19. Referral Regions for Time-Sensitive Acute Care Conditions in the United States.

    Science.gov (United States)

    Wallace, David J; Mohan, Deepika; Angus, Derek C; Driessen, Julia R; Seymour, Christopher M; Yealy, Donald M; Roberts, Mark M; Kurland, Kristen S; Kahn, Jeremy M

    2018-03-24

    Regional, coordinated care for time-sensitive and high-risk medical conditions is a priority in the United States. A necessary precursor to coordinated regional care is regions that are actionable from clinical and policy standpoints. The Dartmouth Atlas of Health Care, the major health care referral construct in the United States, uses regions that cross state and county boundaries, limiting fiscal or political ownership by key governmental stakeholders in positions to create incentives and regulate regional care coordination. Our objective is to develop and evaluate referral regions that define care patterns for patients with acute myocardial infarction, acute stroke, or trauma, yet also preserve essential political boundaries. We developed a novel set of acute care referral regions using Medicare data in the United States from 2011. For acute myocardial infarction, acute stroke, or trauma, we iteratively aggregated counties according to patient home location and treating hospital address, using a spatial algorithm. We evaluated referral political boundary preservation and spatial accuracy for each set of referral regions. The new set of referral regions, the Pittsburgh Atlas, had 326 distinct regions. These referral regions did not cross any county or state borders, whereas 43.1% and 98.1% of all Dartmouth Atlas hospital referral regions crossed county and state borders. The Pittsburgh Atlas was comparable to the Dartmouth Atlas in measures of spatial accuracy and identified larger at-risk populations for all 3 conditions. A novel and straightforward spatial algorithm generated referral regions that were politically actionable and accountable for time-sensitive medical emergencies. Copyright © 2018 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  20. Practical Testing and Performance Analysis of Phasor Measurement Unit Using Real Time Digital Simulator (RTDS)

    DEFF Research Database (Denmark)

    Liu, Leo; Rather, Zakir Hussain; Stearn, Nathen

    2012-01-01

    Wide Area Measurement Systems (WAMS) and Wide Area Monitoring, Protection and Control Systems (WAMPACS) have evolved rapidly over the last two decades [1]. This fast emerging technology enables real time synchronized monitoring of power systems. Presently, WAMS are mainly used for real time...... visualisation and post event analysis of power systems. It is expected, however, that through integration with traditional Supervisory Control and Data Acquisition (SCADA) systems, closed loop control applications will be possible. Phasor Measurement Units (PMUs) are fundamental components of WAMS. Large WAMS...... proposed to realize highly precise phasor measurements. Further, a comparative study based on features of PMUs from different major manufacturers is presented. The selection of optimal parameters, such as phasor format and filter length, is also discussed for various applications....

  1. MASMA: a versatile multifunctional unit (gated window amplifier, analog memory, and height-to-time converter)

    International Nuclear Information System (INIS)

    Goursky, V.; Thenes, P.

    1969-01-01

    This multipurpose unit is designed to accomplish one of the following functions: gated window amplifier, analog memory, or amplitude-to-time converter. The first function is mainly intended to improve the poor resolution of pulse-height analyzers with a small number of channels. The analog memory, a new function in the standard range of plug-in modules, is capable of performing a number of operations: 1) fixed delay, or variable delay dependent on an external parameter (application to the analog processing of non-coincident pulses), 2) de-randomiser to increase the efficiency of the pulse height analysis in a spectrometry experiment, 3) linear multiplexer to allow an analyser to serve as many spectrometry devices as memory elements that it possesses. Associated with a coding scaler, this unit, if used as an amplitude-to-time converter, constitutes a Wilkinson ADC with a capability of 10 bits (or more) and with a 100 MHz clock frequency. (authors) [fr

  2. Software architecture for a multi-purpose real-time control unit for research purposes

    Science.gov (United States)

    Epple, S.; Jung, R.; Jalba, K.; Nasui, V.

    2017-05-01

    A new, freely programmable, scalable control system for academic research purposes was developed. The intention was to have a control unit capable of handling multiple PT1000 temperature sensors at reasonable accuracy and temperature range, as well as digital input signals, and providing powerful output signals. To take full advantage of the system, control loops are run in real time. The whole eight-bit system with very limited memory runs independently of a personal computer. The two on-board RS232 connectors allow further units or other equipment to be connected, as required, in real time. This paper describes the software architecture for the third prototype that now provides stable measurements and an improvement in accuracy compared to the previous designs. As a test case, a thermal solar system to produce hot tap water and assist heating in a single-family house was implemented. The solar fluid pump was power-controlled and several temperatures at different points in the hydraulic system were measured and used in the control algorithms. The software architecture proved suitable to test several different control strategies and their corresponding algorithms for the thermal solar system.
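
    The kind of fixed-period control loop described above can be sketched as follows (an illustrative Python sketch; the actual unit runs compiled firmware on an eight-bit microcontroller, and the sensor/actuator functions, channel numbers and gains here are assumptions):

        import time

        def control_loop(read_pt1000, set_pump_power, setpoint_delta=6.0, period_s=1.0):
            """Minimal fixed-period control loop for the solar-fluid pump.

            read_pt1000(channel) returns a temperature in degrees C;
            set_pump_power(percent) drives the pump. Pump power rises with the
            temperature difference between collector and tank (proportional rule).
            """
            while True:
                t_start = time.monotonic()
                t_collector = read_pt1000(0)
                t_tank = read_pt1000(1)
                error = (t_collector - t_tank) - setpoint_delta
                power = max(0.0, min(100.0, 10.0 * error))   # assumed gain of 10 %/K
                set_pump_power(power)
                # Keep the loop period constant to approximate real-time behaviour
                time.sleep(max(0.0, period_s - (time.monotonic() - t_start)))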

  3. River flood seasonality in the Northeast United States and trends in annual timing

    Science.gov (United States)

    Collins, M. J.

    2017-12-01

    The New England and Mid-Atlantic regions of the Northeast United States have experienced climate-associated increases in both the magnitude and frequency of floods. However, a detailed understanding of flood seasonality across these regions, and how flood seasonality may have changed over the instrumental record, has not been established. The annual timing of river floods reflects the flood-generating mechanisms operating in a basin and many aquatic and riparian organisms are adapted to flood seasonality, as are human uses of river channels and floodplains. Changes in flood seasonality may indicate changes in flood-generating mechanisms, and their interactions, with important implications for habitats, floodplain infrastructure, and human communities. For example, changes in spring or fall flood timing may negatively or positively affect a vulnerable life stage for a migratory fish (e.g., egg setting) depending on whether floods occur more frequently before or after the life history event. In this study I apply an objective, probabilistic method for identifying flood seasons at a monthly resolution for 90 climate-sensitive watersheds in New England and the Mid-Atlantic (Hydrologic Unit Codes 01 and 02). Historical trends in flood timing during the year are also investigated. The analyses are based on partial duration flood series that are an average of 85 years long. The seasonality of flooding in these regions, and any historical changes, are considered in the context of other ongoing or expected phenological changes in the Northeast U.S. environment that affect flood generation—e.g., the timing of leaf-off/leaf-out for deciduous plants. How these factors interact will affect whether and how flood magnitudes and frequencies change in the future and associated impacts.

  4. Comparison between dynamic programming and genetic algorithm for hydro unit economic load dispatch

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2014-10-01

    Full Text Available The hydro unit economic load dispatch (ELD) is of great importance in energy conservation and emission reduction. Dynamic programming (DP) and genetic algorithm (GA) are two representative algorithms for solving ELD problems. The goal of this study was to examine the performance of DP and GA while they were applied to ELD. We established numerical experiments to conduct performance comparisons between DP and GA with two given schemes. The schemes included comparing the CPU time of the algorithms when they had the same solution quality, and comparing the solution quality when they had the same CPU time. The numerical experiments were applied to the Three Gorges Reservoir in China, which is equipped with 26 hydro generation units. We found the relation between the performance of algorithms and the number of units through experiments. Results show that GA is adept at searching for optimal solutions in low-dimensional cases. In some cases, such as with a number of units of less than 10, GA's performance is superior to that of a coarse-grid DP. However, GA loses its superiority in high-dimensional cases. DP is powerful in obtaining stable and high-quality solutions. Its performance can be maintained even while searching over a large solution space. Nevertheless, due to its exhaustive enumerating nature, it costs excess time in low-dimensional cases.
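
    A coarse-grid DP for this kind of allocation problem can be sketched as follows (illustrative Python; unit output limits, the discretization step and the cost model are placeholders, and the study's implementation details may differ):

        import numpy as np

        def dispatch_dp(total_load, n_units, cost, step=1.0):
            """Discretized dynamic program for economic load dispatch (illustrative).

            cost(unit, p) returns the cost (e.g. water consumption) of running `unit`
            at output p. Returns the minimum total cost and the per-unit outputs.
            """
            levels = np.arange(0.0, total_load + step / 2, step)
            n = len(levels)
            best = np.full((n_units + 1, n), np.inf)
            choice = np.zeros((n_units + 1, n), dtype=int)
            best[0, 0] = 0.0
            for u in range(1, n_units + 1):
                for j in range(n):                      # load served by units 1..u
                    for k in range(j + 1):              # output of unit u = levels[k]
                        c = best[u - 1, j - k] + cost(u - 1, levels[k])
                        if c < best[u, j]:
                            best[u, j], choice[u, j] = c, k
            # Backtrack the optimal per-unit outputs
            outputs, j = [], n - 1
            for u in range(n_units, 0, -1):
                k = choice[u, j]
                outputs.append(levels[k])
                j -= k
            return best[n_units, n - 1], outputs[::-1]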

  5. Quantum processes: probability fluxes, transition probabilities in unit time and vacuum vibrations

    International Nuclear Information System (INIS)

    Oleinik, V.P.; Arepjev, Ju D.

    1989-01-01

    Transition probabilities in unit time and probability fluxes are compared in studying the elementary quantum processes -the decay of a bound state under the action of time-varying and constant electric fields. It is shown that the difference between these quantities may be considerable, and so the use of transition probabilities W instead of probability fluxes Π, in calculating the particle fluxes, may lead to serious errors. The quantity W represents the rate of change with time of the population of the energy levels relating partly to the real states and partly to the virtual ones, and it cannot be directly measured in experiment. The vacuum background is shown to be continuously distorted when a perturbation acts on a system. Because of this the viewpoint of an observer on the physical properties of real particles continuously varies with time. This fact is not taken into consideration in the conventional theory of quantum transitions based on using the notion of probability amplitude. As a result, the probability amplitudes lose their physical meaning. All the physical information on quantum dynamics of a system is contained in the mean values of physical quantities. The existence of considerable differences between the quantities W and Π permits one in principle to make a choice of the correct theory of quantum transitions on the basis of experimental data. (author)

  6. Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.

    Science.gov (United States)

    Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin

    2014-10-01

    High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and Intel Xeon Phi (MIC). These processors have made available a tremendous computing power at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still being executed on a single processor, leaving other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse-grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also experimentally show that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
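
    The core idea of performance-aware scheduling, estimating each task's finish time on every device from measured per-device throughput and picking the earliest, can be sketched as follows (illustrative Python with hypothetical data structures; this is not the scheduling policy implemented in the paper):

        def schedule_tasks(tasks, device_speed):
            """Greedy performance-aware assignment of fine-grain tasks to devices.

            tasks: iterable of (task_id, op_type, work) tuples.
            device_speed[device][op_type]: measured throughput of that operation
            on that device (e.g. 'cpu', 'gpu', 'mic').
            """
            free_at = {dev: 0.0 for dev in device_speed}
            assignment = {}
            for task_id, op_type, work in tasks:
                # Predicted finish time on each device = time it becomes free + run time
                finish = {dev: free_at[dev] + work / device_speed[dev][op_type]
                          for dev in device_speed}
                dev = min(finish, key=finish.get)
                free_at[dev] = finish[dev]
                assignment[task_id] = (dev, finish[dev])
            return assignment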

  7. Phasor Measurement Unit and Phasor Data Concentrator test with Real Time Digital Simulator

    DEFF Research Database (Denmark)

    Diakos, Konstantinos; Wu, Qiuwei; Nielsen, Arne Hejde

    2014-01-01

    The main focus of electrical engineers nowadays is to develop a smart grid that is able to monitor, evaluate and control the power system operation. The integration of Intelligent Electronic Devices (IEDs) to the power network is a strong indication of the inclination to lead the power network to a more reliable, secure and economic operation. The implementation of these devices, though, demands the warranty of secure operation and high-accuracy performance. This paper describes the procedure of establishing a PMU (Phasor Measurement Unit)–PDC (Phasor Data Concentrator) platform that is able to derive and communicate synchrophasor measurements of different parts of the power network, and the development of tests, according to IEEE standards, that evaluate the performance of PMUs and PDCs. The tests are created by using a Real Time Digital Simulation (RTDS) system. The results obtained......

  8. Real-time track-less Cherenkov ring fitting trigger system based on Graphics Processing Units

    Science.gov (United States)

    Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.

    2017-12-01

    The parallel computing power of commercial Graphics Processing Units (GPUs) is exploited to perform real-time ring fitting at the lowest trigger level using information coming from the Ring Imaging Cherenkov (RICH) detector of the NA62 experiment at CERN. To this purpose, direct GPU communication with a custom FPGA-based board has been used to reduce the data transmission latency. The GPU-based trigger system is currently integrated in the experimental setup of the RICH detector of the NA62 experiment, in order to reconstruct ring-shaped hit patterns. The ring-fitting algorithm running on GPU is fed with raw RICH data only, with no information coming from other detectors, and is able to provide more complex trigger primitives with respect to the simple photodetector hit multiplicity, resulting in a higher selection efficiency. The performance of the system for multi-ring Cherenkov online reconstruction obtained during the NA62 physics run is presented.
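
    A textbook algebraic (least-squares) circle fit conveys what a single ring fit computes (illustrative Python/NumPy using the Kasa formulation; the NA62 trigger implements its own multi-ring fitters in CUDA):

        import numpy as np

        def fit_ring(x, y):
            """Algebraic least-squares circle fit to hit positions (Kasa method).

            Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense and
            returns (xc, yc, radius).
            """
            A = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
            xc, yc = -D / 2.0, -E / 2.0
            radius = np.sqrt(xc ** 2 + yc ** 2 - F)
            return xc, yc, radius

        # xc, yc, r = fit_ring(np.asarray(hit_x, float), np.asarray(hit_y, float))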

  9. Timing of the Three Mile Island Unit 2 core degradation as determined by forensic engineering

    International Nuclear Information System (INIS)

    Henrie, J.O.

    1988-01-01

    Unlike computer simulation of an event, forensic engineering is the evaluation of recorded data and damaged as well as surviving components after an event to determine progressive causes of the event. Such an evaluation of the 1979 Three Mile Island Unit 2 accident indicates that gas began accumulating in steam generator A at 6:10, or 130 min into the accident and, therefore, fuel cladding ruptures and/or zirconium-water reactions began at that time. Zirconium oxidation/hydrogen generation rates were highest (∼70 kg of hydrogen per minute) during the core quench and collapse at 175 min. By 180 min, over 85% of the hydrogen generated by the zirconium-water reaction had been produced, and ∼400 kg of hydrogen had accumulated in the reactor coolant system. At that time, hydrogen concentrations at the steam/water interfaces in both steam generators approached 90%. By 203 min, the damaged reactor core had been reflooded and has not been uncovered since that time. Therefore, the core was completely under water at 225 min, when molten core material flowed into the lower head of the reactor vessel. 10 refs., 7 figs., 1 tab

  10. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit.

    Science.gov (United States)

    Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert

    2016-01-01

    Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using average census from previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. Proposed forecasting models resulted on average in 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. Presented methodology is easily applicable in clinical practice, can be generalized to other care settings, support short- and long-term census forecasting, and inform staff resource planning.
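
    One of the best-fitting seasonal models reported above can be reproduced in outline with statsmodels (an illustrative sketch; the file and column names are hypothetical and the model-selection and rolling-validation procedure is described in the paper):

        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Daily NICU census as a date-indexed series (hypothetical file/column names)
        census = pd.read_csv("nicu_census.csv", index_col="date", parse_dates=True)["census"]

        train = census[:"2012-12-31"]          # 2008-2012 for model development
        test = census["2013-01-01":]           # 2013 held out for validation

        # ARIMA(1,0,0)x(1,1,2) with a weekly (7-day) seasonal period
        model = SARIMAX(train, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7)).fit(disp=False)

        # 7-day-ahead forecast from the end of the training period
        print(model.forecast(steps=7))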

  11. Years of Life Gained Due to Leisure-Time Physical Activity in the United States

    Science.gov (United States)

    Janssen, Ian; Carson, Valerie; Lee, I-Min; Katzmarzyk, Peter T.; Blair, Steven N.

    2013-01-01

    Background Physical inactivity is an important modifiable risk factor for non-communicable disease. The degree to which physical activity affects the life expectancy of Americans is unknown. This study estimated the potential years of life gained due to leisure-time physical activity across the adult lifespan in the United States. Methods Data from the National Health and Nutrition Examination Survey (2007–2010), National Health Interview Survey mortality linkage (1990–2006), and US Life Tables (2006) were used to estimate and compare life expectancy at each age of adult life for inactive (no moderate-to-vigorous physical activity), somewhat active (some moderate-to-vigorous activity but <500 metabolic equivalent min/week), and active (≥500 metabolic equivalent min/week of moderate-to-vigorous activity) adults. Analyses were conducted in 2012. Results Somewhat active and active non-Hispanic white men had a life expectancy at age 20 that was around 2.4 years longer than the inactive men; this life expectancy advantage was 1.2 years at age 80. Similar observations were made in non-Hispanic white women, with a higher life expectancy within the active category of 3.0 years at age 20 and 1.6 years at age 80. In non-Hispanic black women, as many as 5.5 potential years of life were gained due to physical activity. Significant increases in longevity were also observed within somewhat active and active non-Hispanic black men; however, among Hispanics the years of life gained estimates were more variable and not significantly different from 0 years gained. Conclusions Leisure-time physical activity is associated with increases in longevity in the United States. PMID:23253646

  12. CPU0213, a novel endothelin type A and type B receptor antagonist, protects against myocardial ischemia/reperfusion injury in rats

    Directory of Open Access Journals (Sweden)

    Z.Y. Wang

    2011-11-01

    Full Text Available The efficacy of endothelin receptor antagonists in protecting against myocardial ischemia/reperfusion (I/R) injury is controversial, and the mechanisms remain unclear. The aim of this study was to investigate the effects of CPU0213, a novel endothelin type A and type B receptor antagonist, on myocardial I/R injury and to explore the mechanisms involved. Male Sprague-Dawley rats weighing 200-250 g were randomized to three groups (6-7 per group): group 1, Sham; group 2, I/R + vehicle. Rats were subjected to in vivo myocardial I/R injury by ligation of the left anterior descending coronary artery, and 0.5% sodium carboxymethyl cellulose (1 mL/kg) was injected intraperitoneally immediately prior to coronary occlusion. Group 3, I/R + CPU0213. Rats were subjected to identical surgical procedures and CPU0213 (30 mg/kg) was injected intraperitoneally immediately prior to coronary occlusion. Infarct size, cardiac function and biochemical changes were measured. CPU0213 pretreatment reduced infarct size as a percentage of the ischemic area by 44.5% (I/R + vehicle: 61.3 ± 3.2 vs I/R + CPU0213: 34.0 ± 5.5%, P < 0.05) and improved ejection fraction by 17.2% (I/R + vehicle: 58.4 ± 2.8 vs I/R + CPU0213: 68.5 ± 2.2%, P < 0.05) compared to vehicle-treated animals. This protection was associated with inhibition of myocardial inflammation and oxidative stress. Moreover, the reduction in Akt (protein kinase B) and endothelial nitric oxide synthase (eNOS) phosphorylation induced by myocardial I/R injury was limited by CPU0213 (P < 0.05). These data suggest that CPU0213, a non-selective antagonist, has protective effects against myocardial I/R injury in rats, which may be related to the Akt/eNOS pathway.

  13. Implementation of RLS-based Adaptive Filterson nVIDIA GeForce Graphics Processing Unit

    OpenAIRE

    Hirano, Akihiro; Nakayama, Kenji

    2011-01-01

    This paper presents efficient implementation of RLS-based adaptive filters with a large number of taps on nVIDIA GeForce graphics processing unit (GPU) and CUDA software development environment. Modification of the order and the combination of calculations reduces the number of accesses to slow off-chip memory. Assigning tasks into multiple threads also takes memory access order into account. For a 4096-tap case, a GPU program is almost three times faster than a CPU program.
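
    The record above benchmarks RLS-based adaptive filters on a GPU against a CPU version. For reference, a minimal NumPy sketch of the standard exponentially weighted RLS update is shown below; the tap count, forgetting factor, and toy system-identification setup are illustrative assumptions, and no GPU offloading is attempted.

    ```python
    import numpy as np

    def rls_filter(x, d, taps=64, lam=0.999, delta=100.0):
        """Plain CPU reference of the recursive least squares (RLS) adaptive filter."""
        w = np.zeros(taps)            # filter weights
        P = np.eye(taps) * delta      # inverse correlation matrix estimate
        y = np.zeros(len(x))
        for n in range(taps - 1, len(x)):
            u = x[n - taps + 1:n + 1][::-1]   # most recent input samples, newest first
            y[n] = w @ u
            e = d[n] - y[n]                   # a priori error
            Pu = P @ u
            k = Pu / (lam + u @ Pu)           # gain vector
            w = w + k * e                     # weight update
            P = (P - np.outer(k, Pu)) / lam   # inverse correlation update
        return w, y

    # Toy system identification: recover a 64-tap impulse response
    rng = np.random.default_rng(1)
    x = rng.normal(size=4096)
    h = rng.normal(size=64)
    d = np.convolve(x, h, mode="full")[:len(x)]
    w, _ = rls_filter(x, d)
    print(np.max(np.abs(w - h)))    # should shrink towards zero as the filter converges
    ```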

  14. Operation time extension for power units of the first generation NPP and the liability for potential damage

    International Nuclear Information System (INIS)

    Kovalevich, O.M.

    2000-01-01

    The problem of extending the operation time of the six operating first-generation NPP power units is discussed. However, improving the safety of these power units up to the acceptable level is not considered expedient, and therefore a contradiction arises between extending the operation time of these power units and the potential damage to the population. The possibility of increased civil-law liability for potential harm and losses in case of an accident is proposed to be considered as a compensating measure. The measures for realization of this civil-law liability are described [ru]

  15. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.

  16. Timing and locations of reef fish spawning off the southeastern United States.

    Directory of Open Access Journals (Sweden)

    Nicholas A Farmer

    Full Text Available Managed reef fish in the Atlantic Ocean of the southeastern United States (SEUS) support a multi-billion dollar industry. There is a broad interest in locating and protecting spawning fish from harvest, to enhance productivity and reduce the potential for overfishing. We assessed spatiotemporal cues for spawning for six species from four reef fish families, using data on individual spawning condition collected by over three decades of regional fishery-independent reef fish surveys, combined with a series of predictors derived from bathymetric features. We quantified the size of spawning areas used by reef fish across many years and identified several multispecies spawning locations. We quantitatively identified cues for peak spawning and generated predictive maps for Gray Triggerfish (Balistes capriscus), White Grunt (Haemulon plumierii), Red Snapper (Lutjanus campechanus), Vermilion Snapper (Rhomboplites aurorubens), Black Sea Bass (Centropristis striata), and Scamp (Mycteroperca phenax). For example, Red Snapper peak spawning was predicted in 24.7-29.0°C water prior to the new moon at locations with high curvature in the 24-30 m depth range off northeast Florida during June and July. External validation using scientific and fishery-dependent data collections strongly supported the predictive utility of our models. We identified locations where reconfiguration or expansion of existing marine protected areas would protect spawning reef fish. We recommend increased sampling off southern Florida (south of 27° N), during winter months, and in high-relief, high current habitats to improve our understanding of timing and location of reef fish spawning off the southeastern United States.

  17. Independent calculation of the monitor units and times of treatment in radiotherapy

    International Nuclear Information System (INIS)

    Mueller, Marcio Rogerio

    2005-01-01

    In this work, an independent verification system for calculations in radiotherapy was developed and applied, using the Visual Basic TM programming language. The computational program performs calculations of monitor units and treatment time, based on the algorithm of manual calculation. The calculations executed by the independent system were initially compared with the manual calculations performed by the medical physicists of the Institute of Radiotherapy of the Hospital das Clinicas da Universidade de Sao Paulo. In this step, the results found for more than two hundred fields studied were similar to those found in the literature; deviations larger than ±1% were found only in five cases involving errors in manual calculation. The application of the independent system, in this stage, could have identified errors of up to ±2.4%. Based on these data, the system was validated for use in clinical routine. In a second step, calculations were compared with calculations performed by the computerized treatment planning system CadPlan TM. Again, the results were similar to those published in other works, allowing acceptance levels to be established for the discrepancies between the calculations executed by the independent system and those produced by the planning system, separated by anatomical region, as recommended by the recent literature. For beams of 6 MV, the acceptance levels for deviations between the monitor unit calculations, separated by treatment region, were the following: breast ±1.7%; head and neck ±2%; hypophysis ±2.2%; pelvis ±4.1%; and thorax ±1.5%. For beams of 15 MV, the level suggested for pelvis was ±4.5%. (author)
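
    The record above checks independent monitor-unit (MU) calculations against planning-system values using per-region acceptance levels. A trivial sketch of that comparison is shown below; the acceptance levels are the 6 MV values quoted in the record, while the MU numbers are placeholders, and the manual-calculation algorithm itself is not reproduced.

    ```python
    # Percent deviation between an independent MU calculation and the planning system,
    # checked against a per-region acceptance level. MU values are placeholders.
    acceptance_6mv = {"breast": 1.7, "head and neck": 2.0, "hypophysis": 2.2,
                      "pelvis": 4.1, "thorax": 1.5}   # acceptance levels (%) from the record

    def check_field(region, mu_independent, mu_planning):
        deviation = (mu_independent - mu_planning) / mu_planning * 100.0
        within_tolerance = abs(deviation) <= acceptance_6mv[region]
        return deviation, within_tolerance

    print(check_field("pelvis", mu_independent=103.0, mu_planning=100.0))   # (3.0, True)
    print(check_field("thorax", mu_independent=103.0, mu_planning=100.0))   # (3.0, False)
    ```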

  18. Acoustic reverse-time migration using GPU card and POSIX thread based on the adaptive optimal finite-difference scheme and the hybrid absorbing boundary condition

    Science.gov (United States)

    Cai, Xiaohui; Liu, Yang; Ren, Zhiming

    2018-06-01

    Reverse-time migration (RTM) is a powerful tool for imaging geologically complex structures such as steep-dip and subsalt structures. However, its implementation is quite computationally expensive. Recently, as a low-cost solution, the graphic processing unit (GPU) was introduced to improve the efficiency of RTM. In this paper, we develop three strategies to improve the implementation of RTM on a GPU card. First, given the high accuracy and efficiency of the adaptive optimal finite-difference (FD) method based on least squares (LS) on the central processing unit (CPU), we study the optimal LS-based FD method on the GPU. Second, we extend the CPU-based hybrid absorbing boundary condition (ABC) to a GPU-based one by addressing two issues that arise when the former is moved to the GPU card: time-consuming execution and chaotic threads. Third, for large-scale data, a combined strategy of optimal checkpointing and efficient boundary storage is introduced to trade off memory against recomputation. To save the time of communication between host and disk, a portable operating system interface (POSIX) thread is utilized to engage another CPU core at the checkpoints. Applications of the three strategies on the GPU with the compute unified device architecture (CUDA) programming language in RTM demonstrate their efficiency and validity.

  19. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed

    2012-08-20

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  20. Accelerating VASP electronic structure calculations using graphic processing units

    KAUST Repository

    Hacene, Mohamed; Anciaux-Sedrakian, Ani; Rozanska, Xavier; Klahr, Diego; Guignon, Thomas; Fleurat-Lessard, Paul

    2012-01-01

    We present a way to improve the performance of the electronic structure Vienna Ab initio Simulation Package (VASP) program. We show that high-performance computers equipped with graphics processing units (GPUs) as accelerators may reduce drastically the computation time when offloading these sections to the graphic chips. The procedure consists of (i) profiling the performance of the code to isolate the time-consuming parts, (ii) rewriting these so that the algorithms become better-suited for the chosen graphic accelerator, and (iii) optimizing memory traffic between the host computer and the GPU accelerator. We chose to accelerate VASP with NVIDIA GPU using CUDA. We compare the GPU and original versions of VASP by evaluating the Davidson and RMM-DIIS algorithms on chemical systems of up to 1100 atoms. In these tests, the total time is reduced by a factor between 3 and 8 when running on n (CPU core + GPU) compared to n CPU cores only, without any accuracy loss. © 2012 Wiley Periodicals, Inc.

  1. [Analysis of cost and efficiency of a medical nursing unit using time-driven activity-based costing].

    Science.gov (United States)

    Lim, Ji Young; Kim, Mi Ja; Park, Chang Gi

    2011-08-01

    Time-driven activity-based costing was applied to analyze the nursing activity cost and efficiency of a medical unit. Data were collected at a medical unit of a general hospital. Nursing activities were measured using a nursing activities inventory and classified into 6 domains using the Easley-Storfjell Instrument. Descriptive statistics were used to identify general characteristics of the unit, nursing activities and activity time, and a stochastic frontier model was adopted to estimate true activity time. The average efficiency of the medical unit using theoretical resource capacity was 77%; however, the efficiency using practical resource capacity was 96%. According to these results, the portion of non-value-added time was estimated at 23% and 4%, respectively. The total nursing activity costs were estimated at 109,860,977 won in traditional activity-based costing and 84,427,126 won in time-driven activity-based costing. The difference between the two cost calculating methods was 25,433,851 won. These results indicate that time-driven activity-based costing provides useful and more realistic information about the efficiency of unit operation compared to traditional activity-based costing. Time-driven activity-based costing is therefore recommended as a performance evaluation framework for nursing departments based on cost management.
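
    The record above rests on the core time-driven activity-based costing relation, cost = capacity cost rate x activity time, with the rate derived from practical rather than theoretical capacity. A small sketch of that computation follows; every figure is an invented placeholder, not a value from the study.

    ```python
    # Time-driven activity-based costing sketch; all numbers are invented placeholders.
    nurses = 25
    paid_minutes_per_nurse = 160 * 60 * 12              # 160 paid hours/month for 12 months
    theoretical_minutes = nurses * paid_minutes_per_nurse
    practical_minutes = int(theoretical_minutes * 0.8)  # practical capacity ~80% of theoretical

    total_staff_cost = 84_000_000                       # nursing staff cost for the period (won)
    rate = total_staff_cost / practical_minutes         # capacity cost rate (won per minute)

    activity_minutes = {                                # minutes recorded per activity domain
        "direct care": 1_200_000,
        "indirect care": 500_000,
        "unit management": 150_000,
    }

    costs = {name: minutes * rate for name, minutes in activity_minutes.items()}
    used = sum(activity_minutes.values())
    print(costs)
    print(f"efficiency = {used / practical_minutes:.0%}, "
          f"non-value-added time = {1 - used / practical_minutes:.0%}")
    ```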

  2. Optimized Laplacian image sharpening algorithm based on graphic processing unit

    Science.gov (United States)

    Ma, Tinghuai; Li, Lu; Ji, Sai; Wang, Xin; Tian, Yuan; Al-Dhelaan, Abdullah; Al-Rodhaan, Mznah

    2014-12-01

    In classical Laplacian image sharpening, all pixels are processed one by one, which leads to a large amount of computation. Traditional Laplacian sharpening on the CPU is considerably time-consuming, especially for large images. In this paper, we propose a parallel implementation of Laplacian sharpening based on the Compute Unified Device Architecture (CUDA), which is a computing platform for Graphics Processing Units (GPUs), and analyze the impact of image size on performance as well as the relationship between data transfer time and parallel computing time. Further, taking advantage of the characteristics of the different memory types, an improved scheme of our method is developed, which exploits shared memory on the GPU instead of global memory and further increases the efficiency. Experimental results show that the two novel algorithms outperform the traditional sequential method based on OpenCV in terms of computing speed.
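
    The record above parallelizes Laplacian sharpening with CUDA. For reference, a compact CPU-side NumPy/SciPy sketch of the same operation (sharpened = original - Laplacian response) is given below; the 4-neighbour kernel and unit weighting are common choices assumed here, not necessarily the paper's exact settings.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def laplacian_sharpen(img):
        """Sharpen a grayscale image by subtracting its Laplacian response (CPU reference)."""
        kernel = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=float)           # 4-neighbour Laplacian
        lap = convolve(img.astype(float), kernel, mode="nearest")
        sharpened = img.astype(float) - lap                    # subtracting the Laplacian enhances edges
        return np.clip(sharpened, 0, 255).astype(np.uint8)

    # Example on a random stand-in "image"
    img = (np.random.default_rng(0).random((512, 512)) * 255).astype(np.uint8)
    out = laplacian_sharpen(img)
    ```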

  3. A "Neurological Emergency Trolley" reduces turnaround time for high-risk medications in a general intensive care unit.

    Science.gov (United States)

    Ajzenberg, Henry; Newman, Paula; Harris, Gail-Anne; Cranston, Marnie; Boyd, J Gordon

    2018-02-01

    To reduce medication turnaround times during neurological emergencies, a multidisciplinary team developed a neurological emergency crash trolley in our intensive care unit. This trolley includes phenytoin, hypertonic saline and mannitol, as well as other equipment. The aim of this study was to assess whether the trolley reduced turnaround times for these medications. In this retrospective cohort study, medication delivery times for two-year epochs before and after its implementation were compared. Eligible patients were identified from our intensive care unit screening log. Adults who required emergent use of phenytoin, hypertonic saline or mannitol while in the intensive care unit were included. Groups were compared with nonparametric analyses. The setting was a 33-bed general medical-surgical intensive care unit in an academic teaching hospital, and the primary outcome was time to medication administration. In the pre-intervention group, there were 43 patients with 66 events. In the post-intervention group, there were 45 patients with 80 events. The median medication turnaround time was significantly reduced after implementation of the neurological emergency trolley (25 vs. 10 minutes, p=0.003). There was no statistically significant difference in intensive care or 30-day survival between the two cohorts. The implementation of a novel neurological emergency crash trolley in our intensive care unit reduced medication turnaround times. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. United States Policy and The Islamic Republic of Iran: A Time For Change

    National Research Council Canada - National Science Library

    Constantine, B

    2000-01-01

    .... This paper provides current information on Iran's government, economy, military, culture, religion, political process, and presents arguments for a change in current United States Policy concerning...

  5. High-throughput sequence alignment using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Trapnell Cole

    2007-12-01

    Full Text Available Abstract Background The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for analysis of these data, but researchers will need ever faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequence technologies. Results This paper describes MUMmerGPU, an open-source high-throughput parallel pairwise local sequence alignment program that runs on commodity Graphics Processing Units (GPUs) in common workstations. MUMmerGPU uses the new Compute Unified Device Architecture (CUDA) from nVidia to align multiple query sequences against a single reference sequence stored as a suffix tree. By processing the queries in parallel on the highly parallel graphics card, MUMmerGPU achieves more than a 10-fold speedup over a serial CPU version of the sequence alignment kernel, and outperforms the exact alignment component of MUMmer on a high end CPU by 3.5-fold in total application time when aligning reads from recent sequencing projects using Solexa/Illumina, 454, and Sanger sequencing technologies. Conclusion MUMmerGPU is a low cost, ultra-fast sequence alignment program designed to handle the increasing volume of data produced by new, high-throughput sequencing technologies. MUMmerGPU demonstrates that even memory-intensive applications can run significantly faster on the relatively low-cost GPU than on the CPU.

  6. Seasonal and annual precipitation time series trend analysis in North Carolina, United States

    Science.gov (United States)

    Sayemuzzaman, Mohammad; Jha, Manoj K.

    2014-02-01

    The present study performs spatial and temporal trend analysis of the annual and seasonal time series of precipitation data from a set of 249 uniformly distributed stations across the state of North Carolina, United States, over the period 1950-2009. The Mann-Kendall (MK) test, the Theil-Sen approach (TSA) and the Sequential Mann-Kendall (SQMK) test were applied to quantify the significance of trend, magnitude of trend, and the trend shift, respectively. Regional (mountain, piedmont and coastal) precipitation trends were also analyzed using the above-mentioned tests. Prior to the application of statistical tests, the pre-whitening technique was used to eliminate the effect of autocorrelation of precipitation data series. The application of the above-mentioned procedures has shown a very notable statewide increasing trend for winter and a decreasing trend for fall precipitation. Statewide mixed (increasing/decreasing) trends have been detected in annual, spring, and summer precipitation time series. Significant trends (confidence level ≥ 95%) were detected only in 8, 7, 4, and 10 stations (out of 249) in winter, spring, summer, and fall, respectively. The magnitude of the highest increasing (decreasing) precipitation trend was about 4 mm/season (−4.50 mm/season) in the fall (summer) season. Annual precipitation trend magnitude varied between −5.50 mm/year and 9 mm/year. Regional trend analysis found increasing precipitation in mountain and coastal regions in general, except during the winter. The piedmont region was found to have increasing trends in summer and fall, but decreasing trends in winter, spring and on an annual basis. The SQMK test on "trend shift analysis" identified a significant shift during 1960-70 in most parts of the state. Finally, the comparison of winter (summer) precipitation with the North Atlantic Oscillation (Southern Oscillation) indices concluded that the variability and trend of precipitation can be explained by the
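
    The record above applies the Mann-Kendall test and the Theil-Sen slope estimator to station precipitation series. A minimal sketch of both statistics on a single synthetic annual series is given below; the normal approximation shown omits the tie correction, and the data are illustrative, not the North Carolina records.

    ```python
    import numpy as np
    from scipy import stats

    def mann_kendall(x):
        """Mann-Kendall S statistic, z-score and two-sided p-value (no tie correction)."""
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        return s, z, p

    # Synthetic annual precipitation series (mm), 1950-2009
    rng = np.random.default_rng(42)
    years = np.arange(1950, 2010)
    precip = 1100 + 2.0 * (years - 1950) + rng.normal(0, 80, len(years))

    s, z, p = mann_kendall(precip)
    slope, intercept, lo, hi = stats.theilslopes(precip, years)   # Theil-Sen trend magnitude
    print(f"MK S={s:.0f}, z={z:.2f}, p={p:.3f}, Theil-Sen slope={slope:.2f} mm/year")
    ```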

  7. Time-series Oxygen-18 Precipitation Isoscapes for Canada and the Northern United States

    Science.gov (United States)

    Delavau, Carly J.; Chun, Kwok P.; Stadnyk, Tricia A.; Birks, S. Jean; Welker, Jeffrey M.

    2014-05-01

    Understanding of the present and past hydrological cycle, from the watershed to the regional scale, can be greatly enhanced using water isotopes (δ18O and δ2H), displayed today as isoscapes. The development of water isoscapes has both hydrological and ecological applications, such as ground water recharge and food web ecology, and can provide critical information when observations are not available due to spatial and temporal gaps in sampling and data networks. This study focuses on the creation of δ18O precipitation (δ18Oppt) isoscapes at a monthly temporal frequency across Canada and the northern United States (US) utilizing CNIP (Canadian Network for Isotopes in Precipitation) and USNIP (United States Network for Isotopes in Precipitation) measurements. Multiple linear stepwise regressions of CNIP and USNIP observations alongside NARR (North American Regional Reanalysis) climatological variables, teleconnection indices, and geographic indicators are utilized to create empirical models that predict the δ18O of monthly precipitation across Canada and the northern US. Pooling information from nearby locations within a region can be useful due to the similarity of processes and mechanisms controlling the variability of δ18O. We expect similarity in the controls on isotopic composition to strengthen the correlation between δ18Oppt and predictor variables, resulting in model simulation improvements. For this reason, three different regionalization approaches are used to separate the study domain into 'isotope zones' to explore the effect of regionalization on model performance. This methodology results in 15 empirical models, five within each regionalization. A split sample calibration and validation approach is employed for model development, and parameter selection is based on demonstrated improvement of the Akaike Information Criterion (AIC). Simulation results indicate the empirical models are generally able to capture the overall monthly variability in δ18Oppt. For the three

  8. Analysis of long-time operation of micro-cogeneration unit with fuel cell

    Directory of Open Access Journals (Sweden)

    Patsch Marek

    2015-01-01

    Full Text Available Micro-cogeneration is cogeneration on a small scale, with maximal electric power up to 50 kWe. At present, small micro-cogeneration units with low electric output, about 1 kWe, are available, which are usable also in single-family houses or flats. These micro-cogeneration units operate on the principle of a conventional combustion engine, Stirling engine, steam engine or fuel cell. Micro-cogeneration units with fuel cells are a new, progressively developing type of unit for single-family houses. A fuel cell is an electrochemical device which, through an oxidation-reduction reaction, turns the chemical energy of the fuel directly into electric power; the secondary products are pure water and thermal energy. The aim of this paper is to measure and evaluate the operation parameters of a micro-cogeneration unit with a fuel cell which uses natural gas as fuel.

  9. Age and admission times as predictive factors for failure of admissions to discharge-stream short-stay units.

    Science.gov (United States)

    Shetty, Amith L; Shankar Raju, Savitha Banagar; Hermiz, Arsalan; Vaghasiya, Milan; Vukasovic, Matthew

    2015-02-01

    Discharge-stream emergency short-stay units (ESSU) improve ED and hospital efficiency. Age of patients and time of hospital presentation have been shown to correlate with increasing complexity of care. We aim to determine whether an age and time cut-off could be derived to subsequently improve short-stay unit success rates. We conducted a retrospective audit of 6703 (5522 inclusions) patients admitted to our discharge-stream short-stay unit. Patients were classified as appropriate or inappropriate admissions, and deemed successful if discharged out of the unit within 24 h, and failures if they needed inpatient admission into the hospital. We calculated short-stay unit length of stay for patients in each of these groups. A 15% failure rate was deemed an acceptable key performance indicator (KPI) for our unit. There were 197 out of 4621 (4.3%, 95% CI 3.7-4.9%) patients up to the age of 70 who failed admission to ESSU, compared with 67 out of 901 (7.4%, 95% CI 5.9-9.3%) patients older than 70. Patients over 70 years of age have higher rates of failure after admission to discharge-stream ESSU, although in appropriately selected discharge-stream patients no age group or time-band of presentation was associated with an increased failure rate beyond the stipulated KPI. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  10. Real-time Geomagnetic Data from a Raspberry Pi Magnetometer Network in the United Kingdom

    Science.gov (United States)

    Case, N.; Beggan, C.; Marple, S. R.

    2017-12-01

    In 2014, BGS and the University of Lancaster won an STFC Public Engagement grant to build and deploy 10 Raspberry Pi magnetometers to secondary schools across the UK to enable citizen science. The system uses a Raspberry Pi computer as a logging and data transfer device, connected to a set of three orthogonal miniature fluxgate magnetometers. The system has a nominal sensitivity of around 1 nanotesla (nT) in each component direction (North, East and Down). This is around twenty times less sensitive than a current scientific-level instrument, but at its relatively low cost of about £250 ($325) per unit this is an excellent price-to-performance ratio, since improving the sensitivity would require spending considerably more. The magnetic data are sampled at a 5 second cadence and sent to the AuroraWatch website at Lancaster University every 2 minutes. The data are freely available to view and download. The primary aim of the project is to encourage students aged 14-18 to look at how sensors can be used to collect geophysical data and integrate it together to give a wider understanding of physical phenomena. A second aim is to provide useful data on the spatial variation of the magnetic field for analysis of geomagnetic storms, alongside data from the BGS observatory and the University of Lancaster's SAMNET variometer network. We show results from the build, testing and running of the sensors, including some recent storms, and we reflect on our experiences in engaging schools and the general public with information about the magnetic field. The information to build the system and the logging and analysis software for the Raspberry Pi are all freely available, allowing those interested to participate in the project as citizen scientists.

  11. Unintentional falls mortality among elderly in the United States: time for action.

    Science.gov (United States)

    Alamgir, Hasanat; Muazzam, Sana; Nasrullah, Muazzam

    2012-12-01

    Fall injury is a leading cause of death and disability among older adults. The objective of this study is to identify, by age, gender, race, ethnicity and state of residence, the groups among the population aged ≥65 that are most vulnerable to unintentional fall mortality, and to report the trends in falls mortality in the United States. Using mortality data from the Centers for Disease Control and Prevention, the age-specific and age-adjusted fall mortality rates were calculated by gender, age, race, ethnicity and state of residence for a five-year period (2003-2007). Annual percentage changes in rates were calculated, and linear regression on natural-log-transformed rates was used for time-trend analysis. There were 79,386 fall fatalities (rate: 40.77 per 100,000 population) reported. The annual mortality rate varied from a low of 36.76 in 2003 to a high of 44.89 in 2007, a 22.14% increase (p=0.002 for time-related trend) during 2003-2007. The rates among whites were higher compared to blacks (43.04 vs. 18.83; p=0.01). When comparing falls mortality rates by race and gender, white males had the highest mortality rate, followed by white females. The rate was as low as 20.19 for Alabama and as high as 97.63 for New Mexico. The relative attribution of falls mortality among all unintentional injury mortality increased with age (23.19% for 65-69 years and 53.53% for 85+ years), and the proportion of falls mortality was significantly higher among females than males (46.9% vs. 40.7%: p<0.001) and among whites than blacks (45.3% vs. 24.7%: p<0.001). The burden of fall-related mortality is very high and the rate is on the rise; however, the burden and trend varied by gender, age, race and ethnicity and also by state of residence. Strategies will be more effective in reducing fall-related mortality when high risk population groups are targeted. Copyright © 2011 Elsevier Ltd. All rights reserved.
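
    The record above reports a 22.14% rise in the fall mortality rate between 2003 and 2007 and tests the trend with linear regression on natural-log rates. A short sketch of that calculation follows; the 2003 and 2007 endpoints are the values quoted in the record, while the intermediate annual rates are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    years = np.array([2003, 2004, 2005, 2006, 2007])
    # Endpoints follow the record (36.76 and 44.89 per 100,000); middle values are invented.
    rates = np.array([36.76, 38.9, 40.8, 42.7, 44.89])

    # Overall change over the period
    total_change = (rates[-1] - rates[0]) / rates[0] * 100
    print(f"total change 2003-2007 = {total_change:.2f}%")

    # Log-linear regression for the time trend; annual percent change from the slope
    slope, intercept, r, p, se = stats.linregress(years, np.log(rates))
    apc = (np.exp(slope) - 1) * 100
    print(f"annual percent change = {apc:.1f}% per year, trend p-value = {p:.3f}")
    ```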

  12. Atmospheric Nitrogen Deposition in the Western United States: Sources, Sinks and Changes over Time

    Science.gov (United States)

    Anderson, Sarah Marie

    Anthropogenic activities have greatly modified the way nitrogen moves through the atmosphere and terrestrial and aquatic environments. Excess reactive nitrogen generated through fossil fuel combustion, industrial fixation, and intensification of agriculture is not confined to anthropogenic systems but leaks into natural ecosystems with consequences including acidification, eutrophication, and biodiversity loss. A better understanding of where excess nitrogen originates and how that changes over time is crucial to identifying when, where, and to what degree environmental impacts occur. A major route into ecosystems for excess nitrogen is through atmospheric deposition. Excess nitrogen is emitted to the atmosphere where it can be transported great distances before being deposited back to the Earth's surface. Analyzing the composition of atmospheric nitrogen deposition and biological indicators that reflect deposition can provide insight into the emission sources as well as processes and atmospheric chemistry that occur during transport and what drives variation in these sources and processes. Chapter 1 provides a review and proof of concept of lichens to act as biological indicators and how their elemental and stable isotope composition can elucidate variation in amounts and emission sources of nitrogen over space and time. Information on amounts and emission sources of nitrogen deposition helps inform natural resources and land management decisions by helping to identify potentially impacted areas and causes of those impacts. Chapter 2 demonstrates that herbaria lichen specimens and field lichen samples reflect historical changes in atmospheric nitrogen deposition from urban and agricultural sources across the western United States. Nitrogen deposition increases throughout most of the 20 th century because of multiple types of emission sources until the implementation of the Clean Air Act Amendments of 1990 eventually decrease nitrogen deposition around the turn of

  13. Educational inequalities in parental care time: Cross-national evidence from Belgium, Denmark, Spain, and the United Kingdom.

    Science.gov (United States)

    Gracia, Pablo; Ghysels, Joris

    2017-03-01

    This study uses time-diary data for dual-earner couples from Belgium, Denmark, Spain, and the United Kingdom to analyze educational inequalities in parental care time in different national contexts. For mothers, education is significantly associated with parenting involvement only in Spain and the United Kingdom. In Spain these differences are largely explained by inequalities in mothers' time and monetary resources, but not in the United Kingdom, where less-educated mothers disproportionally work in short part-time jobs. For fathers, education is associated with parenting time in Denmark, and particularly in Spain, while the wife's resources substantially drive these associations. On weekends, the educational gradient in parental care time applies only to Spain and the United Kingdom, two countries with particularly large inequalities in parents' opportunities to engage in parenting. The study shows country variations in educational inequalities in parenting, suggesting that socioeconomic resources, especially from mothers, shape important variations in parenting involvement. Copyright © 2016. Published by Elsevier Inc.

  14. Laundry, energy and time: Insights from 20 years of time-use diary data in the United Kingdom

    OpenAIRE

    Anderson, Ben

    2016-01-01

    The uneven temporal distribution of domestic energy demand is a well-known phenomenon that is increasingly troublesome for energy infrastructures and sustainable or low carbon energy systems. People tend to demand energy, and especially electricity, at specific times of the day and they do not necessarily do so when the sun is shining or the wind is blowing. The potential value of demand response as a solution rests on understanding the nature of temporal energy demand and the timing of the i...

  15. An Experimental Evaluation of Real-Time DVFS Scheduling Algorithms

    OpenAIRE

    Saha, Sonal

    2011-01-01

    Dynamic voltage and frequency scaling (DVFS) is an extensively studied energy management technique, which aims to reduce the energy consumption of computing platforms by dynamically scaling the CPU frequency. Real-Time DVFS (RT-DVFS) is a branch of DVFS which reduces CPU energy consumption through DVFS while at the same time ensuring that task time constraints are satisfied by constructing appropriate real-time task schedules. The literature presents numerous RT-DVFS schedul...

  16. Porting of the transfer-matrix method for multilayer thin-film computations on graphics processing units

    Science.gov (United States)

    Limmer, Steffen; Fey, Dietmar

    2013-07-01

    Thin-film computations are often a time-consuming task during optical design. An efficient way to accelerate these computations with the help of graphics processing units (GPUs) is described. It turned out that significant speed-ups can be achieved. We investigate the circumstances under which the best speed-up values can be expected. Therefore we compare different GPUs among themselves and with a modern CPU. Furthermore, the effect of thickness modulation on the speed-up and the runtime behavior depending on the input data is examined.
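
    The record above ports the transfer-matrix method for multilayer thin films to GPUs. As a CPU point of reference, a minimal NumPy sketch of the standard characteristic-matrix formulation at normal incidence is given below; the two-layer quarter-wave stack, refractive indices and wavelength are illustrative assumptions.

    ```python
    import numpy as np

    def multilayer_reflectance(n_layers, d_layers, n_in, n_out, wavelength):
        """Reflectance of a thin-film stack at normal incidence via characteristic matrices."""
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / wavelength            # phase thickness of the layer
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
        # Amplitude reflection coefficient from the total characteristic matrix
        num = n_in * (M[0, 0] + n_out * M[0, 1]) - (M[1, 0] + n_out * M[1, 1])
        den = n_in * (M[0, 0] + n_out * M[0, 1]) + (M[1, 0] + n_out * M[1, 1])
        return abs(num / den) ** 2

    # Quarter-wave high/low-index pair on glass at 550 nm (illustrative values)
    R = multilayer_reflectance([2.35, 1.38], [550 / (4 * 2.35), 550 / (4 * 1.38)],
                               n_in=1.0, n_out=1.52, wavelength=550.0)
    print(f"reflectance = {R:.3f}")
    ```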

  17. Towards 100,000 CPU Cycle-Scavenging by Genetic Algorithms

    Science.gov (United States)

    Globus, Al; Biegel, Bryan A. (Technical Monitor)

    2001-01-01

    We examine a web-centric design using standard tools such as web servers, web browsers, PHP, and mySQL. We also consider the applicability of Information Power Grid tools such as the Globus (no relation to the author) Toolkit. We intend to implement this architecture with JavaGenes running on at least two cycle-scavengers: Condor and United Devices. JavaGenes, a genetic algorithm code written in Java, will be used to evolve multi-species reactive molecular force field parameters.

  18. Invasive treatment of NSTEMI patients in German Chest Pain Units - Evidence for a treatment paradox.

    Science.gov (United States)

    Schmidt, Frank P; Schmitt, Claus; Hochadel, Matthias; Giannitsis, Evangelos; Darius, Harald; Maier, Lars S; Schmitt, Claus; Heusch, Gerd; Voigtländer, Thomas; Mudra, Harald; Gori, Tommaso; Senges, Jochen; Münzel, Thomas

    2018-03-15

    Patients with non ST-segment elevation myocardial infarction (NSTEMI) represent the largest fraction of patients with acute coronary syndrome in German Chest Pain units. Recent evidence on early vs. selective percutaneous coronary intervention (PCI) is ambiguous with respect to effects on mortality, myocardial infarction (MI) and recurrent angina. With the present study we sought to investigate the prognostic impact of PCI and its timing in German Chest Pain Unit (CPU) NSTEMI patients. Data from 1549 patients whose leading diagnosis was NSTEMI were retrieved from the German CPU registry for the interval between 3/2010 and 3/2014. Follow-up was available at a median of 167 days after discharge. The patients were grouped into a higher (Group A) and lower risk group (Group B) according to GRACE score and additional criteria on admission. Group A had higher Killip classes, higher BNP levels, reduced EF and significantly more triple vessel disease (pGerman Chest Pain Units. This treatment paradox may worsen prognosis in patients who could derive the largest benefit from early revascularization. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  19. Thermoeconomic cost analysis of CO_2 compression and purification unit in oxy-combustion power plants

    International Nuclear Information System (INIS)

    Jin, Bo; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    Highlights: • Thermoeconomic cost analysis for the CO_2 compression and purification unit is conducted. • Exergy cost and thermoeconomic cost occur in flash separation and mixing processes. • Unit exergy costs for the flash separator and multi-stream heat exchanger are identical. • The multi-stage CO_2 compressor contributes the minimum unit exergy cost. • Thermoeconomic performance for the optimized CPU is enhanced. - Abstract: High-purity CO_2 products can be obtained from oxy-combustion power plants through a CO_2 compression and purification unit (CPU) based on a phase separation method. To identify the cost formation process and potential energy savings for the CPU, a detailed thermoeconomic cost analysis based on the structural theory of thermoeconomics is applied to an optimized CPU (with double flash separators). It is found that the largest unit exergy cost occurs in the first separation process while the multi-stage CO_2 compressor contributes the minimum unit exergy cost. In the two flash separation processes, unit exergy costs for the flash separator and multi-stream heat exchanger are identical, but their unit thermoeconomic costs differ once the monetary cost of each device is considered. The cost inefficiency occurring in the CPU derives mainly from large exergy costs and thermoeconomic costs in the flash separation and mixing processes. When compared with an unoptimized CPU, the thermoeconomic performance of the optimized CPU is enhanced, and a maximum reduction of 5.18% in thermoeconomic cost is attained. To achieve cost-effective operation, measures should be taken to improve the operation of the flash separation and mixing processes.

  20. Accelerating cardiac bidomain simulations using graphics processing units.

    Science.gov (United States)

    Neic, A; Liebmann, M; Hoetzl, E; Mitchell, L; Vigmond, E J; Haase, G; Plank, G

    2012-08-01

    Anatomically realistic and biophysically detailed multiscale computer models of the heart are playing an increasingly important role in advancing our understanding of integrated cardiac function in health and disease. Such detailed simulations, however, are computationally vastly demanding, which is a limiting factor for a wider adoption of in-silico modeling. While current trends in high-performance computing (HPC) hardware promise to alleviate this problem, exploiting the potential of such architectures remains challenging since strongly scalable algorithms are necessitated to reduce execution times. Alternatively, acceleration technologies such as graphics processing units (GPUs) are being considered. While the potential of GPUs has been demonstrated in various applications, benefits in the context of bidomain simulations where large sparse linear systems have to be solved in parallel with advanced numerical techniques are less clear. In this study, the feasibility of multi-GPU bidomain simulations is demonstrated by running strong scalability benchmarks using a state-of-the-art model of rabbit ventricles. The model is spatially discretized using the finite element methods (FEM) on fully unstructured grids. The GPU code is directly derived from a large pre-existing code, the Cardiac Arrhythmia Research Package (CARP), with very minor perturbation of the code base. Overall, bidomain simulations were sped up by a factor of 11.8 to 16.3 in benchmarks running on 6-20 GPUs compared to the same number of CPU cores. To match the fastest GPU simulation which engaged 20 GPUs, 476 CPU cores were required on a national supercomputing facility.

  1. Timing Is Everything: One Teacher's Exploration of the Best Time to Use Visual Media in a Science Unit

    Science.gov (United States)

    Drury, Debra

    2006-01-01

    Kids today are growing up with televisions, movies, videos and DVDs, so it's logical to assume that this type of media could be motivating and used to great effect in the classroom. But at what point should film and other visual media be used? Are there times in the inquiry process when showing a film or incorporating other visual media is more…

  2. Noniterative Multireference Coupled Cluster Methods on Heterogeneous CPU-GPU Systems

    Energy Technology Data Exchange (ETDEWEB)

    Bhaskaran-Nair, Kiran; Ma, Wenjing; Krishnamoorthy, Sriram; Villa, Oreste; van Dam, Hubertus JJ; Apra, Edoardo; Kowalski, Karol

    2013-04-09

    A novel parallel algorithm for non-iterative multireference coupled cluster (MRCC) theories, which merges recently introduced reference-level parallelism (RLP) [K. Bhaskaran-Nair, J.Brabec, E. Aprà, H.J.J. van Dam, J. Pittner, K. Kowalski, J. Chem. Phys. 137, 094112 (2012)] with the possibility of accelerating numerical calculations using graphics processing unit (GPU) is presented. We discuss the performance of this algorithm on the example of the MRCCSD(T) method (iterative singles and doubles and perturbative triples), where the corrections due to triples are added to the diagonal elements of the MRCCSD (iterative singles and doubles) effective Hamiltonian matrix. The performance of the combined RLP/GPU algorithm is illustrated on the example of the Brillouin-Wigner (BW) and Mukherjee (Mk) state-specific MRCCSD(T) formulations.

  3. CPU architecture for a fast and energy-saving calculation of convolution neural networks

    Science.gov (United States)

    Knoll, Florian J.; Grelcke, Michael; Czymmek, Vitali; Holtorf, Tim; Hussmann, Stephan

    2017-06-01

    One of the most difficult problems in the use of artificial neural networks is the computational capacity. Although large search engine companies own specially developed hardware to provide the necessary computing power, the conventional user is left with the state-of-the-art method, which is the use of a graphics processing unit (GPU) as the computational basis. Although these processors are well suited to large matrix computations, they consume a large amount of energy. Therefore a new processor based on a field programmable gate array (FPGA) has been developed and is optimized for the application of deep learning. This processor is presented in this paper. The processor can be adapted for a particular application (in this paper, to an organic farming application). The power consumption is only a fraction of that of a GPU implementation, and the processor should therefore be well suited to energy-saving applications.

  4. SUPERFUND TREATABILITY CLEARINGHOUSE: FINAL REPORT: ON-SITE INCINERATION OF SHIRCO INFRARED SYSTEMS PORTABLE PILOT TEST UNIT, TIMES BEACH, MISSOURI

    Science.gov (United States)

    During the period of July 8 - July 12, 1985, the Shirco Infrared Systems Portable Pilot Test Unit was in operation at the Times Beach Dioxin Research Facility to demonstrate the capability of Shirco's infrared technology to decontaminate silty soil laden with 2,3,7,8-tetrachlorod...

  5. Grammatical Planning Units during Real-Time Sentence Production in Speakers with Agrammatic Aphasia and Healthy Speakers

    Science.gov (United States)

    Lee, Jiyeon; Yoshida, Masaya; Thompson, Cynthia K.

    2015-01-01

    Purpose: Grammatical encoding (GE) is impaired in agrammatic aphasia; however, the nature of such deficits remains unclear. We examined grammatical planning units during real-time sentence production in speakers with agrammatic aphasia and control speakers, testing two competing models of GE. We queried whether speakers with agrammatic aphasia…

  6. The timing of marriage vis-à-vis coresidence and childbearing in Europe and the United States

    NARCIS (Netherlands)

    J.A. Holland (Jennifer)

    2017-01-01

    OBJECTIVE These descriptive findings extend Holland's (2013) marriage typology by linking the timing of marriage, childbearing, and cohabitation, and apply it to a range of European countries and the United States. The meaning of marriage is organized around six ideal types: Direct

  7. Productive Large Scale Personal Computing: Fast Multipole Methods on GPU/CPU Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — To be used naturally in design optimization, parametric study and achieve quick total time-to-solution, simulation must naturally and personally be available to the...

  8. Research on control law accelerator of digital signal process chip TMS320F28035 for real-time data acquisition and processing

    Science.gov (United States)

    Zhao, Shuangle; Zhang, Xueyi; Sun, Shengli; Wang, Xudong

    2017-08-01

    The TI C2000 series of digital signal processing (DSP) chips has been widely used in electrical engineering, measurement and control, communications and other professional fields, and the TMS320F28035 is one of the most representative of them. When programming the DSP, both data acquisition and data processing are needed; if ordinary C or assembly language programming is used, the program runs sequentially, the analogue-to-digital (AD) converter cannot acquire data in real time, and a lot of data are often missed. The control law accelerator (CLA) coprocessor can run in parallel with the main central processing unit (CPU), operates at the same frequency as the main CPU, and supports floating-point operations. Therefore, the CLA coprocessor is used in the program: the CLA kernel is responsible for data processing, while the main CPU is responsible for the AD conversion. The advantage of this method is that it reduces the data processing time and achieves real-time data acquisition.

  9. Prospective Trial of House Staff Time to Response and Intervention in a Surgical Intensive Care Unit: Pager vs. Smartphone.

    Science.gov (United States)

    Tatum, James M; White, Terris; Kang, Christopher; Ley, Eric J; Melo, Nicolas; Bloom, Matthew; Alban, Rodrigo F

    The objective of the study was to characterize house staff time to response and intervention when notified of a patient care issue by pager vs. smartphone. We hypothesized that smartphones would reduce house staff time to response and intervention. A prospective study of all electronic communications between nurses and house staff was conducted between September 2015 and October 2015. The 4-week study period was randomly divided into two 2-week study periods where all electronic communications between intensive care unit nurses and intensive care unit house staff were exclusively by smartphone or by pager, respectively. Time of communication initiation, time of house staff response, and time from response to clinical intervention for each communication were recorded. Outcomes were time from nurse contact to house staff response and intervention. The setting was the single-center surgical intensive care unit of Cedars-Sinai Medical Center in Los Angeles, California, an academic tertiary care and level I trauma center. All electronic communications occurring between nurses and house staff in the study unit during the study period were considered. During the study period, 205 nurse-house staff electronic communications occurred, 100 in the phone group and 105 in the pager group. House staff response time was significantly shorter in the phone group (0.5 [interquartile range = 1.7] vs. 2 [3] min). Time to house staff intervention after response was also significantly more rapid in the phone group (0.8 [1.7] vs. 1 [2] min, p = 0.003). Dedicated clinical smartphones significantly decrease time to house staff response after electronic nursing communications compared with pagers. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  10. Graphics Processing Unit Enhanced Parallel Document Flocking Clustering

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; ST Charles, Jesse Lee [ORNL

    2010-01-01

    Analyzing and clustering documents is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. One limitation of this method of document clustering is its complexity O(n^2). As the number of documents grows, it becomes increasingly difficult to generate results in a reasonable amount of time. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. In this paper, we have conducted research to exploit this architecture and apply its strengths to the flocking based document clustering problem. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GEFORCE GPU. Performance gains ranged from thirty-six to nearly sixty times improvement of the GPU over the CPU implementation.

  11. Testing for seasonal unit roots in monthly panels of time series

    NARCIS (Netherlands)

    R.M. Kunst (Robert); Ph.H.B.F. Franses (Philip Hans)

    2009-01-01

    We consider the problem of testing for seasonal unit roots in monthly panel data. To this aim, we generalize the quarterly CHEGY test to the monthly case. This parametric test is contrasted with a new nonparametric test, which is the panel counterpart to the univariate RURS test that

  12. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use over Time

    Science.gov (United States)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    2013-01-01

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semi-fixed multi-word units (MWUs), which comprise fixed parts with the potential…

  13. Second Language Learners' Contiguous and Discontiguous Multi-Word Unit Use Over Time

    NARCIS (Netherlands)

    Yuldashev, Aziz; Fernandez, Julieta; Thorne, Steven L.

    Research has described the key role of formulaic language use in both written and spoken communication (Schmitt, 2004; Wray, 2002), as well as in relation to L2 learning (Ellis, Simpson-Vlach, & Maynard, 2008). Relatively few studies have examined related fixed and semifixed multi-word units (MWUs),

  14. The United Nations, Peace, and Higher Education: Pedagogic Interventions in Neoliberal Times

    Science.gov (United States)

    Kester, Kevin

    2017-01-01

    Peace and conflict studies (PACS) education in recent decades has become a popular approach to social justice learning in higher education institutions (Harris, Fisk, and Rank 1998; Smith 2007; Carstarphen et al. 2010; Bajaj and Hantzopoulos 2016) and has been provided legitimacy through a number of different United Nations (UN) declarations…

  15. CLIM : A cross-level workload-aware timing error prediction model for functional units

    NARCIS (Netherlands)

    Jiao, Xun; Rahimi, Abbas; Jiang, Yu; Wang, Jianguo; Fatemi, Hamed; De Gyvez, Jose Pineda; Gupta, Rajesh K.

    2018-01-01

    Timing errors, which are caused by timing violations of sensitized circuit paths, have emerged as an important threat to the reliability of synchronous digital circuits. To protect circuits from these timing errors, designers typically use a conservative timing margin, which leads to operational

  16. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    International Nuclear Information System (INIS)

    He, Qingyun; Chen, Hongli; Feng, Jingchao

    2015-01-01

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, which was validated by three basic benchmarks in rectangular and round ducts. • Parallelized CPU and GPU acceleration were compared against a single-core CPU for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared, and the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using the CUDA technology. The solver is developed on the open source code OpenFOAM and is based on a consistent and conservative scheme suitable for simulating MHD flow under strong magnetic fields in fusion liquid metal blankets with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow) and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  17. Acceleration of the OpenFOAM-based MHD solver using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    He, Qingyun; Chen, Hongli, E-mail: hlchen1@ustc.edu.cn; Feng, Jingchao

    2015-12-15

    Highlights: • A 3D PISO-MHD solver was implemented on Kepler-class graphics processing units (GPUs) using CUDA technology. • A consistent and conservative scheme is used in the code, which was validated by three basic benchmarks in rectangular and round ducts. • Parallelized CPU and GPU acceleration were compared against a single-core CPU for MHD and non-MHD problems. • Different preconditioners for the MHD solver were compared, and the results showed that the AMG method is better for these calculations. - Abstract: The pressure-implicit with splitting of operators (PISO) magnetohydrodynamics (MHD) solver for the coupled Navier–Stokes and Maxwell equations was implemented on Kepler-class graphics processing units (GPUs) using the CUDA technology. The solver is developed on the open source code OpenFOAM and is based on a consistent and conservative scheme suitable for simulating MHD flow under strong magnetic fields in fusion liquid metal blankets with structured or unstructured meshes. We verified the validity of the implementation on several standard cases, including benchmark I (the Shercliff and Hunt cases), benchmark II (fully developed circular pipe MHD flow) and benchmark III (the KIT experimental case). Computational performance of the GPU implementation was examined by comparing its double precision run times with those of essentially the same algorithms and meshes on the CPU. The results showed that a GPU (GTX 770) can outperform a server-class 4-core, 8-thread CPU (Intel Core i7-4770k) by a factor of at least 2.

  18. MonetDB/X100 - A DBMS in the CPU cache

    NARCIS (Netherlands)

    M. Zukowski (Marcin); P.A. Boncz (Peter); N.J. Nes (Niels); S. Héman (Sándor)

    2005-01-01

    X100 is a new execution engine for the MonetDB system that improves execution speed and overcomes its main-memory limitation. It introduces the concept of in-cache vectorized processing, which strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and

  19. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease

    NARCIS (Netherlands)

    Shamonin, D.P.; Bron, E.E.; Lelieveldt, B.P.F.; Smits, M.; Klein, S.; Staring, M.

    2014-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial.

  20. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease

    NARCIS (Netherlands)

    D.P. Shamonin (Denis); E.E. Bron (Esther); B.P.F. Lelieveldt (Boudewijn); M. Smits (Marion); S. Klein (Stefan); M. Staring (Marius)

    2014-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be

  1. The influence of time perspective on cervical cancer screening among Latinas in the United States.

    Science.gov (United States)

    Roncancio, Angelica M; Ward, Kristy K; Fernandez, Maria E

    2014-12-01

    To develop effective interventions to increase cervical cancer screening among Latinas, we should understand the role of cultural factors, such as time perspective, in the decision to be screened. We examined the relation between present time orientation, future time orientation, and self-reported cervical cancer screening among Latinas. A group of 206 Latinas completed a survey measuring factors associated with screening. Logistic regression analyses revealed that future time orientation was significantly associated with self-reported screening. Understanding the influence of time orientation on cervical cancer screening will assist us in developing interventions that effectively target time perspective and screening. © The Author(s) 2013.

  2. Academic Outcome Measures of a Dedicated Education Unit Over Time: Help or Hinder?

    Science.gov (United States)

    Smyer, Tish; Gatlin, Tricia; Tan, Rhigel; Tejada, Marianne; Feng, Du

    2015-01-01

    Critical thinking, nursing process, quality and safety measures, and standardized RN exit examination scores were compared between students (n = 144) placed in a dedicated education unit (DEU) and those in a traditional clinical model. Standardized test scores showed that differences between the clinical groups were not statistically significant. This study shows that the DEU model is one approach to clinical education that can enhance students' academic outcomes.

  3. Multi-CPU plasma fluid turbulence calculations on a CRAY Y-MP C90

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Leboeuf, J.N.; Curtis, B.C.; Troutman, R.L.

    1993-01-01

    Significant improvements in real-time efficiency have been obtained for plasma fluid turbulence calculations by microtasking the nonlinear fluid code KITE in which they are implemented on the CRAY Y-MP C90 at the National Energy Research Supercomputer Center (NERSC). The number of processors accessed concurrently scales linearly with problem size. Close to six concurrent processors have so far been obtained with a three-dimensional nonlinear production calculation at the currently allowed memory size of 80 Mword. With a calculation size corresponding to the maximum allowed memory of 200 Mword in the next system configuration, they expect to be able to access close to ten processors of the C90 concurrently with a commensurate improvement in real-time efficiency. These improvements in performance are comparable to those expected from a massively parallel implementation of the same calculations on the Intel Paragon

  4. Multi-CPU plasma fluid turbulence calculations on a CRAY Y-MP C90

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Leboeuf, J.N.; Curtis, B.C.; Troutman, R.L.

    1993-01-01

    Significant improvements in real-time efficiency have been obtained for plasma fluid turbulence calculations by microtasking the nonlinear fluid code KITE in which they are implemented on the CRAY Y-MP C90 at the National Energy Research Supercomputer Center (NERSC). The number of processors accessed concurrently scales linearly with problem size. Close to six concurrent processors have so far been obtained with a three-dimensional nonlinear production calculation at the currently allowed memory size of 80 Mword. With a calculation size corresponding to the maximum allowed memory of 200 Mword in the next system configuration, we expect to be able to access close to nine processors of the C90 concurrently with a commensurate improvement in real-time efficiency. These improvements in performance are comparable to those expected from a massively parallel implementation of the same calculations on the Intel Paragon

  5. Working on the Weekend: Fathers' Time with Family in the United Kingdom

    Science.gov (United States)

    Hook, Jennifer L.

    2012-01-01

    Whereas most resident fathers are able to spend more time with their children on weekends than on weekdays, many fathers work on the weekends, spending less time with their children on these days. There are conflicting findings about whether fathers are able to make up for lost weekend time on weekdays. Using unique features of the United…

  6. Maximizing the retention level for proportional reinsurance under α-regulation of the finite time surplus process with unit-equalized interarrival time

    Directory of Open Access Journals (Sweden)

    Sukanya Somprom

    2016-07-01

    Full Text Available The research focuses on an insurance model controlled by proportional reinsurance in the finite-time surplus process with a unit-equalized time interval. We prove the existence of the maximal retention level for independent and identically distributed claim processes under α-regulation, i.e., a model where the insurance company has to manage the probability of insolvency to be at most α. In addition, we illustrate the maximal retention level for exponential claims by applying the bisection technique.

  7. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    Science.gov (United States)

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution times of GPU and CPU implementations for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for an integer and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full grid search CPU-based motion estimation methods, namely the implementation of the pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though the computational complexity of the FS GPU implementation is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.
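
    To make the full-search SAD criterion above concrete, the following is a minimal single-threaded sketch of block matching on an integer search grid (NumPy); it is not the paper's CUDA code, and the block size, search radius and test frames are invented for illustration.

```python
import numpy as np

def full_search_sad(ref_block, target_frame, top_left, search_radius):
    """Exhaustively search a (2r+1)x(2r+1) window around `top_left` in
    `target_frame` and return the displacement minimizing the SAD."""
    h, w = ref_block.shape
    y0, x0 = top_left
    best_sad, best_disp = float("inf"), (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > target_frame.shape[0] or x + w > target_frame.shape[1]:
                continue  # candidate block falls outside the frame
            cand = target_frame[y:y + h, x:x + w]
            sad = int(np.abs(ref_block.astype(np.int32) - cand.astype(np.int32)).sum())
            if sad < best_sad:
                best_sad, best_disp = sad, (dy, dx)
    return best_disp, best_sad

# Toy usage: track a 16x16 block between two frames with a known shift.
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 256, (480, 720), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))   # known displacement (2, -3)
block = frame0[100:116, 200:216]
print(full_search_sad(block, frame1, (100, 200), search_radius=8))
```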

  8. Heterogeneous real-time computing in radio astronomy

    Science.gov (United States)

    Ford, John M.; Demorest, Paul; Ransom, Scott

    2010-07-01

    Modern computer architectures suited for general purpose computing are often not the best choice for either I/O-bound or compute-bound problems. Sometimes the best choice is not to choose a single architecture, but to take advantage of the best characteristics of different computer architectures to solve your problems. This paper examines the tradeoffs between using computer systems based on the ubiquitous X86 Central Processing Units (CPUs), Field Programmable Gate Array (FPGA) based signal processors, and Graphical Processing Units (GPUs). We will show how a heterogeneous system can be produced that blends the best of each of these technologies into a real-time signal processing system. FPGAs tightly coupled to analog-to-digital converters connect the instrument to the telescope and supply the first level of computing to the system. These FPGAs are coupled to other FPGAs to continue to provide highly efficient processing power. Data is then packaged up and shipped over fast networks to a cluster of general purpose computers equipped with GPUs, which are used for floating-point intensive computation. Finally, the data is handled by the CPU and written to disk, or further processed. Each of the elements in the system has been chosen for its specific characteristics and the role it can play in creating a system that does the most for the least, in terms of power, space, and money.

  9. Analytical Call Center Model with Voice Response Unit and Wrap-Up Time

    Directory of Open Access Journals (Sweden)

    Petr Hampl

    2015-01-01

    Full Text Available The last twenty years of computer integration have significantly changed the process of service in call center service systems. The basic building modules of classical call centers – a switching system and a group of human agents – were extended with other special modules, such as a skills-based routing module, an automatic call distribution module, an interactive voice response module and others, to minimize the customer waiting time and wage costs. A calling customer of a modern call center is served in the first stage by the interactive voice response module without any human interaction. If the customer requirements are not satisfied in the first stage, the service continues to the second stage, realized by the group of human agents. The service time of the second stage – the average handle time – is divided into a conversation time and a wrap-up time. During the conversation time the agent answers customer questions and collects the customer's requirements, and during the wrap-up time (administrative time) the agent completes the task without any customer interaction. The analytical model presented in this contribution is solved under the condition of statistical equilibrium and takes into account the interactive voice response module service time, the conversation time and the wrap-up time.
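
    The record describes the model only qualitatively; as a rough illustration of how the wrap-up time enters the agents' load, the sketch below treats the second (agent) stage as an M/M/c queue whose service time is conversation time plus wrap-up time and evaluates the Erlang C waiting probability. The arrival rate, service times and number of agents are invented, and the paper's actual model (which also includes the IVR stage) is not reproduced.

```python
import math

def erlang_c(offered_load, agents):
    """Probability that an arriving call must wait (Erlang C); load in Erlangs."""
    a, c = offered_load, agents
    num = (a ** c / math.factorial(c)) * (c / (c - a))
    den = sum(a ** k / math.factorial(k) for k in range(c)) + num
    return num / den

# Assumed, illustrative figures: 120 calls/hour reach the agent stage,
# mean conversation time 3 min, mean wrap-up (administrative) time 1 min.
lam = 120 / 60.0                       # calls per minute
aht = 3.0 + 1.0                        # average handle time = conversation + wrap-up
load = lam * aht                       # offered load in Erlangs
agents = 10
p_wait = erlang_c(load, agents)
mean_wait = p_wait * aht / (agents - load)   # M/M/c mean waiting time
print(f"offered load = {load:.1f} Erlangs, P(wait) = {p_wait:.3f}, "
      f"mean wait = {mean_wait:.2f} min")
```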

  10. Dynamic modelling of a 3-CPU parallel robot via screw theory

    Directory of Open Access Journals (Sweden)

    L. Carbonari

    2013-04-01

    Full Text Available The article describes the dynamic modelling of I.Ca.Ro., a novel Cartesian parallel robot recently designed and prototyped by the robotics research group of the Polytechnic University of Marche. By means of screw theory and the virtual work principle, a computationally efficient model has been built, with the final aim of realising advanced model-based controllers. A dynamic analysis has then been performed in order to point out possible model simplifications that could lead to a more efficient run-time implementation.

  11. The Full-Time Workweek in the United States, 1900-1970

    Science.gov (United States)

    Kniesner, Thomas J.

    1976-01-01

    The average workweek of full-time workers declined by 35 percent between 1900 and 1940, but it has not changed significantly since then, and the secular rigidity of the full-time workweek remains. An expanded model, which incorporates the effects of growth in education and in the female wage, explains the post-1940 secular trend. (Editor/HD)

  12. Economic and Sociological Correlates of Suicides: Multilevel Analysis of the Time Series Data in the United Kingdom.

    Science.gov (United States)

    Sun, Bruce Qiang; Zhang, Jie

    2016-03-01

    On the effects of social integration on suicides, there have been different and even contradictory conclusions. In this study, selected economic and social risk factors of suicide for different age groups and genders in the United Kingdom were identified and their effects were estimated by multilevel time series analyses. To our knowledge, no previous studies have estimated a dynamic model of suicides on time series data combining multilevel analysis with autoregressive distributed lags. The investigation indicated that the unemployment rate, inflation rate, and divorce rate are all significantly and positively related to the national suicide rates in the United Kingdom from 1981 to 2011. Furthermore, the suicide rates of almost all groups above 40 years are significantly associated with the risk factors of unemployment and inflation rate, in comparison with the younger groups. © 2016 American Academy of Forensic Sciences.

  13. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    Energy Technology Data Exchange (ETDEWEB)

    Arbanas, G.; Dunn, M.E.; Wiarda, D., E-mail: arbanasg@ornl.gov, E-mail: dunnme@ornl.gov, E-mail: wiardada@ornl.gov [Oak Ridge National Laboratory, Oak Ridge, TN (United States)

    2011-07-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)
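
    The speed-up described above comes from delegating the dominant matrix-matrix product to a vendor-tuned BLAS. The sketch below shows the same idea at a much smaller scale: NumPy's `@` operator dispatches to whatever optimized BLAS it was built against (e.g., MKL or OpenBLAS), and is compared with a naive triple-nested loop on a small block. The matrix sizes are scaled down from the 16,000×20,000 case mentioned in the record.

```python
import time
import numpy as np

# Scaled-down stand-in for the covariance-matrix product described above.
m, k, n = 1600, 2000, 1600
A = np.random.default_rng(1).standard_normal((m, k))
B = np.random.default_rng(2).standard_normal((k, n))

# Naive triple-nested loop (only on a tiny block; the full product would
# take far too long in pure Python).
t0 = time.perf_counter()
C_naive = np.zeros((50, 50))
for i in range(50):
    for j in range(50):
        for p in range(k):
            C_naive[i, j] += A[i, p] * B[p, j]
t_naive = time.perf_counter() - t0

# BLAS-backed product over the full matrices.
t0 = time.perf_counter()
C_blas = A @ B
t_blas = time.perf_counter() - t0

assert np.allclose(C_naive, C_blas[:50, :50])
print(f"naive 50x50 block: {t_naive:.3f}s, BLAS full {m}x{n}: {t_blas:.3f}s")
```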

  14. Computation of large covariance matrices by SAMMY on graphical processing units and multicore CPUs

    International Nuclear Information System (INIS)

    Arbanas, G.; Dunn, M.E.; Wiarda, D.

    2011-01-01

    Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Library to compute the most time-consuming step. The 235U RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using Intel's Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000×20,000 that had previously taken days took approximately one minute on the GPU. Comparable performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms. (author)

  15. Exponential-time constitutive law for Palo Duro Unit 4 Salt from the J. Friemel No. 1 Well

    International Nuclear Information System (INIS)

    Senseny, P.E.; Pfeifle, T.W.; Mellegard, K.D.

    1986-07-01

    Values for the nine parameters in the exponential-time constitutive law are presented for Palo Duro Unit 4 salt. The values given for the thermal expansion and two elastic parameters are taken from previous laboratory studies. The six remaining constitutive parameters are evaluated by analyzing data from 12 triaxial compression creep tests. The specimens tested in this study are from the J. Friemel No. 1 well in Deaf Smith County, Texas. 15 refs., 15 figs., 4 tabs

  16. Architectural design proposal for real time clock for wireless microcontroller unit

    Science.gov (United States)

    Alias, Muhammad Nor Azwan Mohd; Nizam Mohyar, Shaiful

    2017-11-01

    In this project, we are developing an intellectual property (IP), a dedicated real-time clock (RTC) system for a wireless microcontroller. This IP is developed using the Verilog Hardware Description Language (Verilog HDL) and is simulated using Quartus II and Synopsys software. This RTC will be used in a microcontroller system to provide the precise time and date needed by various applications. It plays a very important role in real-time systems such as digital clocks, attendance systems, digital cameras and more.

  17. Architectural design proposal for real time clock for wireless microcontroller unit

    Directory of Open Access Journals (Sweden)

    Mohd Alias Muhammad Nor Azwan

    2017-01-01

    Full Text Available In this project, we are developing an intellectual property (IP), a dedicated real-time clock (RTC) system for a wireless microcontroller. This IP is developed using the Verilog Hardware Description Language (Verilog HDL) and is simulated using Quartus II and Synopsys software. This RTC will be used in a microcontroller system to provide the precise time and date needed by various applications. It plays a very important role in real-time systems such as digital clocks, attendance systems, digital cameras and more.

  18. Law-based arguments and messages to advocate for later school start time policies in the United States.

    Science.gov (United States)

    Lee, Clark J; Nolan, Dennis M; Lockley, Steven W; Pattison, Brent

    2017-12-01

    The increasing scientific evidence that early school start times are harmful to the health and safety of teenagers has generated much recent debate about changing school start times policies for adolescent students. Although efforts to promote and implement such changes have proliferated in the United States in recent years, they have rarely been supported by law-based arguments and messages that leverage the existing legal infrastructure regulating public education and child welfare in the United States. Furthermore, the legal bases to support or resist such changes have not been explored in detail to date. This article provides an overview of how law-based arguments and messages can be constructed and applied to advocate for later school start time policies in US public secondary schools. The legal infrastructure impacting school start time policies in the United States is briefly reviewed, including descriptions of how government regulates education, what legal obligations school officials have concerning their students' welfare, and what laws and public policies currently exist that address adolescent sleep health and safety. On the basis of this legal infrastructure, some hypothetical examples of law-based arguments and messages that could be applied to various types of advocacy activities (eg, litigation, legislative and administrative advocacy, media and public outreach) to promote later school start times are discussed. Particular consideration is given to hypothetical arguments and messages aimed at emphasizing the consistency of later school start time policies with existing child welfare law and practices, legal responsibilities of school officials and governmental authorities, and societal values and norms. Copyright © 2017 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.

  19. Adulteration and Counterfeiting of Online Nutraceutical Formulations in the United States: Time for Intervention?

    Science.gov (United States)

    Nounou, Mohamed Ismail; Ko, Yamin; Helal, Nada A; Boltz, Jeremy F

    2017-10-11

    Global prevalence of nutraceuticals is noticeably high. The American market is flooded with nutraceuticals claiming to be of natural origin and sold with a therapeutic claim by major online retail stores such as Amazon and eBay. The objective of this commentary is to highlight the possible problems of online-sold nutraceuticals in the United States with respect to claims, adulterants, and safety. Furthermore, there is a lack of strict regulatory laws governing the sales, manufacturing, marketing, and label claims of nutraceutical formulations currently sold in the U.S. market. Major online retail stores and Internet pharmacies aid the widespread sale of nutraceuticals. Finally, according to the literature, many of these products were found to be either counterfeit or adulterated with active pharmaceutical ingredients (APIs) and mislabeled as being safe and natural. Therefore, regulatory authorities along with the research community should intervene to draw attention to these products and their possible effects.

  20. An Integrated Pipeline of Open Source Software Adapted for Multi-CPU Architectures: Use in the Large-Scale Identification of Single Nucleotide Polymorphisms

    Directory of Open Access Journals (Sweden)

    B. Jayashree

    2007-01-01

    Full Text Available The large amounts of EST sequence data available from a single species of an organism as well as for several species within a genus provide an easy source of identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software extended to run on multiple CPU architectures that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, development of CAPS assays for SNP genotyping, and confirmation of restriction digestion pattern at the sequence level.
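
    The pipeline itself chains existing open source tools (clustering, assembly, SNP detection); purely to illustrate the final SNP-detection step in miniature, the toy sketch below scans columns of a gap-padded multiple alignment of EST reads and reports positions where at least two alleles are each supported by a minimum number of reads. The sequences and the support threshold are invented, and this is not the pipeline's actual SNP caller.

```python
from collections import Counter

def find_snps(aligned_reads, min_allele_count=2):
    """Report 0-based columns of a gap-padded alignment where at least two
    different bases each occur `min_allele_count` times or more."""
    snps = []
    length = len(aligned_reads[0])
    for col in range(length):
        bases = Counter(read[col] for read in aligned_reads if read[col] in "ACGT")
        supported = [b for b, n in bases.items() if n >= min_allele_count]
        if len(supported) >= 2:
            snps.append((col, dict(bases)))
    return snps

# Toy alignment of five EST reads (made-up sequences).
reads = [
    "ACGTACGTAC",
    "ACGTACGTAC",
    "ACGAACGTAC",
    "ACGAACGTTC",
    "ACGAACGT-C",
]
print(find_snps(reads))   # column 3 (T/A) is a candidate SNP; column 8 lacks support
```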

  1. EnviroAtlas - Commute Time to Work by Census Block Group for the Conterminous United States

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset portrays the commute time of workers to their workplace for each Census Block Group (CBG) during 2008-2012. Data were compiled from the...

  2. Longitude Position in a Time Zone and Cancer Risk in the United States.

    Science.gov (United States)

    Gu, Fangyi; Xu, Shangda; Devesa, Susan S; Zhang, Fanni; Klerman, Elizabeth B; Graubard, Barry I; Caporaso, Neil E

    2017-08-01

    Background: Circadian disruption is a probable human carcinogen. From the eastern to western border of a time zone, social time is equal, whereas solar time is progressively delayed, producing increased discrepancies between individuals' social and biological circadian time. Accordingly, western time zone residents experience greater circadian disruption and may be at an increased risk of cancer. Methods: We examined associations between the position in a time zone and age-standardized county-level incidence rates for total cancers combined and 23 specific cancers by gender using the data of the Surveillance, Epidemiology, and End Results Program (2000-2012), including four million cancer diagnoses in white residents of 607 counties in 11 U.S. states. Log-linear regression was conducted, adjusting for latitude, poverty, cigarette smoking, and state. Bonferroni-corrected P values were used as the significance criteria. Results: Risk increased from east to west within a time zone for total and for many specific cancers, including chronic lymphocytic leukemia (both genders) and cancers of the stomach, liver, prostate, and non-Hodgkin lymphoma in men and cancers of the esophagus, colorectum, lung, breast, and corpus uteri in women. Conclusions: Risk increased from the east to the west in a time zone for total and many specific cancers, in accord with the circadian disruption hypothesis. Replications in analytic epidemiologic studies are warranted. Impact: Our findings suggest that circadian disruption may not be a rare phenomenon affecting only shift workers, but is widespread in the general population with broader implications for public health than generally appreciated. Cancer Epidemiol Biomarkers Prev; 26(8); 1306-11. ©2017 AACR . ©2017 American Association for Cancer Research.

  3. Stochastic first passage time accelerated with CUDA

    Science.gov (United States)

    Pierro, Vincenzo; Troiano, Luigi; Mejuto, Elena; Filatrella, Giovanni

    2018-05-01

    The first-passage time – the time a stochastic trajectory needs to cross a threshold – is an interesting physical quantity, for instance in Josephson junctions and atomic force microscopy, where the full trajectory is not accessible, and it can be estimated by numerical integration of stochastic trajectories. We propose an algorithm suitable for efficient implementation on graphical processing units in the CUDA environment. The proposed approach, for well balanced loads, achieves almost perfect scaling with the number of available threads and processors, and allows an acceleration of about 400× with a GTX980 GPU with respect to a standard multicore CPU. This method allows off-the-shelf GPUs to tackle problems that are otherwise prohibitive, such as thermal activation in slowly tilted potentials. In particular, we demonstrate that it is possible to simulate the switching current distributions of Josephson junctions on the timescale of actual experiments.
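
    Independently of the CUDA implementation, the quantity being computed can be illustrated with a plain CPU-side sketch: the code below estimates first-passage times of an overdamped stochastic process with a constant drift (a crude stand-in for a tilted potential) using Euler–Maruyama integration. The drift, noise strength, threshold and step size are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

def first_passage_times(n_paths=5000, dt=1e-2, x0=0.0, threshold=1.0,
                        drift=0.2, noise=0.5, t_max=50.0, seed=0):
    """Euler-Maruyama simulation of dx = drift*dt + noise*dW until x >= threshold."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    t = np.zeros(n_paths)
    passed = np.zeros(n_paths, dtype=bool)
    fpt = np.full(n_paths, np.nan)
    for _ in range(int(t_max / dt)):
        active = ~passed
        if not active.any():
            break
        dw = rng.standard_normal(active.sum()) * np.sqrt(dt)
        x[active] += drift * dt + noise * dw
        t[active] += dt
        newly = active & (x >= threshold)
        fpt[newly] = t[newly]
        passed |= newly
    return fpt

fpt = first_passage_times()
print(f"mean first-passage time ~ {np.nanmean(fpt):.2f} "
      f"(fraction crossed: {np.isfinite(fpt).mean():.1%})")
```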

  4. Accelerating the SCE-UA Global Optimization Method Based on Multi-Core CPU and Many-Core GPU

    Directory of Open Access Journals (Sweden)

    Guangyuan Kan

    2016-01-01

    Full Text Available The famous global optimization SCE-UA method, which has been widely used in the field of environmental model parameter calibration, is an effective and robust method. However, the SCE-UA method has a high computational load, which prohibits its application to high dimensional and complex problems. In recent years, computer hardware, such as multi-core CPUs and many-core GPUs, has improved significantly. This much more powerful new hardware and its software ecosystems provide an opportunity to accelerate the SCE-UA method. In this paper, we propose two parallel SCE-UA methods and implement them on an Intel multi-core CPU and an NVIDIA many-core GPU by OpenMP and CUDA Fortran, respectively. The Griewank benchmark function was adopted in this paper to test and compare the performances of the serial and parallel SCE-UA methods. Based on the results of the comparison, some useful advice is given on how to properly use the parallel SCE-UA methods.
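
    The Griewank benchmark used in the paper is easy to reproduce, and the kind of coarse-grained parallelism being exploited (evaluating many candidate parameter sets concurrently) can be sketched with Python's multiprocessing, as below. This only illustrates parallel objective-function evaluation; it is not an implementation of SCE-UA or of the paper's OpenMP/CUDA Fortran codes, and the population size and bounds are invented.

```python
import numpy as np
from multiprocessing import Pool

def griewank(x):
    """Griewank benchmark function; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def evaluate_population(pop, processes=4):
    """Evaluate a population of candidate parameter sets in parallel."""
    with Pool(processes) as pool:
        return np.array(pool.map(griewank, list(pop)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = rng.uniform(-600, 600, size=(64, 10))   # 64 candidates, 10 dimensions
    fitness = evaluate_population(population)
    print("best of population:", fitness.min(), " griewank(0) =", griewank(np.zeros(10)))
```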

  5. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios to identify the least dangerous one for the operator. It is therefore mandatory to have an efficient dosimetry simulation code that gives accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. In the case of 3D industrial scenes composed of meshes, the computational cost lies in the fast computation of all of the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed that enable dosimetry calculation for a huge scene (800,000 rays, 800,000 triangles) in a fraction of a second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In the worst case, this can lead to a computed dose rate dramatically lower than the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive-precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with very small loss of performance (less than 5% overhead), and without any need for scene-dependent tuning. (author)
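
    The robustness issue arises inside the individual ray-triangle tests. For reference, a plain double-precision version of the widely used Möller–Trumbore intersection test is sketched below (NumPy); it shows the epsilon comparisons whose floating-point rounding motivates the adaptive-precision arithmetic of the paper, but it is not the authors' GPU code, and the test geometry is invented.

```python
import numpy as np

def ray_triangle_intersect(orig, direc, v0, v1, v2, eps=1e-12):
    """Moller-Trumbore test: return the distance t along the ray, or None."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direc, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direc, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None

# A ray along +z through the unit triangle lying in the z = 2 plane.
orig = np.array([0.2, 0.2, 0.0])
direc = np.array([0.0, 0.0, 1.0])
tri = [np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 2.0])]
print(ray_triangle_intersect(orig, direc, *tri))   # -> 2.0
```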

  6. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    International Nuclear Information System (INIS)

    Leggett, C; Jackson, K; Tatarkhanov, M; Yao, Y; Binet, S; Levinthal, D

    2011-01-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with a zero-overhead context switch, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.

  7. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  8. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers the case in which the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of the lead time through the amount of shortages, and we let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions; we then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution-free procedure to solve the problem. Furthermore, we develop an algorithmic procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are given to illustrate the results.
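
    The full model (controllable backorder rate, minimax distribution-free step) is not reproduced here; as a small illustration of one ingredient, the sketch below evaluates the expected shortage per cycle E[max(X - r, 0)] when the lead time demand X follows a two-component mixture of normal distributions, using the standard normal loss function for each component. The mixture weights, means, standard deviations and reorder points are invented.

```python
from math import sqrt, pi, exp, erf

def normal_pdf(z):
    return exp(-z * z / 2.0) / sqrt(2.0 * pi)

def normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_shortage_mixture(r, components):
    """E[max(X - r, 0)] for a mixture of normals given as (weight, mu, sigma) triples."""
    total = 0.0
    for w, mu, sigma in components:
        z = (r - mu) / sigma
        loss = normal_pdf(z) - z * (1.0 - normal_cdf(z))   # standard normal loss function
        total += w * sigma * loss
    return total

# Illustrative lead-time demand: 70% of cycles ~ N(100, 15^2), 30% ~ N(140, 25^2).
mixture = [(0.7, 100.0, 15.0), (0.3, 140.0, 25.0)]
for reorder_point in (110.0, 130.0, 150.0):
    print(reorder_point, round(expected_shortage_mixture(reorder_point, mixture), 3))
```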

  9. A time to voltage converter and analog memory unit for straw tracking detectors

    International Nuclear Information System (INIS)

    Callewaert, L.; Eyckmans, W.; Sansen, W.; Stevens, A.; Van der Spiegel, J.; Van Berg, R.; Williams, H.H.; Yau, T.Y.

    1990-01-01

    In a high precision drift tube or straw tracking system, one measures the time of arrival of the first electron at the anode. While many possible schemes exist, the authors' initial judgment was that an analog time measurement would offer both lower power and greater resolution than an equally complex digital system. In addition, they believe that it will be necessary to incorporate all of the system features, such as connection to the trigger and DAQ systems, in any usable design in order to keep the power, mass and complexity of the final system under control. A low power, sub-nanosecond accuracy, quick recovery, data-driven, multiple sample Time to Voltage Converter suitable for use on high rate straw tracking detectors is described. The described TVC includes virtual storage of analog data in both Level 1 and Level 2 queues and an on-board ADC with first-order correction for capacitance variations and non-linearities

  10. The economic implications of later school start times in the United States.

    Science.gov (United States)

    Hafner, Marco; Stepanek, Martin; Troxel, Wendy M

    2017-12-01

    Numerous studies have shown that later school start times (SST) are associated with positive student outcomes, including improvements in academic performance, mental and physical health, and public safety. While the benefits of later SST are very well documented in the literature, in practice there is opposition against delaying SST. A major argument against later SST is the claim that delaying SST will result in significant additional costs for schools due to changes in bussing strategies. However, to date, there has been only one published study that has quantified the potential economic benefits of later SST in relation to potential costs. The current study investigates the economic implications of later school start times by examining a policy experiment – a state-wide universal shift in school start times to 8:30 AM – and its subsequent state-wide economic effects. Using a novel macroeconomic modeling approach, the study estimates changes in the economic performance of 47 US states following a delayed school start time, including the benefits of higher academic performance of students and reduced car crash rates. The benefit-cost projections of this study suggest that delaying school start times is a cost-effective, population-level strategy which could have a significant impact on public health and the US economy. From a policy perspective, these findings are crucial as they demonstrate that significant economic gains resulting from the delay in SST accrue over a relatively short period of time following the adoption of the policy shift. Copyright © 2017 National Sleep Foundation. Published by Elsevier Inc. All rights reserved.

  11. Time use of parents in the United States: What difference did the Great Recession make?

    OpenAIRE

    Kongar, Ebru; Berik, Günseli

    2014-01-01

    Feminist and institutionalist literature has challenged the "Mancession" narrative of the 2007-09 recession and produced nuanced and gender-aware analyses of the labor market and well-being outcomes of the recession. Using American Time Use Survey (ATUS) data for 2003-12, this paper examines the recession's impact on gendered patterns of time use over the course of the 2003-12 business cycle. We find that the gender disparity in paid and unpaid work hours followed a U-shaped pattern, narrowin...

  12. Length of time to first job for immigrants in the United Kingdom: An exploratory analysis

    Directory of Open Access Journals (Sweden)

    JuYin (Helen) Wong

    2013-05-01

    Full Text Available This study explores whether ethnicity affects immigrants’ time to first employment. Many studies on labour/social inequalities focus on modeling cross-sectional or panel data when comparing ethnic minority to majority groups in terms of their employment patterns. Results from these models, however, do not measure the degree of transition-duration penalties experienced by immigrant groups. Because time itself is an important variable, and to bridge the gap between literature and methodology, a lifecourse perspective and a duration model are employed to examine the length of transition that immigrants require to find first employment.

  13. Cost and benefit including value of life, health and environmental damage measured in time units

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Friis-Hansen, Peter

    2009-01-01

    Key elements of the authors' work on money equivalent time allocation to costs and benefits in risk analysis are put together as an entity. This includes the data supported dimensionless analysis of an equilibrium relation between total population work time and gross domestic product leading...... of this societal value over the actual costs, used by the owner for economically optimizing an activity, motivates a simple risk accept criterion suited to be imposed on the owner by the public. An illustration is given concerning allocation of economical means for mitigation of loss of life and health on a ferry...

  14. Influenza mortality in the United States, 2009 pandemic: burden, timing and age distribution.

    Directory of Open Access Journals (Sweden)

    Ann M Nguyen

    Full Text Available BACKGROUND: In April 2009, the most recent pandemic of influenza A began. We present the first estimates of pandemic mortality based on the newly-released final data on deaths in 2009 and 2010 in the United States. METHODS: We obtained data on influenza and pneumonia deaths from the National Center for Health Statistics (NCHS). Age- and sex-specific death rates, and age-standardized death rates, were calculated. Using negative binomial Serfling-type methods, excess mortality was calculated separately by sex and age groups. RESULTS: In many age groups, observed pneumonia and influenza cause-specific mortality rates in October and November 2009 broke month-specific records since 1959 when the current series of detailed US mortality data began. Compared to the typical pattern of seasonal flu deaths, the 2009 pandemic age-specific mortality, as well as influenza-attributable (excess) mortality, skewed much younger. We estimate 2,634 excess pneumonia and influenza deaths in 2009-10; the excess death rate in 2009 was 0.79 per 100,000. CONCLUSIONS: Pandemic influenza mortality skews younger than seasonal influenza. This can be explained by a protective effect due to antigenic cycling. When older cohorts have been previously exposed to a similar antigen, immune memory results in lower death rates at older ages. Age-targeted vaccination of younger people should be considered in future pandemics.

  15. Technological developments in real-time operational hydrologic forecasting in the United States

    Science.gov (United States)

    Hudlow, Michael D.

    1988-09-01

    The hydrologic forecasting service of the United States spans applications and scales ranging from those associated with the issuance of flood and flash flood warnings to those pertaining to seasonal water supply forecasts. New technological developments (underway in or planned by the National Weather Service (NWS) in support of the Hydrologic Program) are carried out as combined efforts by NWS headquarters and field personnel in cooperation with other organizations. These developments fall into two categories: hardware and software systems technology, and hydrometeorological analysis and prediction technology. Research, development, and operational implementation in progress in both of these areas are discussed. Cornerstones of an overall NWS modernization effort include implementation of state-of-the-art data acquisition systems (including the Next Generation Weather Radar) and communications and computer processing systems. The NWS Hydrologic Service will capitalize on these systems and will incorporate results from specific hydrologic projects including collection and processing of multivariate data sets, conceptual hydrologic modeling systems, integrated hydrologic modeling systems with meteorological interfaces and automatic updating of model states, and extended streamflow prediction techniques. The salient aspects of ongoing work in these areas are highlighted in this paper, providing some perspective on the future U.S. hydrologic forecasting service and its transitional period into the 1990s.

  16. Parenthood, Gender and Work-Family Time in the United States, Australia, Italy, France, and Denmark

    Science.gov (United States)

    Craig, Lyn; Mullan, Killian

    2010-01-01

    Research has associated parenthood with greater daily time commitments for fathers and mothers than for childless men and women, and with deeper gendered division of labor in households. How do these outcomes vary across countries with different average employment hours, family and social policies, and cultural attitudes to family care provision?…

  17. Clocking in: The Organization of Work Time and Health in the United States

    Science.gov (United States)

    Kleiner, Sibyl; Pavalko, Eliza K.

    2010-01-01

    This article assesses the health implications of emerging patterns in the organization of work time. Using data from the National Longitudinal Survey of Youth 1979, we examine general mental and physical health (SF-12 scores), psychological distress (CESD score), clinical levels of obesity, and the presence of medical conditions, at age 40.…

  18. But science is international! Finding time and space to encourage intercultural learning in a content-driven physiology unit.

    Science.gov (United States)

    Etherington, Sarah J

    2014-06-01

    Internationalization of the curriculum is central to the strategic direction of many modern universities and has widespread benefits for student learning. However, these clear aspirations for internationalization of the curriculum have not been widely translated into more internationalized course content and teaching methods in the classroom, particularly in scientific disciplines. This study addressed one major challenge to promoting intercultural competence among undergraduate science students: finding time to scaffold such learning within the context of content-heavy, time-poor units. Small changes to enhance global and intercultural awareness were incorporated into existing assessments and teaching activities within a second-year biomedical physiology unit. Interventions were designed to start a conversation about global and intercultural perspectives on physiology, to embed the development of global awareness into the assessment and to promote cultural exchanges through peer interactions. In student surveys, 40% of domestic and 60% of international student respondents articulated specific learning about interactions in cross-cultural groups resulting from unit activities. Many students also identified specific examples of how cultural beliefs would impact on the place of biomedical physiology within the global community. In addition, staff observed more widespread benefits for student engagement and learning. It is concluded that a significant development of intercultural awareness and a more global perspective on scientific understanding can be supported among undergraduates with relatively modest, easy to implement adaptations to course content.

  19. Annihilating time and space: The electrification of the United States Army, 1875--1920

    Science.gov (United States)

    Brown, Shannon Allen

    2000-10-01

    The United States Army embraced electrical technology in the 1870s as part of a wider initiative to meet the challenge of the coastal defense mission. As commercial power storage, generation, and transmission technology improved and the army came to recognize the value of the energy source as a means and method of improving command and control, localized electrical networks were integrated into the active service of the military. New vulnerabilities emerged as the army became ever more reliant upon electric power, however, and electrification---the institutional adoption and adaptation of electrical technologies---emerged as a very expensive and contentious process guided by technical, political, and economic pressures, and influenced by conflicting personalities within the service. This study considers the institutional evolution of the U.S. Army before and during World War I with respect to the adoption and application of electrical technology. The changing relationships between the military and electrical manufacturing and utilities industries during the period 1875--1920 are also explored. Using a combination of military archival sources and published primary materials, this study traces the effects of electrification on the army. In the end, this study proves that electrification was, at first, a symptom of, and later, a partial solution to the army's struggle to modernize and centralize during the period under consideration. Electrification produced a set of conditions that encouraged a new maturity within the ranks of the army, in technical, doctrinal, and administrative terms. This growth eventually led to the development of new capabilities, new forms of military organization, new missions, and new approaches to warfare.

  20. The rise of global warming skepticism: exploring affective image associations in the United States over time.

    Science.gov (United States)

    Smith, Nicholas; Leiserowitz, Anthony

    2012-06-01

    This article explores how affective image associations to global warming have changed over time. Four nationally representative surveys of the American public were conducted between 2002 and 2010 to assess public global warming risk perceptions, policy preferences, and behavior. Affective images (positive or negative feelings and cognitive representations) were collected and content analyzed. The results demonstrate a large increase in "naysayer" associations, indicating extreme skepticism about the issue of climate change. Multiple regression analyses found that holistic affect and "naysayer" associations were more significant predictors of global warming risk perceptions than cultural worldviews or sociodemographic variables, including political party and ideology. The results demonstrate the important role affective imagery plays in judgment and decision-making processes, how these variables change over time, and how global warming is currently perceived by the American public. © 2012 Society for Risk Analysis.

  1. Risk-based assessment of the allowable outage times for the unit 1 leningrad nuclear power plant ECCS components

    International Nuclear Information System (INIS)

    Koukhar, Sergey; Vinnikov, Bronislav

    2009-01-01

    The present paper describes a method for risk-informed assessment of Allowable Outage Times (AOTs). The AOT is the time during which components of a safety system are allowed to be out of service during power operation or during shutdown operation of a plant. If the components are not restored within that time, a plant at power must be shut down, or a plant in a given shutdown mode has to go to a safer shutdown mode. An application of the method is also provided for the Unit 1 Leningrad NPP ECCS components. To solve the problem it is necessary to carry out two series of computations using a Living PSA model, level 1. In the first series of computations, the core damage frequency (CDFb) for the base configuration of the plant is determined (no equipment is out of service); the symbol 'b' denotes the base configuration of the plant. In the second series of computations, the core damage frequency (CDFi) for the configuration of the plant with component 'i' out of service is calculated; that is, CDFi is determined with the failure probability of the component set equal to 1.0 (component 'i' is unavailable). Then the so-called Risk Increase Factor (RIF) is determined using the ratio RIFi = CDFi / CDFb. Finally, the AOT is calculated from the ratio AOTi = Tppr / RIFi, where Tppr is the period of time between two Planned Preventive Repairs (PPRs). 1. Using the risk-based approach, the AOTs were calculated for a set of the Unit 1 Leningrad NPP ECCS components. 2. The main conclusion from the analysis is that the current deterministic AOTs for the ECCS components are conservative and should be extended. 3. The risk-based extension of the AOTs for the ECCS components can prevent the Unit 1 Leningrad NPP from entering operating modes with increased risk. (author)
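
    The two ratios quoted in the record are simple enough to state directly in code; the sketch below applies them to invented CDF values, since the actual Leningrad NPP figures are not given here.

```python
def allowable_outage_time(cdf_base, cdf_component_down, t_ppr_hours):
    """Risk-informed AOT: RIF_i = CDF_i / CDF_b, AOT_i = T_ppr / RIF_i."""
    rif = cdf_component_down / cdf_base
    return t_ppr_hours / rif, rif

# Illustrative numbers only (per reactor-year CDFs, one year between planned repairs).
cdf_b = 2.0e-5                    # base plant configuration
cdf_i = 6.0e-4                    # component 'i' assumed failed (probability 1.0)
t_ppr = 8760.0                    # hours between planned preventive repairs
aot, rif = allowable_outage_time(cdf_b, cdf_i, t_ppr)
print(f"RIF = {rif:.0f}, AOT = {aot:.0f} hours")
```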

  2. Gender and time allocation of cohabiting and married women and men in France, Italy, and the United States.

    Science.gov (United States)

    Bianchi, Suzanne; Lesnard, Laurent; Nazio, Tiziana; Raley, Sara

    2014-07-11

    Women, who generally do more unpaid and less paid work than men, have greater incentives to stay in marriages than in cohabiting unions, which generally carry fewer legal protections for individuals who wish to dissolve their relationship. The extent to which cohabitation is institutionalized, however, is a matter of policy and varies substantially by country. The gender gap in paid and unpaid work between married and cohabiting individuals should be larger in countries where cohabitation is less institutionalized and where those in cohabiting relationships have relatively fewer legal protections should the relationship dissolve, yet few studies have explored this variation. Using time diary data from France, Italy, and the United States, we assess the time men and women devote to paid and unpaid work in cohabiting and married couples. These three countries provide a useful diversity in marital regimes for examining these expectations: France, where cohabitation is most "marriage like" and where partnerships can be registered and carry legal rights; the United States, where cohabitation is common but short-lived and unstable and where legal protections vary across states; and Italy, where cohabitation is not common and where such unions are not legally acknowledged and are less socially approved than in either France or the United States. Cohabiting men's and women's time allocated to market and nonmarket work is generally more similar than that of married men and women. Our expectations about country differences are only partially borne out by the findings. Greater gender differences in the time allocated to market and nonmarket work are found in Italy relative to either France or the U.S.

  3. Detecting Forest Disturbance Events from MODIS and Landsat Time Series for the Conterminous United States

    Science.gov (United States)

    Zhang, G.; Ganguly, S.; Saatchi, S. S.; Hagen, S. C.; Harris, N.; Yu, Y.; Nemani, R. R.

    2013-12-01

    Spatial and temporal patterns of forest disturbance and regrowth processes are key to understanding aboveground terrestrial vegetation biomass and carbon stocks at regional-to-continental scales. The NASA Carbon Monitoring System (CMS) program seeks key input datasets, especially information related to impacts due to natural/man-made disturbances in forested landscapes of the Conterminous U.S. (CONUS), that would reduce uncertainties in current carbon stock estimation and emission models. This study provides an end-to-end forest disturbance detection framework based on pixel time series analysis from MODIS (Moderate Resolution Imaging Spectroradiometer) and Landsat surface spectral reflectance data. We applied the BFAST (Breaks for Additive Seasonal and Trend) algorithm to the Normalized Difference Vegetation Index (NDVI) data for the time period from 2000 to 2011. A harmonic seasonal model was implemented in BFAST to decompose the time series into seasonal and interannual trend components in order to detect abrupt changes in the magnitude and direction of these components. To apply BFAST over the whole CONUS, we built a parallel computing setup for processing massive time-series data using the high performance computing facility of the NASA Earth Exchange (NEX). In the implementation process, we extracted the dominant deforestation events from the magnitude of abrupt changes in both seasonal and interannual components, and estimated dates for the corresponding deforestation events. We estimated the recovery rate for deforested regions through regression models developed between NDVI values and time since disturbance for all pixels. A similar implementation of the BFAST algorithm was performed over selected Landsat scenes (all cloud-free Landsat data were used to generate NDVI from atmospherically corrected spectral reflectances) to demonstrate the spatial coherence in retrieval layers between MODIS and Landsat. In the future, the application of this largely parallel disturbance
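
    BFAST itself is an existing R package and is not reimplemented here; the sketch below shows only the harmonic season-plus-trend model that the decomposition is based on, fitted by ordinary least squares to a synthetic NDVI pixel series with an artificial disturbance. The break-detection step on the residuals is omitted and all values are fabricated.

```python
import numpy as np

def harmonic_design(t, period=23, order=2):
    """Design matrix with intercept, linear trend and `order` harmonic pairs
    (for 16-day MODIS composites, one year is about 23 observations)."""
    cols = [np.ones_like(t), t]
    for k in range(1, order + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

rng = np.random.default_rng(42)
t = np.arange(12 * 23, dtype=float)                 # 12 years of 16-day composites
ndvi = (0.5 + 0.0002 * t + 0.2 * np.sin(2 * np.pi * t / 23)
        + rng.normal(0, 0.02, t.size))
ndvi[int(8.5 * 23):] -= 0.25                        # synthetic disturbance in year 9

X = harmonic_design(t)
beta, *_ = np.linalg.lstsq(X, ndvi, rcond=None)     # season + trend fit
residuals = ndvi - X @ beta
print("largest negative residual at observation", int(np.argmin(residuals)))
```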

  4. An Algorithm of Traffic Perception of DDoS Attacks against SOA Based on Time United Conditional Entropy

    Directory of Open Access Journals (Sweden)

    Yuntao Zhao

    2016-01-01

    Full Text Available DDoS attacks can prevent legitimate users from accessing a service by consuming the resources of the target nodes, exposing the availability of the network and its services to a significant threat. DDoS traffic perception is therefore the premise and foundation of overall system security. In this paper, a method of DDoS traffic perception for SOA networks based on time united conditional entropy is proposed. Exploiting the many-to-one mapping between the source IP addresses and the destination IP address typical of DDoS attacks, the traffic characteristics of services are analyzed using conditional entropy. Introducing the time dimension gives the algorithm the ability to perceive DDoS attacks on SOA services. Simulation results show that the novel method can realize DDoS traffic perception by analyzing abrupt variations of the conditional entropy along the time dimension.
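
    A minimal sketch of the idea described above: compute the conditional entropy H(dst | src) over fixed time windows of flow records and watch for abrupt drops, which occur when many sources converge on a single destination. The window length, the flow tuple layout, and the detection rule are assumptions for illustration, not the paper's exact algorithm.

        import math
        from collections import Counter, defaultdict

        def conditional_entropy(pairs):
            """H(dst | src) in bits over a list of (src_ip, dst_ip) pairs."""
            joint = Counter(pairs)
            src = Counter(s for s, _ in pairs)
            n = len(pairs)
            h = 0.0
            for (s, d), c in joint.items():
                p_joint = c / n          # P(src, dst)
                p_cond = c / src[s]      # P(dst | src)
                h -= p_joint * math.log2(p_cond)
            return h

        def entropy_series(flows, window=60):
            """Per-window H(dst|src); `flows` is an iterable of
            (timestamp, src_ip, dst_ip) records."""
            buckets = defaultdict(list)
            for ts, s, d in flows:
                buckets[int(ts // window)].append((s, d))
            return [conditional_entropy(buckets[k]) for k in sorted(buckets)]

        # An abrupt drop in the returned series across consecutive windows
        # would be flagged as a possible DDoS event.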

  5. A comparison of methods to predict historical daily streamflow time series in the southeastern United States

    Science.gov (United States)

    Farmer, William H.; Archfield, Stacey A.; Over, Thomas M.; Hay, Lauren E.; LaFontaine, Jacob H.; Kiang, Julie E.

    2015-01-01

    Effective and responsible management of water resources relies on a thorough understanding of the quantity and quality of available water. Streamgages cannot be installed at every location where streamflow information is needed. As part of its National Water Census, the U.S. Geological Survey is planning to provide streamflow predictions for ungaged locations. In order to predict streamflow at a useful spatial and temporal resolution throughout the Nation, efficient methods need to be selected. This report examines several methods used for streamflow prediction in ungaged basins to determine the best methods for regional and national implementation. A pilot area in the southeastern United States was selected to apply 19 different streamflow prediction methods and evaluate each method by a wide set of performance metrics. Through these comparisons, two methods emerged as the most generally accurate streamflow prediction methods: the nearest-neighbor implementations of nonlinear spatial interpolation using flow duration curves (NN-QPPQ) and standardizing logarithms of streamflow by monthly means and standard deviations (NN-SMS12L). It was nearly impossible to distinguish between these two methods in terms of performance. Furthermore, neither of these methods requires significantly more parameterization than the other in order to be applied: NN-SMS12L requires 24 regional regressions—12 for monthly means and 12 for monthly standard deviations. NN-QPPQ, in the application described in this study, required 27 regressions of particular quantiles along the flow duration curve. Despite this finding, the results suggest that an optimal streamflow prediction method depends on the intended application. Some methods are stronger overall, while some methods may be better at predicting particular statistics. The methods of analysis presented here reflect a possible framework for continued analysis and comprehensive multiple comparisons of methods of prediction in ungaged basins (PUB).
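
    The NN-SMS12L transformation named above can be sketched as follows: take logarithms of daily streamflow and standardize them by month-of-year means and standard deviations (12 of each). The pandas-based helper below, including the zero-flow guard, is an illustrative assumption, not the report's implementation; at an ungaged site the monthly moments would come from the regional regressions rather than from observed data.

        import numpy as np
        import pandas as pd

        def standardize_log_flow(flow: pd.Series) -> pd.Series:
            """Standardize log10 daily streamflow by month-of-year mean and
            standard deviation; `flow` must be indexed by date."""
            logq = np.log10(flow.clip(lower=1e-3))        # guard against zero flows
            month = logq.index.month
            mu = logq.groupby(month).transform("mean")    # 12 monthly means
            sigma = logq.groupby(month).transform("std")  # 12 monthly std devs
            return (logq - mu) / sigma

        def unstandardize(z: pd.Series, mu: pd.Series, sigma: pd.Series) -> pd.Series:
            """Map standardized values back to flows using the ungaged site's
            regression-estimated monthly means and standard deviations."""
            return 10 ** (z * sigma + mu)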

  6. Designing time-of-use program based on stochastic security constrained unit commitment considering reliability index

    International Nuclear Information System (INIS)

    Nikzad, Mehdi; Mozafari, Babak; Bashirvand, Mahdi; Solaymani, Soodabeh; Ranjbar, Ali Mohamad

    2012-01-01

    Recently in electricity markets, considerable focus has been placed on creating opportunities for demand-side participation. Such opportunities, also known as demand response (DR) options, are triggered by either a grid reliability problem or high electricity prices. Two important challenges facing market operators are the appropriate design and reasonable pricing of DR options. In this paper, the time-of-use (TOU) program, a prevalent time-varying program, is modeled linearly based on own- and cross-elasticity definitions. In order to decide on TOU rates, a stochastic model is proposed in which the optimum TOU rates are determined based on a grid reliability index set by the operator. Expected Load Not Supplied (ELNS) is used to evaluate reliability of the power system in each hour. The proposed stochastic model is formulated as a two-stage stochastic mixed-integer linear programming (SMILP) problem and solved using the CPLEX solver. The validity of the method is tested on the IEEE 24-bus test system. In this regard, the impact of the proposed pricing method on the system load profile, operational costs, and required capacity of up- and down-spinning reserve, as well as the improvement of the load factor, is demonstrated. The sensitivity of the results to the elasticity coefficients is also investigated. -- Highlights: ► Time-of-use demand response program is linearly modeled. ► A stochastic model is proposed to determine the optimum TOU rates based on ELNS index set by the operator. ► The model is formulated as a short-term two-stage stochastic mixed-integer linear programming problem.
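
    The linear own- and cross-elasticity demand model mentioned above can be illustrated with a small sketch: demand in each period responds to its own relative price change (negative own elasticity) and to price changes in other periods (positive cross elasticities capture load shifting). All numbers below are hypothetical and the three-period setup is an assumption for illustration; the paper embeds this kind of relation inside a stochastic unit-commitment problem.

        import numpy as np

        def tou_demand(d0, p0, p, E):
            """Linear elasticity response to time-of-use rates.
            d0 : baseline hourly demand (n,)
            p0 : flat baseline price (scalar)
            p  : proposed TOU price per period (n,)
            E  : (n, n) elasticity matrix; diagonal = own elasticities,
                 off-diagonal = cross elasticities (load shifting)."""
            rel_price_change = (p - p0) / p0
            return d0 * (1.0 + E @ rel_price_change)

        # Hypothetical 3-period example (off-peak, mid-peak, on-peak)
        d0 = np.array([100.0, 150.0, 200.0])
        p0 = 50.0
        p = np.array([35.0, 50.0, 80.0])
        E = np.array([[-0.10,  0.02,  0.03],
                      [ 0.02, -0.10,  0.04],
                      [ 0.03,  0.04, -0.10]])
        print(tou_demand(d0, p0, p, E))  # demand shifts from on-peak to off-peak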

  7. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and the other from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
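
    The SIFT-homography-warp chain described above can be sketched on the CPU with OpenCV; this is a stand-in for the pipeline the paper accelerates on the GPU, not the authors' code, and the ratio-test threshold and RANSAC tolerance are illustrative assumptions. Blending of the warped frames into the running mosaic is omitted.

        import cv2
        import numpy as np

        def register_pair(prev_frame, frame):
            """Estimate the homography mapping `frame` onto `prev_frame` using
            SIFT features, Lowe ratio matching and RANSAC, then warp the frame."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(prev_frame, None)
            k2, d2 = sift.detectAndCompute(frame, None)

            matcher = cv2.BFMatcher()
            matches = matcher.knnMatch(d2, d1, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]

            src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

            h, w = prev_frame.shape[:2]
            return cv2.warpPerspective(frame, H, (w, h))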

  8. Accuracy and computational time of a hierarchy of growth rate definitions for breeder reactor fuel

    International Nuclear Information System (INIS)

    Maudlin, P.J.; Borg, R.C.; Ott, K.O.

    1979-01-01

    For a hierarchy of four logically different definitions for calculating the asymptotic growth of fast breeder reactor fuel, an investigation is performed concerning the comparative accuracy and computational effort associated with each definition. The definition based on detailed calculation of the accumulating fuel in an expanding park of reactors asymptotically yields the most accurate value of the infinite time growth rate, γ∞, which is used as a reference value. The computational effort involved with the park definition is very large. The definition based on the single reactor calculation of the equilibrium surplus production rate and fuel inventory gives a value for γ∞ of comparable accuracy to the park definition and uses significantly less central processor unit (CPU) time. The third definition is based on a continuous treatment of the reactor fuel cycle for a single reactor and gives a value for γ∞ that accurately approximates the second definition. The continuous definition requires very little CPU time. The fourth definition employs the isotopic breeding worths, w_i*, for a projection of the asymptotic growth rate. The CPU time involved in this definition is practically nil if its calculation is based on the few-cycle depletion calculation normally performed for core design and critical enrichment evaluations. The small inaccuracy (≈1%) of the breeding-worth-based definition is well within the inaccuracy range that results unavoidably from other sources such as nuclear cross sections, group constants, and flux calculations. This fully justifies the use of this approach in routine calculations
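
    The second definition in the hierarchy reduces to a simple ratio: the asymptotic growth rate is the equilibrium surplus fissile production rate divided by the equilibrium fuel inventory, from which a fuel doubling time follows as ln 2 / γ∞. The sketch below uses hypothetical numbers purely to illustrate the arithmetic.

        import math

        def asymptotic_growth_rate(surplus_production_rate, fuel_inventory):
            """gamma_inf = equilibrium surplus fissile production rate (kg/yr)
            divided by the equilibrium fissile inventory (kg), per reactor."""
            return surplus_production_rate / fuel_inventory

        # Hypothetical values for illustration only
        gamma = asymptotic_growth_rate(surplus_production_rate=120.0,
                                       fuel_inventory=3000.0)
        doubling_time = math.log(2.0) / gamma
        print(f"gamma_inf = {gamma:.3f} /yr, doubling time = {doubling_time:.1f} yr")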

  9. The magnitude and timing of grandparental coresidence during childhood in the United States

    Directory of Open Access Journals (Sweden)

    Mariana Amorim

    2017-12-01

    Full Text Available Background: The likelihood that a US child will live with a grandparent has increased over time. In 2015, nearly 12% of children lived with a grandparent. However, the likelihood that a child will ever live with a grandparent is not known. Objective: We calculate the cumulative and age-specific probabilities of coresidence with grandparents during childhood. We stratify our analyses by types of grandparent-grandchild living arrangements (grandfamilies and three-generation households) and by race and ethnicity. Methods: We use two data sets - the pooled 2010-2015 American Community Surveys (ACS) and the 1997 National Longitudinal Survey of Youth (NLSY-97) - and produce estimates using life table techniques. Results: Results indicate that nearly 30% of US children ever coreside with grandparents. Both three-generation and grandfamily living arrangements are more prevalent among racial and ethnic minority groups, with three-generation coresidence particularly common among Asian children. Black children are nearly two times as likely to ever live in a grandfamily as compared with Hispanic and white children. Children are much more likely to experience grandparental coresidence during their first year of life than in any other year. Conclusions: This paper suggests that the magnitude of grandparental coresidence is greater than previously known, particularly in early childhood. Contribution: This is the first study to calculate age-specific and cumulative probabilities of coresidence with grandparents during the whole of childhood. Doing so allows us to better craft public policies and guide new research on family complexity.

  10. Comparative response time and fault logging with a PLC and supervisory software and a standalone unit developed for recording

    International Nuclear Information System (INIS)

    Baldaconi, Ricardo H.; Costa, Fabio E. da

    2017-01-01

    The Cobalt-60 irradiator of IPEN/CNEN, a category IV facility, has a security system for interlocking doors and exposing the radioactive sources, implemented simultaneously by a Siemens S7-200 programmable logic controller (PLC) and by relay logic. From a common set of information, both systems work together to open doors or expose the sources. All incoming and outgoing information is sent serially via EIA232 communication to a personal computer running Windows®, where a supervisory program provides monitoring of the entire process through a synoptic table on the computer screen and is also intended to keep records of all events on the computer's hard drive. A deficiency was found in the process of sending events via the serial (EIA232) link from the PLC to the supervisory program: when a failure occurred within a very short time, the PLC always took the right decision, but the registration process, which had to go through the Windows® time-sharing scheduler, lost the information. In previous work, a standalone electronic unit was developed and connected to the inputs and outputs of the security system, fully optocoupled to avoid any interference with the security system, recording each event on a memory card. In this work, to check the recording capability of the developed unit, transient input signals simulating failures were injected at the security system inputs, and the response times of the security system, the supervisory program, and the standalone unit were measured and compared. (author)

  11. Comparative response time and fault logging with a PLC and supervisory software and a standalone unit developed for recording

    Energy Technology Data Exchange (ETDEWEB)

    Baldaconi, Ricardo H., E-mail: ricardohovacker@hotmail.com [Escola Senai Roberto Simonsen, Educação e Tecnologia, Sao Paulo, SP (Brazil); Costa, Fabio E. da, E-mail: fecosta@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The Cobalt-60 irradiator of IPEN/CNEN, a category IV facility, has a security system for interlocking doors and exposing the radioactive sources, implemented simultaneously by a Siemens S7-200 programmable logic controller (PLC) and by relay logic. From a common set of information, both systems work together to open doors or expose the sources. All incoming and outgoing information is sent serially via EIA232 communication to a personal computer running Windows®, where a supervisory program provides monitoring of the entire process through a synoptic table on the computer screen and is also intended to keep records of all events on the computer's hard drive. A deficiency was found in the process of sending events via the serial (EIA232) link from the PLC to the supervisory program: when a failure occurred within a very short time, the PLC always took the right decision, but the registration process, which had to go through the Windows® time-sharing scheduler, lost the information. In previous work, a standalone electronic unit was developed and connected to the inputs and outputs of the security system, fully optocoupled to avoid any interference with the security system, recording each event on a memory card. In this work, to check the recording capability of the developed unit, transient input signals simulating failures were injected at the security system inputs, and the response times of the security system, the supervisory program, and the standalone unit were measured and compared. (author)

  12. Unit Roots in Economic and Financial Time Series: A Re-Evaluation at the Decision-Based Significance Levels

    Directory of Open Access Journals (Sweden)

    Jae H. Kim

    2017-09-01

    Full Text Available This paper re-evaluates key past results of unit root tests, emphasizing that the use of a conventional level of significance is not in general optimal due to the test having low power. The decision-based significance levels for popular unit root tests, chosen using the line of enlightened judgement under a symmetric loss function, are found to be much higher than conventional ones. We also propose simple calibration rules for the decision-based significance levels for a range of unit root tests. At the decision-based significance levels, many time series in Nelson and Plosser’s (1982) extended data set are judged to be trend-stationary, including real income variables, employment variables and money stock. We also find that nearly all real exchange rates covered in Elliott and Pesavento’s (2006) study are stationary; and that most of the real interest rates covered in Rapach and Weber’s (2004) study are stationary. In addition, using a specific loss function, the U.S. nominal interest rate is found to be stationary under economically sensible values of relative loss and prior belief for the null hypothesis.
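
    A small sketch of the kind of test re-evaluated above: an augmented Dickey-Fuller test with constant and trend, where the rejection threshold is set far above the conventional 5%. The alpha value of 0.40 below is purely illustrative and not one of the paper's calibrated decision-based levels; the synthetic series and the statsmodels call are assumptions for demonstration.

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        def unit_root_decision(y, alpha=0.40):
            """ADF test with constant and trend; `alpha` plays the role of a
            decision-based significance level (illustrative value only)."""
            stat, pvalue, *_ = adfuller(y, regression="ct", autolag="AIC")
            verdict = "trend-stationary" if pvalue < alpha else "unit root not rejected"
            return stat, pvalue, verdict

        rng = np.random.default_rng(1)
        trend_stationary = 0.05 * np.arange(200) + rng.normal(0, 1, 200)
        print(unit_root_decision(trend_stationary))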

  13. Cultural Heritage Through Time: a Case Study at Hadrian's Wall, United Kingdom

    Science.gov (United States)

    Fieber, K. D.; Mills, J. P.; Peppa, M. V.; Haynes, I.; Turner, S.; Turner, A.; Douglas, M.; Bryan, P. G.

    2017-02-01

    Diachronic studies are central to cultural heritage research for the investigation of change, from landscape to architectural scales. Temporal analyses and multi-temporal 3D reconstruction are fundamental for maintaining and safeguarding all forms of cultural heritage. Such studies form the basis for any kind of decision regarding intervention on cultural heritage, helping assess the risks and issues involved. This article introduces a European-wide project, entitled "Cultural Heritage Through Time", and the case study research carried out as a component of the project in the UK. The paper outlines the initial stages of the case study of landscape change at three locations on Hadrian's Wall, namely Beckfoot Roman Fort, Birdoswald Roman Fort and Corbridge Roman Station, all once part of the Roman Empire's north-west frontier. The main aim of the case study is to integrate heterogeneous information derived from a range of sources to help inform understanding of temporal aspects of landscape change. In particular, the study sites are at risk from natural hazards, notably erosion and flooding. The paper focuses on data collection and collation aspects, including an extensive archive search and field survey, as well as the methodology and preliminary data processing.

  14. Impact of reduced working time on surgical training in the United Kingdom and Ireland.

    Science.gov (United States)

    Canter, Richard

    2011-01-01

    The European Working Time Directive (EWTD) 48 h working week has been law in European countries since 1998. A phased approach to implementation was agreed for doctors in training, which steadily brought down working hours to 58 in 2004, 56 in 2007 and 48 in 2009. Medical trainees can "opt out" to a 54 h working week, but this has to be voluntary and rotas cannot be constructed that assume an opt-out is taking place. A key component of the working week arrangements is that the maximum period of work for a resident doctor without rest is 13 h. Shorter sessions of work have led to complex rotas and frequent handovers, with difficulties maintaining continuity of care and implications for patient safety. Although there has been over 10 years' notice of the changes to the working week, and progress has up to now been reasonable (helped, in part, by a steady increase in consultant numbers), this latest reduction from 56 h to 48 h seems to have been the most difficult to manage. Copyright © 2010 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  15. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    Science.gov (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  16. Performance comparison of next generation controller and MPC in real time for a SISO process with low cost DAQ unit

    Directory of Open Access Journals (Sweden)

    V. Bagyaveereswaran

    2016-09-01

    Full Text Available In this paper, a brief overview of the real-time implementation of the next-generation Robust, Tracking, Disturbance rejecting, Aggressive (RTDA) controller and Model Predictive Control (MPC) is provided. The control algorithm is implemented through MATLAB. The plant model used in controller design is obtained using the system identification tool and the integral response method. The controller model is developed in Simulink using the jMPC tool and is executed in real time. The outputs obtained are tested for various constraint values to obtain the desired results. Hardware-in-the-loop implementation is achieved by interfacing the plant with MATLAB, using an Arduino as the data acquisition unit. The performance of RTDA is compared with that of MPC and a proportional-integral (PI) controller.

  17. Review of registration requirements for new part-time doctors in New Zealand, Australia, the United Kingdom, Ireland and Canada.

    Science.gov (United States)

    Leitch, Sharon; Dovey, Susan M

    2010-12-01

    By the time medical students graduate many wish to work part-time while accommodating other lifestyle interests. To review flexibility of medical registration requirements for provisional registrants in New Zealand, Australia, the United Kingdom, Ireland and Canada. Internet-based review of registration bodies of each country, and each state or province in Australia and Canada, supplemented by emails and phone calls seeking clarification of missing or obscure information. Data from 20 regions were examined. Many similarities were found between study countries in their approaches to the registration of new doctors, although there are some regional differences. Most regions (65%) have a provisional registration period of one year. Extending this period was possible in 91% of regions. Part-time options were possible in 75% of regions. All regions required trainees to work in approved practice settings. Only the UK provided comprehensive documentation of their requirements in an accessible format and clearly explaining the options for part-time work. Australia appeared to be more flexible than other countries with respect to part- and full-time work requirements. All countries need to examine their registration requirements to introduce more flexibility wherever possible, as a strategy for addressing workforce shortages.

  18. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    International Nuclear Information System (INIS)

    Badal, Andreu; Badano, Aldo

    2009-01-01

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
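
    The photon-history loop that this kind of code parallelizes over GPU threads can be illustrated with a toy analog Monte Carlo sketch: sample exponential free path lengths and tally where photons first interact. This is not PENELOPE physics or the authors' CUDA code; the attenuation coefficient, geometry, and scoring are assumptions chosen only to show the sampling pattern.

        import numpy as np

        def photon_depth_dose(n_photons, mu=0.2, depth=30.0, n_bins=60, seed=0):
            """Toy analog Monte Carlo: photons travel along one axis in a
            homogeneous medium with total attenuation coefficient `mu` (1/cm)
            and deposit their energy at the first interaction site. Free path
            lengths are sampled as s = -ln(xi)/mu."""
            rng = np.random.default_rng(seed)
            xi = rng.random(n_photons)
            s = -np.log(xi) / mu               # sampled free path lengths
            s = s[s < depth]                   # photons escaping the slab are discarded
            dose, _ = np.histogram(s, bins=n_bins, range=(0.0, depth))
            return dose / n_photons

        print(photon_depth_dose(1_000_000)[:5])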

  19. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)

    2009-11-15

    Purpose: It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  20. Accelerating Monte Carlo simulations of photon transport in a voxelized geometry using a massively parallel graphics processing unit.

    Science.gov (United States)

    Badal, Andreu; Badano, Aldo

    2009-11-01

    It is a known fact that Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: The use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA™ programming model (NVIDIA Corporation, Santa Clara, CA). An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed up factor was obtained using a GPU compared to a single core CPU. The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.

  1. Time-of-flight data acquisition unit (DAU) for neutron scattering experiments. Specification of the requirements and design concept. Version 3.1

    International Nuclear Information System (INIS)

    Herdam, G.; Klessmann, H.; Wawer, W.; Adebayo, J.; David, G.; Szatmari, F.

    1989-12-01

    This specification describes the requirements for the Data Acquisition Unit (DAU) and defines the design concept for the functional units involved. The Data Acquisition Unit will be used in the following neutron scattering experiments: Time-of-Flight Spectrometer NEAT, Time-of-Flight Spectrometer SPAN. In addition, the data of the SPAN spectrometer in Spin Echo experiments will be accumulated. The Data Acquisition Unit can be characterised by the following requirements: Time-of-flight measurement with high time resolution (125 ns), sorting the time-of-flight in up to 4096 time channels (channel width ≥ 1 μs), selection of different time channel widths for peak and background, on-line time-of-flight correction for neutron flight paths of different lengths, sorting the detector position information in up to 4096 position channels, accumulation of two-dimensional spectra in a 32 Mbyte RAM memory (4 K time channels*4 K position channels*16 bits). Because of the stringent timing requirements the functional units of the DAU are hardware controlled via tables. The DAU is part of a process control system which has access to the functional units via the VMEbus in order to initialise, to load tables and control information, and to read status information and spectra. (orig.) With 18 figs

  2. Acceleration for 2D time-domain elastic full waveform inversion using a single GPU card

    Science.gov (United States)

    Jiang, Jinpeng; Zhu, Peimin

    2018-05-01

    Full waveform inversion (FWI) is a challenging procedure due to the high computational cost related to the modeling, especially for the elastic case. The graphics processing unit (GPU) has become a popular device for high-performance computing (HPC). To reduce the long computation time, we design and implement a GPU-based 2D elastic FWI (EFWI) in the time domain using a single GPU card. We parallelize the forward modeling and gradient calculations using the CUDA programming language. To overcome the limitation of the relatively small global memory on the GPU, the boundary saving strategy is exploited to reconstruct the forward wavefield. Moreover, the L-BFGS optimization method used in the inversion improves the convergence of the misfit function. A multiscale inversion strategy is performed in the workflow to obtain accurate inversion results. In our tests, the GPU-based implementations using a single GPU device achieve >15 times speedup in forward modeling, and about 12 times speedup in gradient calculation, compared with the eight-core CPU implementations optimized by OpenMP. The test results from the GPU implementations are verified to have sufficient accuracy by comparison with the results obtained from the CPU implementations.
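
    The structure of the inversion loop described above (least-squares data misfit minimized with L-BFGS) can be sketched with SciPy. The linear "forward model", the finite-difference gradient, and all sizes below are stand-ins chosen so the example runs in seconds; a real EFWI code uses elastic wave modeling and the adjoint-state gradient instead.

        import numpy as np
        from scipy.optimize import minimize

        def fwi_like_inversion(d_obs, forward, m0):
            """Minimize 0.5*||F(m) - d_obs||^2 with L-BFGS-B, mirroring the
            optimization loop described above (forward operator is a stand-in)."""
            def misfit(m):
                r = forward(m) - d_obs
                return 0.5 * float(r @ r)

            def gradient(m, eps=1e-6):
                # Finite-difference gradient; real FWI uses the adjoint state
                g = np.empty_like(m)
                f0 = misfit(m)
                for i in range(m.size):
                    mp = m.copy(); mp[i] += eps
                    g[i] = (misfit(mp) - f0) / eps
                return g

            res = minimize(misfit, m0, jac=gradient, method="L-BFGS-B",
                           options={"maxiter": 50})
            return res.x

        # Tiny synthetic example with a linear "forward model" A @ m
        rng = np.random.default_rng(2)
        A = rng.normal(size=(40, 10))
        m_true = rng.normal(size=10)
        m_est = fwi_like_inversion(A @ m_true, lambda m: A @ m, np.zeros(10))
        print(np.linalg.norm(m_est - m_true))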

  3. Monte Carlo methods for neutron transport on graphics processing units using Cuda - 015

    International Nuclear Information System (INIS)

    Nelson, A.G.; Ivanov, K.N.

    2010-01-01

    This work examined the feasibility of utilizing Graphics Processing Units (GPUs) to accelerate Monte Carlo neutron transport simulations. First, a clean-sheet MC code was written in C++ for an x86 CPU and later ported to run on GPUs using NVIDIA's CUDA programming language. After further optimization, the GPU ran 21 times faster than the CPU code when using single-precision floating point math. This can be further increased with no additional effort if accuracy is sacrificed for speed: using a compiler flag, the speedup was increased to 22x. Further, if double-precision floating point math is desired for neutron tracking through the geometry, a speedup of 11x was obtained. The GPUs have proven to be useful in this study, but the current generation does have limitations: the maximum memory currently available on a single GPU is only 4 GB; the GPU RAM does not provide error-checking and correction; and the optimization required for large speedups can lead to confusing code. (authors)

  4. Assessment of full-time faculty preceptors by colleges and schools of pharmacy in the United States and Puerto Rico.

    Science.gov (United States)

    Kirschenbaum, Harold L; Zerilli, Tina

    2012-10-12

    To identify the manner in which colleges and schools of pharmacy in the United States and Puerto Rico assess full-time faculty preceptors. Directors of pharmacy practice (or equivalent title) were invited to complete an online, self-administered questionnaire. Seventy of the 75 respondents (93.3%) confirmed that their college or school assessed full-time pharmacy faculty members based on activities related to precepting students at a practice site. The most commonly reported assessment components were summative student evaluations (98.5%), type of professional service provided (92.3%), scholarly accomplishments (86.2%), and community service (72.3%). Approximately 42% of respondents indicated that a letter of evaluation provided by a site-based supervisor was included in their assessment process. Some colleges and schools also conducted onsite assessment of faculty members. Most colleges and schools of pharmacy assess full-time faculty-member preceptors via summative student assessments, although other strategies are used. Given the important role of preceptors in ensuring students are prepared for pharmacy practice, colleges and schools of pharmacy should review their assessment strategies for full-time faculty preceptors, keeping in mind the methodologies used by other institutions.

  5. Designing and evaluating an automated system for real-time medication administration error detection in a neonatal intensive care unit.

    Science.gov (United States)

    Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S

    2018-05-01

    Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10 104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce the duration of patient exposure to potential harm following MAE events from 256 min to 35 min.

  6. Real-time acquisition and display of flow contrast using speckle variance optical coherence tomography in a graphics processing unit.

    Science.gov (United States)

    Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V

    2014-02-01

    In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
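
    The speckle variance contrast described above amounts to a per-pixel variance across N repeated B-scans acquired at the same location. The NumPy sketch below is a minimal CPU illustration of that computation (the frame count and image size are arbitrary); the paper's contribution is performing it, together with display, on the GPU at acquisition rate.

        import numpy as np

        def speckle_variance(bscans):
            """Speckle-variance flow contrast: per-pixel variance of OCT
            intensity across N repeated B-scans of the same location.

            bscans : array of shape (N, depth, width)
            returns: (depth, width) variance image; vasculature appears bright
                     because moving blood decorrelates the speckle between frames."""
            bscans = np.asarray(bscans, dtype=np.float32)
            return bscans.var(axis=0)

        # Example: 8 repeated frames of a 512 x 1024 B-scan
        frames = np.random.rand(8, 512, 1024).astype(np.float32)
        sv_image = speckle_variance(frames)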

  7. Training for the future NHS: training junior doctors in the United Kingdom within the 48-hour European working time directive.

    Science.gov (United States)

    Datta, Shreelatta T; Davies, Sally J

    2014-01-01

    Since August 2009, the National Health Service of the United Kingdom has faced the challenge of delivering training for junior doctors within a 48-hour working week, as stipulated by the European Working Time Directive and legislated in the UK by the Working Time Regulations 1998. Since that time, widespread concern has been expressed about the impact of restricted duty hours on the quality of postgraduate medical training in the UK, particularly in the "craft" specialties--that is, those disciplines in which trainees develop practical skills that are best learned through direct experience with patients. At the same time, specialist training in the UK has experienced considerable change since 2007 with the introduction of competency-based specialty curricula, workplace-based assessment, and the annual review of competency progression. The challenges presented by the reduction of duty hours include increased pressure on doctors-in-training to provide service during evening and overnight hours, reduced interaction with supervisors, and reduced opportunities for learning. This paper explores these challenges and proposes potential responses with respect to the reorganization of training and service provision.

  8. Testing the Feasibility of Skype and FaceTime Updates With Parents in the Neonatal Intensive Care Unit.

    Science.gov (United States)

    Epstein, Elizabeth Gingell; Sherman, Jessica; Blackman, Amy; Sinkin, Robert A

    2015-07-01

    Effective provider-parent relationships are essential during critical illness when treatment decisions are complex, the environment is crowded and unfamiliar, and outcomes are uncertain. To evaluate the feasibility of daily Skype or FaceTime updates with parents of patients in the neonatal intensive care unit (NICU) and to assess the intervention's potential for improving parent-provider relationships. A pre/post mixed-methods approach was used. NICU parent participants received daily Skype or FaceTime updates for 5 days and completed demographic and feasibility surveys. Parents also completed Penticuff's Parents' Understanding survey before and after the intervention. Nurses and physicians completed feasibility surveys after each update. Twenty-six parents were enrolled and 15 completed the study. More than 90% of providers and parents perceived the intervention to be reliable and easy to use, and about 80% of parents and providers rated video and audio quality as either excellent or good. Frozen screens and missed updates due to scheduling problems were challenges. Two of the 4 subscores on the Parents' Understanding survey improved significantly. Qualitative data favor the intervention as meaningful for parents. Real-time videoconferencing via Skype or FaceTime is feasible for providing updates for parents when they cannot be present in the NICU and can be used to include parents in bedside rounds. Videoconferencing updates may improve relationships between parents and the health care team. ©2015 American Association of Critical-Care Nurses.

  9. Massively Parallel Signal Processing using the Graphics Processing Unit for Real-Time Brain-Computer Interface Feature Extraction.

    Science.gov (United States)

    Wilson, J Adam; Williams, Justin C

    2009-01-01

    The clock speeds of modern computer processors have nearly plateaued in the past 5 years. Consequently, neural prosthetic systems that rely on processing large quantities of data in a short period of time face a bottleneck, in that it may not be possible to process all of the data recorded from an electrode array with high channel counts and bandwidth, such as electrocorticographic grids or other implantable systems. Therefore, in this study a method of using the processing capabilities of a graphics card [graphics processing unit (GPU)] was developed for real-time neural signal processing of a brain-computer interface (BCI). The NVIDIA CUDA system was used to offload processing to the GPU, which is capable of running many operations in parallel, potentially greatly increasing the speed of existing algorithms. The BCI system records many channels of data, which are processed and translated into a control signal, such as the movement of a computer cursor. This signal processing chain involves computing a matrix-matrix multiplication (i.e., a spatial filter), followed by calculating the power spectral density on every channel using an auto-regressive method, and finally classifying appropriate features for control. In this study, the first two computationally intensive steps were implemented on the GPU, and the speed was compared to both the current implementation and a central processing unit-based implementation that uses multi-threading. Significant performance gains were obtained with GPU processing: the current implementation processed 1000 channels of 250 ms in 933 ms, while the new GPU method took only 27 ms, an improvement of nearly 35 times.
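
    A CPU reference for the two steps the paper offloads to the GPU, spatial filtering as a matrix-matrix multiply followed by per-channel spectral power, can be sketched as below. Welch's method is substituted here for brevity in place of the autoregressive estimator used in the paper, and the channel count, sampling rate, band, and common-average-reference filter are illustrative assumptions.

        import numpy as np
        from scipy.signal import welch

        def extract_features(raw, spatial_filter, fs=1200, band=(70, 110)):
            """raw            : (channels, samples) block of recent data
               spatial_filter : (out_channels, channels) matrix, e.g. a
                                common-average reference."""
            filtered = spatial_filter @ raw                            # step 1: spatial filter
            freqs, psd = welch(filtered, fs=fs, nperseg=256, axis=-1)  # step 2: PSD
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return psd[:, mask].mean(axis=1)                           # band power per channel

        # Example: common-average reference over 64 channels, 250 ms at 1200 Hz
        n_ch, n_s = 64, 300
        car = np.eye(n_ch) - np.ones((n_ch, n_ch)) / n_ch
        features = extract_features(np.random.randn(n_ch, n_s), car)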

  10. Fast-GPU-PCC: A GPU-Based Technique to Compute Pairwise Pearson's Correlation Coefficients for Time Series Data-fMRI Study.

    Science.gov (United States)

    Eslami, Taban; Saeed, Fahad

    2018-04-20

    Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique, which has been regularly used for studying the brain’s functional activities in the past few years. A widely used measure for capturing functional associations in the brain is Pearson’s correlation coefficient. Pearson’s correlation is widely used for constructing functional networks and studying dynamic functional connectivity of the brain. These are useful measures for understanding the effects of brain disorders on connectivities among brain regions. The fMRI scanners produce a huge number of voxels, and using traditional central processing unit (CPU)-based techniques for computing pairwise correlations is very time consuming, especially when a large number of subjects are being studied. In this paper, we propose a graphics processing unit (GPU)-based algorithm called Fast-GPU-PCC for computing pairwise Pearson’s correlation coefficients. Based on the symmetric property of Pearson’s correlation, this approach returns the N(N−1)/2 correlation coefficients located in the strictly upper triangular part of the correlation matrix. Storing the correlations in a one-dimensional array in the order proposed in this paper is useful for further processing. Our experiments on real and synthetic fMRI data for different numbers of voxels and varying lengths of time series show that the proposed approach outperformed state-of-the-art GPU-based techniques as well as the sequential CPU-based versions. We show that Fast-GPU-PCC runs 62 times faster than the CPU-based version and about 2 to 3 times faster than two other state-of-the-art GPU-based methods.
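
    The computation itself can be illustrated in NumPy: mean-center and normalize each voxel time series, form the correlation matrix with one matrix multiply (the step a GPU would hand to cuBLAS), and keep only the strictly upper triangle as a 1-D array. This is a small CPU sketch of the general approach, not the Fast-GPU-PCC implementation, and the array sizes are arbitrary.

        import numpy as np

        def pairwise_pearson_upper(X):
            """Pairwise Pearson correlations of N voxel time series.
            X : (N, T) array, one time series per row
            returns: 1-D array of the N*(N-1)/2 coefficients from the strictly
            upper triangle, stored row by row."""
            Xc = X - X.mean(axis=1, keepdims=True)
            Xn = Xc / np.linalg.norm(Xc, axis=1, keepdims=True)
            corr = Xn @ Xn.T                      # one big matrix multiply
            iu = np.triu_indices(X.shape[0], k=1)
            return corr[iu]

        # Example: 500 voxels, 200 time points -> 124750 coefficients
        vals = pairwise_pearson_upper(np.random.randn(500, 200))
        print(vals.shape)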

  11. School Start Times for Middle School and High School Students - United States, 2011-12 School Year.

    Science.gov (United States)

    Wheaton, Anne G; Ferro, Gabrielle A; Croft, Janet B

    2015-08-07

    Adolescents who do not get enough sleep are more likely to be overweight; not engage in daily physical activity; suffer from depressive symptoms; engage in unhealthy risk behaviors such as drinking, smoking tobacco, and using illicit drugs; and perform poorly in school. However, insufficient sleep is common among high school students, with less than one third of U.S. high school students sleeping at least 8 hours on school nights. In a policy statement published in 2014, the American Academy of Pediatrics (AAP) urged middle and high schools to modify start times as a means to enable students to get adequate sleep and improve their health, safety, academic performance, and quality of life. AAP recommended that "middle and high schools should aim for a starting time of no earlier than 8:30 a.m.". To assess state-specific distributions of public middle and high school start times and establish a pre-recommendation baseline, CDC and the U.S. Department of Education analyzed data from the 2011-12 Schools and Staffing Survey (SASS). Among an estimated 39,700 public middle, high, and combined schools* in the United States, the average start time was 8:03 a.m. Overall, only 17.7% of these public schools started school at 8:30 a.m. or later. The percentage of schools with 8:30 a.m. or later start times varied greatly by state, ranging from 0% in Hawaii, Mississippi, and Wyoming to more than three quarters of schools in Alaska (76.8%) and North Dakota (78.5%). A school system start time policy of 8:30 a.m. or later provides teenage students the opportunity to achieve the 8.5-9.5 hours of sleep recommended by AAP and the 8-10 hours recommended by the National Sleep Foundation.

  12. Flocking-based Document Clustering on the Graphics Processing Unit

    Energy Technology Data Exchange (ETDEWEB)

    Cui, Xiaohui [ORNL; Potok, Thomas E [ORNL; Patton, Robert M [ORNL; ST Charles, Jesse Lee [ORNL

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity, O(n^2). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, is highly parallel and has found increased performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA, we developed a document flocking implementation to be run on the NVIDIA GeForce 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3000 documents. The results of these tests were very significant. Performance gains of the GPU over the CPU implementation ranged from three to nearly five times. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.

  13. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Rath, N., E-mail: Nikolaus@rath.org; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q. [Department of Applied Physics and Applied Mathematics, Columbia University, 500 W 120th St, New York, New York 10027 (United States); Kato, S. [Department of Information Engineering, Nagoya University, Nagoya (Japan)

    2014-04-15

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules.

  14. Fast, multi-channel real-time processing of signals with microsecond latency using graphics processing units

    International Nuclear Information System (INIS)

    Rath, N.; Levesque, J. P.; Mauel, M. E.; Navratil, G. A.; Peng, Q.; Kato, S.

    2014-01-01

    Fast, digital signal processing (DSP) has many applications. Typical hardware options for performing DSP are field-programmable gate arrays (FPGAs), application-specific integrated DSP chips, or general purpose personal computer systems. This paper presents a novel DSP platform that has been developed for feedback control on the HBT-EP tokamak device. The system runs all signal processing exclusively on a Graphics Processing Unit (GPU) to achieve real-time performance with latencies below 8 μs. Signals are transferred into and out of the GPU using PCI Express peer-to-peer direct-memory-access transfers without involvement of the central processing unit or host memory. Tests were performed on the feedback control system of the HBT-EP tokamak using forty 16-bit floating point inputs and outputs each and a sampling rate of up to 250 kHz. Signals were digitized by a D-TACQ ACQ196 module, processing done on an NVIDIA GTX 580 GPU programmed in CUDA, and analog output was generated by D-TACQ AO32CPCI modules

  15. Volumization of the Brow at the Time of Blepharoplasty: Treating the Eyebrow Fat Pad as an Independent Unit.

    Science.gov (United States)

    Vrcek, Ivan; Chou, Eva; Somogyi, Marie; Shore, John W

    Loss of volume in the sub-brow fat pad with associated descent of the eyebrow is a common anatomical finding resulting in both functional and aesthetic consequences. A variety of techniques have been described to address brow position at the time of blepharoplasty. To our knowledge, none of these techniques treats the sub-brow fat pad as an isolated unit. Doing so enables the surgeon to stabilize and volumize the brow without resultant tension on the blepharoplasty wound. The authors describe a technique for addressing volume loss in the eyebrow with associated brow descent that treats the sub-brow fat pad as an isolated unit. A retrospective review of all patients undergoing brow ptosis repair by a single surgeon (J.W.S.) over an 11-month period was performed. Eighteen patients and 33 brows underwent the technique described. Patients were followed for an average of 11 weeks (range: 4 weeks to 20 weeks). All patients preoperatively displayed both visually significant dermatochalasis and brow descent below the orbital rim. Evaluation of pre- and postoperative photos demonstrates successful volumization of the brow with skin redraping, without focal dimpling or undue tension on the eyelid wound. Performing a dissection that allows the sub-brow fat pad to be elevated in isolation from the overlying orbicularis and underlying periosteum allows for volumization and stabilization of the brow without compromising closure. This technique is a safe and effective means of volumizing the brow and treating secondary brow descent.

  16. Conceptual design of the X-IFU Instrument Control Unit on board the ESA Athena mission

    Science.gov (United States)

    Corcione, L.; Ligori, S.; Capobianco, V.; Bonino, D.; Valenziano, L.; Guizzo, G. P.

    2016-07-01

    Athena is one of the L-class missions selected in the ESA Cosmic Vision 2015-2025 program for the science theme of the Hot and Energetic Universe. The Athena model payload includes the X-ray Integral Field Unit (X-IFU), an advanced actively shielded X-ray microcalorimeter spectrometer for high spectral resolution imaging, utilizing cooled Transition Edge Sensors. This paper describes the preliminary architecture of the Instrument Control Unit (ICU), which is aimed at operating all of the X-IFU's subsystems, as well as at implementing the main functional interfaces of the instrument with the S/C control unit. The ICU functions include TC/TM management with the S/C, science data formatting and transmission to the S/C Mass Memory, housekeeping data handling, time distribution for synchronous operations, and the management of the X-IFU components (i.e. CryoCoolers, Filter Wheel, Detector Readout Electronics Event Processor, Power Distribution Unit). The baseline implementation of the ICU functions for the phase-A study foresees the use of standard, space-qualified components from the heritage of past and current space missions (e.g. Gaia, Euclid), currently encompassing a Leon2/Leon3-based CPU board and standard space-qualified interfaces for the exchange of commands and data between the ICU and the X-IFU subsystems. An alternative architecture, arranged around a more powerful PowerPC-based CPU, is also briefly presented, with the aim of endowing the system with enhanced hardware resources and processing power for handling control and science data processing tasks not yet defined at this stage of the mission study.

  17. The Scales of Time, Length, Mass, Energy, and Other Fundamental Physical Quantities in the Atomic World and the Use of Atomic Units in Quantum Mechanical Calculations

    Science.gov (United States)

    Teo, Boon K.; Li, Wai-Kee

    2011-01-01

    This article is divided into two parts. In the first part, the atomic unit (au) system is introduced and the scales of time, space (length), and speed, as well as those of mass and energy, in the atomic world are discussed. In the second part, the utility of atomic units in quantum mechanical and spectroscopic calculations is illustrated with…
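
    For a concrete sense of the scales discussed in the first part, the short sketch below prints the SI value of one atomic unit of several quantities, pulling the CODATA values from SciPy rather than typing them by hand. The use of scipy.constants and the Bohr-model period comment are illustrative additions, not part of the article.

        from scipy.constants import value

        # SI values of one atomic unit of each quantity (CODATA via SciPy)
        au = {
            "length (Bohr radius)": value("atomic unit of length"),   # ~5.29e-11 m
            "mass (electron mass)": value("atomic unit of mass"),     # ~9.11e-31 kg
            "time": value("atomic unit of time"),                      # ~2.42e-17 s
            "velocity": value("atomic unit of velocity"),              # ~2.19e6 m/s
            "energy (hartree)": value("atomic unit of energy"),        # ~4.36e-18 J
        }
        for name, si in au.items():
            print(f"1 au of {name:22s} = {si:.6e} (SI)")

        # e.g. the Bohr-model orbital period of ground-state hydrogen is
        # 2*pi atomic units of time, i.e. roughly 1.5e-16 s.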

  18. Summary of the Second Workshop on Liquid Argon Time Projection Chamber Research and Development in the United States

    CERN Document Server

    Acciarri, R; Artrip, D; Baller, B; Bromberg, C; Cavanna, F; Carls, B; Chen, H; Deptuch, G; Epprecht, L; Dharmapalan, R; Foreman, W; Hahn, A; Johnson, M; Jones, B J P; Junk, T; Lang, K; Lockwitz, S; Marchionni, A; Mauger, C; Montanari, C; Mufson, S; Nessi, M; Back, H Olling; Petrillo, G; Pordes, S; Raaf, J; Rebel, B; Sinins, G; Soderberg, M; Spooner, N J C; Stancari, M; Strauss, T; Terao, K; Thorn, C; Tope, T; Toups, M; Urheim, J; Van de Water, R; Wang, H; Wasserman, R; Weber, M; Whittington, D; Yang, T

    2015-01-01

    The second workshop to discuss the development of liquid argon time projection chambers (LArTPCs) in the United States was held at Fermilab on July 8-9, 2014. The workshop was organized under the auspices of the Coordinating Panel for Advanced Detectors, a body that was initiated by the American Physical Society Division of Particles and Fields. All presentations at the workshop were made in six topical plenary sessions: $i)$ Argon Purity and Cryogenics, $ii)$ TPC and High Voltage, $iii)$ Electronics, Data Acquisition and Triggering, $iv)$ Scintillation Light Detection, $v)$ Calibration and Test Beams, and $vi)$ Software. This document summarizes the current efforts in each of these areas. It primarily focuses on the work in the US, but also highlights work done elsewhere in the world.

  19. Park availability and physical activity, TV time, and overweight and obesity among women: Findings from Australia and the United States.

    Science.gov (United States)

    Veitch, Jenny; Abbott, Gavin; Kaczynski, Andrew T; Wilhelm Stanis, Sonja A; Besenyi, Gina M; Lamb, Karen E

    2016-03-01

    This study examined relationships between three measures of park availability and self-reported physical activity (PA), television viewing (TV) time, and overweight/obesity among women from Australia and the United States. Having more parks near home was the only measure of park availability associated with an outcome. Australian women (n=1848) with more parks near home had higher odds of meeting PA recommendations and lower odds of being overweight/obese. In the US sample (n=489), women with more parks near home had lower odds of watching >4h TV per day. A greater number of parks near home was associated with lower BMI among both Australian and US women. Evidence across diverse contexts provides support to improve park availability to promote PA and other health behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Summary of the Second Workshop on Liquid Argon Time Projection Chamber Research and Development in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); et al.

    2015-04-21

    The second workshop to discuss the development of liquid argon time projection chambers (LArTPCs) in the United States was held at Fermilab on July 8-9, 2014. The workshop was organized under the auspices of the Coordinating Panel for Advanced Detectors, a body that was initiated by the American Physical Society Division of Particles and Fields. All presentations at the workshop were made in six topical plenary sessions: i) Argon Purity and Cryogenics, ii) TPC and High Voltage, iii) Electronics, Data Acquisition and Triggering, iv) Scintillation Light Detection, v) Calibration and Test Beams, and vi) Software. This document summarizes the current efforts in each of these areas. It primarily focuses on the work in the US, but also highlights work done elsewhere in the world.

  1. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    Science.gov (United States)

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
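
    The following minimal sketch (not the authors' code) illustrates the angiography step described above: the squared difference of two sequential structural frames, with bulk-tissue-motion correction chosen as the axial/lateral pixel shift that minimizes the summed pixel values of the difference image, here over a 5 x 5 shift search (25 candidate angiograms). All names and parameter values are illustrative.

import numpy as np

def btm_corrected_angiogram(frame_a, frame_b, max_shift=2):
    # Squared difference of two OCT frames; the bulk-tissue-motion (BTM)
    # corrected image is the shifted difference with the smallest pixel sum.
    best_img, best_sum = None, np.inf
    for dz in range(-max_shift, max_shift + 1):          # axial shifts
        for dx in range(-max_shift, max_shift + 1):      # lateral shifts
            shifted = np.roll(np.roll(frame_b, dz, axis=0), dx, axis=1)
            diff = (frame_a - shifted) ** 2
            if diff.sum() < best_sum:
                best_img, best_sum = diff, diff.sum()
    return best_img

# toy frames: the second frame is an axially shifted copy of the first plus noise
a = np.random.rand(64, 64)
b = np.roll(a, 1, axis=0) + 0.01 * np.random.rand(64, 64)
angio = btm_corrected_angiogram(a, b)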

  2. Effect of nocturnal sound reduction on the incidence of delirium in intensive care unit patients: An interrupted time series analysis.

    Science.gov (United States)

    van de Pol, Ineke; van Iterson, Mat; Maaskant, Jolanda

    2017-08-01

    Delirium in critically-ill patients is a common multifactorial disorder that is associated with various negative outcomes. It is assumed that sleep disturbances can result in an increased risk of delirium. This study hypothesized that implementing a protocol that reduces overall nocturnal sound levels improves quality of sleep and reduces the incidence of delirium in Intensive Care Unit (ICU) patients. This interrupted time series study was performed in an adult mixed medical and surgical 24-bed ICU. A pre-intervention group of 211 patients was compared with a post-intervention group of 210 patients after implementation of a nocturnal sound-reduction protocol. Primary outcome measures were incidence of delirium, measured by the Intensive Care Delirium Screening Checklist (ICDSC), and quality of sleep, measured by the Richards-Campbell Sleep Questionnaire (RCSQ). Secondary outcome measures were use of sleep-inducing medication, delirium treatment medication, and patient-perceived nocturnal noise. A significant difference in slope in the percentage of delirium was observed between the pre- and post-intervention periods (-3.7% per time period, p=0.02). Quality of sleep was unaffected (0.3 per time period, p=0.85). The post-intervention group also used significantly less sleep-inducing medication. In summary, the incidence of delirium decreased after implementation of the nocturnal sound-reduction protocol, whereas reported sleep quality did not improve. Copyright © 2017. Published by Elsevier Ltd.

  3. Time-dependent conversion of a methacrylate-based sealer polymerized with different light-curing units.

    Science.gov (United States)

    Beriat, Nilufer C; Ertan, Atilla; Cehreli, Zafer C; Gulsahi, Kamran

    2009-01-01

    The purpose of this study was to investigate the degree of conversion of a methacrylate-based sealer (Epiphany; Pentron Clinical Technologies, Wallingford, CT) with regard to the method of photoactivation, distance from the light-curing unit (LCU), and post-curing time. Freshly mixed Epiphany sealer was dispensed into half-pipe-shaped silicone moulds (n = 48), after which the specimens were photoactivated with one of the following LCUs from the coronal aspect: (1) quartz tungsten halogen/40 seconds and (2) light-emitting diode/20 seconds. In each specimen, the degree of conversion was measured at three different locations (coronal, middle, and apical) using Fourier transform infrared spectroscopy before and after photoactivation. The amount of conversion was approximately 50% after photoactivation and improved by approximately 10% after 15 days. Conversion of Epiphany was not affected by the type of LCU (p > 0.001) or the distance from the LCU (p > 0.001) but showed a significant increase over time (p < 0.001). These results indicate incomplete polymerization of Epiphany, despite a post-curing time of as long as 2 weeks in vitro.

  4. Effect of just-in-time simulation training on tracheal intubation procedure safety in the pediatric intensive care unit.

    Science.gov (United States)

    Nishisaki, Akira; Donoghue, Aaron J; Colborn, Shawn; Watson, Christine; Meyer, Andrew; Brown, Calvin A; Helfaer, Mark A; Walls, Ron M; Nadkarni, Vinay M

    2010-07-01

    Tracheal intubation-associated events (TIAEs) are common (20%) and life threatening (4%) in pediatric intensive care units. Physician trainees are required to learn tracheal intubation during intensive care unit rotations. The authors hypothesized that "just-in-time" simulation-based intubation refresher training would improve resident participation and success, and decrease TIAEs. For 14 months, one of two on-call residents, nurses, and respiratory therapists received 20-min multidisciplinary simulation-based tracheal intubation training and 10-min resident skill refresher training at the beginning of their on-call period in addition to routine residency education. The rate of first attempt and overall success between refresher-trained and concurrent non-refresher-trained residents (controls) during the intervention phase was compared. The incidence of TIAEs between preintervention and intervention phase was also compared. Four hundred one consecutive primary orotracheal intubations were evaluated: 220 preintervention and 181 intervention. During the intervention phase, neither first-attempt success nor overall success rate differed between refresher-trained residents versus concurrent non-refresher-trained residents: 20 of 40 (50%) versus 15 of 24 (62.5%), P = 0.44 and 23 of 40 (57.5%) versus 18 of 24 (75.0%), P = 0.19, respectively. The residents' first attempt and overall success rates did not differ between preintervention and intervention phases. The incidence of TIAE during preintervention and intervention phases was similar: 22.0% preintervention versus 19.9% intervention, P = 0.62, whereas resident participation increased from 20.9% preintervention to 35.4% intervention, P = 0.002. Resident participation continued to be associated with TIAE even after adjusting for the phase and difficult airway condition: odds ratio 2.22 (95% CI 1.28-3.87, P = 0.005). Brief just-in-time multidisciplinary simulation-based intubation refresher training did not improve resident first-attempt or overall tracheal intubation success.

  5. First Update of the Criteria for Certification of Chest Pain Units in Germany: Facelift or New Model?

    Science.gov (United States)

    Breuckmann, Frank; Rassaf, Tienush

    2016-03-01

    In an effort to provide a systematic and specific standard-of-care for patients with acute chest pain, the German Cardiac Society introduced criteria for certification of specialized chest pain units (CPUs) in 2008, which have been replaced by a recent update published in 2015. We reviewed the development of CPU establishment in Germany during the past 7 years and compared and commented on the current update of the certification criteria. As of October 2015, 228 CPUs in Germany have been successfully certified by the German Cardiac Society; 300 CPUs are needed for full coverage and to close gaps in rural regions. The current changes to the criteria mainly affect guideline-adherent adaptations of diagnostic work-ups, therapeutic strategies, risk stratification, in-hospital timing and education, and quality measures, whereas the overall structure remained unchanged. Benchmarking by participation in the German CPU registry is encouraged. Even though the history is short, the concept of certified CPUs in Germany is accepted and successful, as underlined by its recent implementation in national and international guidelines. First registry data demonstrated a high standard of quality-of-care. The current update provides rational adaptations to new guidelines and developments without raising the requirements for successful certification. A periodic release of fast-track updates with shorter time frames and an increase of minimum requirements should be considered.

  6. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    Science.gov (United States)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  7. Timing of low tidal volume ventilation and intensive care unit mortality in acute respiratory distress syndrome. A prospective cohort study.

    Science.gov (United States)

    Needham, Dale M; Yang, Ting; Dinglas, Victor D; Mendez-Tellez, Pedro A; Shanholtz, Carl; Sevransky, Jonathan E; Brower, Roy G; Pronovost, Peter J; Colantuoni, Elizabeth

    2015-01-15

    Reducing tidal volume decreases mortality in acute respiratory distress syndrome (ARDS). However, the effect of the timing of low tidal volume ventilation is not well understood. To evaluate the association of intensive care unit (ICU) mortality with initial tidal volume and with tidal volume change over time. Multivariable, time-varying Cox regression analysis of a multisite, prospective study of 482 patients with ARDS with 11,558 twice-daily tidal volume assessments (evaluated in milliliter per kilogram of predicted body weight [PBW]) and daily assessment of other mortality predictors. An increase of 1 ml/kg PBW in initial tidal volume was associated with a 23% increase in ICU mortality risk (adjusted hazard ratio, 1.23; 95% confidence interval [CI], 1.06-1.44; P = 0.008). Moreover, a 1 ml/kg PBW increase in subsequent tidal volumes compared with the initial tidal volume was associated with a 15% increase in mortality risk (adjusted hazard ratio, 1.15; 95% CI, 1.02-1.29; P = 0.019). Compared with a prototypical patient receiving 8 days with a tidal volume of 6 ml/kg PBW, the absolute increase in ICU mortality (95% CI) of receiving 10 and 8 ml/kg PBW, respectively, across all 8 days was 7.2% (3.0-13.0%) and 2.7% (1.2-4.6%). In scenarios with variation in tidal volume over the 8-day period, mortality was higher when a larger volume was used earlier. Higher tidal volumes shortly after ARDS onset were associated with a greater risk of ICU mortality compared with subsequent tidal volumes. Timely recognition of ARDS and adherence to low tidal volume ventilation is important for reducing mortality. Clinical trial registered with www.clinicaltrials.gov (NCT 00300248).

  8. Extending total parenteral nutrition hang time in the neonatal intensive care unit: is it safe and cost effective?

    Science.gov (United States)

    Balegar V, Kiran Kumar; Azeem, Mohammad Irfan; Spence, Kaye; Badawi, Nadia

    2013-01-01

    To investigate the effects of prolonging hang time of total parenteral nutrition (TPN) fluid on central line-associated blood stream infection (CLABSI), TPN-related cost and nursing workload. A before-after observational study comparing the practice of hanging TPN bags for 48 h (6 February 2009-5 February 2010) versus 24 h (6 February 2008-5 February 2009) in a tertiary neonatal intensive care unit was conducted. The main outcome measures were CLABSI, TPN-related expenses and nursing workload. One hundred thirty-six infants received 24-h TPN bags and 124 received 48-h TPN bags. Median (inter-quartile range) gestation (37 weeks (33,39) vs. 36 weeks (33,39)), mean (±standard deviation) admission weight of 2442 g (±101) versus 2476 g (±104) and TPN duration (9.7 days (±12.7) vs. 9.9 days (±13.4)) were similar (P > 0.05) between the 24- and 48-h TPN groups. There was no significant increase in CLABSI with longer hang time (0.8 vs. 0.4 per 1000 line days in the 24-h vs. 48-h groups, respectively). Annual cost saving using 48-h TPN was AUD 97,603.00. By using 48-h TPN, 68.3% of nurses indicated that their workload decreased and 80.5% indicated that time spent changing TPN reduced. Extending TPN hang time from 24 to 48 h did not alter the CLABSI rate and was associated with a reduced TPN-related cost and perceived nursing workload. Larger randomised controlled trials are needed to more clearly delineate these effects. © 2012 The Authors. Journal of Paediatrics and Child Health © 2012 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  9. Utility of Ambulance Data for Real-Time Syndromic Surveillance: A Pilot in the West Midlands Region, United Kingdom.

    Science.gov (United States)

    Todkill, Dan; Loveridge, Paul; Elliot, Alex J; Morbey, Roger A; Edeghere, Obaghe; Rayment-Bishop, Tracy; Rayment-Bishop, Chris; Thornes, John E; Smith, Gillian

    2017-12-01

    Introduction The Public Health England (PHE; United Kingdom) Real-Time Syndromic Surveillance Team (ReSST) currently operates four national syndromic surveillance systems, including an emergency department system. A system based on ambulance data might provide an additional measure of the "severe" end of the clinical disease spectrum. This report describes the findings and lessons learned from the development and preliminary assessment of a pilot syndromic surveillance system using ambulance data from the West Midlands (WM) region in England. Hypothesis/Problem Is an Ambulance Data Syndromic Surveillance System (ADSSS) feasible and of utility in enhancing the existing suite of PHE syndromic surveillance systems? An ADSSS was designed, implemented, and a pilot conducted from September 1, 2015 through March 1, 2016. Surveillance cases were defined as calls to the West Midlands Ambulance Service (WMAS) regarding patients who were assigned any of 11 specified chief presenting complaints (CPCs) during the pilot period. The WMAS collected anonymized data on cases and transferred the dataset daily to ReSST; the dataset contained anonymized information on patients' demographics, partial postcode of patients' location, and CPC. The 11 CPCs covered a broad range of syndromes. The dataset was analyzed descriptively each week to determine trends and key epidemiological characteristics of patients, and an automated statistical algorithm was employed daily to detect higher than expected numbers of calls. A preliminary assessment was undertaken to assess the feasibility, utility (including quality of key indicators), and timeliness of the system for syndromic surveillance purposes. Lessons learned and challenges were identified and recorded during the design and implementation of the system. The pilot ADSSS collected 207,331 records of individual ambulance calls (daily mean=1,133; range=923-1,350). The ADSSS was found to be timely in detecting seasonal changes in patterns of respiratory illness.
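
    The exact aberration-detection algorithm used by the pilot is not described in this record; the sketch below only illustrates the general idea of a daily exceedance check against a historical baseline (mean plus two standard deviations). The function name, counts, and threshold are assumptions made for illustration.

import statistics

def call_count_exceeds_baseline(history, today_count, z=2.0):
    # Flag today's count for a chief presenting complaint if it lies more
    # than z standard deviations above the historical mean (toy rule only).
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    return today_count > mean + z * sd

daily_calls = [1105, 1190, 980, 1240, 1133, 1012, 1301]   # hypothetical daily totals
print(call_count_exceeds_baseline(daily_calls, 1450))     # True -> review the signal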

  10. Dependence of the mean time to failure of a hydraulic balancing machine unit on different factors for sectional pumps of the Alrosa JSC

    Science.gov (United States)

    Ovchinnikov, N. P.; Portnyagina, V. V.; Sobakina, M. P.

    2017-12-01

    This paper presents the factors that have the greatest impact on the mean time to failure of the hydraulic balancing unit of sectional pumps working in the underground kimberlite mines of the Alrosa JSC, the hydraulic balancing unit being the least reliable structural element in terms of error-free operation. In addition, a multifactor linear dependence of the mean time to failure of the hydraulic balancing unit is derived for units operating as part of multistage sectional pumps in the underground kimberlite mines of the Alrosa JSC. In prospect, this dependence can be used to predict the durability of the least reliable structural element of a sectional pump.
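
    The record states that a multifactor linear dependence of the mean time to failure was obtained, but does not list the factors or coefficients. The snippet below therefore only sketches how such a dependence can be fitted by ordinary least squares; the factor names and numbers are invented.

import numpy as np

# hypothetical operating factors: pump head (m), solids content (%), daily running hours
X = np.array([[90.0, 1.2, 20.0],
              [110.0, 2.0, 22.0],
              [130.0, 2.8, 24.0],
              [150.0, 3.5, 24.0]])
mttf_hours = np.array([4200.0, 3600.0, 3100.0, 2700.0])   # invented observations

# fit mttf = b0 + b1*head + b2*solids + b3*hours by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, mttf_hours, rcond=None)
predicted = A @ coef
print(coef, predicted)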

  11. A Real-Time Accurate Model and Its Predictive Fuzzy PID Controller for Pumped Storage Unit via Error Compensation

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2017-12-01

    Model simulation and control of pumped storage units (PSUs) are essential to improving the dynamic quality of a power station. Only if the PSU models reflect the actual transient process can a novel control method be properly applied in engineering. The contributions of this paper are that (1) a real-time accurate equivalent circuit model (RAECM) of a PSU via error compensation is proposed to reconcile the conflict between real-time online simulation and accuracy under various operating conditions, and (2) an adaptive predictive fuzzy PID controller (APFPID) based on RAECM is put forward to overcome the instability of conventional control under no-load conditions with low water head. Specifically, all hydraulic factors in the pipeline system are fully considered based on the equivalent lumped-circuit theorem. The pretreatment, which consists of an improved Suter-transformation and a BP neural network, and an online simulation method featuring two iterative loops are jointly proposed to improve the solving accuracy of the pump-turbine. Moreover, modified formulas for compensating error are derived with variable-spatial discretization to further improve the accuracy of the real-time simulation. The implicit RadauIIA method is verified to be more suitable for PSUGS owing to its wider stability domain. Then, the APFPID controller is constructed based on the integration of fuzzy PID and model predictive control. Rolling prediction by RAECM is proposed to replace rolling optimization, with its computational speed guaranteed. Finally, simulation and on-site measurements are compared to demonstrate the trustworthiness of RAECM under various running conditions. Comparative experiments also indicate that the APFPID controller outperforms other controllers in most cases, especially under low-water-head conditions. Satisfying results of RAECM have been achieved in engineering and it provides a novel model reference for PSUGS.

  12. Wound Botulism in Injection Drug Users: Time to Antitoxin Correlates with Intensive Care Unit Length of Stay

    Directory of Open Access Journals (Sweden)

    Offerman, Steven R

    2009-11-01

    Objectives: We sought to identify factors associated with need for mechanical ventilation (MV), length of intensive care unit (ICU) stay, length of hospital stay, and poor outcome in injection drug users (IDUs) with wound botulism (WB). Methods: This is a retrospective review of WB patients admitted between 1991-2005. IDUs were included if they had symptoms of WB and diagnostic confirmation. Primary outcome variables were the need for MV, length of ICU stay, length of hospital stay, hospital-related complications, and death. Results: Twenty-nine patients met inclusion criteria. Twenty-two (76%) admitted to heroin use only and seven (24%) admitted to heroin and methamphetamine use. Chief complaints on initial presentation included visual changes, 13 (45%); weakness, nine (31%); and difficulty swallowing, seven (24%). Skin wounds were documented in 22 (76%). Twenty-one (72%) patients underwent mechanical ventilation (MV). Antitoxin (AT) was administered to 26 (90%) patients but only two received antitoxin in the emergency department (ED). The time from ED presentation to AT administration was associated with increased length of ICU stay (regression coefficient = 2.5; 95% CI 0.45, 4.5). The time from ED presentation to wound drainage was also associated with increased length of ICU stay (regression coefficient = 13.7; 95% CI = 2.3, 25.2). There was no relationship between time to antibiotic administration and length of ICU stay. Conclusion: MV and prolonged ICU stays are common in patients identified with WB. Early AT administration and wound drainage are recommended as these measures may decrease ICU length of stay. [West J Emerg Med. 2009;10(4):251-256.]

  13. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    Science.gov (United States)

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project ( www.bioimagesuite.org ). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  14. Automatic Optimization for Large-Scale Real-Time Coastal Water Simulation

    Directory of Open Access Journals (Sweden)

    Shunli Wang

    2016-01-01

    We introduce an automatic optimization approach for the simulation of large-scale coastal water. To solve the singularity problem of water waves obtained with the traditional model, a hybrid deep-shallow-water model is established by using an automatic coupling algorithm. It can handle arbitrary water depth and different underwater terrain. As a characteristic feature of coastal terrain, the coastline is detected with collision detection technology. Then, unnecessary water grid cells are simplified by an automatic simplification algorithm according to depth. Finally, the model is calculated on the Central Processing Unit (CPU) and the simulation is implemented on the Graphics Processing Unit (GPU). We show the effectiveness of our method with various results which achieve real-time rendering on a consumer-level computer.

  15. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
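
    Pulse compression is essentially matched filtering of the received signal against the transmitted waveform, which the paper maps onto CUDA BLAS/FFT routines. The NumPy sketch below shows the standard (non-adaptive) frequency-domain matched filter for orientation only; it is not the adaptive pulse compression algorithm discussed above, and all parameters are illustrative.

import numpy as np

def pulse_compress(rx, tx):
    # Frequency-domain matched filter: multiply the received spectrum by the
    # conjugate of the transmitted pulse spectrum and transform back.
    n = len(rx) + len(tx) - 1
    return np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(tx, n)))

# toy linear-FM (chirp) pulse embedded in noise at a delay of 100 samples
t = np.linspace(0.0, 1.0, 256)
tx = np.exp(1j * np.pi * 50.0 * t**2)
rx = np.concatenate([np.zeros(100), tx, np.zeros(156)]) + 0.1 * np.random.randn(512)
compressed = pulse_compress(rx, tx)
print(np.argmax(np.abs(compressed)))   # peak index near the echo delay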

  16. Comparative Time Series Analysis of Aerosol Optical Depth over Sites in United States and China Using ARIMA Modeling

    Science.gov (United States)

    Li, X.; Zhang, C.; Li, W.

    2017-12-01

    Long-term spatiotemporal analysis and modeling of aerosol optical depth (AOD) distribution is of paramount importance to study radiative forcing, climate change, and human health. This study is focused on the trends and variations of AOD over six stations located in the United States and China during 2003 to 2015, using satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 retrievals and ground measurements derived from the Aerosol Robotic NETwork (AERONET). An autoregressive integrated moving average (ARIMA) model is applied to simulate and predict AOD values. The R2, adjusted R2, Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Bayesian Information Criterion (BIC) are used as indices to select the best fitted model. Results show that there is a persistent decreasing trend in AOD for both MODIS data and AERONET data over three stations. Monthly and seasonal AOD variations reveal consistent aerosol patterns over stations along mid-latitudes. Regional differences impacted by climatology and land cover types are observed for the selected stations. Statistical validation of time series models indicates that the non-seasonal ARIMA model performs better for AERONET AOD data than for MODIS AOD data over most stations, suggesting the method works better for data with higher quality. By contrast, the seasonal ARIMA model reproduces the seasonal variations of MODIS AOD data much more precisely. Overall, the reasonably predicted results indicate the applicability and feasibility of the stochastic ARIMA modeling technique to forecast future and missing AOD values.
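
    For readers unfamiliar with ARIMA, the essential steps are differencing the monthly AOD series (the "I" part) and fitting an autoregressive model to the differenced values. The deliberately minimal NumPy sketch below fits an AR(1) model to the first differences of a synthetic series; it is not the authors' model, whose orders were selected using R2, RMSE, MAE, MAPE and BIC.

import numpy as np

rng = np.random.default_rng(0)
aod = 0.3 + np.cumsum(rng.normal(0.0, 0.02, 156))     # synthetic monthly AOD, 2003-2015

d = np.diff(aod)                                       # first differencing ("I" of ARIMA)
phi = np.dot(d[1:], d[:-1]) / np.dot(d[:-1], d[:-1])   # least-squares AR(1) coefficient

forecast = aod[-1] + phi * d[-1]                       # one-step-ahead forecast
print(round(phi, 3), round(forecast, 3))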

  17. Energy-efficient optical network units for OFDM PON based on time-domain interleaved OFDM technique.

    Science.gov (United States)

    Hu, Xiaofeng; Cao, Pan; Zhang, Liang; Jiang, Lipeng; Su, Yikai

    2014-06-02

    We propose and experimentally demonstrate a new scheme to reduce the energy consumption of optical network units (ONUs) in orthogonal frequency division multiplexing passive optical networks (OFDM PONs) by using the time-domain interleaved OFDM (TI-OFDM) technique. In a conventional OFDM PON, each ONU has to process the complete downstream broadcast OFDM signal with a high sampling rate and a large FFT size to retrieve its required data, even if it employs only a portion of the OFDM subcarriers. However, in our scheme, the ONU only needs to sample and process one data group from the downlink TI-OFDM signal, effectively reducing the sampling rate and the FFT size of the ONU. Thus, the energy efficiency of ONUs in OFDM PONs can be greatly improved. A proof-of-concept experiment is conducted to verify the feasibility of the proposed scheme. Compared to the conventional OFDM PON, our proposal reduces the energy consumption of ONUs by 17.1% and 26.7% when the sampling rate and the FFT size of the ONUs are halved and quartered, respectively, with the use of the TI-OFDM technology.
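
    The saving comes from the fact that a TI-OFDM ONU only digitizes and transforms its own interleaved group instead of the full broadcast signal, so both the sampling rate and the FFT size shrink by the interleaving factor. The toy NumPy fragment below merely contrasts the two FFT sizes; the real transceiver design is considerably more involved and all numbers are illustrative.

import numpy as np

N = 1024   # FFT size a conventional ONU would need for the full downstream signal
M = 4      # interleaving factor (quartered sampling rate in the experiment)

downstream = np.random.randn(N) + 1j * np.random.randn(N)   # stand-in time-domain samples

conventional_onu = np.fft.fft(downstream, N)        # full-rate sampling, N-point FFT
ti_ofdm_onu = np.fft.fft(downstream[::M], N // M)   # one interleaved group, N/M-point FFT
print(len(conventional_onu), len(ti_ofdm_onu))      # 1024 vs 256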

  18. A RWMAC commentary on the Science Policy Research Unit Report: UK Nuclear Decommissioning Policy: time for decision

    International Nuclear Information System (INIS)

    Anon.

    1994-04-01

    The Radioactive Waste Management Advisory Committee (RWMAC) is an independent body which advises the Secretaries of State for the Environment, Scotland and Wales, on civil radioactive waste management issues. Chapter 4 of the RWMAC's Twelfth Annual Report discussed nuclear power plant decommissioning strategy. One of the RWMAC's conclusions was that the concept of financial provisioning for power station decommissioning liabilities, which might be passed on to society several generations into the future, deserved further study. A specification for such a study was duly written (Annex 2) and, following consideration of tendered responses, the Science Policy Research Unit (SPRU) at Sussex University, was contracted to carry out the work. The SPRU report stands as a SPRU analysis of the subject. This separate short RWMAC report, which is being released at the same time as the SPRU report, presents the RWMAC's own commentary on the SPRU study. The RWMAC has identified five main issues which should be addressed when deciding on a nuclear plant decommissioning strategy. These are: the technical approach to decommissioning, the basis of financial provisions, treatment of risk, segregation of management of funds, and the need for a wider environmental view. (author)

  19. A RWMAC commentary on the Science Policy Research Unit report: UK nuclear decommissioning policy: time for decision

    International Nuclear Information System (INIS)

    1994-04-01

    Chapter 4 of the RWMAC's Twelfth Annual Report discussed nuclear power plant decommissioning strategy. One of the RWMAC's conclusions was that the concept of financial provisioning for power station decommissioning liabilities, which might be passed on to society several generations into the future, deserved further study. A specification for such a study was duly written (Annex 2) and, following consideration of tendered responses, the Science Policy Research Unit (SPRU) at Sussex University, was contracted to carry out the work. The SPRU report stands as a SPRU analysis of the subject. This separate short RWMAC report, which is being released at the same time as the SPRU report, presents the RWMAC's own commentary on the SPRU study. The RWMAC has identified five main issues which should be addressed when deciding on a nuclear plant decommissioning strategy. These are: the technical approach to decommissioning, the basis of financial provisions, treatment of risk, segregation of management of funds, and the need for a wider environmental view. These issues are addressed in this RWMAC report. (author)

  20. Effect of light-curing units, post-cured time and shade of resin cement on knoop hardness.

    Science.gov (United States)

    Reges, Rogério Vieira; Costa, Ana Rosa; Correr, Américo Bortolazzo; Piva, Evandro; Puppin-Rontani, Regina Maria; Sinhoreti, Mário Alexandre Coelho; Correr-Sobrinho, Lourenço

    2009-01-01

    The aim of this study was to evaluate the Knoop hardness after 15 min and 24 h of different shades of a dual-cured resin-based cement after indirect photoactivation (ceramic restoration) with 2 light-curing units (LCUs). The resin cement Variolink II (Ivoclar Vivadent) shades XL, A2, A3 and opaque were mixed with the catalyst paste and inserted into a black Teflon mold (5 mm diameter x 1 mm high). A transparent strip was placed over the mold and a ceramic disc (Duceram Plus, shade A3) was positioned over the resin cement. Light-activation was performed through the ceramic for 40 s using quartz-tungsten-halogen (QTH) (XL 2500; 3M ESPE) or light-emitting diode (LED) (Ultrablue Is, DMC) LCUs with power densities of 615 and 610 mW/cm(2), respectively. The Knoop hardness was measured using a microhardness tester HMV 2 (Shimadzu) after 15 min or 24 h. Four indentations were made in each specimen. Data were subjected to ANOVA and Tukey's test (alpha=0.05). The QTH LCU provided significantly higher Knoop hardness than the LED LCU, and the opaque shade of the resin cement showed lower Knoop hardness than the other shades for both LCUs and post-cure times.

  1. Identifying modules of coexpressed transcript units and their organization of Saccharopolyspora erythraea from time series gene expression profiles.

    Directory of Open Access Journals (Sweden)

    Xiao Chang

    BACKGROUND: The Saccharopolyspora erythraea genome sequence was released in 2007. In order to look at the gene regulations at whole transcriptome level, an expression microarray was specifically designed on the S. erythraea strain NRRL 2338 genome sequence. Based on these data, we set out to investigate the potential transcriptional regulatory networks and their organization. METHODOLOGY/PRINCIPAL FINDINGS: In view of the hierarchical structure of bacterial transcriptional regulation, we constructed a hierarchical coexpression network at the whole transcriptome level. A total of 27 modules were identified from 1255 differentially expressed transcript units (TUs) across the time course, which were further classified into four groups. Functional enrichment analysis indicated the biological significance of our hierarchical network. It was indicated that primary metabolism is activated in the first rapid growth phase (phase A), and secondary metabolism is induced when the growth is slowed down (phase B). Among the 27 modules, two are highly correlated to erythromycin production. One contains all genes in the erythromycin-biosynthetic (ery) gene cluster and the other seems to be associated with erythromycin production by sharing common intermediate metabolites. Non-concomitant correlation between production and expression regulation was observed. Especially, by calculating the partial correlation coefficients and building the network based on a Gaussian graphical model, intrinsic associations between modules were found, and the association between those two erythromycin production-correlated modules was included as expected. CONCLUSIONS: This work created a hierarchical model clustering transcriptome data into coordinated modules, and modules into groups across the time course, giving insight into the concerted transcriptional regulations, especially the regulation corresponding to erythromycin production of S. erythraea. This strategy may be extendable to studies on other prokaryotic microorganisms.

  2. Using simulated historical time series to prioritize fuel treatments on landscapes across the United States: The LANDFIRE prototype project

    Science.gov (United States)

    Keane, Robert E.; Rollins, Matthew; Zhu, Zhi-Liang

    2007-01-01

    Canopy and surface fuels in many fire-prone forests of the United States have increased over the last 70 years as a result of modern fire exclusion policies, grazing, and other land management activities. The Healthy Forest Restoration Act and National Fire Plan establish a national commitment to reduce fire hazard and restore fire-adapted ecosystems across the USA. The primary index used to prioritize treatment areas across the nation is Fire Regime Condition Class (FRCC) computed as departures of current conditions from the historical fire and landscape conditions. This paper describes a process that uses an extensive set of ecological models to map FRCC from a departure statistic computed from simulated time series of historical landscape composition. This mapping process uses a data-driven, biophysical approach where georeferenced field data, biogeochemical simulation models, and spatial data libraries are integrated using spatial statistical modeling to map environmental gradients that are then used to predict vegetation and fuels characteristics over space. These characteristics are then fed into a landscape fire and succession simulation model to simulate a time series of historical landscape compositions that are then compared to the composition of current landscapes to compute departure, and the FRCC values. Intermediate products from this process are then used to create ancillary vegetation, fuels, and fire regime layers that are useful in the eventual planning and implementation of fuel and restoration treatments at local scales. The complex integration of varied ecological models at different scales is described and problems encountered during the implementation of this process in the LANDFIRE prototype project are addressed.
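
    FRCC rests on a departure statistic that compares the current landscape composition with the simulated historical time series. A commonly used form measures similarity as the summed minimum proportion of each vegetation-fuel class in the current versus reference composition, with departure equal to 100 minus that similarity; the sketch below uses that form with invented class proportions and should be read as an assumption about the statistic, not as LANDFIRE code.

def departure(current, reference):
    # Percent departure between two landscape compositions expressed as
    # {class: proportion} dictionaries (illustrative similarity-based form).
    classes = set(current) | set(reference)
    similarity = sum(min(current.get(c, 0.0), reference.get(c, 0.0)) for c in classes)
    return 100.0 * (1.0 - similarity)

current = {"early_seral": 0.55, "mid_seral": 0.30, "late_seral": 0.15}
reference = {"early_seral": 0.25, "mid_seral": 0.35, "late_seral": 0.40}
print(departure(current, reference))   # ~30 percent departure, then binned into an FRCC class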

  3. Identifying modules of coexpressed transcript units and their organization of Saccharopolyspora erythraea from time series gene expression profiles.

    Science.gov (United States)

    Chang, Xiao; Liu, Shuai; Yu, Yong-Tao; Li, Yi-Xue; Li, Yuan-Yuan

    2010-08-12

    The Saccharopolyspora erythraea genome sequence was released in 2007. In order to look at the gene regulations at whole transcriptome level, an expression microarray was specifically designed on the S. erythraea strain NRRL 2338 genome sequence. Based on these data, we set out to investigate the potential transcriptional regulatory networks and their organization. In view of the hierarchical structure of bacterial transcriptional regulation, we constructed a hierarchical coexpression network at the whole transcriptome level. A total of 27 modules were identified from 1255 differentially expressed transcript units (TUs) across the time course, which were further classified into four groups. Functional enrichment analysis indicated the biological significance of our hierarchical network. It was indicated that primary metabolism is activated in the first rapid growth phase (phase A), and secondary metabolism is induced when the growth is slowed down (phase B). Among the 27 modules, two are highly correlated to erythromycin production. One contains all genes in the erythromycin-biosynthetic (ery) gene cluster and the other seems to be associated with erythromycin production by sharing common intermediate metabolites. Non-concomitant correlation between production and expression regulation was observed. Especially, by calculating the partial correlation coefficients and building the network based on a Gaussian graphical model, intrinsic associations between modules were found, and the association between those two erythromycin production-correlated modules was included as expected. This work created a hierarchical model clustering transcriptome data into coordinated modules, and modules into groups across the time course, giving insight into the concerted transcriptional regulations, especially the regulation corresponding to erythromycin production of S. erythraea. This strategy may be extendable to studies on other prokaryotic microorganisms.
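
    The module-association step relies on partial correlation coefficients under a Gaussian graphical model, i.e., the correlation between two module profiles after removing the linear effect of all the others, which can be read off the inverse covariance (precision) matrix. The NumPy sketch below shows that computation on random module expression profiles; it mirrors the idea only and is not the authors' pipeline.

import numpy as np

rng = np.random.default_rng(1)
profiles = rng.normal(size=(50, 5))    # 50 time points x 5 module expression profiles

precision = np.linalg.inv(np.cov(profiles, rowvar=False))
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)   # rho_ij = -p_ij / sqrt(p_ii * p_jj)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))             # entries near zero imply no direct association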

  4. Multi-GPU based acceleration of a list-mode DRAMA toward real-time OpenPET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kinouchi, Shoko [Chiba Univ. (Japan); National Institute of Radiological Sciences, Chiba (Japan); Yamaya, Taiga; Yoshida, Eiji; Tashima, Hideaki [National Institute of Radiological Sciences, Chiba (Japan); Kudo, Hiroyuki [Tsukuba Univ., Ibaraki (Japan); Suga, Mikio [Chiba Univ. (Japan)

    2011-07-01

    OpenPET, which has a physical gap between two detector rings, is our new PET geometry. In order to realize future radiation therapy guided by OpenPET, real-time imaging is required. Therefore we developed a list-mode image reconstruction method using general-purpose graphics processing units (GPUs). For GPU implementation, the efficiency of acceleration depends on the implementation method, which must avoid conditional statements. Therefore, in our previous study, we developed a new system model that was well suited to GPU implementation. In this paper, we implemented our image reconstruction method using 4 GPUs to obtain further acceleration. We applied the developed reconstruction method to a small OpenPET prototype. The total iteration time obtained using 4 GPUs was 3.4 times shorter than that using a single GPU. Compared to using a single CPU, we achieved a reconstruction time speed-up of 142 times using 4 GPUs. (orig.)

  5. Musrfit-Real Time Parameter Fitting Using GPUs

    Science.gov (United States)

    Locans, Uldis; Suter, Andreas

    High transverse field μSR (HTF-μSR) experiments typically lead to rather large data sets, since it is necessary to follow the high frequencies present in the positron decay histograms. The analysis of these data sets can be very time consuming, usually due to the limited computational power of the hardware. To overcome the limited computing resources, a rotating reference frame (RRF) transformation is often used to reduce the data sets that need to be handled. This comes at a price that the μSR community is typically not aware of: (i) due to the RRF transformation the fitting parameter estimates are of poorer precision, i.e., more extended, expensive beamtime is needed; (ii) RRF introduces systematic errors which hamper the statistical interpretation of χ2 or the maximum log-likelihood. We will briefly discuss these issues in a non-exhaustive practical way. The only reason for the RRF transformation is sluggish computing power. Therefore, during this work, GPU (Graphics Processing Unit) based fitting was developed, which allows one to perform real-time full data analysis without RRF. GPUs have become increasingly popular in scientific computing in recent years. Due to their highly parallel architecture they provide the opportunity to accelerate many applications with considerably less cost than upgrading the CPU computational power. With the emergence of frameworks such as CUDA and OpenCL these devices have become more easily programmable. During this work GPU support was added to Musrfit, a data analysis framework for μSR experiments. The new fitting algorithm uses CUDA or OpenCL to offload the most time consuming parts of the calculations to Nvidia or AMD GPUs. Using the current CPU implementation in Musrfit, parameter fitting can take hours for certain data sets, while the GPU version allows real-time data analysis on the same data sets. This work describes the challenges that arise in adding GPU support to Musrfit, as well as the results obtained.

  6. Likelihood of treatment in a coronary care unit for a first-time myocardial infarction in relation to sex, country of birth and socioeconomic position in Sweden.

    Science.gov (United States)

    Yang, Dong; James, Stefan; de Faire, Ulf; Alfredsson, Lars; Jernberg, Tomas; Moradi, Tahereh

    2013-01-01

    To examine the relationship between sex, country of birth, level of education as an indicator of socioeconomic position, and the likelihood of treatment in a coronary care unit (CCU) for a first-time myocardial infarction. Nationwide register based study. Sweden. 199,906 patients (114,387 men and 85,519 women) of all ages who were admitted to hospital for first-time myocardial infarction between 2001 and 2009. Admission to a coronary care unit due to myocardial infarction. Despite the observed increasing access to coronary care units over time, the proportion of women treated in a coronary care unit was 13% less than for men. As compared with men, the multivariable adjusted odds ratio among women was 0.80 (95% confidence interval 0.77 to 0.82). This lower proportion of women treated in a CCU varied by age and year of diagnosis and country of birth. Overall, there was no evidence of a difference in likelihood of treatment in a coronary care unit between Sweden-born and foreign-born patients. As compared with patients with high education, the adjusted odds ratio among patients with a low level of education was 0.93 (95% confidence interval 0.89 to 0.96). Foreign-born and Sweden-born first-time myocardial infarction patients had equal opportunity of being treated in a coronary care unit in Sweden; this is in contrast to the situation in many other countries with large immigrant populations. However, the apparent lower rate of coronary care unit admission after first-time myocardial infarction among women and patients with low socioeconomic position warrants further investigation.

  7. Discrete typing units of Trypanosoma cruzi detected by real-time PCR in Chilean patients with chronic Chagas cardiomyopathy.

    Science.gov (United States)

    Muñoz-San Martín, Catalina; Zulantay, Inés; Saavedra, Miguel; Fuentealba, Cristián; Muñoz, Gabriela; Apt, Werner

    2018-05-07

    Chagas disease is a major public health problem in Latin America and has spread to other countries due to immigration of infected persons. 10-30% of patients with chronic Chagas disease will develop cardiomyopathy. Chagas cardiomyopathy is the worst form of the disease, due to its high morbidity and mortality. Because of its prognostic value and adequate medical monitoring, it is very important to identify infected people who could develop Chagas cardiomyopathy. The aim of this study was to determine if discrete typing units (DTUs) of Trypanosoma cruzi are related to the presence of heart disease in patients with chronic Chagas disease. A total of 86 untreated patients, 41 with cardiomyopathy and 45 without heart involvement were submitted to clinical study. Electrocardiograms and echocardiograms were performed on the group of cardiopaths, in which all important known causes of cardiomyopathy were discarded. Sinus bradycardia and prolonged QTc interval were the most frequent electrocardiographic alterations and patients were classified in group I (46%) and group II (54%) of New York Hearth Association. In all cases real-time PCR genotyping assays were performed. In the group with cardiomyopathy, the most frequent DTU was TcI (56.1%), followed by TcII (19.5%). Mixed infections TcI + TcII were observed in 7.3% of the patients. In the group without cardiac pathologies, TcI and TcII were found at similar rates (28.9 and 31.1%, respectively) and mixed infections TcI + TcII in 17.8% of the cases. TcIII and TcIV were not detected in any sample. Taken together, our data indicate that chronic Chagas cardiomyopathy in Chile can be caused by strains belonging to TcI and TcII. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Evaluation of Selected Resource Allocation and Scheduling Methods in Heterogeneous Many-Core Processors and Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ciznicki Milosz

    2014-12-01

    Heterogeneous many-core computing resources are increasingly popular among users due to their improved performance over homogeneous systems. Many developers have realized that heterogeneous systems, e.g. a combination of a shared-memory multi-core CPU machine with massively parallel Graphics Processing Units (GPUs), can provide significant performance opportunities to a wide range of applications. However, the best overall performance can only be achieved if application tasks are efficiently assigned to the different types of processor units over time, taking into account their specific resource requirements. Additionally, one should note that available heterogeneous resources have been designed as general purpose units, however, with many built-in features accelerating specific application operations. In other words, the same algorithm or application functionality can be implemented as a different task for a CPU or a GPU. Nevertheless, from the perspective of various evaluation criteria, e.g. the total execution time or energy consumption, we may observe completely different results. Therefore, as tasks can be scheduled and managed in many alternative ways on both many-core CPUs and GPUs and consequently have a huge impact on the overall performance of the computing resources, there is a need for new and improved resource management techniques. In this paper we discuss results achieved during experimental performance studies of selected task scheduling methods in heterogeneous computing systems. Additionally, we present a new architecture for a resource allocation and task scheduling library which provides a generic application programming interface at the operating system level for improving scheduling policies, taking into account the diversity of tasks and the characteristics of heterogeneous computing resources.

  9. Healthy hospital food initiatives in the United States: time to ban sugar sweetened beverages to reduce childhood obesity

    OpenAIRE

    Wojcicki, Janet M

    2013-01-01

    While childhood obesity is a global problem, the extent and severity of the problem in the United States has resulted in a number of new initiatives, including recent hospital initiatives to limit the sale of sweetened beverages and other high-calorie drinks in hospital vending machines and cafeterias. These proposed policy changes are not unique to the United States, but are more comprehensive in the number of proposed hospitals that they will impact. Meanwhile, however, it is advised that these i...

  10. Real-time Measurements of an Optical Reconfigurable Radio Access Unit for 5G Wireless Access Networks

    DEFF Research Database (Denmark)

    Rodríguez, Sebastián; Morales Vicente, Alvaro; Rommel, Simon

    2017-01-01

    A reconfigurable radio access unit able to switch wavelength, RF carrier frequency and optical path is experimentally demonstrated. The system is able to do the switching processes correctly, while achieving BER values below FEC limit.

  11. Real-time performance assessment and adaptive control for a water chiller unit in an HVAC system

    Science.gov (United States)

    Bai, Jianbo; Li, Yang; Chen, Jianhao

    2018-02-01

    The paper proposes an adaptive control method for a water chiller unit in an HVAC system. Based on minimum variance evaluation, the adaptive control method was used to realize better control of the water chiller unit. To verify the performance of the adaptive control method, the proposed method was compared with a conventional PID controller; the simulation results showed that the adaptive control method had superior control performance to that of the conventional PID controller.
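
    For context, the conventional PID baseline mentioned above can be written in a few lines, as in the sketch below; the adaptive scheme of the paper additionally assesses control quality with a minimum-variance criterion and retunes online, which is not reproduced here. The gains and the crude chiller model are illustrative assumptions.

class PID:
    # Textbook discrete PID controller (the baseline the adaptive method is compared with).
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# toy loop: chilled-water supply temperature responding to the controller output
controller, temp = PID(kp=2.0, ki=0.1, kd=0.05, dt=1.0), 12.0
for _ in range(60):
    u = controller.update(setpoint=7.0, measurement=temp)
    temp += 0.05 * u   # negative output drives the supply temperature toward the setpoint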

  12. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    Science.gov (United States)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    six different trap symmetries: axially and radially symmetric traps in 3d, circularly symmetric traps in 2d, fully isotropic (spherically symmetric) and fully anisotropic traps in 2d and 3d, as well as 1d traps, where no spatial symmetry is considered. Solution method: We employ the split-step Crank-Nicolson algorithm to discretize the time-dependent GP equation in space and time. The discretized equation is then solved by imaginary- or real-time propagation, employing adequately small space and time steps, to yield the solution of stationary and non-stationary problems, respectively. Reasons for the new version: Previously published Fortran programs [1,2] have now become popular tools [3] for solving the GP equation. These programs have been translated to the C programming language [4] and later extended to the more complex scenario of dipolar atoms [5]. Now virtually all computers have multi-core processors and some have motherboards with more than one physical central processing unit (CPU), which may increase the number of available CPU cores on a single computer to several tens. The C programs have been adapted to be very fast on such multi-core modern computers using general-purpose graphics processing units (GPGPUs) with Nvidia CUDA and computer clusters using the Message Passing Interface (MPI) [6]. Nevertheless, previously developed Fortran programs are also commonly used for scientific computation and most of them use a single CPU core at a time on modern multi-core laptops, desktops, and workstations. Unless the Fortran programs are made aware of and capable of making efficient use of the available CPU cores, the solution of even a realistic dynamical 1d problem, not to mention the more complicated 2d and 3d problems, could be time consuming using the Fortran programs. Previously, we published auto-parallel Fortran programs [2] suitable for the Intel (but not GNU) compiler for solving the GP equation. Hence, there is a need for full OpenMP versions of the Fortran programs, usable with both GNU and Intel compilers, that can make efficient use of all available CPU cores.
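
    As a rough illustration of the solution method named above, the fragment below performs an imaginary-time split-step Crank-Nicolson propagation of a 1d GP equation in a harmonic trap: the potential and nonlinear term are applied pointwise, and the kinetic part is advanced with a Crank-Nicolson solve. It is a minimal NumPy sketch with illustrative parameters, far simpler than the published Fortran/OpenMP programs.

import numpy as np

# 1d GP equation, imaginary-time split-step Crank-Nicolson (illustrative parameters)
N, dx, dt, g = 256, 0.1, 0.002, 10.0
x = (np.arange(N) - N // 2) * dx
V = 0.5 * x**2                                         # harmonic trap
psi = np.exp(-x**2 / 2.0)
psi /= np.sqrt(np.sum(psi**2) * dx)

lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)) / dx**2
T = -0.5 * lap                                         # kinetic-energy operator
A = np.eye(N) + 0.5 * dt * T                           # Crank-Nicolson matrices
B = np.eye(N) - 0.5 * dt * T

for _ in range(1000):
    psi = psi * np.exp(-dt * (V + g * psi**2))         # potential + nonlinear part of the split
    psi = np.linalg.solve(A, B @ psi)                  # Crank-Nicolson kinetic step
    psi /= np.sqrt(np.sum(psi**2) * dx)                # renormalize (imaginary-time propagation)

mu = np.sum(psi * (T @ psi) + (V + g * psi**2) * psi**2) * dx   # rough chemical potential
print(round(float(mu), 3))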

  13. Endoplasmic reticulum stress mediating downregulated StAR and 3-beta-HSD and low plasma testosterone caused by hypoxia is attenuated by CPU86017-RS and nifedipine

    Directory of Open Access Journals (Sweden)

    Liu Gui-Lai

    2012-01-01

    Background: Hypoxia exposure initiates low serum testosterone levels that could be attributed to downregulated androgen biosynthesizing genes such as StAR (steroidogenic acute regulatory protein) and 3-beta-HSD (3-beta-hydroxysteroid dehydrogenase) in the testis. It was hypothesized that these abnormalities in the testis caused by hypoxia are associated with oxidative stress and an increase in chaperones of endoplasmic reticulum stress (ER stress), and that ER stress could be modulated by a reduction in calcium influx. Therefore, we verified whether an application of CPU86017-RS (simplified as RS), a derivative of berberine, could alleviate the ER stress, the depressed gene expression of StAR and 3-beta-HSD, and the low plasma testosterone in hypoxic rats; these effects were compared with those of nifedipine. Methods: Adult male Sprague-Dawley rats were randomly divided into control, hypoxia for 28 days, and hypoxia treated (mg/kg, p.o.) during the last 14 days with nifedipine (Nif, 10) or one of three doses of RS (20, 40, 80), and normal rats treated with RS isomer (80). Serum testosterone (T) and luteinizing hormone (LH) were measured. The testicular expressions of biomarkers including StAR, 3-beta-HSD, immunoglobulin heavy chain binding protein (Bip), double-strand RNA-activated protein kinase-like ER kinase (PERK) and the pro-apoptotic transcription factor C/EBP homologous protein (CHOP) were measured. Results: In hypoxic rats, serum testosterone levels decreased and mRNA and protein expressions of the testosterone biosynthesis related genes StAR and 3-beta-HSD were downregulated. These changes were linked to an increase in oxidants, upregulated ER stress chaperones (Bip, PERK, CHOP) and a distorted histological structure of the seminiferous tubules in the testis. These abnormalities were attenuated significantly by CPU86017-RS and nifedipine. Conclusion: Downregulated StAR and 3-beta-HSD significantly contribute to low testosterone in hypoxic rats and are associated with ER stress

  14. Novel web-based real-time dashboard to optimize recycling and use of red cell units at a large multi-site transfusion service.

    Science.gov (United States)

    Sharpe, Christopher; Quinn, Jason G; Watson, Stephanie; Doiron, Donald; Crocker, Bryan; Cheng, Calvino

    2014-01-01

    Effective blood inventory management reduces outdates of blood products. Multiple strategies have been employed to reduce the rate of red blood cell (RBC) unit outdate. We designed an automated real-time web-based dashboard interfaced with our laboratory information system to effectively recycle red cell units. The objective of our approach is to decrease RBC outdate rates within our transfusion service. The dashboard was deployed in August 2011 and is accessed by a shortcut that was placed on the desktops of all blood transfusion service computers in the Capital District Health Authority region. It was designed to refresh automatically every 10 min. The dashboard provides all vital information on RBC units, and implemented a color coding scheme to indicate an RBC unit's proximity to expiration. The overall RBC unit outdate rate in the 7-month period following implementation of the dashboard (September 2011-March 2012) was 1.24% (123 units outdated/9763 units received), compared to similar periods in 2010-2011 and 2009-2010: 2.03% (188/9395) and 2.81% (261/9220), respectively. The odds ratio of an RBC unit outdate post-dashboard (2011-2012) compared with 2010-2011 was 0.625 (95% confidence interval: 0.497-0.786). The dashboard system is an inexpensive and novel blood inventory management system which was associated with a significant reduction in RBC unit outdate rates at our institution over a period of 7 months. This system, or components of it, could be a useful addition to existing RBC management systems at other institutions.
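
    At its core, such a dashboard sorts the in-date RBC inventory by days to expiry and colour-codes units approaching expiration so that they can be recycled to high-usage sites first. The small sketch below illustrates that colour-coding logic; the field names and day thresholds are invented and are not those of the authors' laboratory information system.

from datetime import date, timedelta

def expiry_status(expiry_date, today=None, amber_days=10, red_days=5):
    # Colour-code an RBC unit by its proximity to expiration (thresholds invented).
    today = today or date.today()
    days_left = (expiry_date - today).days
    if days_left < 0:
        return "expired"
    if days_left <= red_days:
        return "red"      # move to a high-usage site as soon as possible
    if days_left <= amber_days:
        return "amber"    # flag for recycling on the next courier run
    return "green"

inventory = [("unit-001", date.today() + timedelta(days=3)),
             ("unit-002", date.today() + timedelta(days=21))]
for unit_id, expiry in sorted(inventory, key=lambda item: item[1]):
    print(unit_id, expiry_status(expiry))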

  15. A real-time material control concept for safeguarding special nuclear material in United States licensed processing facilities

    International Nuclear Information System (INIS)

    Shea, T.E.

    1976-01-01

    This paper describes general safeguards research being undertaken by the United States Nuclear Regulatory Commission. Efforts to improve the ability of United States licensed plants to contend with the perceived threat of covert material theft are emphasized. The framework for this improvement is to break down the internal control and accounting system into subsystems to achieve material isolation, inventory control, inventory characterization, and inventory containment analysis. A general programme is outlined to develop and evaluate appropriate mechanisms, integrate selected mechanisms into subsystems, and evaluate the subsystems in the context of policy requirements. (author)

  16. Bridging FPGA and GPU technologies for AO real-time control

    Science.gov (United States)

    Perret, Denis; Lainé, Maxime; Bernard, Julien; Gratadour, Damien; Sevin, Arnaud

    2016-07-01

    Our team has developed a common environment for high performance simulations and real-time control of AO systems based on the use of Graphics Processing Units in the context of the COMPASS project. Such a solution, based on the ability of the real-time core in the simulation to provide adequate computing performance, limits the cost of developing AO RTC systems and makes them more scalable. A code developed and validated in the context of the simulation may be injected directly into the system and tested on sky. Furthermore, the use of relatively low cost components also offers significant advantages for the system hardware platform. However, the use of GPUs in an AO loop comes with drawbacks: the traditional way of offloading computation from CPU to GPUs - involving multiple copies and unacceptable overhead in kernel launching - is not well suited to a real-time context. Such an application requires the implementation of a solution enabling direct memory access (DMA) to the GPU memory from a third party device, bypassing the operating system. This allows the device to communicate directly with the real-time core of the simulation, feeding it with the WFS camera pixel stream. We show that DMA between a custom FPGA-based frame-grabber and a computation unit (GPU, FPGA, or coprocessor such as Xeon Phi) across PCIe allows us to get latencies compatible with what will be needed on ELTs. As a fine-grained synchronization mechanism is not yet made available by GPU vendors, we propose the use of memory polling to avoid interrupt handling and the involvement of a CPU. Network and Vision protocols are handled by the FPGA-based Network Interface Card (NIC). We present the results we obtained on a complete AO loop using camera and deformable mirror simulators.

  17. Use of Jigsaw Technique to Teach the Unit "Science within Time" in Secondary 7th Grade Social Sciences Course and Students' Views on This Technique

    Science.gov (United States)

    Yapici, Hakki

    2016-01-01

    The aim of this study is to apply the jigsaw technique in Social Sciences teaching and to reveal the effects of this technique on learning. The unit "Science within Time" in the secondary 7th grade Social Sciences text book was chosen for the research. It is aimed to compare the jigsaw technique with the traditional teaching method in…

  18. Open problems in CEM: Porting an explicit time-domain volume-integral- equation solver on GPUs with OpenACC

    KAUST Repository

    Ergül, Özgür

    2014-04-01

    Graphics processing units (GPUs) are gradually becoming mainstream in high-performance computing, as their capability to enhance the performance of a large spectrum of scientific applications many-fold compared to multi-core CPUs has been clearly identified and proven. In this paper, implementation and performance-tuning details for porting an explicit marching-on-in-time (MOT)-based time-domain volume-integral-equation (TDVIE) solver onto GPUs are described in detail. To this end, a high-level approach, utilizing the OpenACC directive-based parallel programming model, is used to minimize two often-faced challenges in GPU programming: developer productivity and code portability. The MOT-TDVIE solver code, originally developed for CPUs, is annotated with compiler directives to port it to GPUs in a fashion similar to how OpenMP targets multi-core CPUs. In contrast to CUDA and OpenCL, where significant modifications to CPU-based codes are required, this high-level approach therefore requires minimal changes to the codes. In this work, we make use of two available OpenACC compilers, CAPS and PGI. Our experience reveals that different annotations of the code are required for each of the compilers, due to different interpretations of the fairly new standard by the compiler developers. Both versions of the OpenACC accelerated code achieved significant performance improvements, with up to 30× speedup against the sequential CPU code using recent hardware technology. Moreover, we demonstrated that the GPU-accelerated fully explicit MOT-TDVIE solver leveraged energy-consumption gains of the order of 3× against its CPU counterpart. © 2014 IEEE.

  19. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was building an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language and the application was built and developed in CERN GitLab. This application will facilitate the calculation of resources required by LHCb in both qualitative and quantitative aspects. The granularity of computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.
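
    The record names Python as the implementation language but gives no interfaces, so the sketch below only illustrates the general idea of week-granular resource bookkeeping; every rate, constant and week label is invented for the example and is not part of the LHCb computing model.

    ```python
    # Hypothetical weekly resource estimate; the per-event costs below are
    # placeholders, not LHCb numbers.
    WEEKLY_EVENTS = {"2017-W01": 2.0e9, "2017-W02": 2.4e9}
    CPU_SEC_PER_EVENT = 0.5      # assumed processing cost per event
    BYTES_PER_EVENT = 50e3       # assumed output size per event

    def weekly_requests(plan):
        """Return (CPU-hours, disk TB) needed for each week of the plan."""
        out = {}
        for week, events in plan.items():
            cpu_hours = events * CPU_SEC_PER_EVENT / 3600.0
            disk_tb = events * BYTES_PER_EVENT / 1e12
            out[week] = (cpu_hours, disk_tb)
        return out

    for week, (cpu, disk) in weekly_requests(WEEKLY_EVENTS).items():
        print(f"{week}: {cpu:,.0f} CPU-hours, {disk:.1f} TB")
    ```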

  20. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  1. Manifest Destiny: The Relationship between the United States and the International Criminal Court in a Time of International Upheaval

    NARCIS (Netherlands)

    Sabharwal, Prashant

    2012-01-01

    Ever since the negotiations that culminated in the signing of the Rome Statute of the International Criminal Court ("ICC" or "the Court"), the approach taken by various Administrations in the United States has been a reflection of domestic politics and a skeptical foreign policy establishment. In

  2. Using simulated historical time series to prioritize fuel treatments on landscapes across the United States: The LANDFIRE prototype project

    Science.gov (United States)

    Robert E. Keane; Matthew Rollins; Zhi-Liang Zhu

    2007-01-01

    Canopy and surface fuels in many fire-prone forests of the United States have increased over the last 70 years as a result of modern fire exclusion policies, grazing, and other land management activities. The Healthy Forest Restoration Act and National Fire Plan establish a national commitment to reduce fire hazard and restore fire-adapted ecosystems across the USA....

  3. A Critical Challenge: The Engagement and Assessment of Contingent, Part-Time Adjunct Faculty Professors in United States Community Colleges

    Science.gov (United States)

    Jolley, Michael R.; Cross, Emily; Bryant, Miles

    2014-01-01

    In 2011, according to a National Center for Education Statistics report, part-time instructional staff in all higher education institutions exceeded full-time faculty members for the first time, accounting for 50% of all instructional staff (National Center for Education Statistics [NCES], 2012). The same report indicates part-time faculty in…

  4. A parallel approximate string matching under Levenshtein distance on graphics processing units using warp-shuffle operations.

    Directory of Open Access Journals (Sweden)

    ThienLuan Ho

    Full Text Available Approximate string matching with k-differences has a number of practical applications, ranging from pattern recognition to computational biology. This paper proposes an efficient memory-access algorithm for parallel approximate string matching with k-differences on Graphics Processing Units (GPUs). In the proposed algorithm, all threads in the same GPU warp share data using the warp-shuffle operation instead of accessing the shared memory. Moreover, we implement the proposed algorithm by exploiting the memory structure of GPUs to optimize its performance. Experimental results for real DNA packages revealed that the proposed algorithm and its implementation achieved speedups of up to 122.64 and 1.53 times compared to the sequential algorithm on a CPU and a previous parallel approximate string matching algorithm on GPUs, respectively.
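
    The abstract reports results rather than the algorithm itself, but the sequential CPU baseline it benchmarks against is the classical dynamic-programming formulation of k-difference matching; the Python sketch below shows that baseline (not the GPU warp-shuffle kernel), with a toy DNA example.

    ```python
    def k_difference_matches(pattern, text, k):
        """Report every position in `text` where an occurrence of `pattern`
        ends with at most k edit operations (Levenshtein distance <= k).
        Sequential O(len(pattern) * len(text)) dynamic programming."""
        m = len(pattern)
        prev = list(range(m + 1))              # DP column for text position 0
        matches = []
        for j, tc in enumerate(text, start=1):
            curr = [0]                         # an occurrence may start anywhere
            for i, pc in enumerate(pattern, start=1):
                cost = 0 if pc == tc else 1
                curr.append(min(prev[i] + 1,           # deletion
                                curr[i - 1] + 1,       # insertion
                                prev[i - 1] + cost))   # match / substitution
            if curr[m] <= k:
                matches.append(j)              # a match ends at text position j
            prev = curr
        return matches

    print(k_difference_matches("ACGT", "TTACGATT", 1))  # -> [5, 6, 7]
    ```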

  5. Novel web-based real-time dashboard to optimize recycling and use of red cell units at a large multi-site transfusion service

    Directory of Open Access Journals (Sweden)

    Christopher Sharpe

    2014-01-01

    Full Text Available Background: Effective blood inventory management reduces outdates of blood products. Multiple strategies have been employed to reduce the rate of red blood cell (RBC) unit outdate. We designed an automated real-time web-based dashboard interfaced with our laboratory information system to effectively recycle red cell units. The objective of our approach is to decrease RBC outdate rates within our transfusion service. Methods: The dashboard was deployed in August 2011 and is accessed by a shortcut that was placed on the desktops of all blood transfusion services computers in the Capital District Health Authority region. It was designed to refresh automatically every 10 min. The dashboard provides all vital information on RBC units, and implemented a color coding scheme to indicate an RBC unit's proximity to expiration. Results: The overall RBC unit outdate rate in the 7 months period following implementation of the dashboard (September 2011-March 2012) was 1.24% (123 units outdated/9763 units received), compared to similar periods in 2010-2011 and 2009-2010: 2.03% (188/9395) and 2.81% (261/9220), respectively. The odds ratio of a RBC unit outdate postdashboard (2011-2012) compared with 2010-2011 was 0.625 (95% confidence interval: 0.497-0.786, P < 0.0001). Conclusion: Our dashboard system is an inexpensive and novel blood inventory management system which was associated with a significant reduction in RBC unit outdate rates at our institution over a period of 7 months. This system, or components of it, could be a useful addition to existing RBC management systems at other institutions.

  6. Technique to increase performance of C-program for control systems. Compiler technique for low-cost CPU; Seigyoyo C gengo program no kosokuka gijutsu. Tei cost CPU no tame no gengo compiler gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, Y [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

    The software of automotive control systems has become increasingly large and complex. High-level languages (primarily C) and their compilers have become more important for reducing coding time. Most compilers represent real numbers in the floating point format specified by IEEE standard 754. Most microprocessors in the automotive industry have no hardware for operations in the IEEE format due to cost requirements, resulting in slow execution speed and large code size. Alternative formats to increase execution speed and reduce code size are proposed. Experimental results for the alternative formats show the improvement in execution speed and code size. 4 refs., 3 figs., 2 tabs.
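
    The abstract does not say which alternative format was adopted, so the sketch below only illustrates the general idea with a generic Q16.16 fixed-point representation: real values are scaled to integers so that a CPU without floating-point hardware can work with plain integer instructions.

    ```python
    # Generic Q16.16 fixed-point arithmetic (16 integer bits, 16 fractional
    # bits). This illustrates the trade-off; it is not the format from the paper.
    FRAC_BITS = 16
    ONE = 1 << FRAC_BITS

    def to_fixed(x: float) -> int:
        return int(round(x * ONE))

    def to_float(q: int) -> float:
        return q / ONE

    def fx_add(a: int, b: int) -> int:
        return a + b                      # plain integer add

    def fx_mul(a: int, b: int) -> int:
        return (a * b) >> FRAC_BITS       # renormalise after integer multiply

    a, b = to_fixed(1.75), to_fixed(-0.5)
    print(to_float(fx_add(a, b)))         # 1.25
    print(to_float(fx_mul(a, b)))         # -0.875
    ```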

  7. FLOCKING-BASED DOCUMENT CLUSTERING ON THE GRAPHICS PROCESSING UNIT [Book Chapter

    Energy Technology Data Exchange (ETDEWEB)

    Charles, J S; Patton, R M; Potok, T E; Cui, X

    2008-01-01

    Analyzing and grouping documents by content is a complex problem. One explored method of solving this problem borrows from nature, imitating the flocking behavior of birds. Each bird represents a single document and flies toward other documents that are similar to it. One limitation of this method of document clustering is its complexity O(n²). As the number of documents grows, it becomes increasingly difficult to receive results in a reasonable amount of time. However, flocking behavior, along with most naturally inspired algorithms such as ant colony optimization and particle swarm optimization, are highly parallel and have experienced improved performance on expensive cluster computers. In the last few years, the graphics processing unit (GPU) has received attention for its ability to solve highly-parallel and semi-parallel problems much faster than the traditional sequential processor. Some applications see a huge increase in performance on this new platform. The cost of these high-performance devices is also marginal when compared with the price of cluster machines. In this paper, we have conducted research to exploit this architecture and apply its strengths to the document flocking problem. Our results highlight the potential benefit the GPU brings to all naturally inspired algorithms. Using the CUDA platform from NVIDIA®, we developed a document flocking implementation to be run on the NVIDIA® GEFORCE 8800. Additionally, we developed a similar but sequential implementation of the same algorithm to be run on a desktop CPU. We tested the performance of each on groups of news articles ranging in size from 200 to 3,000 documents. The results of these tests were very significant. Performance gains ranged from three to nearly five times improvement of the GPU over the CPU implementation. This dramatic improvement in runtime makes the GPU a potentially revolutionary platform for document clustering algorithms.
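
    The chapter is summarized only at a high level here, so the following Python sketch shows one simplified, attraction-only reading of the O(n²) flocking update (each document drifts toward nearby, similar documents); the radius, threshold and step size are illustrative, and a full flocking model typically adds separation and alignment terms as well.

    ```python
    import numpy as np

    def flocking_step(positions, features, dt=0.1, radius=2.0, sim_threshold=0.5):
        """One O(n^2) update: each document moves toward neighbours whose
        content vectors are similar to its own (attraction only)."""
        n = positions.shape[0]
        # cosine similarity between all pairs of document feature vectors
        norms = np.linalg.norm(features, axis=1, keepdims=True)
        sims = (features @ features.T) / (norms @ norms.T + 1e-12)
        velocities = np.zeros_like(positions)
        for i in range(n):                     # the n^2 pairwise loop
            for j in range(n):
                if i == j:
                    continue
                offset = positions[j] - positions[i]
                if np.linalg.norm(offset) < radius and sims[i, j] > sim_threshold:
                    velocities[i] += offset    # attraction toward similar docs
        return positions + dt * velocities

    pos, feat = np.random.rand(5, 2), np.random.rand(5, 8)
    print(flocking_step(pos, feat))
    ```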

  8. Gender differences in time-use over the life-course. A comparative analysis of France, Italy, Sweden and the United States

    OpenAIRE

    Dominique Anxo; Letizia Mencarini; Ariane Paihlé; Anne Solaz; Maria Letizia Tanturri; Lennard Flood

    2011-01-01

    The main objective of this paper is to analyse how men and women in France, Italy, Sweden and United States use their time over the life cycle and the extent to which the societal and institutional contexts influence the gender division of labour. Our central hypothesis is that contextual factors play a crucial role in shaping men’s and women’s time allocation across the life course. Countries that diverge significantly in terms of welfare state regime, employment and working time systems, fa...

  9. An extra dimension to decision-making in animals: the three-way trade-off between speed, effort per-unit-time and accuracy.

    Science.gov (United States)

    de Froment, Adrian J; Rubenstein, Daniel I; Levin, Simon A

    2014-12-01

    The standard view in biology is that all animals, from bumblebees to human beings, face a trade-off between speed and accuracy as they search for resources and mates, and attempt to avoid predators. For example, the more time a forager spends out of cover gathering information about potential food sources the more likely it is to make accurate decisions about which sources are most rewarding. However, when the cost of time spent out of cover rises (e.g. in the presence of a predator) the optimal strategy is for the forager to spend less time gathering information and to accept a corresponding decline in the accuracy of its decisions. We suggest that this familiar picture is missing a crucial dimension: the amount of effort an animal expends on gathering information in each unit of time. This is important because an animal that can respond to changing time costs by modulating its level of effort per-unit-time does not have to accept the same decrease in accuracy that an animal limited to a simple speed-accuracy trade-off must bear in the same situation. Instead, it can direct additional effort towards (i) reducing the frequency of perceptual errors in the samples it gathers or (ii) increasing the number of samples it gathers per-unit-time. Both of these have the effect of allowing it to gather more accurate information within a given period of time. We use a modified version of a canonical model of decision-making (the sequential probability ratio test) to show that this ability to substitute effort for time confers a fitness advantage in the face of changing time costs. We predict that the ability to modulate effort levels will therefore be widespread in nature, and we lay out testable predictions that could be used to detect adaptive modulation of effort levels in laboratory and field studies. Our understanding of decision-making in all species, including our own, will be improved by this more ecologically-complete picture of the three-way tradeoff between time
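
    The authors' modified model is not reproduced in the record, but the canonical sequential probability ratio test it builds on is easy to sketch; in the version below the `error_rate` argument stands in for the idea of spending more or less effort per sample, and all thresholds and probabilities are illustrative.

    ```python
    import math, random

    def sprt_bernoulli(p0, p1, alpha, beta, sample, error_rate=0.0):
        """Sequential probability ratio test between H0: p = p0 and H1: p = p1.
        `sample()` draws one observation; `error_rate` flips it to mimic
        perceptual error (less effort per sample -> higher error_rate)."""
        lo = math.log(beta / (1 - alpha))      # accept-H0 boundary
        hi = math.log((1 - beta) / alpha)      # accept-H1 boundary
        llr, n = 0.0, 0
        while lo < llr < hi:
            x = sample()
            if random.random() < error_rate:   # noisy perception
                x = 1 - x
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            n += 1
        return ("H1" if llr >= hi else "H0"), n

    random.seed(1)
    decision, n_samples = sprt_bernoulli(0.4, 0.6, 0.05, 0.05,
                                         sample=lambda: random.random() < 0.7)
    print(decision, n_samples)
    ```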

  10. Maternal employment, acculturation, and time spent in food-related behaviors among Hispanic mothers in the United States. Evidence from the American Time Use Survey.

    Science.gov (United States)

    Sliwa, Sarah A; Must, Aviva; Peréa, Flavia; Economos, Christina D

    2015-04-01

    Employment is a major factor underlying im/migration patterns. Unfortunately, lower diet quality and higher rates of obesity appear to be unintended consequences of moving to the US. Changes in food preparation practices may be a factor underlying dietary acculturation. The relationships between employment, acculturation, and food-related time use in Hispanic families have received relatively little attention. We used cross-sectional data collected from Hispanic mothers (ages 18-65) with at least one child … to examine employment, acculturation (US-born vs. im/migrant), and time spent in food preparation and family dinner. Regression models were estimated separately for the employed and the non-working and were adjusted for Hispanic origin group, socio-demographic and household characteristics. Working an eight-hour day was associated with spending 38 fewer minutes in food preparation (-38.0 ± SE 4.8, p < .001). Although being US-born was associated with spending fewer minutes in food preparation, this relationship varied by origin group. Acculturation did not appear to modify the relationship between hours worked and time spent in food preparation or family dinner. Mothers who worked late hours spent less time eating the evening meal with their families (-9.8 ± SE 1.3). Although an eight-hour workday was associated with a significant reduction in food preparation time, an unexpected result is that, for working mothers, additional time spent in paid work is not associated with the duration of family dinner later that day. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Constraints or Preferences? Identifying Answers from Part-time Workers’ Transitions in Denmark, France and the United-Kingdom

    OpenAIRE

    Gash, V.

    2008-01-01

    This article investigates whether women work part-time through preference or constraint and argues that different countries provide different opportunities for preference attainment. It argues that women with family responsibilities are unlikely to have their working preferences met without national policies supportive of maternal employment. Using event history analysis the article tracks part-time workers' transitions to both full-time employment and to labour market drop-out. The article co...

  12. Monitoring of mass flux of catalyst FCC in a Cold Pilot Unit by gamma radiation transmission

    International Nuclear Information System (INIS)

    Brito, Marcio Fernando Paixao de

    2014-01-01

    This paper proposes a model for monitoring the mass flow of catalyst FCC - Fluid Catalytic Cracking - in a CPU - Cold Pilot Unit - due to the injection of air and solid by gamma radiation transmission. The CPU simplifies the process of FCC, which is represented by the catalyst cycle, and it was constructed of acrylic so that the flow can be visualized. The CPU consists of a riser, a separation chamber and a return column, and simulates the riser reactor of the FCC process. The catalyst is injected from the return column into the base of the riser, an inclined tube, where compressed air provides fluidization along the riser. When the catalyst arrives in the separation chamber, the solid phase is sent to the return column, and the gas phase exits the system through one of the four cyclones at the top of the separation chamber. The gamma transmission measurements are made by means of three test sections with shielded source and detector. Pressure drop measurements in the riser are made with three pressure gauges positioned along the riser. The source used was Am-241, with a gamma-ray energy of 60 keV, and the detector was a 2 x 2 NaI(Tl) scintillator. Measurements of the catalyst mass flow are made by varying the catalyst seal and the solid density in the riser, because the combination of these measurements determines the velocity of the catalyst in the riser. The results show that gamma transmission is a suitable technique for monitoring the flow of catalyst, that the flow regime in the CPU is annular, that third-generation tomography is more appropriate for studying the CPU, and that the density of the circulating solid in the CPU decreases linearly with increasing air flow. (author)
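
    The thesis itself is not reproduced here, but the measurement principle it relies on (gamma transmission through the riser) follows the Beer-Lambert attenuation law, which a short sketch can illustrate; the attenuation coefficient, path length and count rates below are assumptions, not values from the work.

    ```python
    import math

    def solid_density(I, I0, mu_mass, path_cm):
        """Average solid density (g/cm^3) along the beam path, inverted from
        the Beer-Lambert law I = I0 * exp(-mu_mass * rho * x)."""
        return -math.log(I / I0) / (mu_mass * path_cm)

    # Illustrative numbers only: assumed 60 keV mass attenuation coefficient
    # of the catalyst and assumed beam path length through the riser.
    MU_MASS = 0.28        # cm^2/g
    PATH_CM = 10.0        # cm

    rho = solid_density(I=8500.0, I0=10000.0, mu_mass=MU_MASS, path_cm=PATH_CM)
    print(f"apparent solid density: {rho:.4f} g/cm^3")
    # The catalyst mass flow then follows from density x velocity x riser cross-section.
    ```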

  13. Healthy hospital food initiatives in the United States: time to ban sugar sweetened beverages to reduce childhood obesity.

    Science.gov (United States)

    Wojcicki, Janet M

    2013-06-01

    While childhood obesity is a global problem, the extent and severity of the problem in the United States has resulted in a number of new initiatives, including recent hospital initiatives to limit the sale of sweetened beverages and other high calorie drinks in hospital vending machines and cafeterias. These proposed policy changes are not unique to the United States, but are more comprehensive in the number of hospitals that they will impact. It is advised, however, that these initiatives should focus on banning sugar sweetened beverages, including sodas, 100% fruit juice and sports drinks, from hospital cafeterias and vending machines instead of merely limiting their presence, so as to ensure the success of these programs in reducing the prevalence of childhood obesity. If US hospitals comprehensively remove sugar sweetened beverages from their cafeterias and vending machines, these programs could subsequently become a model for efforts to address childhood obesity in other areas of the world. Hospitals should be a model for health care reform in their communities, and removing sugar sweetened beverages is a necessary first step. ©2013 Foundation Acta Paediatrica. Published by Blackwell Publishing Ltd.

  14. Ahead of his time: Jacob Lipman's 1930 estimate of atmospheric sulfur deposition for the conterminous United States

    Science.gov (United States)

    Landa, Edward R.; Shanley, James B.

    2015-01-01

    A 1936 New Jersey Agricultural Experiment Station Bulletin provided an early quantitative assessment of atmospheric deposition of sulfur for the United States that has been compared in this study with more recent assessments. In the early 20th century, anthropogenic sulfur additions from the atmosphere to the soil by the combustion of fossil fuels were viewed as part of the requisite nutrient supply of crops. Jacob G. Lipman, the founding editor of Soil Science, and his team at Rutgers University, made an inventory of such additions to soils of the conterminous United States during the economic depression of the 1930s as part of a federally funded project looking at nutrient balances in soils. Lipman's team gathered data compiled by the US Bureau of Mines on coal and other fuel consumption by state and calculated the corresponding amounts of sulfur emitted. Their work pioneered a method of assessment that became the norm in the 1970s to 1980s—when acid rain emerged as a national issue. Lipman's estimate of atmospheric sulfur deposition in 1930 is in reasonable agreement with recent historic reconstructions.

  15. 75 FR 27798 - Notice of Issuance of Final Determination Concerning Certain Commodity-Based Clustered Storage Units

    Science.gov (United States)

    2010-05-18

    ...) with instructions on it that allows it to perform certain functions of preventing piracy of software... and HDD canisters usually include a disk array controller frame which effects the interface between the subsystem's storage units and a CPU. In this case, the software effects the interconnection...

  16. Evaluating the Human Damage of Tsunami at Each Time Frame in Aggregate Units Based on GPS data

    Directory of Open Access Journals (Sweden)

    Y. Ogawa

    2016-06-01

    Full Text Available Assessments of the human damage caused by tsunamis are required in order to consider disaster prevention at the regional level. Hence, there is an increasing need for assessments of human damage caused by earthquakes. However, damage assessments in Japan currently rely on static population distribution data, such as statistical night-time population data obtained from national census surveys. Therefore, human damage estimates that take time frames into consideration have not been assessed yet. Against this background, the objectives of this study are: (1) to develop a method for estimating the population distribution for each time frame, based on location positioning data observed with mass GPS loggers of mobile phones; (2) to use evacuation and casualty models for evaluating human damage due to the tsunami, and to evaluate each time frame by using the data developed in the first objective; and (3) to discuss the factors which cause the differences in human damage for each time frame. By visualizing the results, we clarified the differences in damage depending on time frame, day and area. As this study enables us to assess damage for any time frame in high resolution, it will be useful to consider provision for various situations when an earthquake may hit, such as during commuting hours or working hours, and on a week day or holiday.

  17. Impact of mobile intensive care unit use on total ischemic time and clinical outcomes in ST-elevation myocardial infarction patients - real-world data from the Acute Coronary Syndrome Israeli Survey.

    Science.gov (United States)

    Koifman, Edward; Beigel, Roy; Iakobishvili, Zaza; Shlomo, Nir; Biton, Yitschak; Sabbag, Avi; Asher, Elad; Atar, Shaul; Gottlieb, Shmuel; Alcalai, Ronny; Zahger, Doron; Segev, Amit; Goldenberg, Ilan; Strugo, Rafael; Matetzky, Shlomi

    2017-01-01

    Ischemic time has prognostic importance in ST-elevation myocardial infarction patients. Mobile intensive care unit use can reduce components of total ischemic time by appropriate triage of ST-elevation myocardial infarction patients. Data from the Acute Coronary Survey in Israel registry 2000-2010 were analyzed to evaluate factors associated with mobile intensive care unit use and its impact on total ischemic time and patient outcomes. The study comprised 5474 ST-elevation myocardial infarction patients enrolled in the Acute Coronary Survey in Israel registry, of whom 46% (n=2538) arrived via mobile intensive care units. There was a significant increase in rates of mobile intensive care unit utilization from 36% in 2000 to over 50% in 2010 (p < …). Factors associated with mobile intensive care unit use were Killip>1 (odds ratio=1.32, p < …). Patients arriving via mobile intensive care units benefitted from increased rates of primary reperfusion therapy (odds ratio=1.58, p < …), and from a shorter median total ischemic time compared with non-mobile intensive care unit patients (175 (interquartile range 120-262) vs 195 (interquartile range 130-333) min, respectively; p < …). Mobile intensive care unit use was the most important predictor of achieving the door-to-balloon time goal, and adjusted one-year mortality was lower in the mobile intensive care unit group (odds ratio=0.79, 95% confidence interval (0.66-0.94), p=0.01). Among patients with ST-elevation myocardial infarction, the utilization of mobile intensive care units is associated with increased rates of primary reperfusion, a reduction in the time interval to reperfusion, and a reduction in one-year adjusted mortality.

  18. NASA Ames DEVELOP Interns: Helping the Western United States Manage Natural Resources One Project at a Time

    Science.gov (United States)

    Justice, Erin; Newcomer, Michelle

    2010-01-01

    The western half of the United States is made up of a number of diverse ecosystems ranging from arid desert to coastal wetlands and rugged forests. Every summer for the past 7 years students ranging from high school to graduate level gather at NASA Ames Research Center (ARC) as part of the DEVELOP Internship Program. Under the guidance of Jay Skiles [Ames Research Center (ARC) - Ames DEVELOP Manager] and Cindy Schmidt [ARC/San Jose State University Ames DEVELOP Coordinator] they work as a team on projects exploring topics including: invasive species, carbon flux, wetland restoration, air quality monitoring, storm visualizations, and forest fires. The study areas for these projects have been in Washington, Utah, Oregon, Nevada, Hawaii, Alaska and California. Interns combine data from NASA and partner satellites with models and in situ measurements to complete prototype projects demonstrating how NASA data and resources can help communities tackle their Earth Science related problems.

  19. Use of Flumazenil to Provide Adequate Recovery Time Post-Midazolam Infusion in a General Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    MOJTABA MOJTAHEDZADEH

    1999-08-01

    Full Text Available Sedation permits patients to tolerate the various treatment modalities to which they are subjected. However, it may sometimes cause prolonged sedation in critically ill patients. Flumazenil, a benzodiazepine antagonist, reverses midazolam-induced sedation and amnesia. We prospectively designed a double-blind randomized study to evaluate the effects of flumazenil on thirty (30) Iranian General Intensive Care Unit (ICU) patients. They required mechanical ventilation for more than 12 hours and were sedated by midazolam infusions. Sedation levels were measured hourly during the infusion, at the end of the infusion, and at 5, 15, 30, 60, and 120 min after cessation of the midazolam infusion. Reversal of sedation was observed in all patients who received flumazenil, and re-sedation occurred in seven of these patients. Reversal was not seen in any of the patients who received placebo.

  20. Premigration School Quality, Time Spent in the United States, and the Math Achievement of Immigrant High School Students.

    Science.gov (United States)

    Bozick, Robert; Malchiodi, Alessandro; Miller, Trey

    2016-10-01

    Using a nationally representative sample of 1,189 immigrant youth in American high schools, we examine whether the quality of education in their country of origin is related to post-migration math achievement in the 9th grade. To measure the quality of their education in the country of origin, we use country-specific average test scores from two international assessments: the Programme for International Student Assessment (PISA) and the Trends in International Mathematics and Science Study (TIMSS). We find that the average PISA or TIMSS scores for immigrant youth's country of origin are positively associated with their performance on the 9th grade post-migration math assessment. We also find that each year spent in the United States is positively associated with performance on the 9th grade post-migration math assessment, but this effect is strongest for immigrants from countries with low PISA/TIMSS scores.

  1. Mathematical modelling and optimization of a large-scale combined cooling, heat, and power system that incorporates unit changeover and time-of-use electricity price

    International Nuclear Information System (INIS)

    Zhu, Qiannan; Luo, Xianglong; Zhang, Bingjian; Chen, Ying

    2017-01-01

    Highlights: • We propose a novel superstructure for the design and optimization of LSCCHP. • A multi-objective multi-period MINLP model is formulated. • The unit start-up cost and time-of-use electricity prices are involved. • Unit size discretization strategy is proposed to linearize the original MINLP model. • A case study is elaborated to demonstrate the effectiveness of the proposed method. - Abstract: Building energy systems, particularly large public ones, are major energy consumers and pollutant emission contributors. In this study, a superstructure of large-scale combined cooling, heat, and power system is constructed. The off-design unit, economic cost, and CO2 emission models are also formulated. Moreover, a multi-objective mixed integer nonlinear programming model is formulated for the simultaneous system synthesis, technology selection, unit sizing, and operation optimization of large-scale combined cooling, heat, and power system. Time-of-use electricity price and unit changeover cost are incorporated into the problem model. The economic objective is to minimize the total annual cost, which comprises the operation and investment costs of large-scale combined cooling, heat, and power system. The environmental objective is to minimize the annual global CO2 emission of large-scale combined cooling, heat, and power system. The augmented ε–constraint method is applied to achieve the Pareto frontier of the design configuration, thereby reflecting the set of solutions that represent optimal trade-offs between the economic and environmental objectives. Sensitivity analysis is conducted to reflect the impact of natural gas price on the combined cooling, heat, and power system. The synthesis and design of combined cooling, heat, and power system for an airport in China is studied to test the proposed synthesis and design methodology. The Pareto curve of multi-objective optimization shows that the total annual cost varies from 102.53 to 94.59 M
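
    The paper's MINLP model is far too large to reproduce, but the ε-constraint idea used to trace the cost-emissions Pareto frontier can be shown on a toy linear problem (the paper applies the augmented variant of the method); every coefficient below is invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy stand-in for the design trade-off: meet a demand of 10 units with
    # outputs x and y while trading off cost (3x + 2y) against emissions (x + 4y).
    cost = np.array([3.0, 2.0])
    emis = np.array([1.0, 4.0])
    A_demand, b_demand = np.array([[-1.0, -1.0]]), np.array([-10.0])  # x + y >= 10

    pareto = []
    for eps in np.linspace(10.0, 40.0, 7):        # epsilon-constraint sweep
        A_ub = np.vstack([A_demand, emis])        # add emissions <= eps
        b_ub = np.append(b_demand, eps)
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                      method="highs")
        if res.success:
            pareto.append((eps, res.fun, emis @ res.x))

    for eps, c, e in pareto:
        print(f"eps={eps:5.1f}  cost={c:5.1f}  emissions={e:5.1f}")
    ```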

  2. Studying the development of asynchronous rolling of the rotor over the stator with the turbine unit protection systems having different response times

    Science.gov (United States)

    Shatokhin, V. F.

    2014-07-01

    The possibility to stabilize the developing asynchronous rolling of the rotor over the stator under the conditions of power unit protections coming in action with different response times is considered. Asynchronous rolling of the rotor over the stator may develop when the rotating rotor comes in contact with the stator at high amplitudes of vibration caused by an abrupt loss of rotor balancing, by forced or self-excited vibration of the rotor, and by other factors. The danger of asynchronous rolling is connected with almost instantaneous development of self-excited vibration of the rotor when it comes in contact with the stator and with the rotor vibration amplitudes and forces of interaction between the rotor and stator dangerous for the turbine unit integrity. It is assumed that the turbine unit protection systems come in action after the arrival of signal of exceeding the permissible vibration level and produce commands to disconnect the generator from the grid, and to stop the supply of working fluid into the flow path, due to which an accelerating torque ceases to act on the turbine unit shaft. The protection system response speed is determined by a certain time t_AB, the time that is taken for its components to come into action from the commencement of the event (application of the signal) to closure of the stop valves. The time curves of the main rolling parameters as functions of the t_AB value are presented. It is shown that the response time of existing protection systems is not sufficient for efficiently damping the rolling phenomenon, although the use of an electrical protection system (with the response time equal to 0.40-0.45 s) may have a positive effect on stabilizing the vibration amplitudes to a certain extent during the rolling and on smoothing its dangerous consequences. The consequences of rotor rolling over the stator can be efficiently mitigated by increasing the energy losses in the rotor-stator system (especially in the stator) and by

  3. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
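
    The all-pairs distance computation discussed in the article can be sketched in a few lines; the loop version below corresponds to the sequential CPU baseline, while the vectorised version exposes the data parallelism a GPU kernel would exploit (this is an illustration, not the article's CUDA code).

    ```python
    import numpy as np

    def all_pairs_naive(X):
        """Loop-based all-pairs Euclidean distance (sequential CPU baseline)."""
        n = X.shape[0]
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                D[i, j] = np.sqrt(np.sum((X[i] - X[j]) ** 2))
        return D

    def all_pairs_vectorised(X):
        """Same result as data-parallel array operations: the structure that
        maps naturally onto one GPU thread per (i, j) pair."""
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
        return np.sqrt(np.maximum(d2, 0.0))   # clamp tiny negative round-off

    X = np.random.rand(100, 8)
    assert np.allclose(all_pairs_naive(X), all_pairs_vectorised(X), atol=1e-6)
    ```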

  4. Substance-use disorders and poverty as prospective predictors of first-time homelessness in the United States.

    Science.gov (United States)

    Thompson, Ronald G; Wall, Melanie M; Greenstein, Eliana; Grant, Bridget F; Hasin, Deborah S

    2013-12-01

    We examined whether substance-use disorders and poverty predicted first-time homelessness over 3 years. We analyzed longitudinal data from waves 1 (2001-2002) and 2 (2004-2005) of the National Epidemiologic Survey on Alcohol and Related Conditions to determine the main and interactive effects of wave 1 substance use disorders and poverty on first-time homelessness by wave 2, among those who were never homeless at wave 1 (n = 30,558). First-time homelessness was defined as having no regular place to live or having to live with others for 1 month or more as a result of having no place of one's own since wave 1. Alcohol-use disorders (adjusted odds ratio [AOR] = 1.34), drug-use disorders (AOR = 2.51), and poverty (AOR = 1.34) independently increased prospective risk for first-time homelessness, after adjustment for ecological variables. Substance-use disorders and poverty interacted to differentially influence risk for first-time homelessness (P < …). These findings provide prospective estimates of risk for first-time homelessness and can serve as a benchmark for future studies. Substance abuse treatment should address financial status and risk of future homelessness.

  5. Do time of birth, unit volume, and staff seniority affect neonatal outcome in deliveries at ≥34+0 weeks of gestation?

    Science.gov (United States)

    Reif, P; Pichler, G; Griesbacher, A; Lehner, G; Schöll, W; Lang, U; Hofmann, H; Ulrich, D

    2018-06-01

    We investigated whether time of birth, unit volume, and staff seniority affect neonatal outcome in neonates born at ≥34 +0 weeks of gestation. Population-based prospective cohort study. Ten public hospitals in the Austrian province of Styria. A total of 87 065 neonates delivered in the period 2004-2015. Based on short-term outcome data, generalised linear mixed models were used to calculate the risk for adverse and severely adverse neonatal outcomes according to time of birth, unit volume, and staff seniority. Neonatal composite adverse and severely adverse outcome measures. The odds ratio for severely adverse events during the night-time (22:01-07:29 hours) compared with the daytime (07:30-15:00 hours) was 1.35 (95% confidence interval, 95% CI 1.13-1.61). There were no significant differences in neonatal outcome comparing weekdays and weekends, and comparing office hours and shifts. Units with 500-1000 deliveries per year had the lowest risk for adverse events. Adverse and severely adverse neonatal outcomes were least common for midwife-guided deliveries, and became more frequent with the level of experience of the doctors attending the delivery. With increasing pregnancy risks, senior staff attending delivery and delivering in a tertiary centre reduce the odds ratio for adverse events. Different times of delivery were associated with increased adverse neonatal outcomes. The management of uncomplicated deliveries by less experienced staff showed no negative impact on perinatal outcome. In contrast, riskier pregnancies delivered by senior staff in a tertiary centre favour a better outcome. Achieving a better balance in the total number of labour ward staff during the day and the night appears to be a greater priority than increasing the continuous presence of senior obstetrical staff on the labour ward during the out-of-hours period. Deliveries during night time lead to a greater number of neonates experiencing severely adverse events. © 2017 Royal College of

  6. A preliminary estimate of the EUVE cumulative distribution of exposure time on the unit sphere. [Extreme Ultra-Violet Explorer

    Science.gov (United States)

    Tang, C. C. H.

    1984-01-01

    A preliminary study of an all-sky coverage of the EUVE mission is given. Algorithms are provided to compute the exposure of the celestial sphere under the spinning telescopes, taking into account that during part of the exposure time the telescopes are blocked by the earth. The algorithms are used to give an estimate of exposure time at different ecliptic latitudes as a function of the angle of field of view of the telescope. Sample coverage patterns are also given for a 6-month mission.

  7. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
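
    The record's Laplace-transform analysis is not reproduced here; as a much cruder illustration, the sketch below simply derates the M/M/1 service rate by the fraction of each period claimed by the time-critical process and reports the standard M/M/1 background metrics. All numbers are hypothetical.

    ```python
    def mm1_with_periodic_load(lam, mu, critical_fraction):
        """Rough background-processing estimate: treat the CPU as M/M/1 with
        the service rate derated by the share of time pre-empted by the
        deterministic time-critical task (a simplification of the paper)."""
        mu_eff = mu * (1.0 - critical_fraction)   # capacity left for background work
        if lam >= mu_eff:
            raise ValueError("background queue is unstable at this load")
        rho = lam / mu_eff                        # utilisation
        mean_in_system = rho / (1.0 - rho)        # average background jobs queued or in service
        mean_response = 1.0 / (mu_eff - lam)      # average time per background job
        return rho, mean_in_system, mean_response

    # Hypothetical numbers: 8 background jobs/s arrive, the CPU serves 20 jobs/s
    # when idle, and time-critical activity claims 40% of every period.
    print(mm1_with_periodic_load(lam=8.0, mu=20.0, critical_fraction=0.4))
    ```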

  8. Time Trends and Predictors of Abnormal Postoperative Body Temperature in Infants Transported to the Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Hedwig Schroeck

    2016-01-01

    Full Text Available Background. Despite increasing adoption of active warming methods over the recent years, little is known about the effectiveness of these interventions on the occurrence of abnormal postoperative temperatures in sick infants. Methods. Preoperative and postoperative temperature readings, patient characteristics, and procedural factors of critically ill infants at a single institution were retrieved retrospectively from June 2006 until May 2014. The primary endpoints were the incidence and trend of postoperative hypothermia and hyperthermia on arrival at the intensive care units. Univariate and adjusted analyses were performed to identify factors independently associated with abnormal postoperative temperatures. Results. 2,350 cases were included. 82% were normothermic postoperatively, while hypothermia and hyperthermia each occurred in 9% of cases. During the study period, hypothermia decreased from 24% to 2% (p<0.0001) while hyperthermia remained unchanged (13% in 2006, 8% in 2014, p=0.357). Factors independently associated with hypothermia were higher ASA status (p=0.02), lack of intraoperative convective warming (p<0.001) and procedure date before 2010 (p<0.001). Independent associations for postoperative hyperthermia included lower body weight (p=0.01) and procedure date before 2010 (p<0.001). Conclusions. We report an increase in postoperative normothermia rates in critically ill infants from 2006 until 2014. Careful monitoring to avoid overcorrection and hyperthermia is recommended.

  9. Real-time PCR and microscopy: Are the two methods measuring the same unit of arbuscular mycorrhizal fungal abundance?

    NARCIS (Netherlands)

    Gamper, H.A.; Young, J.P.W.; Jones, D.L.; Hodge, A.

    2008-01-01

    To enable quantification of mycelial abundance in mixed-species environments, eight new TaqMan® real-time PCR assays were developed for five arbuscular mycorrhizal fungal (AMF, Glomeromycota) taxa. The assays targeted genes encoding 18S rRNA or actin, and were tested on DNA from cloned gene

  10. Digital versus analog complete-arch impressions for single-unit premolar implant crowns : Operating time and patient preference

    NARCIS (Netherlands)

    Schepke, Ulf; Meijer, Henny J. A.; Kerdijk, Wouter; Cune, Marco S.

    Statement of problem. Digital impression-making techniques are supposedly more patient friendly and less time-consuming than analog techniques, but evidence is lacking to substantiate this assumption. Purpose. The purpose of this in vivo within-subject comparison study was to examine patient

  11. The Toxic Exposure Surveillance System (TESS): Risk assessment and real-time toxicovigilance across United States poison centers

    International Nuclear Information System (INIS)

    Watson, William A.; Litovitz, Toby L.; Belson, Martin G.; Funk Wolkin, Amy B.; Patel, Manish; Schier, Joshua G.; Reid, Nicole E.; Kilbourne, Edwin; Rubin, Carol

    2005-01-01

    The Toxic Exposure Surveillance System (TESS) is a uniform data set of US poison centers cases. Categories of information include the patient, the caller, the exposure, the substance(s), clinical toxicity, treatment, and medical outcome. The TESS database was initiated in 1985, and provides a baseline of more than 36.2 million cases through 2003. The database has been utilized for a number of safety evaluations. Consideration of the strengths and limitations of TESS data must be incorporated into data interpretation. Real-time toxicovigilance was initiated in 2003 with continuous uploading of new cases from all poison centers to a central database. Real-time toxicovigilance utilizing general and specific approaches is systematically run against TESS, further increasing the potential utility of poison center experiences as a means of early identification of potential public health threats

  12. The Toxic Exposure Surveillance System (TESS): risk assessment and real-time toxicovigilance across United States poison centers.

    Science.gov (United States)

    Watson, William A; Litovitz, Toby L; Belson, Martin G; Wolkin, Amy B Funk; Patel, Manish; Schier, Joshua G; Reid, Nicole E; Kilbourne, Edwin; Rubin, Carol

    2005-09-01

    The Toxic Exposure Surveillance System (TESS) is a uniform data set of US poison centers cases. Categories of information include the patient, the caller, the exposure, the substance(s), clinical toxicity, treatment, and medical outcome. The TESS database was initiated in 1985, and provides a baseline of more than 36.2 million cases through 2003. The database has been utilized for a number of safety evaluations. Consideration of the strengths and limitations of TESS data must be incorporated into data interpretation. Real-time toxicovigilance was initiated in 2003 with continuous uploading of new cases from all poison centers to a central database. Real-time toxicovigilance utilizing general and specific approaches is systematically run against TESS, further increasing the potential utility of poison center experiences as a means of early identification of potential public health threats.

  13. Ozone time scale decomposition and trend assessment from surface observations in National Parks of the United States

    Science.gov (United States)

    Mao, H.; McGlynn, D. F.; Wu, Z.; Sive, B. C.

    2017-12-01

    A time scale decomposition technique, the Ensemble Empirical Mode Decomposition (EEMD), has been employed to decompose the time scales in long-term ozone measurement data at 24 US National Park Service sites. Time scales of interest include the annual cycle, variability by large scale climate oscillations, and the long-term trend. The implementation of policy regulations was found to have had a greater effect on sites nearest to urban regions. Ozone daily mean values increased until around the late 1990s followed by decreasing trends during the ensuing decades for sites in the East, southern California, and northwestern Washington. Sites in the Midwest did not experience a reversal of trends from positive to negative until the mid- to late 2000s. The magnitude of the annual amplitude decreased for nine sites and increased for three sites. Stronger decreases in the annual amplitude occurred in the East, with more sites in the East experiencing decreases in annual amplitude than in the West. The date of annual ozone peaks and minimums has changed for 12 sites in total, but those with a shift in peak date did not necessarily have a shift in the trough date. There appeared to be a link between peak dates occurring earlier and a decrease in the annual amplitude. This is likely related to a decrease in ozone titration due to NOx emission reductions. Furthermore, it was found that the shift in the Pacific Decadal Oscillation (PDO) regime from positive to negative in 1998-1999 resulting in an increase in occurrences of La Niña-like conditions had the effect of directing more polluted air masses from East Asia to higher latitudes over North America. This change in PDO regime was likely one main factor causing the increase in ozone concentrations on all time scales at an Alaskan site DENA-HQ.

  14. Digital versus analog complete-arch impressions for single-unit premolar implant crowns: Operating time and patient preference.

    Science.gov (United States)

    Schepke, Ulf; Meijer, Henny J A; Kerdijk, Wouter; Cune, Marco S

    2015-09-01

    Digital impression-making techniques are supposedly more patient friendly and less time-consuming than analog techniques, but evidence is lacking to substantiate this assumption. The purpose of this in vivo within-subject comparison study was to examine patient perception and time consumption for 2 complete-arch impression-making methods: a digital and an analog technique. Fifty participants with a single missing premolar were included. Treatment consisted of implant therapy. Three months after implant placement, complete-arch digital (Cerec Omnicam; Sirona) and analog impressions (semi-individual tray, Impregum; 3M ESPE) were made, and the participant's opinion was evaluated with a standard questionnaire addressing several domains (inconvenience, shortness of breath, fear of repeating the impression, and feelings of helplessness during the procedure) with the visual analog scale. All participants were asked which procedure they preferred. Operating time was measured with a stopwatch. The differences between impressions made for maxillary and mandibular implants were also compared. The data were analyzed with paired and independent sample t tests, and effect sizes were calculated. Statistically significant differences were found in favor of the digital procedure regarding all subjective domains (P<.001), with medium to large effect sizes. Of all the participants, over 80% preferred the digital procedure to the analog procedure. The mean duration of digital impression making was 6 minutes and 39 seconds (SD=1:51) versus 12 minutes and 13 seconds (SD=1:24) for the analog impression (P<.001, effect size=2.7). Digital impression making for the restoration of a single implant crown takes less time than analog impression making. Furthermore, participants preferred the digital scan and reported less inconvenience, less shortness of breath, less fear of repeating the impression, and fewer feelings of helplessness during the procedure. Copyright © 2015 Editorial Council

  15. New-generation curing units and short irradiation time: the degree of conversion of microhybrid composite resin.

    Science.gov (United States)

    Scotti, Nicolla; Venturello, Alberto; Migliaretti, Giuseppe; Pera, Francesco; Pasqualini, Damiano; Geobaldo, Francesco; Berutti, Elio

    2011-09-01

    This in vitro study investigated the depth of cure of a microhybrid composite resin when cured with reduced times of exposure to three commercially available curing lights. Different sample thicknesses (1, 2, and 3 mm) were light cured in high intensity polymerization mode (2,400 mW/cm² for 5, 10, 15, and 20 seconds; 1,100 mW/cm² for 10, 20, 30, and 40 seconds; and 1,100 mW/cm² for 10, 20, 30, and 40 seconds, respectively). The degree of conversion (%) at the bottom of each sample was measured by Attenuated Total Reflection Fourier Transform Infrared (ATR F-TIR) analysis after each polymerization step. Data were analyzed by ANOVA for repeated measures, showing the degree of conversion was not influenced by the curing light employed (P = .622) but was significantly influenced by the thickness of composite resin (P < …). Differences in the degree of conversion vs the shorter irradiation time permitted (T1) were not significant among different lamps but were significant among different thicknesses. The depth of cure of microhybrid composite resin appears not to be influenced by the curing light employed. Increased irradiation time significantly increases the degree of conversion. Thickness strongly influences depth of cure.

  16. Self-organization comprehensive real-time state evaluation model for oil pump unit on the basis of operating condition classification and recognition

    Science.gov (United States)

    Liang, Wei; Yu, Xuchao; Zhang, Laibin; Lu, Wenqing

    2018-05-01

    In an oil transmission station, the operating condition (OC) of an oil pump unit sometimes switches, which leads to changes in operating parameters. If the switching of OCs is not taken into consideration when performing a state evaluation of the pump unit, the accuracy of the evaluation is strongly affected. Hence, in this paper, a self-organization Comprehensive Real-Time State Evaluation Model (self-organization CRTSEM) is proposed based on OC classification and recognition. The underlying CRTSEM is built by incorporating the advantages of the Gaussian Mixture Model (GMM) and the Fuzzy Comprehensive Evaluation Model (FCEM). That is to say, independent state models are established for every state characteristic parameter according to their distribution types (i.e. the Gaussian distribution and the logistic regression distribution). Meanwhile, the Analytic Hierarchy Process (AHP) is utilized to calculate the weights of the state characteristic parameters. Then, the OC classification is determined by the types of oil delivery tasks, and CRTSEMs of different standard OCs are built to constitute the CRTSEM matrix. On the other hand, OC recognition is realized by a self-organization model that is established on the basis of the Back Propagation (BP) model. After the self-organization CRTSEM is derived through integration, real-time monitoring data can be input for OC recognition. Finally, the current state of the pump unit can be evaluated by using the appropriate CRTSEM. The case study shows that the proposed self-organization CRTSEM provides reasonable and accurate state evaluation results for the pump unit.

  17. Estimates of the timing of reductions in genital warts and high grade cervical intraepithelial neoplasia after onset of human papillomavirus (HPV) vaccination in the United States.

    Science.gov (United States)

    Chesson, Harrell W; Ekwueme, Donatus U; Saraiya, Mona; Dunne, Eileen F; Markowitz, Lauri E

    2013-08-20

    The objective of this study was to estimate the number of years after onset of a quadrivalent HPV vaccination program before notable reductions in genital warts and cervical intraepithelial neoplasia (CIN) will occur in teenagers and young adults in the United States. We applied a previously published model of HPV vaccination in the United States and focused on the timing of reductions in genital warts among both sexes and reductions in CIN 2/3 among females. Using different coverage scenarios, the lowest being consistent with current 3-dose coverage in the United States, we estimated the number of years before reductions of 10%, 25%, and 50% would be observed after onset of an HPV vaccination program for ages 12-26 years. The model suggested female-only HPV vaccination in the intermediate coverage scenario will result in a 10% reduction in genital warts within 2-4 years for females aged 15-19 years and a 10% reduction in CIN 2/3 among females aged 20-29 years within 7-11 years. Coverage had a major impact on when reductions would be observed. For example, in the higher coverage scenario a 25% reduction in CIN 2/3 would be observed within 8 years compared with 15 years in the lower coverage scenario. Our model provides estimates of the potential timing and magnitude of the impact of HPV vaccination on genital warts and CIN 2/3 at the population level in the United States. Notable population-level impacts of HPV vaccination on genital warts and CIN 2/3 can occur within a few years after onset of vaccination, particularly among younger age groups. Our results are generally consistent with early reports of declines in genital warts among youth. Published by Elsevier Ltd.

  18. Accelerometer Measured Level of Physical Activity Indoors and Outdoors During Preschool Time in Sweden and the United States

    DEFF Research Database (Denmark)

    Raustorp, A.; Pagels, P.; Boldemann, C.

    2012-01-01

    BACKGROUND: It is important to understand the correlates of physical activity in order to influence policy and create environments that promote physical activity among preschool children. We compared preschoolers' physical activity in Swedish and in US settings and objectively examined differences...... boys and girls indoor and outdoor physical activity regarding different intensity levels and sedentary behaviour. METHODS: Accelerometer determined physical activity in 50 children with mean age 52 months, (range 40-67) was recorded during preschool time for 5 consecutive weekdays at four sites...

  19. Real-time subsystem in nuclear physics. Use of a terminal unit for automatical control of experiments

    International Nuclear Information System (INIS)

    Chatain, Dominique.

    1975-01-01

    A data processing system allowing data acquisition and the automatic control of spectrometry experiments has been designed and installed at the Institut de Physique Nucleaire of Lyon. This system consists of a CDC 1700 computer used by the computing center as a terminal of the IN2P3 CDC 6600 computer and to which a remote station located near the experiment has been connected. Peripherals for spectrometer control and a display are connected to the remote station. This display makes it possible for users to converse with the computer and to visualize spectrum processing in graphic or alphanumeric form. The software consists of a real time subsystem of the standard CDC system: ''Mass Storage Operating System''. This real time subsystem is meant to achieve data transfers between the computer and its remote station. A dynamic store allocation simulating a virtual memory is attached to the system. It allows the parallel running of many programs, no matter how long they are. Moreover a disk file supervisor allows experimenters to store experimental results for delayed processing [fr]

  20. Dynamic simulation of a pilot scale vacuum gas oil hydrocracking unit by the space-time CE/SE method

    Energy Technology Data Exchange (ETDEWEB)

    Sadighi, S.; Ahmad, A. [Institute of Hydrogen Economy, Universiti Teknologi Malaysia, Johor Bahru (Malaysia); Shirvani, M. [Faculty of Chemical Engineering, University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    This work introduces a modified space-time conservation element/solution element (CE/SE) method for the simulation of the dynamic behavior of a pilot-scale hydrocracking reactor. With this approach, a four-lump dynamic model including vacuum gas oil (VGO), middle distillate, naphtha and gas is solved. The proposed method is capable of handling the stiffness of the partial differential equations resulting from the hydrocracking reactions. For comparison, the model is also solved by the finite difference method (FDM), and the results from both approaches are compared. Initially, the absolute average deviation of the cold dynamic simulation using the CE/SE approach is 8.98%, which is better than that obtained using the FDM. Then, the stability analysis proves that for achieving an appropriate response from the dynamic model, the Courant number, which is a function of the time step size, mesh size and volume flow rate through the catalytic bed, should be less than 1. Finally, it is found that, following a careful selection of these parameters, the CE/SE solutions to the hydrocracking model can produce higher accuracy than the FDM results. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
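
    A small Python sketch of the stability check described above follows; the symbols and the bed geometry are illustrative assumptions, with the Courant number formed from the interstitial velocity, the time step and the axial mesh size.

    # Minimal sketch of the CFL-style stability check described in the abstract:
    # the Courant number built from the time step, the axial mesh size and the
    # velocity through the bed must stay below 1. Symbols and the bed geometry
    # are illustrative assumptions, not values from the paper.
    def courant_number(volume_flow_m3_s: float, bed_area_m2: float, porosity: float,
                       dt_s: float, dz_m: float) -> float:
        """Courant number Co = u * dt / dz with u the interstitial axial velocity."""
        u = volume_flow_m3_s / (bed_area_m2 * porosity)  # interstitial velocity [m/s]
        return u * dt_s / dz_m

    co = courant_number(volume_flow_m3_s=2.8e-6, bed_area_m2=2.0e-3, porosity=0.4,
                        dt_s=1.0, dz_m=0.01)
    print(f"Co = {co:.3f} -> {'stable' if co < 1.0 else 'refine dt or dz'}")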

  1. Sensitivity and uncertainty analyses of unsaturated flow travel time in the CHnz unit of Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Nichols, W.E.; Freshley, M.D.

    1991-10-01

    This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs
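
    For reference, the standard van Genuchten moisture-retention and hydraulic-conductivity relations, the usual source of the "two curve-fitting parameters" (alpha and n) mentioned above, are written out below in LaTeX; the parameter values used in the PACE-90 analyses are not reproduced.

    % Standard van Genuchten (1980) relations; theta_r and theta_s are residual and
    % saturated moisture contents, K_s the saturated hydraulic conductivity, and
    % alpha and n the two curve-fitting parameters referred to in the abstract.
    \theta(h) = \theta_r + \frac{\theta_s - \theta_r}{\left[1 + (\alpha\,|h|)^{n}\right]^{m}},
    \qquad m = 1 - \frac{1}{n},
    \qquad
    K(S_e) = K_s\, S_e^{1/2}\left[1 - \left(1 - S_e^{1/m}\right)^{m}\right]^{2},
    \quad S_e = \frac{\theta - \theta_r}{\theta_s - \theta_r}.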

  2. The analysis in real time, during the pre-commissioning period, of the overall plant control system of the CANDU NPP unit

    International Nuclear Information System (INIS)

    Tapu, C.; Irimia, M.

    1994-01-01

    The physical processes in the CANDU NPP are controlled by a computer system which, based on the associated package of programmes, performs both the monitoring function at the level of the operational area, and the digital regulation function of the technological systems within the NPP. For an optimal operation of the NPP, in dynamic as well as in stationary regime, the correlation of the direct digital control of the technological systems is very important for the NPP's overall digital control system. Taking into account the fact that during the pre-commissioning period of a CANDU unit it is necessary to test in dynamic regime the performance of the overall digital control function of the NPP, a system of verification and testing in real time was developed, by connecting a micro simulator of the physical process in the NPP to the actual computer system of the unit. The paper presents the methods and techniques used, as well as the results of the tests for various operational modes, which highlight the functioning of the digital control system of the CANDU NPP unit. (Author)

  3. "I Always Feel Like I Have to Rush…" Pet Owner and Small Animal Veterinary Surgeons' Reflections on Time during Preventative Healthcare Consultations in the United Kingdom.

    Science.gov (United States)

    Belshaw, Zoe; Robinson, Natalie J; Dean, Rachel S; Brennan, Marnie L

    2018-02-08

    Canine and feline preventative healthcare consultations can be more complex than other consultation types, but they are typically not allocated additional time in the United Kingdom (UK). Impacts of the perceived length of UK preventative healthcare consultations have not previously been described. The aim of this novel study was to provide the first qualitative description of owner and veterinary surgeon reflections on time during preventative healthcare consultations. Semi-structured telephone interviews were conducted with 14 veterinary surgeons and 15 owners about all aspects of canine and feline preventative healthcare consultations. These qualitative data were thematically analysed, and four key themes identified. This paper describes the theme relating to time and consultation length. Patient, owner, veterinary surgeon and practice variables were recalled to impact the actual, versus allocated, length of a preventative healthcare consultation. Preventative healthcare consultations involving young, old and multi-morbid animals and new veterinary surgeon-owner partnerships appear particularly susceptible to time pressures. Owners and veterinary surgeons recalled rushing and minimizing discussions to keep consultations within their allocated time. The impact of the pace, content and duration of a preventative healthcare consultation may be influential factors in consultation satisfaction. These interviews provide an important insight into the complex nature of preventative healthcare consultations and the behaviour of participants under different perceived time pressures. These data may be of interest and relevance to all stakeholders in dog and cat preventative healthcare.

  4. Modifiable variables in physical therapy education programs associated with first-time and three-year National Physical Therapy Examination pass rates in the United States

    Directory of Open Access Journals (Sweden)

    Chad Cook

    2015-09-01

    Full Text Available Purpose: This study aimed to examine the modifiable programmatic characteristics reflected in the Commission on Accreditation in Physical Therapy Education (CAPTE) Annual Accreditation Report for all accredited programs that reported pass rates on the National Physical Therapist Examination, and to build a predictive model for first-time and three-year ultimate pass rates. Methods: This observational study analyzed programmatic information from the 185 CAPTE-accredited physical therapy programs in the United States and Puerto Rico out of a total of 193 programs that provided the first-time and three-year ultimate pass rates in 2011. Fourteen predictive variables representing student selection and composition, clinical education length and design, and general program length and design were analyzed against first-time pass rates and ultimate pass rates on the NPTE. Univariate and multivariate multinomial regression analysis for first-time pass rates and logistic regression analysis for three-year ultimate pass rates were performed. Results: The variables associated with the first-time pass rate in the multivariate analysis were the mean undergraduate grade point average (GPA) and the average age of the cohort. Multivariate analysis showed that mean undergraduate GPA was associated with the three-year ultimate pass rate. Conclusions: Mean undergraduate GPA was found to be the only modifiable predictor for both first-time and three-year pass rates among CAPTE-accredited physical therapy programs.
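
    As an illustration of the kind of model reported (logistic regression of three-year pass status on mean undergraduate GPA), a Python sketch with synthetic data follows; the data, coefficients and variable names are made up, and the multinomial model for first-time pass rates is not shown.

    # Illustrative sketch only (data are synthetic): logistic regression of three-year
    # ultimate NPTE pass/fail against mean undergraduate GPA. Requires numpy + statsmodels.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 185                                        # number of programs, as in the study
    gpa = rng.normal(3.5, 0.2, n)                  # hypothetical mean undergraduate GPA per program
    logit_p = -20 + 6.0 * gpa                      # hypothetical true relationship
    passed = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(gpa)                       # intercept + GPA
    fit = sm.Logit(passed, X).fit(disp=False)
    print(fit.summary())
    print("Odds ratio per 1-point GPA increase:", np.exp(fit.params[1]))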

  5. Substance Use Disorders and Poverty as Prospective Predictors of Adult First-Time Suicide Ideation or Attempt in the United States.

    Science.gov (United States)

    Thompson, Ronald G; Alonzo, Dana; Hu, Mei-Chen; Hasin, Deborah S

    2017-04-01

    This study examined whether substance use disorders (SUD) and poverty predicted first-time suicide ideation or attempt in United States national data. Respondents without prior histories of suicide ideation or attempt at Wave 1 of the NESARC (N = 31,568) were analyzed to determine the main and interactive effects of SUD and poverty on first-time suicide ideation or attempt by Wave 2, 3 years later. Adjusted for controls, poverty (AOR = 1.35, CI = 1.05-1.73) and drug use disorders (AOR = 2.10, CI = 1.07-4.14) independently increased risk for first-time suicide ideation or attempt at Wave 2. SUD and poverty did not interact to differentially increase risk for first-time suicide ideation or attempt, prior to or after adjustment for controls. This study reinforces the importance of SUD and poverty in the risk for first-time suicide ideation or attempt. Public health efforts should target messages to drug users and the impoverished that highlight their increased risk for first-time suicide.

  6. Facies architecture of basin-margin units in time and space: Lower to Middle Miocene Sivas Basin, Turkey

    Science.gov (United States)

    Çiner, A.; Kosun, E.

    2003-04-01

    The Miocene Sivas Basin is located within a collision zone, forming one of the largest basins in Central Turkey that developed unconformably on a foundered Paleozoic-Mesozoic basement and Eocene-Oligocene deposits. The time and space relationships of sedimentary environments and the depositional evolution of Lower to Middle Miocene rocks exposed between the Zara and Hafik towns are studied. A 4 km thick continuous section is subdivided into the Agilkaya and Egribucak Formations. Each formation shows an overall fining upward trend and contains three members. Although a complete section is present at the western part (near Hafik) of the basin, to the east the uppermost two members (near Zara) are absent. The lower members of both formations are composed of fluvial sheet-sandstone and red mudstone that migrate laterally on a flood basin within a semi-arid fan system. In the Agilkaya Formation that crops out near Zara, alluvial fans composed of red-pink volcanic pebbles are also present. The middle members are composed of bedded to massive gypsum and red-green mudstone of a coastal and/or continental sabkha environment. While the massive gypsum beds reach several tens of metres in the Hafik area, near Zara they are only a few metres thick and alternate with green mudstones. In Hafik, bedded gypsums are intercalated with lagoonal dolomitic limestone and bituminous shale in the Agilkaya Formation and with fluvial red-pink sandstone-red mudstone in the Egribucak Formation. The upper members are made up of fossiliferous mudstone and discontinuous sandy limestone beds with gutter casts, HCS, and 3-D ripples. They indicate storm-induced sedimentation in a shallow marine setting. The disorganized accumulations of ostreid and cerithiid shells, interpreted as coquina bars, are the products of storm generated reworking processes in brackish environments. Rapid vertical and horizontal facies changes and the facies associations in both formations reflect the locally subsiding nature of this molassic

  7. Learning from the implementation of residential optional time of use pricing in the United States electricity industry

    Science.gov (United States)

    Li, Xibao

    Residential time-of-use (TOU) rates have been in practice in the U.S. since the 1970s. However, for institutional, political, and regulatory reasons, only a very small proportion of residential customers are actually on these schedules. In this thesis, I explore why this is the case by empirically investigating two groups of questions: (1) On the "supply" side: Do utilities choose to offer TOU rates in residential sectors on their own initiative if state commissions do not order them to do so? Since utilities have other options, what is the relationship between the TOU rate and other alternatives? To answer these questions, I survey residential tariffs offered by more than 100 major investor-owned utilities, study the impact of various factors on utilities' rate-making behavior, and examine utility revealed preferences among four rate options: seasonal rates, inverted block rates, demand charges, and TOU rates. Estimated results suggest that the scale of residential sectors and the revenue contribution from residential sectors are the only two significant factors that influence utility decisions on offering TOU rates. Technical and economic considerations are not significant statistically. This implies that the little acceptance of TOU rates is partly attributed to utilities' inadequate attention to TOU rate design. (2) On the "demand" side: For utilities offering TOU tariffs, why do only a very small proportion of residential customers choose these tariffs? What factors influence customer choices? Unlike previous studies that used individual-level experimental data, this research employs actual aggregated information from 29 utilities offering optional TOU rates. By incorporating neo-classical demand analysis into an aggregated random coefficient logit model, I investigate the impact of both price and non-price tariff characteristics and non-tariff factors on customer choice behavior. The analysis indicates that customer pure tariff preference (which captures the

  8. Development of a Real-Time Thermal Performance Diagnostic Monitoring system Using Self-Organizing Neural Network for Kori-2 Nuclear Power Unit

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Seong, Poong Hyun

    1996-01-01

    In this work, a PC-based thermal performance monitoring system is developed for nuclear power plants. The system performs real-time thermal performance monitoring and diagnosis during plant operation. Specifically, a prototype for the Kori-2 nuclear power unit is developed and examined. Thermal performance diagnosis is very difficult because the system structure is highly complex and the components are very much inter-related. In this study, some major diagnostic performance parameters are selected in order to represent the thermal cycle effectively and to reduce the computing time. The Fuzzy ARTMAP, a self-organizing neural network, is used to recognize the characteristic pattern change of the performance parameters in abnormal situations. By examination, the algorithm is shown to be able to detect abnormality and to identify the fault component or the change of system operation condition successfully. For the convenience of operators, a graphical user interface is also constructed in this work. 5 figs., 3 tabs., 11 refs. (Author)
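
    The following Python sketch illustrates, in a heavily simplified form, the fuzzy ART mechanics (complement coding, choice function, vigilance test, fast learning) that underlie Fuzzy ARTMAP; it is unsupervised, omits the ARTMAP map field, and the example patterns are invented, so it should be read as an illustration of the pattern-recognition idea rather than the Kori-2 system.

    # Highly simplified sketch of the fuzzy ART core that underlies Fuzzy ARTMAP
    # (complement coding, choice function, vigilance test, fast learning).
    import numpy as np

    class FuzzyART:
        def __init__(self, n_features, rho=0.75, alpha=0.001, beta=1.0):
            self.rho, self.alpha, self.beta = rho, alpha, beta
            self.w = []                        # one weight vector per committed category

        def _code(self, x):
            x = np.clip(np.asarray(x, float), 0.0, 1.0)
            return np.concatenate([x, 1.0 - x])   # complement coding

        def train(self, x):
            i = self._code(x)
            # choice function for every committed category
            scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
            for j in np.argsort(scores)[::-1]:
                match = np.minimum(i, self.w[j]).sum() / i.sum()
                if match >= self.rho:          # vigilance test passed -> resonance
                    self.w[j] = self.beta * np.minimum(i, self.w[j]) + (1 - self.beta) * self.w[j]
                    return j
            self.w.append(i.copy())            # otherwise commit a new category
            return len(self.w) - 1

    # normal operating patterns cluster into one category; an abnormal pattern opens a new one
    art = FuzzyART(n_features=3, rho=0.85)
    for p in ([0.60, 0.55, 0.58], [0.62, 0.54, 0.57], [0.61, 0.56, 0.59]):
        art.train(p)
    print("category of abnormal pattern:", art.train([0.95, 0.20, 0.80]))
    print("committed categories:", len(art.w))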

  9. The role of the intensive care unit in real-time surveillance of emerging pandemics: the Italian GiViTI experience.

    Science.gov (United States)

    Bertolini, G; Nattino, G; Langer, M; Tavola, M; Crespi, D; Mondini, M; Rossi, C; Previtali, C; Marshall, J; Poole, D

    2016-01-01

    The prompt availability of reliable epidemiological information on emerging pandemics is crucial for public health policy-makers. Early in 2013, a possible new H1N1 epidemic notified by an intensive care unit (ICU) to GiViTI, the Italian ICU network, prompted the re-activation of the real-time monitoring system developed during the 2009-2010 pandemic. Based on data from 216 ICUs, we were able to detect and monitor an outbreak of severe H1N1 infection, and to compare the situation with previous years. The timely and correct assessment of the severity of an epidemic can be obtained by investigating ICU admissions, especially when historical comparisons can be made.

  10. The influence of floc size and hydraulic detention time on the performance of a dissolved air flotation (DAF) pilot unit in the light of a mathematical model.

    Science.gov (United States)

    Moruzzi, R B; Reali, M A P

    2014-12-01

    The influence of floc size and hydraulic detention time on the performance of a dissolved air flotation (DAF) pilot unit was investigated in the light of a known mathematical model. The following design and operational parameters were considered: the hydraulic detention time (tdcz) and hydraulic loading rate in the contact zone, the down-flow loading rate in the clarification zone, the particle size distribution (dF), and the recirculation rate (p). As a reference for DAF performance analysis, the proposed β·td parameter from the above mentioned mathematical model was employed. The results indicated that tdcz is an important factor in DAF performance and that dF and floc size are also determinants of DAF efficiency. Further, β·td was sensitive to both design and operational parameters, which were varied in the DAF pilot plant. The performance of the DAF unit decreases with increasing β·td values because a higher td (considering a fixed β) or a higher β (e.g., higher hydrophobicity of the flocs for a fixed td) would be necessary in the reaction zone to reach desired flotation efficiency.

  11. Real-time processing for full-range Fourier-domain optical-coherence tomography with zero-filling interpolation using multiple graphic processing units.

    Science.gov (United States)

    Watanabe, Yuuki; Maeno, Seiya; Aoshima, Kenji; Hasegawa, Haruyuki; Koseki, Hitoshi

    2010-09-01

    The real-time display of full-range, 2048 axial pixel × 1024 lateral pixel, Fourier-domain optical-coherence tomography (FD-OCT) images is demonstrated. The required speed was achieved by using dual graphic processing units (GPUs) with many stream processors to realize highly parallel processing. We used a zero-filling technique, including a forward Fourier transform, a zero padding to increase the axial data-array size to 8192, an inverse-Fourier transform back to the spectral domain, a linear interpolation from wavelength to wavenumber, a lateral Hilbert transform to obtain the complex spectrum, a Fourier transform to obtain the axial profiles, and a log scaling. The data-transfer time of the frame grabber was 15.73 ms, and the processing time, which includes the data transfer between the GPU memory and the host computer, was 14.75 ms, for a total time shorter than the 36.70 ms frame-interval time using a line-scan CCD camera operated at 27.9 kHz. That is, our OCT system achieved a processed-image display rate of 27.23 frames/s.
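
    A single-threaded numpy/scipy reference sketch of the processing chain enumerated above follows; the published system executes these steps in parallel on two GPUs, and the wavelength grid and padding layout here are illustrative assumptions.

    # CPU reference sketch (numpy/scipy) of the zero-filling processing chain listed
    # in the abstract. Array sizes follow the abstract (2048 spectral x 1024 lateral).
    import numpy as np
    from scipy.signal import hilbert

    def fdoct_frame(spectra, wavelengths):
        """spectra: (1024 lateral, 2048 spectral) real fringe data."""
        n_pad = 8192
        # 1) zero-filling interpolation: FFT, pad to 8192, inverse FFT
        f = np.fft.fft(spectra, axis=1)
        f_pad = np.zeros((spectra.shape[0], n_pad), complex)
        f_pad[:, :1024] = f[:, :1024]           # keep positive ...
        f_pad[:, -1024:] = f[:, -1024:]         # ... and negative halves
        dense = np.fft.ifft(f_pad, axis=1).real * (n_pad / spectra.shape[1])
        # 2) resample from evenly spaced wavelength to evenly spaced wavenumber
        lam_dense = np.linspace(wavelengths[0], wavelengths[-1], n_pad)
        k_dense = 2 * np.pi / lam_dense
        k_lin = np.linspace(k_dense.min(), k_dense.max(), n_pad)
        resampled = np.stack([np.interp(k_lin, k_dense[::-1], row[::-1]) for row in dense])
        # 3) lateral Hilbert transform -> complex spectrum (full-range reconstruction)
        analytic = hilbert(resampled, axis=0)
        # 4) axial FFT and log scaling
        profiles = np.fft.fft(analytic, axis=1)
        return 20 * np.log10(np.abs(profiles) + 1e-12)

    rng = np.random.default_rng(1)
    frame = fdoct_frame(rng.standard_normal((1024, 2048)), np.linspace(800e-9, 880e-9, 2048))
    print(frame.shape)   # (1024, 8192) axial profiles in dB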

  12. Admission time to hospital: a varying standard for a critical definition for admissions to an intensive care unit from the emergency department.

    Science.gov (United States)

    Nanayakkara, Shane; Weiss, Heike; Bailey, Michael; van Lint, Allison; Cameron, Peter; Pilcher, David

    2014-11-01

    Time spent in the emergency department (ED) before admission to hospital is often considered an important key performance indicator (KPI). Throughout Australia and New Zealand, there is no standard definition of 'time of admission' for patients admitted through the ED. By using data submitted to the Australian and New Zealand Intensive Care Society Adult Patient Database, the aim was to determine the differing methods used to define hospital admission time and assess how these impact on the calculation of time spent in the ED before admission to an intensive care unit (ICU). Between March and December of 2010, 61 hospitals were contacted directly. Decision methods for determining time of admission to the ED were matched to 67,787 patient records. Univariate and multivariate analyses were conducted to assess the relationship between decision method and the reported time spent in the ED. Four mechanisms of recording time of admission were identified, with time of triage being the most common (28/61 hospitals). Reported median time spent in the ED varied from 2.5 (IQR 0.83-5.35) to 5.1 h (2.82-8.68), depending on the decision method. After adjusting for illness severity, hospital type and location, decision method remained a significant factor in determining measurement of ED length of stay. Different methods are used in Australia and New Zealand to define admission time to hospital. Professional bodies, hospitals and jurisdictions should ensure standardisation of definitions for appropriate interpretation of KPIs as well as for the interpretation of studies assessing the impact of admission time to ICU from the ED. WHAT IS KNOWN ABOUT THE TOPIC?: There are standards for the maximum time spent in the ED internationally, but these standards vary greatly across Australia. The definition of such a standard is critically important not only to patient care, but also in the assessment of hospital outcomes. Key performance indicators rely on quality data to improve decision

  13. Mapping the Information Trace in Local Field Potentials by a Computational Method of Two-Dimensional Time-Shifting Synchronization Likelihood Based on Graphic Processing Unit Acceleration.

    Science.gov (United States)

    Zhao, Zi-Fang; Li, Xue-Zhu; Wan, You

    2017-12-01

    The local field potential (LFP) is a signal reflecting the electrical activity of neurons surrounding the electrode tip. Synchronization between LFP signals provides important details about how neural networks are organized. Synchronization between two distant brain regions is hard to detect using linear synchronization algorithms like correlation and coherence. Synchronization likelihood (SL) is a non-linear synchronization-detecting algorithm widely used in studies of neural signals from two distant brain areas. One drawback of non-linear algorithms is the heavy computational burden. In the present study, we proposed a graphic processing unit (GPU)-accelerated implementation of an SL algorithm with optional 2-dimensional time-shifting. We tested the algorithm with both artificial data and raw LFP data. The results showed that this method revealed detailed information from original data with the synchronization values of two temporal axes, delay time and onset time, and thus can be used to reconstruct the temporal structure of a neural network. Our results suggest that this GPU-accelerated method can be extended to other algorithms for processing time-series signals (like EEG and fMRI) using similar recording techniques.
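
    A heavily simplified, CPU-only Python sketch of pairwise synchronization likelihood follows; it omits the paper's 2-dimensional time-shifting over delay and onset time as well as the GPU acceleration, and all embedding and window parameters are illustrative.

    # Simplified pairwise synchronization likelihood (after Stam & van Dijk, 2002).
    import numpy as np

    def embed(x, dim=5, lag=2):
        n = len(x) - (dim - 1) * lag
        return np.stack([x[i: i + n] for i in range(0, dim * lag, lag)], axis=1)

    def sync_likelihood(x, y, dim=5, lag=2, w1=10, w2=200, pref=0.05):
        ex, ey = embed(x, dim, lag), embed(y, dim, lag)
        n = min(len(ex), len(ey))
        ex, ey = ex[:n], ey[:n]
        sl = []
        for i in range(n):
            j = np.arange(n)
            valid = (np.abs(j - i) > w1) & (np.abs(j - i) < w2)
            dx = np.linalg.norm(ex[valid] - ex[i], axis=1)
            dy = np.linalg.norm(ey[valid] - ey[i], axis=1)
            if dx.size == 0:
                continue
            # critical distances so that a fraction pref of neighbours is "recurrent"
            rx, ry = np.quantile(dx, pref), np.quantile(dy, pref)
            hit_x, hit_y = dx <= rx, dy <= ry
            if hit_x.sum() > 0:
                sl.append((hit_x & hit_y).sum() / hit_x.sum())
        return float(np.mean(sl))

    t = np.linspace(0, 10, 1500)
    a = np.sin(2 * np.pi * 3 * t) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
    b = np.sin(2 * np.pi * 3 * (t - 0.05)) + 0.3 * np.random.default_rng(3).standard_normal(t.size)
    print("SL(coupled) ~", round(sync_likelihood(a, b), 3))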

  14. Intervalo unitario de tiempo de medición para ruido ambiental (Unit timing for environmental noise measurements)

    Directory of Open Access Journals (Sweden)

    William A. Giraldo A.

    2011-01-01

    Full Text Available In environmental agencies, those responsible for environmental noise measurements, and in general everyone who has worked on this subject in one way or another, have at some point considered how representative the unit sampling time is and how to carry out evaluations that comply with that time without considerably increasing measurement costs. This paper proposes a methodology for determining whether a sampling interval of a given duration (in this case, fifteen (15) minutes) for measuring the sound pressure level is representative of a one (1) hour period, thereby optimising the use of stationary ("fixed") sound level meters and providing a strategy for reducing the cost of environmental noise measurements and, in general, of producing noise maps.

  15. Evaluation of a real-time PCR assay for rectal screening of OXA-48-producing Enterobacteriaceae in a general intensive care unit of an endemic hospital.

    Science.gov (United States)

    Fernández, J; Cunningham, S A; Fernández-Verdugo, A; Viña-Soria, L; Martín, L; Rodicio, M R; Escudero, D; Vazquez, F; Mandrekar, J N; Patel, R

    2017-07-01

    Carbapenemase-producing Enterobacteriaceae are increasing worldwide. Rectal screening for these bacteria can inform the management of infected and colonized patients, especially those admitted to intensive care units (ICUs). A laboratory developed, qualitative duplex real-time polymerase chain reaction assay for rapid detection of OXA-48-like and VIM producing Enterobacteriaceae, performed on rectal swabs, was designed and evaluated in an intensive care unit with endemic presence of OXA-48. During analytical assay validation, no cross-reactivity was observed and 100% sensitivity and specificity were obtained for both bla OXA-48-like and bla VIM in all spiked clinical samples. During the clinical part of the study, the global sensitivity and specificity of the real-time PCR assay for OXA-48 detection were 95.7% and 100% (P=0.1250), respectively, in comparison with culture; no VIM-producing Enterobacteriaceae were detected. Clinical features of patients in the ICU who were colonized or infected with OXA-48 producing Enterobacteriaceae, including outcome, were analyzed. Most had severe underlying conditions, and had risk factors for colonization with carbapenemase-producing Enterobacteriaceae before or during ICU admission, such as receiving previous antimicrobial therapy, prior healthcare exposure (including long-term care), chronic disease, immunosuppression and/or the presence of an intravascular catheter and/or mechanical ventilation device. The described real-time PCR assay is fast (~2-3hours, if DNA extraction is included), simple to perform and results are easy to interpret, features which make it applicable in the routine of clinical microbiology laboratories. Implementation in endemic hospitals could contribute to early detection of patients colonized by OXA-48 producing Enterobacteriaceae and prevention of their spread. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Accuracy, intra- and inter-unit reliability, and comparison between GPS and UWB-based position-tracking systems used for time-motion analyses in soccer.

    Science.gov (United States)

    Bastida Castillo, Alejandro; Gómez Carmona, Carlos D; De la Cruz Sánchez, Ernesto; Pino Ortega, José

    2018-05-01

    There is interest in the accuracy and inter-unit reliability of position-tracking systems to monitor players. Research into this technology, although relatively recent, has grown exponentially in recent years, and it is difficult to find a professional team sport that does not use at least Global Positioning System (GPS) technology. The aim of this study is to determine the accuracy of both GPS-based and Ultra Wide Band (UWB)-based systems on a soccer field and their inter- and intra-unit reliability. A secondary aim is to compare them for practical applications in sport science. Following institutional ethical approval and familiarization, 10 healthy and well-trained former soccer players (20 ± 1.6 years, 1.76 ± 0.08 m, and 69.5 ± 9.8 kg) performed three course tests: (i) linear course, (ii) circular course, and (iii) a zig-zag course, all using UWB and GPS technologies. The average speed and distance covered were compared with timing gates and the real distance as references. The UWB technology showed better accuracy (bias: 0.57-5.85%), test-retest reliability (%TEM: 1.19), and inter-unit reliability (bias: 0.18) in determining distance covered than the GPS technology (bias: 0.69-6.05%; %TEM: 1.47; bias: 0.25) overall. Also, UWB showed better results (bias: 0.09; ICC: 0.979; bias: 0.01) for mean velocity measurement than GPS (bias: 0.18; ICC: 0.951; bias: 0.03).

  17. A recursive framework for time-dependent characteristics of tested and maintained standby units with arbitrary distributions for failures and repairs

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    2015-01-01

    The time-dependent unavailability and the failure and repair intensities of periodically tested aging standby system components are solved with recursive equations under three categories of testing and repair policies. In these policies, tests or repairs or both can be minimal or perfect renewals. Arbitrary distributions are allowed to times to failure as well as to repair and renewal durations. Major preventive maintenance is done periodically or at random times, e.g. when a true demand occurs. In the third option process renewal is done if a true demand occurs or when a certain mission time has expired since the previous maintenance, whichever occurs first. A practical feature is that even if a repair can renew the unit, it does not generally renew the alternating process. The formalism updates and extends earlier results by using a special backward-renewal equation method, by allowing scheduled tests not limited to equal intervals and accepting arbitrary distributions and multiple failure types and causes, including failures caused by tests, human errors and true demands. Explicit solutions are produced to integral equations associated with an age-renewal maintenance policy. - Highlights: • Time-dependent unavailability, failure count and repair count for a standby system. • Free testing schedule and distributions for times to failure, repair and maintenance. • Multiple failure modes; tests or repairs or both can be minimal or perfect renewals. • Process renewals periodically, randomly or based on the process age or an initiator. • Backward renewal equations as explicit solutions to Volterra-type integral equations
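
    As background for the integral-equation formalism referred to above, a standard Volterra-type renewal equation and the classical alternating-renewal expression for availability are sketched below in LaTeX; these are textbook forms, not the paper's own recursive equations.

    % Textbook background, not the paper's recursions: renewal density m(t) from the
    % failure density f (CDF F), and availability A(t) of an alternating renewal
    % process where m_r(u) is the density of repair-completion (as-good-as-new) instants.
    m(t) = f(t) + \int_0^{t} m(t-u)\, f(u)\, du,
    \qquad
    A(t) = 1 - F(t) + \int_0^{t} \bigl[\,1 - F(t-u)\,\bigr]\, m_r(u)\, du .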

  18. Leaching properties of slag generated by a gasification/vitrification unit: the role of pH, particle size, contact time and cooling method used.

    Science.gov (United States)

    Moustakas, K; Mavropoulos, A; Katsou, E; Haralambous, K J; Loizidou, M

    2012-03-15

    The environmental impact from the operation of thermal waste treatment facilities mainly originates from the air emissions, as well as the generated solid residues. The objective of this paper is to examine the slag residue generated by a demonstration plasma gasification/vitrification unit and investigate the composition, the leaching properties of the slag under different conditions, as well as the role of the cooling method used. The influence of pH, particle size and contact time on the leachability of heavy metals are discussed. The main outcome is that the vitrified slag is characterized as inert and stable and can be safely disposed at landfills or used in the construction sector. Finally, the water-cooled slag showed better resistance in relation to heavy metal leachability compared to the air-cooled slag. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Graphics-processing-unit-accelerated finite-difference time-domain simulation of the interaction between ultrashort laser pulses and metal nanoparticles

    Science.gov (United States)

    Nikolskiy, V. P.; Stegailov, V. V.

    2018-01-01

    Metal nanoparticles (NPs) serve as important tools for many modern technologies. However, the proper microscopic models of the interaction between ultrashort laser pulses and metal NPs are currently not very well developed in many cases. One part of the problem is the description of the warm dense matter that is formed in NPs after intense irradiation. Another part of the problem is the description of the electromagnetic waves around NPs. Description of wave propagation requires the solution of Maxwell's equations and the finite-difference time-domain (FDTD) method is the classic approach for solving them. There are many commercial and free implementations of FDTD, including the open source software that supports graphics processing unit (GPU) acceleration. In this report we present the results on the FDTD calculations for different cases of the interaction between ultrashort laser pulses and metal nanoparticles. Following our previous results, we analyze the efficiency of the GPU acceleration of the FDTD algorithm.
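
    A minimal 1-D FDTD (Yee) update loop in Python is sketched below to illustrate the stencil structure that GPU implementations parallelise; it models vacuum only, with a soft Gaussian source, and omits the dispersive (Drude/Lorentz) material models needed for real metal nanoparticles.

    # Minimal 1-D FDTD (Yee) sketch in vacuum with a short Gaussian pulse; grid size,
    # cell size and pulse parameters are illustrative.
    import numpy as np

    c0, nz, nt = 299_792_458.0, 400, 600
    dz = 10e-9                       # 10 nm cells
    dt = 0.5 * dz / c0               # Courant factor 0.5
    ez = np.zeros(nz)
    hy = np.zeros(nz - 1)
    src, t0, spread = 50, 60, 15     # soft source position and pulse shape

    for n in range(nt):
        hy += dt / (4e-7 * np.pi * dz) * (ez[1:] - ez[:-1])          # update H from curl E
        ez[1:-1] += dt / (8.854e-12 * dz) * (hy[1:] - hy[:-1])       # update E from curl H
        ez[src] += np.exp(-0.5 * ((n - t0) / spread) ** 2)           # soft Gaussian source

    print("peak |Ez| after", nt, "steps:", float(np.abs(ez).max()))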

  20. Typhoid fever acquired in the United States, 1999–2010: epidemiology, microbiology, and use of a space–time scan statistic for outbreak detection

    Science.gov (United States)

    IMANISHI, M.; NEWTON, A. E.; VIEIRA, A. R.; GONZALEZ-AVILES, G.; KENDALL SCOTT, M. E.; MANIKONDA, K.; MAXWELL, T. N.; HALPIN, J. L.; FREEMAN, M. M.; MEDALLA, F.; AYERS, T. L.; DERADO, G.; MAHON, B. E.; MINTZ, E. D.

    2016-01-01

    SUMMARY Although rare, typhoid fever cases acquired in the United States continue to be reported. Detection and investigation of outbreaks in these domestically acquired cases offer opportunities to identify chronic carriers. We searched surveillance and laboratory databases for domestically acquired typhoid fever cases, used a space–time scan statistic to identify clusters, and classified clusters as outbreaks or non-outbreaks. From 1999 to 2010, domestically acquired cases accounted for 18% of 3373 reported typhoid fever cases; their isolates were less often multidrug-resistant (2% vs. 15%) compared to isolates from travel-associated cases. We identified 28 outbreaks and two possible outbreaks within 45 space–time clusters of ⩾2 domestically acquired cases, including three outbreaks involving ⩾2 molecular subtypes. The approach detected seven of the ten outbreaks published in the literature or reported to CDC. Although this approach did not definitively identify any previously unrecognized outbreaks, it showed the potential to detect outbreaks of typhoid fever that may escape detection by routine analysis of surveillance data. Sixteen outbreaks had been linked to a carrier. Every case of typhoid fever acquired in a non-endemic country warrants thorough investigation. Space–time scan statistics, together with shoe-leather epidemiology and molecular subtyping, may improve outbreak detection. PMID:25427666

  1. Typhoid fever acquired in the United States, 1999-2010: epidemiology, microbiology, and use of a space-time scan statistic for outbreak detection.

    Science.gov (United States)

    Imanishi, M; Newton, A E; Vieira, A R; Gonzalez-Aviles, G; Kendall Scott, M E; Manikonda, K; Maxwell, T N; Halpin, J L; Freeman, M M; Medalla, F; Ayers, T L; Derado, G; Mahon, B E; Mintz, E D

    2015-08-01

    Although rare, typhoid fever cases acquired in the United States continue to be reported. Detection and investigation of outbreaks in these domestically acquired cases offer opportunities to identify chronic carriers. We searched surveillance and laboratory databases for domestically acquired typhoid fever cases, used a space-time scan statistic to identify clusters, and classified clusters as outbreaks or non-outbreaks. From 1999 to 2010, domestically acquired cases accounted for 18% of 3373 reported typhoid fever cases; their isolates were less often multidrug-resistant (2% vs. 15%) compared to isolates from travel-associated cases. We identified 28 outbreaks and two possible outbreaks within 45 space-time clusters of ⩾2 domestically acquired cases, including three outbreaks involving ⩾2 molecular subtypes. The approach detected seven of the ten outbreaks published in the literature or reported to CDC. Although this approach did not definitively identify any previously unrecognized outbreaks, it showed the potential to detect outbreaks of typhoid fever that may escape detection by routine analysis of surveillance data. Sixteen outbreaks had been linked to a carrier. Every case of typhoid fever acquired in a non-endemic country warrants thorough investigation. Space-time scan statistics, together with shoe-leather epidemiology and molecular subtyping, may improve outbreak detection.

  2. Comparison of 2015 Medicare relative value units for gender-specific procedures: Gynecologic and gynecologic-oncologic versus urologic CPT coding. Has time healed gender-worth?

    Science.gov (United States)

    Benoit, M F; Ma, J F; Upperman, B A

    2017-02-01

    In 1992, Congress implemented a relative value unit (RVU) payment system to set reimbursement for all procedures covered by Medicare. In 1997, data supported that a significant gender bias existed in reimbursement for gynecologic compared to urologic procedures. The present study was performed to compare work and total RVUs for gender-specific procedures effective January 2015 and to evaluate whether time has resolved the gender-based disparity in RVU worth. Using the 2015 CPT codes, we compared work and total RVUs for 50 pairs of gender-specific procedures. We also evaluated 2015 procedure-related provider compensation. The groups were matched so that the procedures were anatomically similar. We also compared 2015 to 1997 RVU and fee schedules. Evaluation of work RVUs for the paired procedures revealed that in 36 cases (72%), the male procedure had a higher wRVU and tRVU than its female counterpart. For total fee/reimbursement, 42 (84%) male-based procedures were compensated at a higher rate than the paired female procedures. On average, male-specific surgeries were reimbursed at an amount 27.67% higher than female-specific surgeries. Work RVUs for female procedures have increased only minimally from 1997 to 2015. Time and effort have trended towards resolution of some gender-related procedure worth discrepancies, but there are still significant RVU and compensation differences that should be further reviewed and modified, as surgical time and effort highly correlate. Copyright © 2016. Published by Elsevier Inc.

  3. Incidence of pulmonary aspergillosis and correlation of conventional diagnostic methods with nested PCR and real-time PCR assay using BAL fluid in intensive care unit patients.

    Science.gov (United States)

    Zarrinfar, Hossein; Makimura, Koichi; Satoh, Kazuo; Khodadadi, Hossein; Mirhendi, Hossein

    2013-05-01

    Although the incidence of invasive aspergillosis in the intensive care unit (ICU) is low, it has emerged as a major problem in critically ill patients. In this study, the incidence of pulmonary aspergillosis (PA) in ICU patients was evaluated, and direct microscopy and culture were compared with nested polymerase chain reaction (PCR) and real-time PCR for detection of Aspergillus fumigatus and A. flavus in bronchoalveolar lavage (BAL) samples of the patients. Thirty BAL samples obtained from ICU patients during a 16-month period were subjected to direct examinations on 20% potassium hydroxide (KOH) and culture on two culture media. Nested PCR targeting internal transcribed spacer ribosomal DNA and a TaqMan real-time PCR assay targeting the β-tubulin gene were used for the detection of A. fumigatus and A. flavus. Of 30 patients, 60% were men and 40% were women. The diagnosis of invasive PA was probable in 1 (3%), possible in 11 (37%), and not IPA in 18 (60%). Nine samples were positive in nested PCR, including seven samples by A. flavus and two by A. fumigatus specific primers. The lowest amount of DNA that TaqMan real-time PCR could detect was ≥40 copy numbers. Only one of the samples had a positive result of A. flavus real-time PCR with a Ct value of 37.5. Although a significant number of specimens were positive in nested PCR, the results of this study showed that establishing a correlation between the conventional methods and nested PCR and real-time PCR requires more data, confirmed by a prospective study with a larger sample group. © 2013 Wiley Periodicals, Inc.

  4. Space Object Collision Probability via Monte Carlo on the Graphics Processing Unit

    Science.gov (United States)

    Vittaldev, Vivek; Russell, Ryan P.

    2017-09-01

    Fast and accurate collision probability computations are essential for protecting space assets. Monte Carlo (MC) simulation is the most accurate but computationally intensive method. A Graphics Processing Unit (GPU) is used to parallelize the computation and reduce the overall runtime. Using MC techniques to compute the collision probability is common in literature as the benchmark. An optimized implementation on the GPU, however, is a challenging problem and is the main focus of the current work. The MC simulation takes samples from the uncertainty distributions of the Resident Space Objects (RSOs) at any time during a time window of interest and outputs the separations at closest approach. Therefore, any uncertainty propagation method may be used and the collision probability is automatically computed as a function of RSO collision radii. Integration using a fixed time step and a quartic interpolation after every Runge Kutta step ensures that no close approaches are missed. Two orders of magnitude speedups over a serial CPU implementation are shown, and speedups improve moderately with higher fidelity dynamics. The tool makes the MC approach tractable on a single workstation, and can be used as a final product, or for verifying surrogate and analytical collision probability methods.
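
    A simplified Monte Carlo sketch in Python follows, estimating the collision probability at a single closest-approach epoch by sampling both position uncertainties and counting separations below a combined hard-body radius; the means, covariances and radius are invented, and the paper's propagation over a full time window and GPU parallelisation are not reproduced.

    # Simplified Monte Carlo collision-probability estimate at one epoch; all numbers
    # are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000
    mean_a = np.array([0.0, 0.0, 0.0])            # km
    mean_b = np.array([0.15, 0.05, 0.02])         # km, nominal miss ~160 m
    cov_a = np.diag([0.05, 0.10, 0.03]) ** 2      # km^2
    cov_b = np.diag([0.08, 0.06, 0.04]) ** 2
    combined_radius_km = 0.020                    # 20 m combined hard-body radius

    pos_a = rng.multivariate_normal(mean_a, cov_a, n)
    pos_b = rng.multivariate_normal(mean_b, cov_b, n)
    miss = np.linalg.norm(pos_a - pos_b, axis=1)
    p_hit = np.count_nonzero(miss < combined_radius_km) / n
    print(f"estimated collision probability: {p_hit:.2e} "
          f"(+/- {np.sqrt(p_hit * (1 - p_hit) / n):.1e})")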

  5. Optimized 4-bit Quantum Reversible Arithmetic Logic Unit

    Science.gov (United States)

    Ayyoub, Slimani; Achour, Benslama

    2017-08-01

    Reversible logic has received great attention in recent years due to its ability to reduce power dissipation. The main purposes of designing reversible logic are to decrease the quantum cost, the depth of the circuits and the number of garbage outputs. The arithmetic logic unit (ALU) is an important part of the central processing unit (CPU) as the execution unit. This paper presents a complete design of a new reversible arithmetic logic unit (ALU) that can be part of a programmable reversible computing device such as a quantum computer. The proposed ALU is based on a reversible low-power control unit and a full adder with small performance parameters, named the double Peres gate. The presented ALU can produce the largest number (28) of arithmetic and logic functions and has the lowest quantum cost and delay compared with existing designs.
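
    For reference, the classical truth mapping of a single Peres gate, (A, B, C) → (A, A⊕B, AB⊕C), is sketched in Python below; the paper's double-Peres full adder and its quantum-cost accounting are not reconstructed.

    # Classical, reversible Peres gate truth mapping and a reversibility check.
    from itertools import product

    def peres(a: int, b: int, c: int):
        return a, a ^ b, (a & b) ^ c

    # Reversibility check: every input triple maps to a distinct output triple.
    outputs = {peres(*bits) for bits in product((0, 1), repeat=3)}
    assert len(outputs) == 8

    for bits in product((0, 1), repeat=3):
        print(bits, "->", peres(*bits))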

  6. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, making it especially suitable for measuring particle fields at the micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field at the micrometer scale, and the average velocity error is lower than 10%. With the graphics processing unit (GPU), the computing time for 100 reconstruction planes (512×512 grids) is below 120 ms, compared with 4.9 s using the traditional CPU-based reconstruction method. The reconstruction speed has been raised by 40 times. In other words, it can handle holograms at 8.3 frames per second, and near real-time measurement and display of the particle velocity field are realized. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware. Keywords: digital holographic microscope,
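
    A generic per-plane reconstruction step is sketched below in Python using the angular-spectrum method; this is a CPU reference for the kind of computation parallelised with CUDA over many planes, the optical parameters are illustrative, and the paper's exact reconstruction algorithm may differ.

    # Generic angular-spectrum propagation of a hologram to one reconstruction plane.
    import numpy as np

    def angular_spectrum(field, wavelength, dx, z):
        """Propagate a complex field by distance z (metres) via the angular spectrum method."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, dx)
        fy = np.fft.fftfreq(ny, dx)
        fxx, fyy = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))      # evanescent components dropped
        transfer = np.exp(1j * kz * z) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(field) * transfer)

    rng = np.random.default_rng(0)
    hologram = rng.random((512, 512))                        # placeholder intensity hologram
    recon = angular_spectrum(hologram.astype(complex), wavelength=532e-9, dx=3.45e-6, z=150e-6)
    print("reconstructed amplitude, mean:", float(np.abs(recon).mean()))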

  7. Wood pellets, what else? Greenhouse gas parity times of European electricity from wood pellets produced in the south-eastern United States using different softwood feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Hanssen, Steef V. [Radboud Univ., Nijmegen (Netherlands). Dept. of Environmental Science, Faculty of Science; Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences; Duden, Anna S. [Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences; Junginger, Martin [Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences; Dale, Virginia H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Environmental Sciences Division, Center for BioEnergy Sustainability; van der Hilst, Floor [Utrecht Univ., Utrecht (The Netherlands). Copernicus Inst. of Sustainable Development, Faculty of Geosciences

    2016-12-29

    Several EU countries import wood pellets from the south-eastern United States. The imported wood pellets are (co-)fired in power plants with the aim of reducing overall greenhouse gas (GHG) emissions from electricity and meeting EU renewable energy targets. To assess whether GHG emissions are reduced and on what timescale, we construct the GHG balance of wood-pellet electricity. This GHG balance consists of supply chain and combustion GHG emissions, carbon sequestration during biomass growth, and avoided GHG emissions through replacing fossil electricity. We investigate wood pellets from four softwood feedstock types: small roundwood, commercial thinnings, harvest residues, and mill residues. Per feedstock, the GHG balance of wood-pellet electricity is compared against those of alternative scenarios. Alternative scenarios are combinations of alternative fates of the feedstock material, such as in-forest decomposition, or the production of paper or wood panels like oriented strand board (OSB). Alternative scenario composition depends on feedstock type and local demand for this feedstock. Results indicate that the GHG balance of wood-pellet electricity equals that of alternative scenarios within 0 to 21 years (the GHG parity time), after which wood-pellet electricity has sustained climate benefits. Parity times increase by a maximum of twelve years when varying key variables (emissions associated with paper and panels, soil carbon increase via feedstock decomposition, wood-pellet electricity supply chain emissions) within maximum plausible ranges. Using commercial thinnings, harvest residues or mill residues as feedstock leads to the shortest GHG parity times (0-6 years) and fastest GHG benefits from wood-pellet electricity. Here, we find shorter GHG parity times than previous studies, for we use a novel approach that differentiates feedstocks and considers alternative scenarios based on (combinations of) alternative feedstock fates, rather than on alternative land

  8. Towards a Unified Sentiment Lexicon Based on Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Liliana Ibeth Barbosa-Santillán

    2014-01-01

    Full Text Available This paper presents an approach to create what we have called a Unified Sentiment Lexicon (USL). This approach aims at aligning, unifying, and expanding the set of sentiment lexicons which are available on the web in order to increase their robustness of coverage. One problem related to the task of the automatic unification of different scores of sentiment lexicons is that there are multiple lexical entries for which the classification of positive, negative, or neutral {P,N,Z} depends on the unit of measurement used in the annotation methodology of the source sentiment lexicon. Our USL approach computes the unified strength of polarity of each lexical entry based on the Pearson correlation coefficient, which measures how correlated lexical entries are with a value between 1 and −1, where 1 indicates that the lexical entries are perfectly correlated, 0 indicates no correlation, and −1 means they are perfectly inversely correlated; the UnifiedMetrics procedure is implemented for both CPU and GPU. Another problem is the high processing time required for computing all the lexical entries in the unification task. Thus, the USL approach computes a subset of lexical entries in each of the 1344 GPU cores and uses parallel processing in order to unify 155,802 lexical entries. The results of the analysis conducted using the USL approach show that the USL has 95,430 lexical entries, out of which 35,201 are considered to be positive, 22,029 negative, and 38,200 neutral. Finally, the runtime was 10 minutes for 95,430 lexical entries; this represents a threefold reduction in computing time for the UnifiedMetrics procedure.
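
    A minimal Python sketch of the correlation step (Pearson coefficient over the entries two source lexicons share) follows; the toy lexicons and scores are invented, and the partitioning across 1344 GPU cores is not shown.

    # Pearson correlation between the polarity scores two source lexicons assign to
    # their shared entries; the toy lexicons below are made up.
    import numpy as np

    lexicon_a = {"good": 0.8, "bad": -0.7, "awful": -0.9, "fine": 0.3, "great": 0.9}
    lexicon_b = {"good": 0.6, "bad": -0.8, "awful": -0.7, "fine": 0.1, "great": 0.8}

    shared = sorted(set(lexicon_a) & set(lexicon_b))
    a = np.array([lexicon_a[w] for w in shared])
    b = np.array([lexicon_b[w] for w in shared])
    r = np.corrcoef(a, b)[0, 1]        # Pearson correlation in [-1, 1]
    print(f"{len(shared)} shared entries, r = {r:.3f}")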

  9. A simplified prevention bundle with dual hand hygiene audit reduces early-onset ventilator-associated pneumonia in cardiovascular surgery units: An interrupted time-series analysis.

    Directory of Open Access Journals (Sweden)

    Kang-Cheng Su

    Full Text Available To investigate the effect of a simplified prevention bundle with an alcohol-based, dual hand hygiene (HH) audit on the incidence of early-onset ventilator-associated pneumonia (VAP). This 3-year, quasi-experimental study with interrupted time-series analysis was conducted in two cardiovascular surgery intensive care units in a medical center. An unaware external HH audit (eHH) performed by non-unit-based observers was a routine task before and after bundle implementation. Based on the realistic ICU settings, we implemented a 3-component bundle, which included: a compulsory education program, a knowing internal HH audit (iHH) performed by unit-based observers, and a standardized oral care (OC) protocol with 0.1% chlorhexidine gluconate. The study periods comprised 4 phases: 12-month pre-implementation phase 1 (eHH+/education-/iHH-/OC-), 3-month run-in phase 2 (eHH+/education+/iHH+/OC+), 15-month implementation phase 3 (eHH+/education+/iHH+/OC+), and 6-month post-implementation phase 4 (eHH+/education-/iHH+/OC-). A total of 2553 ventilator-days were observed. VAP incidences (events/1000 ventilator-days) in phases 1-4 were 39.1, 40.5, 15.9, and 20.4, respectively. VAP was significantly reduced by 59% in phase 3 (vs. phase 1, incidence rate ratio [IRR] 0.41, P = 0.002), but rebounded in phase 4. Moreover, VAP incidence was inversely correlated to compliance with OC (r2 = 0.531, P = 0.001) and eHH (r2 = 0.878, P < 0.001), but not with iHH, even though iHH compliance was higher than eHH compliance during phases 2 to 4. Compared to eHH, iHH provided more efficient and faster improvements in standard HH practice. The minimal compliances required for significant VAP reduction were 85% and 75% for OC and eHH, respectively (both P < 0.05, IRR 0.28 and 0.42). This simplified prevention bundle effectively reduces early-onset VAP incidence. Unaware HH compliance correlates with VAP incidence. A knowing HH audit provides better improvement in HH practice. Accordingly, we suggest

  10. Development of automatic nuclear plate analyzing system equipped with TV measuring unit and its application to analysis of elementary particle reaction, 1

    International Nuclear Information System (INIS)

    Ushida, Noriyuki

    1987-01-01

    Various improvements are made on an analysis system which was previously reported. Twenty-five emulsion plates, each with a decreased size of 3 cm x 3 cm, are mounted on a single acrylic resin sheet to reduce the required measurement time. An interface called New DOMS (digitized on-line microscope) is designed to reduce the analysis time and to improve the reliability of the analysis. The newly developed analysis system consists of five blocks: a stage block (with measuring range of 170 mm along the x and y axes and 2 mm along the z axis and an accuracy of 1 μm for each axis), DG-M10 host computer (with external storage of a 15-Mbyte hard disk and a 368-kbyte minifloppy disk), DOMS interface (for control of the stage, operation of the graphic image and control of the CCD TV measuring unit), CCD TV measuring unit (equipped with a CCD TV camera to display the observed emulsion on a TV monitor for measuring the grain position), and measurement terminal (consisting of a picture monitor, video terminal module and keyboards). This report also shows a DOMS system function block diagram (crate controller and I/O, phase converter, motor controller, sub-CPU for display, graphic memory, ROM writer, power supply), describes the CCD TV measuring unit hardware (CCD TV camera, sync. separator, window generator, darkest point detector, mixer, focus counter), and outlines the connections among the components. (Nogami, K.)

  11. Graphics processing unit based computation for NDE applications

    Science.gov (United States)

    Nahas, C. A.; Rajagopal, Prabhu; Balasubramaniam, Krishnan; Krishnamurthy, C. V.

    2012-05-01

    Advances in parallel processing in recent years are helping to reduce the cost of numerical simulation. Breakthroughs in Graphical Processing Unit (GPU) based computation now offer the prospect of further drastic improvements. The introduction of 'compute unified device architecture' (CUDA) by NVIDIA (the global technology company based in Santa Clara, California, USA) has made programming GPUs for general purpose computing accessible to the average programmer. Here we use CUDA to develop parallel finite difference schemes as applicable to two problems of interest to the NDE community, namely heat diffusion and elastic wave propagation. The implementations are two-dimensional. Performance improvement of the GPU implementation against the serial CPU implementation is then discussed.
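The discretisation details are not given in the abstract, but a minimal serial (CPU) reference for the kind of explicit finite-difference heat-diffusion update that is typically ported to CUDA could look like the following sketch; the grid size, diffusivity and time step are illustrative assumptions, not values from the paper.

```python
import numpy as np

def heat_step(u, alpha, dt, dx):
    """One explicit finite-difference step of the 2-D heat equation
    u_t = alpha * (u_xx + u_yy), with zero Dirichlet boundaries."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    u_new = u + alpha * dt * lap
    u_new[0, :] = u_new[-1, :] = u_new[:, 0] = u_new[:, -1] = 0.0
    return u_new

# Illustrative parameters only
nx, alpha, dx = 256, 1.0, 1.0 / 255
dt = 0.2 * dx ** 2 / alpha          # within the explicit stability limit dt <= dx^2 / (4 * alpha)
u = np.zeros((nx, nx))
u[nx // 2, nx // 2] = 1.0           # point heat source
for _ in range(100):
    u = heat_step(u, alpha, dt, dx)
```

On a GPU, each grid point's update maps naturally onto one CUDA thread, which is what makes this stencil pattern a good candidate for the parallelisation described above.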

  12. Cystic fibrosis physicians' perspectives on the timing of referral for lung transplant evaluation: a survey of physicians in the United States.

    Science.gov (United States)

    Ramos, Kathleen J; Somayaji, Ranjani; Lease, Erika D; Goss, Christopher H; Aitken, Moira L

    2017-01-19

    Prior studies reveal that a significant proportion of patients with cystic fibrosis (CF) and advanced lung disease are not referred for lung transplant (LTx) evaluation. We sought to assess expert CF physician perspectives on the timing of LTx referral and investigate their LTx knowledge. We developed an online anonymous survey that was distributed by the Cystic Fibrosis Foundation (CFF) to the medical directors of all CFF-accredited care centers in the United States in 2015. The survey addressed only adult patients (≥18 years old) and was sent to 119 adult CF physicians, 86 CFF-affiliated CF physicians (who see adults and children, but have smaller program sizes than adult or pediatric centers), and 127 pediatric CF physicians (who see some adults, but mostly children). The focus of the questions was on CFF-care center characteristics, physician experience and indications/contraindications to referral for LTx evaluation. There were 114/332 (34%) total responses to the survey. The response rates were: 57/119 (48%) adult physicians, 12/86 (14%) affiliate physicians and 43/127 (34%) pediatric physicians; 2 physicians did not include their CFF center type. Despite the poor ability of FEV1 < 30% to predict death within 2 years, 94% of responding CF physicians said they would refer an adult patient for LTx evaluation if the patient's lung function fell to FEV1 < 30% predicted. Only 54% of respondents report that pulmonary hypertension would trigger referral. Pulmonary hypertension is an internationally recommended indication to list a patient for LTx (not just for referral for evaluation). Very few physicians (N = 17, 15%) employed components of the lung allocation score (LAS) to determine the timing of referral for LTx evaluation. Interestingly, patient preference not to undergo LTx was "often" or "always" the primary patient-related reason to defer referral for LTx evaluation for 41% (47/114) of respondents. Some potential barriers to timely LTx

  13. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  14. Evaluating the predictability of distance race performance in NCAA cross country and track and field from high school race times in the United States.

    Science.gov (United States)

    Brusa, Jamie L

    2017-12-30

    Successful recruiting for collegiate track & field athletes has become a more competitive and essential component of coaching. This study aims to determine the relationship between race performances of distance runners at the United States high school and National Collegiate Athletic Association (NCAA) levels. Conditional inference classification tree models were built and analysed to predict the probability that runners would qualify for the NCAA Division I National Cross Country Meet and/or the East or West NCAA Division I Outdoor Track & Field Preliminary Round based on their high school race times in the 800 m, 1600 m, and 3200 m. Prediction accuracies of the classification trees ranged from 60.0 to 76.6 percent. The models produced the most reliable estimates for predicting qualifiers in cross country, the 1500 m, and the 800 m for females and cross country, the 5000 m, and the 800 m for males. NCAA track & field coaches can use the results from this study as a guideline for recruiting decisions. Additionally, future studies can apply the methodological foundations of this research to predicting race performances set at different metrics, such as national meets in other countries or Olympic qualifications, from previous race data.
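The study used conditional inference trees (commonly R's ctree); as a rough analogue only, the sketch below fits a CART classifier with scikit-learn on synthetic stand-in data, since the real recruiting dataset and the exact model settings are not given in the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for high-school personal bests in seconds (800 m, 1600 m, 3200 m)
X = np.column_stack([rng.normal(118, 5, n), rng.normal(260, 12, n), rng.normal(560, 25, n)])
# Toy qualification label derived from a made-up rule, purely for illustration
y = (X @ np.array([0.4, 0.3, 0.1]) < 181).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("qualification probability for a 1:55 / 4:15 / 9:05 runner:",
      clf.predict_proba([[115, 255, 545]])[0, 1])
```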

  15. Monitoring 2009 Forest Disturbance Across the Conterminous United States, Based on Near-Real Time and Historical MODIS 250 Meter NDVI Products

    Science.gov (United States)

    Spruce, J.; Hargrove, W. W.; Gasser, G.; Smoot, J. C.; Kuper, P.

    2009-01-01

    This case study shows the promise of computing current season forest disturbance detection products at regional to CONUS scales. Use of the eMODIS expedited product enabled a NRT CONUS forest disturbance detection product, a requirement for an eventual, operational forest threat EWS. The 2009 classification product from this study can be used to quantify the areal extent of forest disturbance across CONUS, although a quantitative accuracy assessment still needs to be completed. However, the results would not include disturbances that occurred after July 27, such as the Station Fire. While not shown here, the project also produced maximum NDVI products for the June 10-July 27 period of each year of the 2000-2009 time frame. These products could be applied to compute forest change products on an annual basis. GIS could then be used to assess disturbance persistence. Such follow-on work could lead to attribution of year in which a disturbance occurred. These products (e.g., Figures 6 and 7) may also be useful for assessing forest change associated with climate change, such as carbon losses from bark beetle-induced forest mortality in the Western United States. Other MODIS phenological products are being assessed for aiding forest monitoring needs of the EWS, including cumulative NDVI products (Figure 10).
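As a hedged illustration of the kind of processing described (a maximum-NDVI composite per period and a simple relative-change threshold to flag disturbance), the sketch below uses made-up array shapes and a 20% drop threshold; neither is taken from the study.

```python
import numpy as np

def max_ndvi_composite(ndvi_stack):
    """Per-pixel maximum NDVI over a stack of scenes shaped (time, rows, cols)."""
    return np.nanmax(ndvi_stack, axis=0)

def flag_disturbance(baseline_max, current_max, drop=0.20):
    """Flag pixels whose current-season maximum NDVI fell by more than
    `drop` (relative) compared with the historical baseline composite."""
    rel_change = (current_max - baseline_max) / baseline_max
    return rel_change < -drop

# Illustrative data: 8 scenes per period on a 100 x 100 pixel grid
baseline = max_ndvi_composite(np.random.uniform(0.2, 0.9, (8, 100, 100)))
current = max_ndvi_composite(np.random.uniform(0.1, 0.9, (8, 100, 100)))
disturbed = flag_disturbance(baseline, current)
print("flagged pixels:", int(disturbed.sum()))
```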

  16. Efforts to Reduce International Space Station Crew Maintenance Time in the Management of the Extravehicular Mobility Unit Transport Loop Water Quality

    Science.gov (United States)

    Etter, David; Rector, Tony; Boyle, Robert; Zande, Chris Vande

    2012-01-01

    The EMU (Extravehicular Mobility Unit) contains a semi-closed-loop re-circulating water circuit (Transport Loop) to absorb heat into a LCVG (Liquid Coolant and Ventilation Garment) worn by the astronaut. A second, single-pass water circuit (Feed-water Loop) provides water to a cooling device (Sublimator) containing porous plates, and that water sublimates through the porous plates to space vacuum. The cooling effect from the sublimation of this water translates to a cooling of the LCVG water that circulates through the Sublimator. The quality of the EMU Transport Loop water is maintained through the use of a water processing kit (ALCLR - Airlock Cooling Loop Remediation) that is used to periodically clean and disinfect the water circuit. Opportunities to reduce crew time associated with ALCLR operations include a detailed review of the historical water quality data for evidence to support an extension to the implementation cycle. Furthermore, an EMU returned after 2-years of use on the ISS (International Space Station) is being used as a test bed to evaluate the results of extended and repeated ALCLR implementation cycles. Finally, design, use and on-orbit location enhancements to the ALCLR kit components are being considered to allow the implementation cycle to occur in parallel with other EMU maintenance and check-out activities, and to extend the life of the ALCLR kit components. These efforts are undertaken to reduce the crew-time and logistics burdens for the EMU, while ensuring the long-term health of the EMU water circuits for a post- Shuttle 6-year service life.

  17. Hygienic safety of reusable tap water filters (Germlyser®) with an operating time of 4 or 8 weeks in a haematological oncology transplantation unit

    Directory of Open Access Journals (Sweden)

    Rochow Markus

    2007-05-01

    Full Text Available Abstract Background Microbially safe tap water is crucial for the safety of immunosuppressed patients. Methods To evaluate the suitability of new, reusable point-of-use filters (Germlyser®, Aquafree GmbH, Hamburg, Germany), three variations of a reusable filter with the same filter principle but with different outlets (with and without silver) and inner surface coating of the filter encasements (with and without nano-crystalline silver) were tested. The filter efficacy was monitored over 1, 4 and 8 weeks operating time in a haematological oncology transplantation unit equipped with 18 water outlets (12 taps, 6 showers). Results The filtered water fulfilled the requirements of absence of pathogens over time. From 348 samples, 8 samples (2.3%) exceeded 100 cfu/ml (no sample ≥ 500 cfu/ml). As no reprocessed filter exhibited 100% filter efficacy in the final quality control after each reprocessing, these contaminations could be explained by retrograde contamination during use. Conclusion As a consequence of the study, the manufacturer recommends changing filters after 4 weeks in high risk areas and after 8 weeks in moderate infectious risk areas, together with routine weekly alcohol-based surface disinfection and additionally in case of visible contamination. The filter efficacy of the 3 filter types did not differ significantly regarding total bacterial counts. Manual reprocessing proved to be insufficient. Using a validated reprocessing in a washer/disinfector with alkaline, acid treatment and thermic disinfection, the filters were effectively reprocessable and now provide tap water meeting the German drinking water regulations as well as the WHO guidelines, including absence of pathogens.

  18. Assessment of the time-dependent need for stay in a high dependency unit (HDU) after major surgery by using data from an anesthesia information management system.

    Science.gov (United States)

    Betten, Jan; Roness, Aleksander Kirkerud; Endreseth, Birger Henning; Trønnes, Håkon; Tyvold, Stig Sverre; Klepstad, Pål; Nordseth, Trond

    2016-04-01

    Admittance to a high dependency unit (HDU) is expensive. Patients who receive surgical treatment with 'low anterior resection of the rectum' (LAR) or 'abdominoperineal resection of the rectum' (APR) at our hospital are routinely treated in an HDU the first 16-24 h of the postoperative (PO) period. The aim of this study was to describe the extent of HDU-specific interventions given. We included patients treated with LAR or APR at the St. Olav University Hospital (Trondheim, Norway) over a 1-year period. Physiologic data and HDU-interventions recorded during the PO-period were obtained from the anesthesia information management system (AIMS). HDU-specific interventions were defined as the need for respiratory support, fluid replacement therapy >500 ml/h, vasoactive medications, or a need for high dose opioids (morphine >7.5 mg/h i.v.). Sixty-two patients were included. Most patients needed HDU-specific interventions during the first 6 h of the PO period. After this, one-third of the patients needed one or more of the HDU-specific interventions for shorter periods of time. Another one-third of the patients had a need for HDU-specific therapies for more than ten consecutive hours, primarily an infusion of nor-epinephrine. Most patients treated with LAR or APR were in need of an HDU-specific intervention during the first 6 h of the PO-period, with a marked decline after this time period. The applied methodology, using an AIMS, demonstrates that there is great variability in individual patients' postoperative needs after major surgery, and that these needs are dynamic in their nature.

  19. A new approach for global synchronization in hierarchical scheduled real-time systems

    NARCIS (Netherlands)

    Behnam, M.; Nolte, T.; Bril, R.J.

    2009-01-01

    We present our ongoing work to improve an existing synchronization protocol SIRAP for hierarchically scheduled real-time systems. A less pessimistic schedulability analysis is presented which can make the SIRAP protocol more efficient in terms of calculated CPU resource needs. In addition and for

  20. Real-Time Generic Face Tracking in the Wild with CUDA

    NARCIS (Netherlands)

    Cheng, Shiyang; Asthana, Akshay; Asthana, Ashish; Zafeiriou, Stefanos; Shen, Jie; Pantic, Maja

    We present a robust real-time face tracking system based on the Constrained Local Models framework by adopting the novel regression-based Discriminative Response Map Fitting (DRMF) method. By exploiting the algorithm's potential parallelism, we present a hybrid CPU-GPU implementation capable of

  1. Use of palivizumab and infection control measures to control an outbreak of respiratory syncytial virus in a neonatal intensive care unit confirmed by real-time polymerase chain reaction.

    LENUS (Irish Health Repository)

    O'Connell, K

    2011-04-01

    Respiratory syncytial virus (RSV) is a potentially life-threatening infection in premature infants. We report an outbreak involving four infants in the neonatal intensive care unit (NICU) of our hospital that occurred in February 2010. RSV A infection was confirmed by real-time polymerase chain reaction. Palivizumab was administered to all infants in the NICU. There were no additional symptomatic cases and repeat RSV surveillance confirmed that there was no further cross-transmission within the unit. The outbreak highlighted the infection control challenge of very high bed occupancy in the unit and the usefulness of molecular methods in facilitating detection and management.

  2. The AMchip04 and the processing unit prototype for the FastTracker

    International Nuclear Information System (INIS)

    Andreani, A; Alberti, F; Stabile, A; Annovi, A; Beretta, M; Volpi, G; Bogdan, M; Shochet, M; Tang, J; Tompkins, L; Citterio, M; Giannetti, P; Lanza, A; Magalotti, D; Piendibene, M

    2012-01-01

    Modern experiments search for extremely rare processes hidden in much larger background levels. As the experiment's complexity, the accelerator backgrounds and the luminosity increase, we need increasingly complex and exclusive event selection. We present the first prototype of a new Processing Unit (PU), the core of the FastTracker processor (FTK). FTK is a real time tracking device for the ATLAS experiment's trigger upgrade. The computing power of the PU is such that a few hundred of them will be able to reconstruct all the tracks with transverse momentum above 1 GeV/c in ATLAS events up to Phase II instantaneous luminosities (3 × 10^34 cm^-2 s^-1) with an event input rate of 100 kHz and a latency below a hundred microseconds. The PU provides massive computing power to minimize the online execution time of complex tracking algorithms. The time consuming pattern recognition problem, generally referred to as the "combinatorial challenge", is solved by the Associative Memory (AM) technology exploiting parallelism to the maximum extent; it compares the event to all pre-calculated "expectations" or "patterns" (pattern matching) simultaneously, looking for candidate tracks called "roads". This approach reduces the typically exponential complexity of CPU-based algorithms to a linear behavior. Pattern recognition is completed by the time data are loaded into the AM devices. We report on the design of the first Processing Unit prototypes. The design had to address the most challenging aspects of this technology: a huge number of detector clusters ("hits") must be distributed at high rate with very large fan-out to all patterns (10 million patterns will be located on 128 chips placed on a single board) and a huge number of roads must be collected and sent back to the FTK post-pattern-recognition functions. A network of high speed serial links is used to solve the data distribution problem.
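The associative memory compares each event against the whole pre-computed pattern bank in parallel; a purely illustrative software analogue of that matching step (toy pattern bank and hit encoding, no detector geometry) looks like this:

```python
# Toy illustration of associative-memory style pattern matching:
# a "pattern" is a tuple of coarse hit addresses, one per detector layer,
# and a "road" is any stored pattern whose hits are all present in the event.
PATTERN_BANK = {
    ("L0:12", "L1:07", "L2:33", "L3:18"),
    ("L0:12", "L1:08", "L2:34", "L3:19"),
    ("L0:40", "L1:41", "L2:42", "L3:43"),
}

def find_roads(event_hits):
    """Return every bank pattern fully contained in the event's hit list."""
    hits = set(event_hits)
    return [pattern for pattern in PATTERN_BANK if hits.issuperset(pattern)]

event = ["L0:12", "L1:07", "L2:33", "L3:18", "L0:99", "L2:34"]
print(find_roads(event))    # -> [('L0:12', 'L1:07', 'L2:33', 'L3:18')]
```

In the real AM chips this comparison happens for all stored patterns simultaneously as the hits stream in, which is what turns the exponential CPU-side combinatorics into an effectively fixed-latency lookup.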

  3. Physician discretion is safe and may lower stress test utilization in emergency department chest pain unit patients.

    Science.gov (United States)

    Napoli, Anthony M; Arrighi, James A; Siket, Matthew S; Gibbs, Frantz J

    2012-03-01

    Chest pain unit (CPU) observation with defined stress utilization protocols is a common management option for low-risk emergency department patients. We sought to evaluate the safety of a joint emergency medicine and cardiology staffed CPU. A prospective observational trial of consecutive patients admitted to an emergency department CPU was conducted. A standard 6-hour observation protocol was followed by cardiology consultation and stress utilization largely at their discretion. Included patients were at low/intermediate risk by the American Heart Association, had nondiagnostic electrocardiograms, and a normal initial troponin. Excluded patients were those with an acute comorbidity, age >75, and a history of coronary artery disease, or had a coexistent problem restricting 24-hour observation. Primary outcome was 30-day major adverse cardiovascular events, defined as death, nonfatal acute myocardial infarction, revascularization, or out-of-hospital cardiac arrest. A total of 1063 patients were enrolled over 8 months. The mean age of the patients was 52.8 ± 11.8 years, and 51% (95% confidence interval [CI], 48-54) were female. The mean thrombolysis in myocardial infarction and Diamond & Forrester scores were 0.6% (95% CI, 0.51-0.62) and 33% (95% CI, 31-35), respectively. In all, 51% (95% CI, 48-54) received stress testing (52% nuclear stress, 39% stress echocardiogram, 5% exercise, 4% other). In all, 0.9% of patients (n = 10, 95% CI, 0.4-1.5) were diagnosed with a non-ST elevation myocardial infarction and 2.2% (n = 23, 95% CI, 1.3-3) with acute coronary syndrome. There was 1 (95% CI, 0%-0.3%) case of a 30-day major adverse cardiovascular event. The 51% stress test utilization rate was less than the range reported in previous CPU studies (P < 0.05). Joint emergency medicine and cardiology management of patients within a CPU protocol is safe, efficacious, and may safely reduce stress testing rates.

  4. Maternal Education Is Associated with Disparities in Breastfeeding at Time of Discharge but Not at Initiation of Enteral Feeding in the Neonatal Intensive Care Unit.

    Science.gov (United States)

    Herich, Lena Carolin; Cuttini, Marina; Croci, Ileana; Franco, Francesco; Di Lallo, Domenico; Baronciani, Dante; Fares, Katia; Gargano, Giancarlo; Raponi, Massimiliano; Zeitlin, Jennifer

    2017-03-01

    To investigate the relationship between maternal education and breastfeeding in very preterm infants admitted to neonatal intensive care units. This prospective, population-based cohort study analyzed the data of all very preterm infants admitted to neonatal care during 1 year in 3 regions in Italy (Lazio, Emilia-Romagna, and Marche). The use of mothers' own milk was recorded at initial enteral feedings and at hospital discharge. We used multilevel logistic analysis to model the association between maternal education and breastfeeding outcomes, adjusting for maternal age and country of birth. Region was included as random effect. There were 1047 very preterm infants who received enteral feeding, and 975 were discharged alive. At discharge, the use of mother's own milk, exclusively or not, and feeding directly at the breast were significantly more likely for mothers with an upper secondary education or higher. We found no relationship between maternal education and type of milk at initial enteral feedings. However, the exclusive early use of the mother's own milk at initial feedings was related significantly with receiving any maternal milk and feeding directly at the breast at discharge from hospital, and the association with feeding at the breast was stronger for the least educated mothers. In this population-based cohort of very preterm infants, we found a significant and positive association between maternal education and the likelihood of receiving their mother's own milk at the time of discharge. In light of the proven benefits of maternal milk, strategies to support breastfeeding should be targeted to mothers with less education. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Development of Real-Time Dual-Display Handheld and Bench-Top Hybrid-Mode SD-OCTs

    Directory of Open Access Journals (Sweden)

    Nam Hyun Cho

    2014-01-01

    Full Text Available Development of a dual-display handheld optical coherence tomography (OCT) system for retina and optic-nerve-head diagnosis beyond the volunteer motion constraints is reported. The developed system is portable and easily movable, containing the compact portable OCT system that includes the handheld probe and computer. Eye posterior chambers were diagnosed using the handheld probe, and the probe could be fixed to the bench-top cradle depending on the volunteers’ physical condition. The images obtained using this handheld probe were displayed in real time on the computer monitor and on a small secondary built-in monitor; the displayed images were saved using the handheld probe’s built-in button. Large-scale signal-processing procedures such as k-domain linearization, fast Fourier transform (FFT), and log-scaling signal processing can be rapidly applied using graphics-processing-unit (GPU) accelerated processing rather than central-processing-unit (CPU) processing. The Labview-based system resolution is 1,024 × 512 pixels, and the frame rate is 56 frames/s, useful for real-time display. The 3D images of the posterior chambers including the retina, optic-nerve head, blood vessels, and optic nerve were composed using real-time displayed images with 500 × 500 × 500 pixel resolution. A handheld and bench-top hybrid mode with a dual-display handheld OCT was developed to overcome the drawbacks of the conventional method.
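A hedged sketch of the per-A-scan pipeline named in the abstract (k-domain linearization, FFT and log scaling) is shown below with a synthetic fringe standing in for camera data; in the actual system these steps run on the GPU for thousands of A-scans per frame, and the sampling details are assumptions.

```python
import numpy as np

def process_a_scan(fringe, k_nonlinear, k_linear):
    """k-domain linearization, FFT and log-scaling for one SD-OCT A-scan."""
    fringe = fringe - fringe.mean()                         # remove the DC term
    fringe_lin = np.interp(k_linear, k_nonlinear, fringe)   # resample onto a uniform k grid
    depth_profile = np.abs(np.fft.fft(fringe_lin))[: len(fringe_lin) // 2]
    return 20.0 * np.log10(depth_profile + 1e-12)           # log scale in dB

# Synthetic example: a mildly nonlinear k axis and a single reflector
n = 1024
k_nonlinear = np.linspace(0.0, 1.0, n) + 0.02 * np.linspace(0.0, 1.0, n) ** 2
k_linear = np.linspace(k_nonlinear[0], k_nonlinear[-1], n)
fringe = np.cos(2 * np.pi * 120 * k_nonlinear)
a_scan_db = process_a_scan(fringe, k_nonlinear, k_linear)
```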

  6. NATO’s Relevance to United States Enduring National Interests Time to Remove the Training Wheels but Continue to Hold the Handle Bars

    Science.gov (United States)

    2016-06-10

    advice, and friendship will have a lasting and positive effect on not only my military career but also my professional & personal life after the Army...if NATO didn’t exist today, the United States would not seek to create it.”1 Magnus Petersson further asserts that within the United States...this topic relevant to the current and emerging strategic environment.7 Magnus Petersson, The US-NATO Debate: From Libya to Ukraine. (New York

  8. Simulating Photon Mapping for Real-time Applications

    DEFF Research Database (Denmark)

    Larsen, Bent Dalgaard; Christensen, Niels Jørgen

    2004-01-01

    This paper introduces a novel method for simulating photon mapping for real-time applications. First we introduce a new method for selectively redistributing photons. Then we describe a method for selectively updating the indirect illumination. The indirect illumination is calculated using a new GPU accelerated final gathering method and the illumination is then stored in light maps. Caustic photons are traced on the CPU and then drawn using points in the framebuffer, and finally filtered using the GPU. Both diffuse and non-diffuse surfaces can be handled by calculating the direct illumination on the GPU and the photon tracing on the CPU. We achieve real-time frame rates for dynamic scenes.

  9. The Analysis of Task and Data Characteristic and the Collaborative Processing Method in Real-Time Visualization Pipeline of Urban 3DGIS

    Directory of Open Access Journals (Sweden)

    Dongbo Zhou

    2017-03-01

    Full Text Available Parallel processing in the real-time visualization of three-dimensional Geographic Information Systems (3DGIS) has tended to concentrate on algorithm levels in recent years, and most of the existing methods employ multiple threads in a Central Processing Unit (CPU) or kernels in a Graphics Processing Unit (GPU) to improve efficiency in the computation of the Levels of Detail (LODs) for three-dimensional (3D) models and in the display of Digital Elevation Models (DEMs) and Digital Orthophoto Maps (DOMs). The systematic analysis of the task and data characteristics of parallelism in the real-time visualization of 3DGIS continues to fall behind the development of hardware. In this paper, the basic procedures of real-time visualization of urban 3DGIS are first reviewed, and then the real-time visualization pipeline is analyzed. Further, the pipeline is decomposed into different task stages based on the task order and the input-output dependency. Based on the analysis of task parallelism in different pipeline stages, the data parallelism characteristics in each task are summarized by studying the involved algorithms. Finally, this paper proposes a parallel co-processing mode and a collaborative strategy for real-time visualization of urban 3DGIS. It also provides a fundamental basis for developing parallel algorithms and strategies in 3DGIS.

  10. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
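The PANTEX workflow itself is not spelled out here, but a hedged sketch of its basic building block (a grey-level co-occurrence matrix and a contrast statistic for one window and one pixel offset) shows where the per-window arithmetic, and hence the appeal of GPU or cluster parallelism, comes from; the window content, quantisation and offset are illustrative.

```python
import numpy as np

def glcm_contrast(window, levels=8, offset=(0, 1)):
    """GLCM contrast for a single image window and a single pixel offset."""
    q = (window.astype(float) * levels / 256.0).astype(int).clip(0, levels - 1)
    dr, dc = offset
    a = q[max(0, -dr): q.shape[0] - max(0, dr), max(0, -dc): q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][: a.shape[0], : a.shape[1]]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a.ravel(), b.ravel()), 1)     # accumulate co-occurrence counts
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

window = np.random.randint(0, 256, (21, 21))       # one moving-window position
print(glcm_contrast(window))
```

Applying such a statistic in a moving window over an entire CONUS-scale image is embarrassingly parallel, which is why both the blade cluster and the CUDA GPU map well onto the workload.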

  11. Real time control of the SSC string magnets

    International Nuclear Information System (INIS)

    Calvo, O.; Flora, R.; MacPherson, M.

    1987-01-01

    The system described in this paper, called SECAR, was designed to control the excitation of a test string of magnets for the proposed Superconducting Super Collider (SSC) and will be used to upgrade the present Tevatron Excitation, Control and Regulation (TECAR) hardware and software. It resides in a VME crate and is controlled by a 68020/68881 based CPU running the application software under a real time operating system named VRTX

  12. An evaluation of dose/unit area and time as key factors influencing the elicitation capacity of methylchloroisothiazolinone/methylisothiazolinone (MCI/MI) in MCI/MI-allergic patients

    DEFF Research Database (Denmark)

    Zachariae, Claus; Lerbaek, Anne; McNamee, Pauline M

    2006-01-01

    Methylchloroisothiazolinone and methylisothiazolinone (MCI/MI) contact allergy affects 1-3% of patch-tested patients in European centres. The aim of the present study was to evaluate the importance of the factors--time and concentration (dose per unit area)--in the elicitation capacity by means... (2 p.p.m.) of MCI/MI/unit area of the skin for 4 weeks. After a wash-out period of at least 4 weeks, the subjects were exposed to 0.094 microg/cm2 (7.5 p.p.m.) of MCI/MI/unit area of the skin for 4 weeks. The study showed the importance of both time and exposure in the elicitation process...

  13. Monitoring of mass flux of catalyst FCC in a Cold Pilot Unit by gamma radiation transmission; Monitoramento da taxa de fluxo do catalisador FCC em uma unidade piloto a frio por medicao de transmissao gama

    Energy Technology Data Exchange (ETDEWEB)

    Brito, Marcio Fernando Paixao de

    2014-09-01

    This paper proposes a model for monitoring the mass flow of FCC (Fluid Catalytic Cracking) catalyst in a CPU (Cold Pilot Unit) due to the injection of air and solid, by gamma radiation transmission. The CPU simplifies the FCC process, which is represented by the catalyst cycle, and it was constructed of acrylic so that the flow can be visualized. The CPU consists of a riser, a separation chamber and a return column, and simulates the riser reactor of the FCC process. The catalyst is fed from the return column into the base of the riser, an inclined tube, where compressed air produces fluidization along the riser. When the catalyst reaches the separation chamber, the solid phase is sent to the return column and the gas phase exits the system through one of the four cyclones at the top of the separation chamber. The gamma transmission measurements are made by means of three test sections with shielded source and detector. Pressure drop measurements in the riser are made with three pressure gauges positioned along the riser. The source used was Am-241, with a gamma-ray energy of 60 keV, and the detector was a 2" x 2" NaI(Tl) scintillator. Measurements of the catalyst mass flow are made by varying the catalyst seal and the solid density in the riser, because the combination of these measurements determines the catalyst velocity in the riser. The results show that gamma transmission is a suitable technique for monitoring the catalyst flow, that the flow regime in the CPU is annular, that third-generation tomography is more appropriate for studying the CPU, and that the density of the circulating solid in the CPU decreases linearly with increasing air flow. (author)
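The working equations are not given in the abstract, but gamma-transmission densitometry normally rests on the Beer-Lambert law, I = I0 · exp(-μ·ρ·x). A hedged sketch of how a riser solid fraction could be backed out of a transmission measurement is shown below; all numerical values are illustrative, not taken from the thesis.

```python
import numpy as np

def solid_fraction(I, I_empty, mu_mass, rho_solid, path_cm):
    """Estimate the catalyst volume fraction in the beam path from the measured
    count rate I, using the Beer-Lambert law against the empty-riser reference."""
    rho_bar = -np.log(I / I_empty) / (mu_mass * path_cm)   # path-averaged solid density, g/cm^3
    return rho_bar / rho_solid

# Illustrative values only: ~0.3 cm^2/g mass attenuation at 60 keV,
# 1.4 g/cm^3 particle density, 10 cm beam path across the riser
print(solid_fraction(I=8500.0, I_empty=10000.0, mu_mass=0.3, rho_solid=1.4, path_cm=10.0))
```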

  14. The Value of Non-Work Time in Cross-National Quality of Life Comparisons: The Case of the United States vs. the Netherlands

    NARCIS (Netherlands)

    Verbakel, C.M.C.; DiPrete, T.A.

    2008-01-01

    Comparisons of wellbeing between the United States and Western Europe generally show that most Americans have higher standards of living than do Western Europeans at comparable locations in their national income distributions. These comparisons of wellbeing typically privilege disposable income and

  15. Wood pellets, what else? : Greenhouse gas parity times of European electricity from wood pellets produced in the south-eastern United States using different softwood feedstocks

    NARCIS (Netherlands)

    Hanssen, Steef V.; Duden, Anna S.; Junginger, Martin; Dale, Virginia H.; van der Hilst, Floortje

    Several EU countries import wood pellets from the south-eastern United States. The imported wood pellets are (co-)fired in power plants with the aim of reducing overall greenhouse gas (GHG) emissions from electricity and meeting EU renewable energy targets. To assess whether GHG emissions are

  16. Interface unit

    NARCIS (Netherlands)

    Keyson, D.V.; Freudenthal, A.; De Hoogh, M.P.A.; Dekoven, E.A.M.

    2001-01-01

    The invention relates to an interface unit comprising at least a display unit for communication with a user, which is designed for being coupled with a control unit for at least one or more parameters in a living or working environment, such as the temperature setting in a house, which control unit

  17. Design and development of a diversified real time computer for future FBRs

    International Nuclear Information System (INIS)

    Sujith, K.R.; Bhattacharyya, Anindya; Behera, R.P.; Murali, N.

    2014-01-01

    The current safety related computer system of the Prototype Fast Breeder Reactor (PFBR) under construction in Kalpakkam consists of two redundant Versa Module Europa (VME) bus based Real Time Computer systems with a Switch Over Logic Circuit (SOLC). Since both the VME systems are identical, the dual redundant system is prone to common cause failure (CCF). The probability of CCF can be reduced by adopting diversity. Design diversity has long been used to protect redundant systems against common-mode failures. The conventional notion of diversity relies on 'independent' generation of 'different' implementations. This paper discusses the design and development of a diversified Real Time Computer which will replace one of the computer systems in the dual redundant architecture. Compact PCI (cPCI) bus systems are widely used in safety critical applications such as avionics, railways and defence, and use diverse electrical signaling and logical specifications, hence cPCI was chosen for development of the diversified system. Towards the initial development, a CPU card based on an ARM-9 processor, a 16 channel Relay Output (RO) card and a 30 channel Analog Input (AI) card were developed. All the cards mentioned support hot-swap and geographic addressing capability. In order to mitigate the component obsolescence problem, the 32 bit PCI target controller and associated glue logic for the slave I/O cards were indigenously developed using VHDL. U-Boot was selected as the boot loader and ARM Linux 2.6 as the preliminary operating system for the CPU card. Board specific initialization code for the CPU card was written in ARM assembly language and serial port initialization was written in C language. The boot loader along with the Linux 2.6 kernel and a jffs2 file system was flashed into the CPU card. Test applications written in C language were used to test the various peripherals of the CPU card. Device drivers for the AI and RO cards were developed as Linux kernel modules and an application library was also

  18. Real-time implementation of optimized maximum noise fraction transform for feature extraction of hyperspectral images

    Science.gov (United States)

    Wu, Yuanfeng; Gao, Lianru; Zhang, Bing; Zhao, Haina; Li, Jun

    2014-01-01

    We present a parallel implementation of the optimized maximum noise fraction (G-OMNF) transform algorithm for feature extraction of hyperspectral images on commodity graphics processing units (GPUs). The proposed approach explored the algorithm data-level concurrency and optimized the computing flow. We first defined a three-dimensional grid, in which each thread calculates a sub-block data to easily facilitate the spatial and spectral neighborhood data searches in noise estimation, which is one of the most important steps involved in OMNF. Then, we optimized the processing flow and computed the noise covariance matrix before computing the image covariance matrix to reduce the original hyperspectral image data transmission. These optimization strategies can greatly improve the computing efficiency and can be applied to other feature extraction algorithms. The proposed parallel feature extraction algorithm was implemented on an Nvidia Tesla GPU using the compute unified device architecture and basic linear algebra subroutines library. Through the experiments on several real hyperspectral images, our GPU parallel implementation provides a significant speedup of the algorithm compared with the CPU implementation, especially for highly data parallelizable and arithmetically intensive algorithm parts, such as noise estimation. In order to further evaluate the effectiveness of G-OMNF, we used two different applications: spectral unmixing and classification for evaluation. Considering the sensor scanning rate and the data acquisition time, the proposed parallel implementation met the on-board real-time feature extraction.
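As a hedged reference for the linear algebra that the GPU version accelerates, the sketch below implements a plain maximum noise fraction transform on the CPU with NumPy/SciPy, estimating noise from neighbouring-pixel differences; the paper's G-OMNF ordering of the covariance computations and its CUDA grid layout are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def mnf(cube):
    """Basic maximum noise fraction transform of a hyperspectral cube
    shaped (rows, cols, bands); noise is estimated by a shift difference."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, b)
    Sigma_noise = np.cov(noise, rowvar=False)
    Sigma = np.cov(X, rowvar=False)
    # Generalized symmetric eigenproblem: maximise signal variance relative to noise
    vals, vecs = eigh(Sigma, Sigma_noise)
    order = np.argsort(vals)[::-1]                  # highest signal-to-noise components first
    return (X @ vecs[:, order]).reshape(r, c, b), vals[order]

components, snr_like = mnf(np.random.rand(64, 64, 20))   # illustrative cube
```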

  19. Remote Maintenance Design Guide for Compact Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    2000-07-13

    Oak Ridge National Laboratory (ORNL) Robotics and Process Systems (RPSD) personnel have extensive experience working with remotely operated and maintained systems. These systems require expert knowledge in teleoperation, human factors, telerobotics, and other robotic devices so that remote equipment may be manipulated, operated, serviced, surveyed, and moved about in a hazardous environment. The RPSD staff has a wealth of experience in this area, including knowledge in the broad topics of human factors, modular electronics, modular mechanical systems, hardware design, and specialized tooling. Examples of projects that illustrate and highlight RPSD's unique experience in remote systems design and application include the following: (1) design of a remote shear and remote dissolver systems in support of U.S. Department of Energy (DOE) fuel recycling research and nuclear power missions; (2) building remotely operated mobile systems for metrology and characterizing hazardous facilities in support of remote operations within those facilities; (3) construction of modular robotic arms, including the Laboratory Telerobotic Manipulator, which was designed for the National Aeronautics and Space Administration (NASA) and the Advanced ServoManipulator, which was designed for the DOE; (4) design of remotely operated laboratories, including chemical analysis and biochemical processing laboratories; (5) construction of remote systems for environmental clean up and characterization, including underwater, buried waste, underground storage tank (UST) and decontamination and dismantlement (D&D) applications. Remote maintenance has played a significant role in fuel reprocessing because of combined chemical and radiological contamination. Furthermore, remote maintenance is expected to play a strong role in future waste remediation. The compact processing units (CPUs) being designed for use in underground waste storage tank remediation are examples of improvements in systems

  20. A Parallel Algebraic Multigrid Solver on Graphics Processing Units

    KAUST Repository

    Haase, Gundolf

    2010-01-01

    The paper presents a multi-GPU implementation of the preconditioned conjugate gradient algorithm with an algebraic multigrid preconditioner (PCG-AMG) for an elliptic model problem on a 3D unstructured grid. An efficient parallel sparse matrix-vector multiplication scheme underlying the PCG-AMG algorithm is presented for the many-core GPU architecture. A performance comparison of the parallel solver shows that a single Nvidia Tesla C1060 GPU board delivers the performance of a sixteen node Infiniband cluster and a multi-GPU configuration with eight GPUs is about 100 times faster than a typical server CPU core. © 2010 Springer-Verlag.
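For reference, the iteration being accelerated is standard preconditioned conjugate gradients; the hedged sketch below uses a simple Jacobi (diagonal) preconditioner in place of the algebraic multigrid preconditioner, which is far more involved, and a 1-D Poisson matrix as a stand-in for the 3D unstructured-grid problem.

```python
import numpy as np
import scipy.sparse as sp

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for an SPD matrix A;
    M_inv(r) applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")   # 1-D Poisson stand-in
b = np.ones(n)
diag = A.diagonal()
x = pcg(A, b, lambda r: r / diag)    # Jacobi preconditioner stands in for AMG
```

The sparse matrix-vector product `A @ p` is the kernel the paper parallelises across GPUs.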

  1. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented that is applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large
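The report's specific selector is not reproduced here; the sketch below only illustrates the general idea of aggressive step selection (grow Δt while a cheap per-step change estimate stays within tolerance, discard and shrink otherwise), assuming a generic advance(f, dt) routine such as one ADI sweep.

```python
def adaptive_march(f, advance, change_norm, dt0, t_end, tol=1e-2, grow=2.0, shrink=0.5):
    """Aggressively grow the time step while the per-step change stays below
    `tol`; discard the step and retry with a smaller dt when it does not."""
    t, dt = 0.0, dt0
    while t < t_end:
        f_new = advance(f, dt)
        if change_norm(f_new, f) > tol:
            dt *= shrink                          # reject: the step was too ambitious
            continue
        f, t = f_new, t + dt                      # accept the step
        if t < t_end:
            dt = min(dt * grow, t_end - t)        # try a larger step, without overshooting
    return f

# Toy usage: forward-Euler decay dy/dt = -y with aggressive step growth
y = adaptive_march(1.0, lambda y, dt: y - dt * y, lambda a, b: abs(a - b),
                   dt0=1e-4, t_end=1.0)
```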

  2. The ATLAS Trigger Algorithms for General Purpose Graphics Processor Units

    CERN Document Server

    Tavares Delgado, Ademar; The ATLAS collaboration

    2016-01-01

    We present the ATLAS Trigger algorithms developed to exploit General Purpose Graphics Processor Units. ATLAS is a particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system has two levels, hardware-based Level 1 and the High Level Trigger implemented in software running on a farm of commodity CPUs. Performing the trigger event selection within the available farm resources presents a significant challenge that will increase with future LHC upgrades. GPGPUs are being evaluated as a potential solution for trigger algorithm acceleration. Key factors determining the potential benefit of this new technology are the relative execution speedup, the number of GPUs required and the relative financial cost of the selected GPU. We have developed a trigger demonstrator which includes algorithms for reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Cal...

  3. Power Conditioning And Distribution Units For 50V Platforms A Flexible And Modular Concept Allowing To Deal With Time Constraining Programs

    Science.gov (United States)

    Lempereur, V.; Liegeois, B.; Deplus, N.

    2011-10-01

    In the frame of its Power Conditioning and Distribution Unit (PCDU) Medium power product family, Thales Alenia Space ETCA is currently developing Power Conditioning Unit (PCU) and PCDU products for 50V platform applications. These developments are performed in very schedule-constraining programs. This challenge can be met thanks to the modular PCDU concept, which allows a common heritage to be shared at the mechanical & thermal level as well as at the electrical functions level. The first Medium power PCDU application was developed for the Herschel-Planck PCDU and re-used in several other missions (e.g. GlobalStar2 PCDU, for which we are producing more than 26 units). Based on this heritage, a development plan based on an Electrical Model (EM) (avoiding an Electrical Qualification Model - EQM) can be proposed when the mechanical qualification of the concept covers the environment required in new projects. This first heritage level allows reducing development schedule and activities. In addition, development is also optimized thanks to the re-use of functions designed and qualified in the Herschel-Planck PCDU. This covers internal TM/TC management inside the PCDU based on a centralized scheduler and an internal high speed serial bus. Finally, thanks to the common architecture of several 50V platforms based on a fully regulated bus, the S3R (Sequential Shunt Switch Regulator) concept and one (or two) Li-Ion battery(ies), a common PCU/PCDU architecture has allowed the development of modules or functions that are used in several applications. These achievements are discussed with particular emphasis on PCDU architecture trade-offs allowing flexibility of the proposed technical solutions (w.r.t. mono/bi-battery configurations, SA inner capacitance value, output power needs...). Pros and cons of sharing concepts and designs between several applications on 50V platforms are also discussed.

  4. Death and the Times: Depictions of War Deaths in the United States and Israel From Vietnam and the Six-Day War to Iraq and Lebanon

    OpenAIRE

    Lachmann, Richard; Sheinheit, Ian J.; Li, Jing; Gat, Ayala; Filisha, Mishel

    2012-01-01

    Why has support for casualties in foreign wars declined in the United States since Vietnam? We compare The New York Times’ very different depictions of war deaths in the Vietnam and Iraq wars. Then we offer an explanation for why there has been this fundamental transformation in the ways in which American war dead are regarded and valued. We find that the change is in retrospective interpretations of the war and in memorials to the Vietnam dead after that war ended rather than in public evalu...

  5. Multi-Threaded Algorithms for General Purpose Graphics Processor Units in the ATLAS High Level Trigger

    CERN Document Server

    Conde Muiño, Patricia; The ATLAS collaboration

    2016-01-01

    General purpose Graphics Processor Units (GPGPU) are being evaluated for possible future inclusion in an upgraded ATLAS High Level Trigger farm. We have developed a demonstrator including GPGPU implementations of Inner Detector and Muon tracking and Calorimeter clustering within the ATLAS software framework. ATLAS is a general purpose particle physics experiment located on the LHC collider at CERN. The ATLAS Trigger system consists of two levels, with level 1 implemented in hardware and the High Level Trigger implemented in software running on a farm of commodity CPU. The High Level Trigger reduces the trigger rate from the 100 kHz level 1 acceptance rate to 1 kHz for recording, requiring an average per-event processing time of ~250 ms for this task. The selection in the high level trigger is based on reconstructing tracks in the Inner Detector and Muon Spectrometer and clusters of energy deposited in the Calorimeter. Performing this reconstruction within the available farm resources presents a significant ...

  6. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Science.gov (United States)

    Seo, Jiwon; Chen, Yu-Hsuan; De Lorenzo, David S.; Lo, Sherman; Enge, Per; Akos, Dennis; Lee, Jiyun

    2011-01-01

    Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities. PMID:22164116

  7. A Real-Time Capable Software-Defined Receiver Using GPU for Adaptive Anti-Jam GPS Sensors

    Directory of Open Access Journals (Sweden)

    Dennis Akos

    2011-09-01

    Full Text Available Due to their weak received signal power, Global Positioning System (GPS) signals are vulnerable to radio frequency interference. Adaptive beam and null steering of the gain pattern of a GPS antenna array can significantly increase the resistance of GPS sensors to signal interference and jamming. Since adaptive array processing requires intensive computational power, beamsteering GPS receivers were usually implemented using hardware such as field-programmable gate arrays (FPGAs). However, a software implementation using general-purpose processors is much more desirable because of its flexibility and cost effectiveness. This paper presents a GPS software-defined radio (SDR) with adaptive beamsteering capability for anti-jam applications. The GPS SDR design is based on an optimized desktop parallel processing architecture using a quad-core Central Processing Unit (CPU) coupled with a new generation Graphics Processing Unit (GPU) having massively parallel processors. This GPS SDR demonstrates sufficient computational capability to support a four-element antenna array and future GPS L5 signal processing in real time. After providing the details of our design and optimization schemes for future GPU-based GPS SDR developments, the jamming resistance of our GPS SDR under synthetic wideband jamming is presented. Since the GPS SDR uses commercial-off-the-shelf hardware and processors, it can be easily adopted in civil GPS applications requiring anti-jam capabilities.
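The abstract describes adaptive beam and null steering on a four-element array; one common formulation is the minimum-variance distortionless-response (MVDR) weight vector computed from the interference-plus-noise covariance. The hedged NumPy sketch below uses a synthetic jammer and made-up array geometry, and is not the receiver's actual algorithm.

```python
import numpy as np

def steering_vector(angle_deg, element_pos_wavelengths):
    """Narrowband steering vector for a linear array (positions in wavelengths)."""
    return np.exp(2j * np.pi * element_pos_wavelengths * np.sin(np.deg2rad(angle_deg)))

def mvdr_weights(R, s):
    """Minimum-variance distortionless-response beamforming weights."""
    Ri_s = np.linalg.solve(R, s)
    return Ri_s / (s.conj() @ Ri_s)

pos = np.arange(4) * 0.5                       # four elements, half-wavelength spacing
s_gps = steering_vector(20.0, pos)             # desired satellite direction (illustrative)
s_jam = steering_vector(-40.0, pos)            # jammer direction (illustrative)

rng = np.random.default_rng(1)
n = 2000
jam = 30.0 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
noise = rng.standard_normal((4, n)) + 1j * rng.standard_normal((4, n))
X = np.outer(s_jam, jam) + noise               # synthetic array snapshots
R = X @ X.conj().T / n                         # sample covariance

w = mvdr_weights(R, s_gps)
print("gain toward the satellite:", abs(w.conj() @ s_gps))   # ~1 by construction
print("gain toward the jammer:  ", abs(w.conj() @ s_jam))    # strongly suppressed
```

This kind of block-wise array processing is what the quad-core CPU plus GPU architecture described above is sized for.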

  8. Is hand hygiene before putting on nonsterile gloves in the intensive care unit a waste of health care worker time?--a randomized controlled trial.

    Science.gov (United States)

    Rock, Clare; Harris, Anthony D; Reich, Nicholas G; Johnson, J Kristie; Thom, Kerri A

    2013-11-01

    Hand hygiene (HH) is recognized as a basic effective measure in prevention of nosocomial infections. However, the importance of HH before donning nonsterile gloves is unknown, and few published studies address this issue. Despite the lack of evidence, the World Health Organization and other leading bodies recommend this practice. The aim of this study was to assess the utility of HH before donning nonsterile gloves prior to patient contact. A prospective, randomized, controlled trial of health care workers entering Contact Isolation rooms in intensive care units was performed. Baseline finger and palm prints were made from dominant hands onto agar plates. Health care workers were then randomized to directly don nonsterile gloves or perform HH and then don nonsterile gloves. Postgloving finger and palm prints were then made from the gloved hands. Plates were incubated and colony-forming units (CFU) of bacteria were counted. Total bacterial colony counts of gloved hands did not differ between the 2 groups (6.9 vs 8.1 CFU, respectively, P = .52). Staphylococcus aureus was identified from gloves (once in "hand hygiene prior to gloving" group, twice in "direct gloving" group). All other organisms were expected commensal flora. HH before donning nonsterile gloves does not decrease already low bacterial counts on gloves. The utility of HH before donning nonsterile gloves may be unnecessary. Copyright © 2013 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  9. Credentialism, Adults, and Part-Time Higher Education in the United Kingdom: An Account of Rising Take Up and Some Implications for Policy.

    Science.gov (United States)

    Fuller, Alison

    2001-01-01

    Explains the growing importance of higher-level qualifications for adults in the UK, highlighting statistical trends in commitment to learning and qualifying-the result of taking part-time courses in higher education. Most part-time undergraduates fund their own tuition. Mature students' backgrounds and perspectives partly account for their rising…

  10. Benchmarking hardware architecture candidates for the NFIRAOS real-time controller

    Science.gov (United States)

    Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre

    2014-07-01

    As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.

  11. An unit cost adjusting heuristic algorithm for the integrated planning and scheduling of a two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-10-01

    Full Text Available Purpose: The stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship in the current market, and efficient scheduling techniques are important tools for establishing such dynamic supply chain relationships. This paper studies the optimization of the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, whose manufacturers have different production capacities, holding and producing cost rates, and transportation costs to retailers, so as to obtain a minimum supply chain operating cost. Design/methodology/approach: As a complex task allocation and scheduling problem, this paper sets up an INLP model for it and designs a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supplying quantities according to their unit costs, step by step, to solve the model. Findings: Relying on a comparative analysis between the UCA heuristic and the Lingo solver over many numerical experiments, the results show that the INLP model and the UCA algorithm can obtain a near-optimal solution of the two-stage supply chain's planning and scheduling problem within a very short CPU time. Research limitations/implications: The proposed UCA heuristic can easily help managers to optimize two-stage supply chain scheduling problems that do not include the delivery time and batching of orders. Since two-stage supply chains are the most common form of actual commercial relationships, with some modification and further study the UCA heuristic should be able to optimize the integrated planning and scheduling problems of a supply chain with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for optimizing the integrated planning and scheduling problem of two-stage supply chains with the constraints of suppliers' production capacity and the orders' delivery time, and has a great
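The authors' UCA heuristic is only summarised above; the toy sketch below captures the general idea (start from a feasible allocation, then repeatedly move units of supply from the most expensive used supplier to the cheapest supplier with spare capacity), with made-up capacities and costs, and it is not the paper's exact algorithm.

```python
def unit_cost_adjust(demand, capacity, unit_cost, step=1):
    """Toy unit-cost adjusting allocation for a single retailer: start from a
    capacity-proportional split, then move `step` units at a time from the most
    expensive used supplier to the cheapest supplier with spare capacity."""
    n = len(capacity)
    total_cap = sum(capacity)
    assert demand <= total_cap, "demand exceeds total capacity"
    alloc = [min(capacity[i], demand * capacity[i] // total_cap) for i in range(n)]
    short = demand - sum(alloc)
    for i in range(n):                              # distribute the rounding remainder
        extra = min(short, capacity[i] - alloc[i])
        alloc[i] += extra
        short -= extra
    moved = True
    while moved:                                    # step-by-step adjustment toward cheaper suppliers
        moved = False
        used = [i for i in range(n) if alloc[i] >= step]
        free = [i for i in range(n) if alloc[i] + step <= capacity[i]]
        if used and free:
            hi = max(used, key=lambda i: unit_cost[i])
            lo = min(free, key=lambda i: unit_cost[i])
            if unit_cost[lo] < unit_cost[hi]:
                alloc[hi] -= step
                alloc[lo] += step
                moved = True
    return alloc

# Illustrative data only: three manufacturers, one retailer ordering 120 units
print(unit_cost_adjust(demand=120, capacity=[60, 80, 50], unit_cost=[5.0, 3.5, 4.2]))
```

Each accepted move strictly lowers the total supply cost, so the loop terminates; the published heuristic additionally handles multiple retailers, holding costs and the INLP coupling, which this sketch deliberately omits.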

  12. Multichannel analyzer with real-time correction of counting losses based on a fast 16/32 bit microprocessor

    International Nuclear Information System (INIS)

    Westphal, G.P.; Kasa, T.

    1984-01-01

    It is demonstrated that a multichannel analyzer with real-time correction of counting losses may be designed in a very flexible yet cost-effective manner from a modern microprocessor with a 32 bit architecture and standard VLSI peripheral chips. Throughput rates of 100,000 events/second are a good match even for high-rate spectroscopy systems and may be further enhanced by the use of already available CPU chips with higher clock frequencies. Low power consumption and a very compact form factor make the design highly recommendable for portable applications. By means of a simple and easily reproducible rotating sample device, the dynamic response of the VPG counting loss correction method has been tested and found to be more than sufficient for conceivable real-time applications. Enhanced statistical accuracy of the correction factors may be traded against speed of response by changing a single preset value, which lends itself to the simple implementation of self-adapting systems. Both reliability and user convenience are improved by self-calibration of the pulse evolution time in the VPG counting loss correction unit.
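
    The VPG method itself is not described here in enough detail to reproduce, but the trade-off the abstract mentions (statistical accuracy of the correction factor versus speed of response, controlled by one preset value) can be sketched generically with an exponentially smoothed live-time fraction; all values below are hypothetical:

      # Generic counting-loss correction sketch (not the VPG method itself).
      # ALPHA is the single "preset": small -> smoother, statistically more
      # accurate correction factor; large -> faster response to rate changes.
      ALPHA = 0.1

      def corrected_counts(intervals, alpha=ALPHA):
          """intervals: iterable of (recorded_counts, live_fraction) per interval."""
          smoothed_live = None
          total = 0.0
          for counts, live in intervals:
              smoothed_live = live if smoothed_live is None else (
                  alpha * live + (1.0 - alpha) * smoothed_live)
              total += counts / smoothed_live       # loss-corrected contribution
          return total

      # e.g. three measurement intervals at roughly 80 % live time:
      print(corrected_counts([(800, 0.80), (790, 0.79), (810, 0.81)]))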

  13. Technical challenges related to implementation of a formula one real time data acquisition and analysis system in a paediatric intensive care unit.

    Science.gov (United States)

    Matam, B Rajeswari; Duncan, Heather

    2018-06-01

    Most existing expert monitoring systems do not provide the real-time continuous analysis of monitored physiological data that is necessary to detect transient or combined vital-sign indicators, nor do they provide long-term storage of the data for retrospective analyses. In this paper we examine the feasibility of implementing a long-term data storage system able to incorporate real-time data analytics, describe the system design, and report the main technical issues encountered, the solutions implemented, and the statistics of the data recorded. McLaren Electronic Systems' expertise in continually monitoring and analysing data from F1 racing cars in real time was used to implement a similar real-time data recording platform, adapted with real-time analytics to suit the requirements of the intensive care environment. We encountered many technical (hardware and software) implementation challenges, but the system offered many advantages once it was operational. These include: (1) The ability to store data for long periods of time, enabling access to historical physiological data. (2) The ability to alter the time axis to contract or expand periods of interest. (3) The ability to store and review ECG morphology retrospectively. (4) Detailed post-event data (following cardiac/respiratory arrests or other clinically significant deteriorations) can be reviewed clinically, as opposed to trend data alone, providing valuable clinical insight and allowing informed mortality and morbidity reviews to be conducted. (5) Storage of captured waveform data for use in algorithm development for adaptive early warning systems. Recording data from bedside monitors in intensive care units and wards is feasible, and real-time data recording and long-term storage systems can be set up. In future, these systems can be improved with additional patient-specific metrics that predict a patient's status, paving the way for real-time predictive monitoring.
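
    As a purely illustrative sketch of two ingredients such a platform needs (append-only long-term storage plus a rolling metric available in real time), and not of the McLaren-based system described above:

      # Illustrative sketch only: buffer bedside-monitor samples for a rolling
      # statistic while appending every sample to long-term storage.
      import collections
      import json
      import statistics
      import time

      WINDOW = 300                                   # hypothetical rolling-window length (samples)
      buffer = collections.deque(maxlen=WINDOW)

      def record_sample(value, path="waveform_log.jsonl"):
          """Append one sample to storage and return the current rolling mean."""
          with open(path, "a") as fh:                # append-only file as stand-in storage
              fh.write(json.dumps({"t": time.time(), "value": value}) + "\n")
          buffer.append(value)
          return statistics.mean(buffer)

      print(record_sample(92.0))                     # e.g. a single SpO2 reading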

  14. Summary of Time Period-Based and Other Approximation Methods for Determining the Capacity Value of Wind and Solar in the United States: September 2010 - February 2012

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, J.; Porter, K.

    2012-03-01

    This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak, sometimes over a period of months or averaged over multiple years.
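
    One common time period-based approximation, averaging a plant's output over the highest system-load hours and dividing by nameplate capacity, can be sketched as follows (the number of peak hours and the data are illustrative; actual methods vary by utility and region):

      # Hedged sketch of a peak-period capacity value approximation.
      import numpy as np

      def peak_period_capacity_value(load, generation, nameplate_mw, top_hours=100):
          """load, generation: hourly arrays covering the same period (e.g. one year)."""
          load = np.asarray(load, dtype=float)
          generation = np.asarray(generation, dtype=float)
          peak_idx = np.argsort(load)[-top_hours:]           # hours of highest system load
          return generation[peak_idx].mean() / nameplate_mw  # capacity value as a fraction

      # e.g. synthetic hourly data for one year and a 100 MW plant:
      rng = np.random.default_rng(0)
      print(peak_period_capacity_value(rng.uniform(500, 1000, 8760),
                                       rng.uniform(0, 100, 8760), 100.0))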

  15. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)
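
    As a rough, generic illustration of what such a timing chart computes (the rules below are invented for illustration and do not reflect the actual CDC 7600 fetch, issue, or functional-unit rules):

      # Generic instruction-timing sketch: an instruction issues once the issue
      # slot is free and its input operands are ready; its result becomes
      # available a fixed number of clock periods after issue.
      instructions = [                              # (name, inputs, output, result latency) - hypothetical
          ("FETCH X1", [], "X1", 4),
          ("FETCH X2", [], "X2", 4),
          ("FMUL  X3", ["X1", "X2"], "X3", 5),
          ("FADD  X4", ["X3", "X1"], "X4", 4),
      ]

      ready = {}                                    # register -> clock period when its value is ready
      clock = 0
      for name, inputs, output, latency in instructions:
          issue = max([clock] + [ready.get(r, 0) for r in inputs])
          ready[output] = issue + latency
          clock = issue + 1                         # assume at most one issue per clock period
          print(f"{name}: issue at t={issue}, result at t={ready[output]}")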

  16. Correlation between timing of tracheostomy and duration of mechanical ventilation in patients with potentially normal lungs admitted to intensive care unit

    Directory of Open Access Journals (Sweden)

    Mehrdad Masoudifar

    2012-01-01

    Conclusion: Our study, with the sample size described, could not demonstrate any relationship between the timing of tracheostomy and the duration of mechanical ventilation in mechanically ventilated ICU patients with good pulmonary function.

  17. Operable Unit Boundaries

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset consists of operable unit data from multiple Superfund sites in U.S. EPA Region 8. These data were acquired from multiple sources at different times and...

  18. Applying graphics processor units to Monte Carlo dose calculation in radiation therapy

    Directory of Open Access Journals (Sweden)

    Bakhtiari M

    2010-01-01

    We investigate the potential of using a graphics processor unit (GPU) for Monte Carlo (MC)-based radiation dose calculations. The percent depth dose (PDD) of photons in a medium with known absorption and scattering coefficients is computed using an MC simulation running on both a standard CPU and a GPU. We demonstrate that the GPU's capability for massively parallel processing provides a significant acceleration of the MC calculation and offers a significant advantage for distributed stochastic simulations on a single computer. Harnessing this potential of GPUs will help in the early adoption of MC for routine planning in a clinical environment.
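
    A minimal CPU-side sketch of such a PDD calculation is given below (the transport physics is heavily simplified and the coefficients are hypothetical); because every photon history is independent, the history loop is exactly the part that maps naturally onto the massive thread-level parallelism of a GPU:

      # Simplified Monte Carlo percent-depth-dose sketch (illustrative physics only).
      import numpy as np

      MU_ABS, MU_SCAT = 0.03, 0.07                 # hypothetical coefficients, 1/mm
      MU_TOT = MU_ABS + MU_SCAT
      DEPTH, BINS, N = 300.0, 60, 20_000           # slab depth (mm), dose bins, photon histories

      rng = np.random.default_rng(1)
      dose = np.zeros(BINS)

      for _ in range(N):
          z, dir_z, energy = 0.0, 1.0, 1.0
          while 0.0 <= z <= DEPTH and energy > 0.01:
              z += dir_z * rng.exponential(1.0 / MU_TOT)       # free path to next interaction
              if not 0.0 <= z <= DEPTH:
                  break                                        # photon leaves the slab
              bin_i = min(int(z / DEPTH * BINS), BINS - 1)
              if rng.random() < MU_ABS / MU_TOT:               # absorption: deposit remaining energy
                  dose[bin_i] += energy
                  energy = 0.0
              else:                                            # crude scatter: partial deposit, new direction
                  dose[bin_i] += 0.3 * energy
                  energy *= 0.7
                  dir_z = rng.uniform(-1.0, 1.0)

      pdd = 100.0 * dose / dose.max()                          # normalise to percent depth dose
      print(np.round(pdd[:10], 1))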

  19. Associations of acoustically measured tongue/jaw movements and portion of time speaking with negative symptom severity in patients with schizophrenia in Italy and the United States.

    Science.gov (United States)

    Bernardini, Francesco; Lunden, Anya; Covington, Michael; Broussard, Beth; Halpern, Brooke; Alolayan, Yazeed; Crisafio, Anthony; Pauselli, Luca; Balducci, Pierfrancesco M; Capulong, Leslie; Attademo, Luigi; Lucarini, Emanuela; Salierno, Gianfranco; Natalicchi, Luca; Quartesan, Roberto; Compton, Michael T

    2016-05-30

    This is the first cross-language study of the effect of schizophrenia on speech as measured by analyzing phonetic parameters with sound spectrography. We hypothesized that reduced variability in pitch and formants would be correlated with negative symptom severity in two samples of patients with schizophrenia, one from Italy, and one from the United States. Audio recordings of spontaneous speech were available from 40 patients. From each speech sample, a file of F0 (pitch) and formant values (F1 and F2, resonance bands indicating the moment-by-moment shape of the oral cavity), and the portion of the recording in which there was speaking ("fraction voiced," FV), was created. Correlations between variability in the phonetic indices and negative symptom severity were tested and further examined using regression analyses. Meaningful negative correlations between Scale for the Assessment of Negative Symptoms (SANS) total score and standard deviation (SD) of F2, as well as variability in pitch (SD F0) were observed in the Italian sample. We also found meaningful associations of SANS affective flattening and SANS alogia with SD F0, and of SANS avolition/apathy and SD F2 in the Italian sample. In both samples, FV was meaningfully correlated with SANS total score, avolition/apathy, and anhedonia/asociality. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
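
    Assuming frame-wise F0 and F2 tracks have already been extracted by a phonetics tool, with unvoiced frames marked as NaN, the study's summary measures can be sketched roughly as follows (array names and values are hypothetical, and "fraction voiced" is approximated here as the fraction of voiced frames):

      # Hedged sketch of variability measures over pitch/formant tracks.
      import numpy as np

      def variability_measures(f0_track, f2_track):
          f0 = np.asarray(f0_track, dtype=float)
          f2 = np.asarray(f2_track, dtype=float)
          return {
              "SD_F0": np.nanstd(f0),             # pitch variability
              "SD_F2": np.nanstd(f2),             # variability in oral-cavity shape (tongue/jaw movement)
              "FV": np.mean(~np.isnan(f0)),       # portion of frames with voicing
          }

      # e.g. a six-frame toy track with two unvoiced frames:
      print(variability_measures([110, 115, np.nan, 120, np.nan, 118],
                                 [1500, 1550, np.nan, 1600, np.nan, 1580]))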

  20. Generating Units

    Data.gov (United States)

    Department of Homeland Security — Generating Units are any combination of physically connected generators, reactors, boilers, combustion turbines, and other prime movers operated together to produce...