WorldWideScience

Sample records for total cpu time

  1. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power modelling is fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...

  2. Improvement of CPU time of Linear Discriminant Function based on MNM criterion by IP

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2014-05-01

Full Text Available Revised IP-OLDF (optimal linear discriminant function by integer programming) is a linear discriminant function that minimizes the number of misclassifications (NM) of training samples by integer programming (IP). However, IP requires a large computation (CPU) time. In this paper, a method to reduce CPU time by using linear programming (LP) is proposed. In the first phase, Revised LP-OLDF is applied to all cases, and all cases are categorized into two groups: those that are classified correctly and those that are not classified by support vectors (SVs). In the second phase, Revised IP-OLDF is applied to the cases misclassified by SVs. This method is called Revised IPLP-OLDF. In this research, it is evaluated whether the NM of Revised IPLP-OLDF is a good estimate of the minimum number of misclassifications (MNM) obtained by Revised IP-OLDF. Four kinds of real data—Iris data, Swiss bank note data, student data, and CPD data—are used as training samples. Four kinds of 20,000 re-sampled cases generated from these data are used as the evaluation samples. There are a total of 149 models over all combinations of independent variables of these data. The NMs and CPU times of the 149 models are compared between Revised IPLP-OLDF and Revised IP-OLDF. The following results are obtained: (1) Revised IPLP-OLDF significantly improves CPU time. (2) For the training samples, all 149 NMs of Revised IPLP-OLDF are equal to the MNM of Revised IP-OLDF. (3) For the evaluation samples, most NMs of Revised IPLP-OLDF are equal to the NM of Revised IP-OLDF. (4) The generalization abilities of both discriminant functions are concluded to be high, because the differences between the error rates of the training and evaluation samples are almost within 2%. Therefore, Revised IPLP-OLDF is recommended for the analysis of big data instead of Revised IP-OLDF. Next, Revised IPLP-OLDF is compared with LDF and logistic regression by 100-fold cross validation using 100 re-sampling samples. Means of error rates of

  3. CPU time reduction strategies for the Lambda modes calculation of a nuclear power reactor

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, V.; Garayoa, J.; Hernandez, V. [Universidad Politecnica de Valencia (Spain). Dept. de Sistemas Informaticos y Computacion; Navarro, J.; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Dept. de Ingenieria Quimica y Nuclear; Ginestar, D. [Universidad Politecnica de Valencia (Spain). Dept. de Matematica Aplicada

    1997-12-01

In this paper, we present two strategies to reduce the CPU time spent in the lambda modes calculation for a realistic nuclear power reactor. The discretization of the multigroup neutron diffusion equation has been made using a nodal collocation method, solving the associated eigenvalue problem with two different techniques: the Subspace Iteration Method and Arnoldi's Method. CPU time reduction is based on a coarse-grain parallelization approach together with a multistep algorithm to initialize the solution adequately. (author). 9 refs., 6 tabs.

  4. Enhanced round robin CPU scheduling with burst time based time quantum

    Science.gov (United States)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

Process scheduling is a very important function of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS), Round Robin (RR), Priority scheduling and Shortest Job First (SJF) algorithms. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm depends greatly on the chosen time-quantum value. In this paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance compared with conventional RR and its variants.
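The core of the ERRBTQ idea, computing the quantum from the burst times of the queued processes rather than using a fixed value, can be sketched in a few lines. The abstract does not give the exact formula, so the sketch below assumes the quantum is the mean remaining burst time of the ready queue, recomputed each round (an illustrative choice, not necessarily the paper's):

```python
from collections import deque

def round_robin_dynamic_quantum(bursts):
    """Round-robin scheduling where the time quantum is recomputed each
    round as the mean remaining burst time of the ready queue.
    (Illustrative quantum rule; the paper's exact ERRBTQ formula may differ.)"""
    remaining = dict(enumerate(bursts))   # pid -> remaining burst time
    ready = deque(remaining)              # ready queue of pids
    clock = 0
    completion = {}                       # pid -> completion time
    while ready:
        # dynamic quantum: mean remaining burst of the queued processes
        quantum = max(1, sum(remaining[p] for p in ready) // len(ready))
        for _ in range(len(ready)):
            pid = ready.popleft()
            run = min(quantum, remaining[pid])
            clock += run
            remaining[pid] -= run
            if remaining[pid] == 0:
                completion[pid] = clock
            else:
                ready.append(pid)         # not finished: back of the queue
    return completion

# three processes with burst times 10, 4, 7, all arriving at t=0
print(round_robin_dynamic_quantum([10, 4, 7]))  # → {1: 11, 2: 18, 0: 21}
```

Because the quantum tracks the remaining work, short processes finish in their first turn while a fixed small quantum would force many more context switches.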

  5. Design improvement of FPGA and CPU based digital circuit cards to solve timing issues

    International Nuclear Information System (INIS)

    Lee, Dongil; Lee, Jaeki; Lee, Kwang-Hyun

    2016-01-01

The digital circuit cards installed at NPPs (Nuclear Power Plants) are mostly composed of a CPU (Central Processing Unit) and a PLD (Programmable Logic Device; this includes FPGAs (Field Programmable Gate Arrays) and CPLDs (Complex Programmable Logic Devices)). This structure is typical for digital circuit cards and poses no problem in itself. However, signal delay causes many problems when various ICs (Integrated Circuits) and several circuit cards are connected to the bus of the backplane. This paper suggests a structure to improve the bus signal timing in a circuit card consisting of a CPU and an FPGA. Nowadays, as the structure of circuit cards has become complex and large amounts of data are communicated at high speed through the bus, data integrity is the most important issue. The conventional design does not consider signal delay and synchronicity, which causes many problems in data processing. In order to solve these problems, it is important to isolate the bus controller from the CPU and keep the signal delay constant by using a PLD.

  6. Design improvement of FPGA and CPU based digital circuit cards to solve timing issues

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dongil; Lee, Jaeki; Lee, Kwang-Hyun [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

The digital circuit cards installed at NPPs (Nuclear Power Plants) are mostly composed of a CPU (Central Processing Unit) and a PLD (Programmable Logic Device; this includes FPGAs (Field Programmable Gate Arrays) and CPLDs (Complex Programmable Logic Devices)). This structure is typical for digital circuit cards and poses no problem in itself. However, signal delay causes many problems when various ICs (Integrated Circuits) and several circuit cards are connected to the bus of the backplane. This paper suggests a structure to improve the bus signal timing in a circuit card consisting of a CPU and an FPGA. Nowadays, as the structure of circuit cards has become complex and large amounts of data are communicated at high speed through the bus, data integrity is the most important issue. The conventional design does not consider signal delay and synchronicity, which causes many problems in data processing. In order to solve these problems, it is important to isolate the bus controller from the CPU and keep the signal delay constant by using a PLD.

  7. Energy consumption optimization of the total-FETI solver by changing the CPU frequency

    Science.gov (United States)

    Horak, David; Riha, Lubomir; Sojka, Radim; Kruzik, Jakub; Beseda, Martin; Cermak, Martin; Schuchart, Joseph

    2017-07-01

The energy consumption of supercomputers is one of the critical problems for the upcoming exascale supercomputing era. Awareness of power and energy consumption is required on both the software and hardware sides. This paper deals with the energy consumption evaluation of Finite Element Tearing and Interconnect (FETI) based solvers of linear systems, an established method for solving real-world engineering problems. We have evaluated the effect of the CPU frequency on the energy consumption of the FETI solver using a linear elasticity 3D cube synthetic benchmark. For this problem, we have evaluated the effect of frequency tuning on the energy consumption of the essential processing kernels of the FETI method. The paper provides results for two types of frequency tuning: (1) static tuning and (2) dynamic tuning. For the static tuning experiments, the frequency is set before execution and kept constant during runtime. For dynamic tuning, the frequency is changed during program execution to adapt the system to the actual needs of the application. The paper shows that static tuning brings up to 12% energy savings compared to the default CPU settings (the highest clock rate). Dynamic tuning improves this further by up to 3%.
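The reason static down-tuning can save energy is that dynamic power grows super-linearly with frequency while runtime shrinks only linearly, so the minimum-energy frequency sits below the maximum clock. A toy model makes this concrete (all constants here are illustrative assumptions, not measurements from the paper):

```python
def energy_joules(freq_ghz, work_gcycles, p_static_w=10.0, c=2.0):
    """Toy CPU energy model: runtime = work / frequency, power = a constant
    static part plus a dynamic part growing ~ f^3 (voltage scales with
    frequency). All parameter values are illustrative assumptions."""
    runtime_s = work_gcycles / freq_ghz
    power_w = p_static_w + c * freq_ghz ** 3
    return power_w * runtime_s

# sweep 1.0 .. 2.5 GHz: the minimum-energy point lies below the top clock,
# which is why statically down-tuning the frequency saves energy
freqs = [1.0 + 0.1 * i for i in range(16)]
best = min(freqs, key=lambda f: energy_joules(f, work_gcycles=100.0))
print(round(best, 1))  # → 1.4
```

Running at the top clock (2.5 GHz here) finishes sooner but burns more total joules than the energy-optimal point; dynamic tuning exploits the same trade-off per program phase instead of globally.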

  8. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    OpenAIRE

    Hiienkari, Markus; Teittinen, Jukka; Koskinen, Lauri; Turnquist, Matthew; Mäkipää, Jani; Rantala, Arto; Sopanen, Matti; Kaltiokallio, Mikko

    2015-01-01

    To minimize energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation at this region is challenging due to device and environment variations, and resulting performance may not be adequate to all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable ...

  9. An FPGA Based Multiprocessing CPU for Beam Synchronous Timing in CERN's SPS and LHC

    CERN Document Server

    Ballester, F J; Gras, J J; Lewis, J; Savioz, J J; Serrano, J

    2003-01-01

The Beam Synchronous Timing system (BST) will be used around the LHC and its injector, the SPS, to broadcast timing messages and synchronize actions with the beam in different receivers. To achieve beam synchronization, the BST Master card encodes messages using the bunch clock, with a nominal value of 40.079 MHz for the LHC. These messages are produced by a set of tasks every revolution period, which is every 89 μs for the LHC and every 23 μs for the SPS, therefore imposing a hard real-time constraint on the system. To achieve determinism, the BST Master uses a dedicated CPU inside its main Field Programmable Gate Array (FPGA) featuring zero-delay hardware task switching and a reduced instruction set. This paper describes the BST Master card, stressing the main FPGA design, as well as the associated software, including the LynxOS driver and the tailor-made assembler.

  10. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    Directory of Open Access Journals (Sweden)

    Markus Hiienkari

    2015-04-01

    Full Text Available To minimize energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation at this region is challenging due to device and environment variations, and resulting performance may not be adequate to all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable operation with minimal safety margins while maximizing performance and energy efficiency at a given operating point. Measurements show minimum energy of 3.15 pJ/cyc at 400 mV, which corresponds to 39% energy saving compared to operation based on static signoff timing.

  11. Using the CPU and GPU for real-time video enhancement on a mobile computer

    CSIR Research Space (South Africa)

    Bachoo, AK

    2010-09-01

Full Text Available In this paper, the current advances in mobile CPU and GPU hardware are used to implement video enhancement algorithms in a new way on a mobile computer. Both the CPU and GPU are used effectively to achieve real-time performance for complex image enhancement...

  12. Interactive dose shaping - efficient strategies for CPU-based real-time treatment planning

    International Nuclear Information System (INIS)

    Ziegenhein, P; Kamerling, C P; Oelfke, U

    2014-01-01

    Conventional intensity modulated radiation therapy (IMRT) treatment planning is based on the traditional concept of iterative optimization using an objective function specified by dose volume histogram constraints for pre-segmented VOIs. This indirect approach suffers from unavoidable shortcomings: i) The control of local dose features is limited to segmented VOIs. ii) Any objective function is a mathematical measure of the plan quality, i.e., is not able to define the clinically optimal treatment plan. iii) Adapting an existing plan to changed patient anatomy as detected by IGRT procedures is difficult. To overcome these shortcomings, we introduce the method of Interactive Dose Shaping (IDS) as a new paradigm for IMRT treatment planning. IDS allows for a direct and interactive manipulation of local dose features in real-time. The key element driving the IDS process is a two-step Dose Modification and Recovery (DMR) strategy: A local dose modification is initiated by the user which translates into modified fluence patterns. This also affects existing desired dose features elsewhere which is compensated by a heuristic recovery process. The IDS paradigm was implemented together with a CPU-based ultra-fast dose calculation and a 3D GUI for dose manipulation and visualization. A local dose feature can be implemented via the DMR strategy within 1-2 seconds. By imposing a series of local dose features, equal plan qualities could be achieved compared to conventional planning for prostate and head and neck cases within 1-2 minutes. The idea of Interactive Dose Shaping for treatment planning has been introduced and first applications of this concept have been realized.

  13. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

Modeling and simulating the neural structures which make up our central nervous system is instrumental in deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under
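The bi-fixed-step idea, integrating with a coarse step while the dynamics are smooth and switching to a fine step where they are stiff or near threshold, can be illustrated on the simplest of the three models, the LIF neuron. The sketch below is a strong simplification of the method described above; the switching rule and all constants are illustrative assumptions, not the paper's:

```python
def lif_bi_fixed_step(i_ext, t_end=0.05, dt_coarse=1e-3, dt_fine=1e-4,
                      tau=0.01, v_rest=-0.065, v_th=-0.050, r=1e7):
    """Time-driven leaky integrate-and-fire simulation sketching the
    bi-fixed-step idea: a coarse Euler step while the membrane potential
    is far from threshold, a fine step close to it."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_end:
        # fine step once within 20% of the resting-to-threshold gap
        dt = dt_fine if (v_th - v) < 0.2 * (v_th - v_rest) else dt_coarse
        # forward-Euler update of tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r * i_ext) / tau
        t += dt
        if v >= v_th:              # threshold crossing: spike and reset
            spikes.append(t)
            v = v_rest
    return spikes

# a constant 2 nA input drives the neuron above threshold repeatedly
spikes = lif_bi_fixed_step(i_ext=2e-9)
print(len(spikes))
```

Near the threshold the membrane trajectory is steep and the spike time matters, so the fine step buys accuracy exactly where it pays; everywhere else the coarse step keeps the time-driven loop cheap.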

  14. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960's. This tray is a 'dual-core' server. This means it effectively has two CPUs in it (eg. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, to help dissipate the heat.

  15. Online real-time reconstruction of adaptive TSENSE with commodity CPU / GPU hardware

    DEFF Research Database (Denmark)

    Roujol, Sebastien; de Senneville, Baudouin; Vahala, E.

    2009-01-01

    A real-time reconstruction for adaptive TSENSE is presented that is optimized for MR-guidance of interventional procedures. The proposed method allows high frame-rate imaging with low image latencies, even when large coil arrays are employed and can be implemented on affordable commodity hardware....

  16. Online real-time reconstruction of adaptive TSENSE with commodity CPU / GPU hardware

    DEFF Research Database (Denmark)

    Roujol, Sebastien; de Senneville, Baudouin Denis; Vahalla, Erkki

    2009-01-01

    Adaptive temporal sensitivity encoding (TSENSE) has been suggested as a robust parallel imaging method suitable for MR guidance of interventional procedures. However, in practice, the reconstruction of adaptive TSENSE images obtained with large coil arrays leads to long reconstruction times...... image sizes used in interventional imaging (128 × 96, 16 channels, sensitivity encoding (SENSE) factor 2-4), the pipeline is able to reconstruct adaptive TSENSE images with image latencies below 90 ms at frame rates of up to 40 images/s, rendering the MR performance in practice limited...... by the constraints of the MR acquisition. Its performance is demonstrated by the online reconstruction of in vivo MR images for rapid temperature mapping of the kidney and for cardiac catheterization....

  17. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    Science.gov (United States)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method, and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes into the discretized RTE. The NWF method is compared, in terms of the computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculation of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general SOU. In contrast, compared with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% faster and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  18. A heterogeneous system based on GPU and multi-core CPU for real-time fluid and rigid body simulation

    Science.gov (United States)

    da Silva Junior, José Ricardo; Gonzalez Clua, Esteban W.; Montenegro, Anselmo; Lage, Marcos; Dreux, Marcelo de Andrade; Joselli, Mark; Pagliosa, Paulo A.; Kuryla, Christine Lucille

    2012-03-01

    Computational fluid dynamics in simulation has become an important field not only for physics and engineering areas but also for simulation, computer graphics, virtual reality and even video game development. Many efficient models have been developed over the years, but when many contact interactions must be processed, most models present difficulties or cannot achieve real-time results when executed. The advent of parallel computing has enabled the development of many strategies for accelerating the simulations. Our work proposes a new system which uses some successful algorithms already proposed, as well as a data structure organisation based on a heterogeneous architecture using CPUs and GPUs, in order to process the simulation of the interaction of fluids and rigid bodies. This successfully results in a two-way interaction between them and their surrounding objects. As far as we know, this is the first work that presents a computational collaborative environment which makes use of two different paradigms of hardware architecture for this specific kind of problem. Since our method achieves real-time results, it is suitable for virtual reality, simulation and video game fluid simulation problems.

  19. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU.

    Science.gov (United States)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25 s/excitation source.

  20. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    Science.gov (United States)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.

  1. First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC

    CERN Document Server

    Halyo, V.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-04-07

Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon...

  2. CPU time optimization and precise adjustment of the Geant4 physics parameters for a VARIAN 2100 C/D gamma radiotherapy linear accelerator simulation using GAMOS

    Science.gov (United States)

    Arce, Pedro; Lagares, Juan Ignacio

    2018-02-01

We have verified the GAMOS/Geant4 simulation model of a 6 MV VARIAN Clinac 2100 C/D linear accelerator by the procedure of adjusting the initial beam parameters to fit the percentage depth dose and cross-profile dose experimental data at different depths in a water phantom. Thanks to the use of a wide range of field sizes, from 2 × 2 cm² to 40 × 40 cm², a small phantom voxel size and high statistics, fine precision in the determination of the beam parameters has been achieved. This precision has allowed us to make a thorough study of the different physics models and parameters that Geant4 offers. The three Geant4 electromagnetic physics sets of models, i.e. Standard, Livermore and Penelope, have been compared to the experiment, testing the four different models of angular bremsstrahlung distributions as well as the three available multiple-scattering models, and optimizing the most relevant Geant4 electromagnetic physics parameters. Before the fitting, a comprehensive CPU time optimization was done, using several of the Geant4 efficiency improvement techniques plus a few more developed in GAMOS.

  3. CPU and GPU (Cuda Template Matching Comparison

    Directory of Open Access Journals (Sweden)

    Evaldas Borcovas

    2014-05-01

Full Text Available Image processing, computer vision and other complicated optical information processing algorithms require large resources. It is often desired to execute such algorithms in real time, which is hard to achieve with a single CPU. NVidia's CUDA technology enables the programmer to use the GPU resources of the computer. The current research was made with an Intel Pentium Dual-Core T4500 2.3 GHz processor with 4 GB DDR3 RAM (CPU I) and an NVidia GeForce GT320M CUDA-compatible graphics card (GPU I), and with an Intel Core i5-2500K 3.3 GHz processor with 4 GB DDR3 RAM (CPU II) and an NVidia GeForce GTX 560 CUDA-compatible graphics card (GPU II). Additional libraries, OpenCV 2.1 and the CUDA-compatible OpenCV 2.4.0, were used for the testing. The main tests were made with the standard MatchTemplate function from the OpenCV libraries. The algorithm uses a main image and a template, and the influence of these factors was tested: the main image and the template were resized, and the algorithm's computing time and performance in Gtpix/s were measured. According to the information obtained from the research, GPU computing on the hardware mentioned above is up to 24 times faster when processing a large amount of information. When the images are small, the performance of the CPU and GPU is not significantly different. The choice of template size influences the CPU computation. The difference in computing time between the GPUs can be explained by the number of cores they have.
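For reference, the operation benchmarked above, MatchTemplate, slides a template over the main image and scores each placement, which is why the computing time grows with both the image and the template size. A minimal NumPy sketch of the sum-of-squared-differences variant (what OpenCV computes with the TM_SQDIFF method) looks like this:

```python
import numpy as np

def match_template_ssd(image, template):
    """Brute-force template matching by sum of squared differences (SSD),
    a CPU reference sketch of what OpenCV's matchTemplate does with
    TM_SQDIFF. The OpenCV C++ and CUDA implementations are far faster."""
    ih, iw = image.shape
    th, tw = template.shape
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            scores[y, x] = np.sum((patch - template) ** 2)
    # the best match is the location of the minimum SSD score
    return np.unravel_index(np.argmin(scores), scores.shape)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
tpl = img[20:28, 30:38].copy()      # plant the template at row 20, col 30
y, x = match_template_ssd(img, tpl)
print(int(y), int(x))               # → 20 30
```

Every pixel of every placement is touched independently, which is exactly the data-parallel structure that maps well onto GPU cores and explains the speedups reported above.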

  4. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    International Nuclear Information System (INIS)

    Yoon, Jong Seon; Choi, Hyoung Gwon; Jeon, Byoung Jin

    2017-01-01

The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated for large problems, where the total memory required for the computation exceeded the cache memory. In contrast, the GPU performed better as the mesh size increased because of its latency-hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that of a single CPU. Furthermore, the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver when parallel computing was conducted on the GPU.

  5. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jong Seon; Choi, Hyoung Gwon [Seoul Nat’l Univ. of Science and Technology, Seoul (Korea, Republic of); Jeon, Byoung Jin [Yonsei Univ., Seoul (Korea, Republic of)

    2017-02-15

The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated for large problems, where the total memory required for the computation exceeded the cache memory. In contrast, the GPU performed better as the mesh size increased because of its latency-hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that of a single CPU. Furthermore, the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver when parallel computing was conducted on the GPU.
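The "colored" ordering mentioned above is what makes Gauss–Seidel amenable to GPUs: in a red-black two-coloring of the grid, every neighbor of a cell has the other color, so all cells of one color can be updated simultaneously. A minimal NumPy sketch for a 2D Laplace (steady heat conduction) problem, with an illustrative grid size and boundary values:

```python
import numpy as np

def red_black_gauss_seidel(n=32, iters=500):
    """Red-black (colored) Gauss-Seidel sweeps for the 2D Laplace equation
    on an n x n grid with Dirichlet boundaries (top edge held at 1.0, the
    other edges at 0.0). Within one color no cell depends on another cell
    of the same color, so each half-sweep is fully parallel."""
    u = np.zeros((n, n))
    u[0, :] = 1.0                          # hot top edge
    for _ in range(iters):
        for color in (0, 1):               # red half-sweep, then black
            for i in range(1, n - 1):
                # interior columns j in this row with (i + j) % 2 == color
                j = np.arange(2 - ((i + color) % 2), n - 1, 2)
                u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                  u[i, j - 1] + u[i, j + 1])
    return u

u = red_black_gauss_seidel()
# at the center of the square the converged solution is close to 0.25
print(abs(u[16, 16] - 0.25) < 0.05)        # → True
```

Plain Gauss–Seidel updates cells in sequence, each using its just-updated neighbors, which serializes the sweep; the two-color ordering recovers the parallelism (each half-sweep is one vectorized, GPU-friendly update) while keeping the faster-than-Jacobi convergence reported above.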

  6. Timing the total reflection of light

    International Nuclear Information System (INIS)

    Chauvat, Dominique; Bonnet, Christophe; Dunseath, Kevin; Emile, Olivier; Le Floch, Albert

    2005-01-01

We have identified for the first time the absolute delay at total reflection, envisioned by Newton. We show that there are in fact two divergent Wigner delays, depending on the polarisation of the incident light. These measurements give new insight into the passage from total reflection to refraction.

  7. Total sleep time severely drops during adolescence.

    Directory of Open Access Journals (Sweden)

    Damien Leger

    Full Text Available UNLABELLED: Restricted sleep duration among young adults and adolescents has been shown to increase the risk of morbidities such as obesity, diabetes or accidents. However, there are few epidemiological studies on normal total sleep time (TST) in representative groups of teenagers that would provide normative data. PURPOSE: To explore perceived total sleep time on schooldays (TSTS) and non-schooldays (TSTN) and the prevalence of sleep-initiating insomnia among a nationally representative sample of teenagers. METHODS: Data from 9,251 children aged 11 to 15 years old, 50.7% of whom were boys, were analyzed as part of the cross-national 2011 HBSC study. Self-completion questionnaires were administered in classrooms. Estimates of TSTS and TSTN (week-ends and vacations) were calculated from a specifically designed sleep-habits report. Sleep deprivation was defined as a TSTN - TSTS difference >2 hours. Sleep-initiating insomnia was assessed according to the International Classification of Sleep Disorders (ICSD 2). Children who reported sleeping 7 hours or less per night were considered short sleepers. RESULTS: A serious drop of TST was observed between ages 11 and 15, both on schooldays (9 h 26 min vs. 7 h 55 min; p<0.001) and, to a lesser extent, on week-ends (10 h 17 min vs. 9 h 44 min; p<0.001). Sleep deprivation concerned 16.0% of children aged 11 vs. 40.5% of those aged 15 (p<0.001). Too-short sleep was reported by 2.6% of the 11-year-olds vs. 24.6% of the 15-year-olds (p<0.001). CONCLUSION: Despite the obvious need for sleep in adolescence, TST drastically decreases with age among children from 11 to 15, creating a significant sleep debt that increases with age.

  8. ITCA: Inter-Task Conflict-Aware CPU accounting for CMP

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla Almeida, Francisco Javier; Gioiosa, Roberto; Valero Cortés, Mateo

    2010-01-01

    Chip-MultiProcessors (CMP) introduce complexities when accounting CPU utilization to processes because the progress done by a process during an interval of time highly depends on the activity of the other processes it is coscheduled with. We propose a new hardware CPU accounting mechanism to improve the accuracy when measuring the CPU utilization in CMPs and compare it with previous accounting mechanisms. Our results show that currently known mechanisms lead to a 16% average error when it com...

  9. First evaluation of the CPU, GPGPU and MIC architectures for real time particle tracking based on Hough transform at the LHC

    International Nuclear Information System (INIS)

    Halyo, V.; LeGresley, P.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-01-01

    Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform will be evaluated for the first time on multi-core Intel i7-3770 and Intel Xeon E5-2697v2 CPUs, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi 7120 coprocessor. Preliminary time performance will be presented
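    The Hough transform at the heart of the evaluated tracking algorithm can be illustrated with a minimal voting sketch (a generic line-space version, not the CMS/ATLAS code): each hit votes for every (theta, rho) bin of a line it could lie on, and peaks in the accumulator identify track candidates. The per-hit, per-bin independence of the votes is what makes the algorithm attractive for GPUs and wide-vector CPUs.

    ```python
    import math

    def hough_lines(points, n_theta=180, rho_max=100.0, n_rho=200):
        """Vote for straight lines x*cos(theta) + y*sin(theta) = rho.
        Returns the accumulator bin with the most votes and the full
        accumulator (a dict keyed by (theta_index, rho_index))."""
        acc = {}
        for x, y in points:
            for t in range(n_theta):
                theta = math.pi * t / n_theta
                rho = x * math.cos(theta) + y * math.sin(theta)
                r = int(round((rho + rho_max) * (n_rho - 1) / (2 * rho_max)))
                if 0 <= r < n_rho:
                    acc[(t, r)] = acc.get((t, r), 0) + 1
        return max(acc, key=acc.get), acc
    ```

    For hits lying on one line, all of them vote into the same bin for that line's (theta, rho), so the peak height equals the number of hits on the track.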

  10. STEM image simulation with hybrid CPU/GPU programming

    International Nuclear Information System (INIS)

    Yao, Y.; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-01-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  11. STEM image simulation with hybrid CPU/GPU programming

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Y., E-mail: yaoyuan@iphy.ac.cn; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-07-15

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  12. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    Full Text Available With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high-performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used for auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multi-CPU/GPU computing. For CPU parallel imaging, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers removed, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by 270 times relative to a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw-data generation rate.

  13. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high-performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used for auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multi-CPU/GPU computing. For CPU parallel imaging, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers removed, but various optimization strategies are also applied, such as streaming and parallel pipelining. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging by 270 times relative to a single-core CPU and realizes real-time imaging, in that the imaging rate outperforms the raw-data generation rate.
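    The inner kernel that such SAR imaging pipelines parallelize across range lines — range (pulse) compression — is essentially a correlation of the received echo with a replica of the transmitted chirp. A toy real-valued, time-domain version is shown below; production code works on complex data and uses FFT-based convolution, and these names are illustrative:

    ```python
    def matched_filter(signal, chirp):
        """Correlate the received signal with the transmitted replica.
        The peak of the output marks the round-trip delay of a target;
        every lag (and every range line) is independent, which is why
        this kernel vectorizes on AVX units and parallelizes on GPUs."""
        n, m = len(signal), len(chirp)
        out = []
        for lag in range(n - m + 1):
            acc = 0.0
            for k in range(m):
                acc += signal[lag + k] * chirp[k]
            out.append(acc)
        return out
    ```

    The position of the maximum output sample gives the target's delay in samples.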

  14. GeantV: from CPU to accelerators

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPUs, Intel® Xeon Phi, Atom or ARM can no longer be ignored by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been mainly engineered for CPUs with vector units, but a bridge to arbitrary accelerators was foreseen from the early stages. A software layer of architecture/technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, and also to formalize generic computation kernels that transparently use library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it insulates the core application and algorithms from the technology layer. This keeps our application maintainable in the long term and resilient to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.
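    The backend idea — one kernel written once against an abstract arithmetic interface, with scalar and vector implementations supplied per architecture — can be sketched as follows. All names here are illustrative, not GeantV's API:

    ```python
    def scale_kernel(backend, basket, factor):
        """A generic 'kernel' written once against a minimal backend
        interface; the backend decides how the basket of values is
        actually processed (element loop vs whole-basket operation)."""
        return backend.mul(basket, factor)

    class ScalarBackend:
        """Plain CPU code path: one element at a time."""
        def mul(self, xs, f):
            out = []
            for x in xs:
                out.append(x * f)
            return out

    class VectorBackend:
        """Stand-in for a SIMD/accelerator path: whole basket at once
        (a real backend would dispatch to Vc, CUDA, or intrinsics)."""
        def mul(self, xs, f):
            return [x * f for x in xs]
    ```

    The portability claim is that both backends produce identical results for the same kernel, so the physics code never changes when the technology layer does.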

  15. GeantV: from CPU to accelerators

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Arora, A; Apostolakis, J; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S; Lima, G; Duhem, L

    2016-01-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are being targeted first, resources such as GPGPUs, Intel® Xeon Phi, Atom or ARM can no longer be ignored by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been mainly engineered for CPUs with vector units, but a bridge to arbitrary accelerators was foreseen from the early stages. A software layer of architecture/technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, and also to formalize generic computation kernels that transparently use library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it insulates the core application and algorithms from the technology layer. This keeps our application maintainable in the long term and resilient to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs. (paper)

  16. ITCA: Inter-Task Conflict-Aware CPU accounting for CMPs

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla, Francisco; Gioiosa, Roberto; Buyuktosunoglu, Alper; Valero Cortés, Mateo

    2009-01-01

    Chip-MultiProcessor (CMP) architectures are becoming more and more popular as an alternative to the traditional processors that only extract instruction-level parallelism from an application. CMPs introduce complexities when accounting CPU utilization. This is due to the fact that the progress done by an application during an interval of time highly depends on the activity of the other applications it is co-scheduled with. In this paper, we identify how an inaccurate measurement of the CPU ut...

  17. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. The pixels of moving targets in HD video taken by a UAV are always in a minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents running it at higher frame resolutions. Hence, to solve the problem of moving-target detection in UAV video, we propose a heterogeneous CPU-GPU moving-target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time operation.
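    The two steps described — background registration followed by frame differencing — can be sketched as follows. Here camera motion is reduced to a known integer column shift, a simplifying assumption standing in for the paper's registration step:

    ```python
    def moving_target_mask(prev_frame, frame, shift, thresh):
        """Flag moving pixels: align the previous frame to the current one
        by `shift` columns (toy background registration), then mark pixels
        whose intensity change exceeds `thresh` (frame differencing).
        Frames are lists of rows of numbers; the per-pixel independence is
        what a GPU implementation would exploit."""
        h, w = len(frame), len(frame[0])
        mask = [[0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                jp = j - shift                  # where this pixel was previously
                if 0 <= jp < w and abs(frame[i][j] - prev_frame[i][jp]) > thresh:
                    mask[i][j] = 1
        return mask
    ```

    Without the registration step, every background pixel would differ between frames and the small true targets would be lost in the noise.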

  18. PROCESS INNOVATION: HOLISTIC SCENARIOS TO REDUCE TOTAL LEAD TIME

    Directory of Open Access Journals (Sweden)

    Alin POSTEUCĂ

    2015-11-01

    Full Text Available The globalization of markets requires the continuous development of holistic business scenarios to ensure the flexibility needed to satisfy customers. Continuous improvement of the supply chain entails continuous improvement of material and product lead times and flows, of stocks of materials and finished products, and an increase in the number of nearby suppliers wherever possible. The contribution of our study is to present holistic scenarios for improving and innovating total lead time through supply chain policy.

  19. Whole blood coagulation time, haematocrit, haemoglobin and total ...

    African Journals Online (AJOL)

    The study was carried out to determine the values of whole blood coagulation time (WBCT), haematocrit (HM), haemaglobin (HB) and total protein (TP) of one hundred and eighteen apparently healthy turkeys reared under an extensive management system in Zaria. The mean values for WBCT, HM, HB and TP were 1.12 ...

  20. Novel crystal timing calibration method based on total variation

    Science.gov (United States)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component was used to obtain the crystal-level timing calibration values. In contrast with other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
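    The formulation described — a linear fit regularized by a total-variation penalty — can be sketched with a smoothed-TV gradient descent. The identity forward model (calibration offsets observed directly, with noise) and all parameter values below are illustrative assumptions, not the paper's solver:

    ```python
    import math

    def tv_calibrate(b, lam=0.1, eps=1e-3, lr=0.05, steps=2000):
        """Minimize 0.5*sum((x - b)^2) + lam*sum(sqrt(d^2 + eps)) over the
        per-crystal offsets x, where d are differences of neighboring
        offsets. The eps term smooths |d| so plain gradient descent
        applies; the TV penalty keeps the solution piecewise-smooth."""
        x = list(b)
        for _ in range(steps):
            g = [xi - bi for xi, bi in zip(x, b)]      # data-fidelity gradient
            for i in range(len(x) - 1):                # smoothed-TV gradient
                d = x[i + 1] - x[i]
                s = d / math.sqrt(d * d + eps)
                g[i] -= lam * s
                g[i + 1] += lam * s
            x = [xi - lr * gi for xi, gi in zip(x, g)]
        return x
    ```

    On a noisy piecewise-constant input, the TV term suppresses the noise while preserving the jump between regions, which is the behavior the calibration relies on.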

  1. Online performance evaluation of RAID 5 using CPU utilization

    Science.gov (United States)

    Jin, Hai; Yang, Hua; Zhang, Jiangling

    1998-09-01

    Redundant arrays of independent disks (RAID) technology is an efficient way to solve the bottleneck between CPU processing ability and the I/O subsystem. From the system point of view, the most important metric of online performance is CPU utilization. This paper first derives the CPU utilization of a system connected to a RAID level 5 subsystem using a statistical averaging method. The simulation results show that using multiple disks as an array to access data in parallel is an efficient way to enhance the online performance of a disk storage system, and that using high-end disk drives to compose the array is key to enhancing the online performance of the system.
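    The RAID level 5 organization these measurements rest on stripes data across the disks and maintains one XOR parity block per stripe (rotated across disks), so reads proceed in parallel and any single failed disk can be rebuilt. A minimal sketch of the parity arithmetic, separate from the paper's utilization model:

    ```python
    def xor_blocks(blocks):
        """XOR equal-length byte blocks together (RAID-5 parity)."""
        out = bytearray(len(blocks[0]))
        for b in blocks:
            for i, byte in enumerate(b):
                out[i] ^= byte
        return bytes(out)

    def rebuild(surviving_blocks, parity):
        """Reconstruct one lost data block: XOR of the survivors and the
        parity block recovers the missing block, since XOR is its own
        inverse."""
        return xor_blocks(surviving_blocks + [parity])
    ```

    Maintaining this parity on every write is part of the CPU/controller cost that an online performance model has to account for.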

  2. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.

    Directory of Open Access Journals (Sweden)

    Chun-Liang Lee

    Full Text Available The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection on various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphics processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that for random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.
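    The divide-and-distribute idea can be sketched as a two-stage scan: a cheap pre-filter touches every packet, and only suspicious packets reach the expensive exact matcher (run on the GPU in the paper). The first-two-byte signature index used here is an illustrative assumption, not the published pre-filtering algorithm:

    ```python
    def hybrid_scan(packets, signatures):
        """Two-stage inspection sketch. Stage 1 (cheap, CPU-side): flag a
        packet if any 2-byte window matches the first two bytes of some
        signature. Stage 2 (expensive, GPU-side in the paper): exact
        substring matching, but only on the filtered subset."""
        prefixes = {sig[:2] for sig in signatures}
        suspicious = [
            p for p in packets
            if any(p[i:i + 2] in prefixes for i in range(len(p) - 1))
        ]
        return [p for p in suspicious if any(sig in p for sig in signatures)]
    ```

    The pre-filter can produce false positives (packets sent on for no reason) but never false negatives, so the final result equals a full scan while most benign traffic skips the expensive stage.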

  3. Thermoelectric mini cooler coupled with micro thermosiphon for CPU cooling system

    International Nuclear Information System (INIS)

    Liu, Di; Zhao, Fu-Yun; Yang, Hong-Xing; Tang, Guang-Fa

    2015-01-01

    In the present study, a thermoelectric mini cooler coupled with a micro thermosiphon cooling system is proposed for CPU cooling. A mathematical model of heat transfer, based on a one-dimensional treatment of thermal and electric power, is first established for the thermoelectric module. Analytical results demonstrate the relationship between the maximal COP (coefficient of performance) and Qc and the figure of merit. Full-scale experiments were conducted to investigate the effects of thermoelectric operating voltage, heat-source power input, and the number of thermoelectric modules on the performance of the cooling system. Experimental results indicate that the cooling capacity increases with the thermoelectric operating voltage. The surface temperature of the CPU heat source increases linearly with power input, reaching a maximum of 70 °C at a prototype CPU power input equivalent to 84 W. Insulating the heat-source surface from the ambient air can prevent condensation caused by low surface temperature. In addition, the thermal performance of the cooling system is enhanced when the overall dimensions of the thermoelectric module match those of the CPU. This research could benefit the design of heat dissipation for electronic chips and CPUs. - Highlights: • A cooling system coupling a thermoelectric module with a loop thermosiphon is developed. • The thermoelectric module coupled with the loop thermosiphon achieves high heat-transfer efficiency. • A mathematical model of thermoelectric cooling is built. • An analysis of modeling results for design and experimental data is presented. • The influence of power input and operating voltage on the cooling system is investigated
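    The one-dimensional treatment mentioned typically reduces to the textbook thermoelectric-cooler relations: Peltier cooling at the cold junction, minus half the Joule heat and the conducted back-flow, with COP = Qc/P. A sketch using these standard equations (parameter values below are arbitrary illustrations, not the paper's module data):

    ```python
    def te_module_performance(alpha, R, K, I, t_c, t_h):
        """Standard 1-D thermoelectric-cooler model.
        alpha: Seebeck coefficient [V/K], R: electrical resistance [ohm],
        K: thermal conductance [W/K], I: current [A], t_c/t_h: cold/hot
        junction temperatures [K]. Returns (Q_c, P_in, COP)."""
        dt = t_h - t_c
        q_c = alpha * t_c * I - 0.5 * I * I * R - K * dt   # cooling power [W]
        p_in = alpha * dt * I + I * I * R                  # electrical input [W]
        return q_c, p_in, q_c / p_in                       # COP = Q_c / P_in
    ```

    Sweeping I in such a model reproduces the qualitative trend in the abstract: cooling power first rises with drive level, then Joule heating dominates.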

  4. A high performance image processing platform based on CPU-GPU heterogeneous cluster with parallel image reconstructions for micro-CT

    International Nuclear Information System (INIS)

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment using the message-passing interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU provides computation about 194 times faster than a single CPU, and that the CPU-GPU cluster provides computation about 46 times faster than the CPU cluster. These meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)

  5. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
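    The load-prediction dynamic scheduling idea can be sketched as follows: each chunk of work is split in proportion to the current per-device throughput estimates, which are then refreshed from the measured chunk times so the split adapts toward the true speeds. The device names and the constant "true" rates are illustrative assumptions:

    ```python
    def adaptive_split(n_chunks, chunk_work, true_rates, est_rates):
        """Simulate load-prediction dynamic scheduling across devices.
        Returns the per-chunk share history and the final rate estimates.
        `true_rates` stands in for real execution; an implementation
        would instead time each device's chunk."""
        est = dict(est_rates)
        history = []
        for _ in range(n_chunks):
            total = sum(est.values())
            shares = {d: chunk_work * r / total for d, r in est.items()}
            history.append(shares)
            # "measure" each chunk and refresh: observed rate = share / elapsed
            est = {d: shares[d] / (shares[d] / true_rates[d]) for d in est}
        return history, est
    ```

    After the first mispredicted chunk, the estimates match the true rates and both devices finish each subsequent chunk at the same time, which is the goal of the scheduler.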

  6. Total sitting time, leisure time physical activity and risk of hospitalization due to low back pain

    DEFF Research Database (Denmark)

    Balling, Mie; Holmberg, Teresa; Petersen, Christina B

    2018-01-01

    AIMS: This study aimed to test the hypotheses that a high total sitting time and vigorous physical activity in leisure time increase the risk of low back pain and herniated lumbar disc disease. METHODS: A total of 76,438 adults answered questions regarding their total sitting time and physical activity during leisure time in the Danish Health Examination Survey 2007-2008. Information on low back pain diagnoses up to 10 September 2015 was obtained from The National Patient Register. The mean follow-up time was 7.4 years. Data were analysed using Cox regression analysis with adjustment...... disc disease. However, moderate or vigorous physical activity, as compared to light physical activity, was associated with increased risk of low back pain (HR = 1.16, 95% CI: 1.03-1.30 and HR = 1.45, 95% CI: 1.15-1.83). Moderate, but not vigorous physical activity was associated with increased risk

  7. Improving the Performance of CPU Architectures by Reducing the Operating System Overhead (Extended Version

    Directory of Open Access Journals (Sweden)

    Zagan Ionel

    2016-07-01

    Full Text Available Predictable CPU architectures that run hard real-time tasks must execute them in isolation in order to provide timing-analyzable execution for real-time systems. The major problems for real-time operating systems stem from excessive jitter, introduced mainly through task switching, which can violate deadline requirements and, consequently, the predictability of hard real-time tasks. New requirements also arise for a real-time operating system used in mixed-criticality systems, where the execution of hard real-time applications requires timing predictability. The present article discusses several solutions to improve the performance of CPU architectures and eventually overcome operating-system overhead. This paper focuses on the innovative CPU implementation named nMPRA-MT, designed for small real-time applications. This implementation uses replication and remapping techniques for the program counter, general-purpose registers and pipeline registers, enabling multiple threads to share a single pipeline assembly line. In order to increase predictability, the proposed architecture partially removes hazard situations at the expense of larger execution latency per instruction.

  8. Time related total lactic acid bacteria population diversity and ...

    African Journals Online (AJOL)

    The total lactic acid bacterial community involved in the spontaneous fermentation of malted-cowpea-fortified cereal weaning food was investigated phenotypically and by a cultivation-independent method. A total of 74 of the 178 isolated strains were Lactobacillus plantarum, 32 were Pediococcus acidilactici and over 60% ...

  9. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.
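    The Jacobi-transformation sequence mentioned can be illustrated with a classical cyclic Jacobi sweep on a symmetric matrix: each rotation zeroes one off-diagonal pair, and repeated sweeps drive the matrix toward diagonal form, leaving the eigenvalues on the diagonal. This is a textbook sketch of the underlying mathematics, not the ported GPU kernel:

    ```python
    import math

    def jacobi_sweep(a):
        """One cyclic sweep of Jacobi rotations on a symmetric matrix
        (list of lists, modified in place). Each (p, q) rotation zeroes
        a[p][q]; later rotations partially refill it, so several sweeps
        are needed for convergence."""
        n = len(a)
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-30:
                    continue
                theta = 0.5 * math.atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):                  # rows p and q
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
                for k in range(n):                  # columns p and q
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
        return a
    ```

    Because each rotation touches only two rows and two columns, disjoint (p, q) pairs can be rotated concurrently, which is what makes the sequence amenable to a GPU kernel.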

  10. Criterion-based laparoscopic training reduces total training time

    NARCIS (Netherlands)

    Brinkman, W.M.; Buzink, S.N.; Alevizos, L.; De Hingh, I.H.J.T.; Jakimowicz, J.J.

    2011-01-01

    The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these types of training in terms of training outcome and time efficiency. Methods: During four training sessions within 1 week (one session per day), 34 medical interns

  11. Real-time analysis of total, elemental, and total speciated mercury

    International Nuclear Information System (INIS)

    Schlager, R.J.; Wilson, K.G.; Sappey, A.D.

    1995-01-01

    ADA Technologies, Inc., is developing a continuous emissions monitoring system that measures the concentrations of mercury in flue gas. Mercury is emitted as an air pollutant from a number of industrial processes. The largest contributors of these emissions are coal and oil combustion, municipal waste combustion, medical waste combustion, and the thermal treatment of hazardous materials. It is difficult, time consuming, and expensive to measure mercury emissions using current testing methods. Part of the difficulty lies in the fact that mercury is emitted from sources in several different forms, such as elemental mercury and mercuric chloride. The ADA analyzer measures these emissions in real time, thus providing a number of advantages over existing test methods: (1) it will provide a real-time measure of emission rates, (2) it will assure facility operators, regulators, and the public that emissions control systems are working at peak efficiency, and (3) it will provide information as to the nature of the emitted mercury (elemental mercury or speciated compounds). This update presents an overview of the CEM and describes features of key components of the monitoring system--the mercury detector, a mercury species converter, and the analyzer calibration system

  12. Real-time analysis of total, elemental, and total speciated mercury

    Energy Technology Data Exchange (ETDEWEB)

    Schlager, R.J.; Wilson, K.G.; Sappey, A.D. [ADA Technologies, Inc., Englewood, CO (United States)

    1995-11-01

    ADA Technologies, Inc., is developing a continuous emissions monitoring system that measures the concentrations of mercury in flue gas. Mercury is emitted as an air pollutant from a number of industrial processes. The largest contributors of these emissions are coal and oil combustion, municipal waste combustion, medical waste combustion, and the thermal treatment of hazardous materials. It is difficult, time consuming, and expensive to measure mercury emissions using current testing methods. Part of the difficulty lies in the fact that mercury is emitted from sources in several different forms, such as elemental mercury and mercuric chloride. The ADA analyzer measures these emissions in real time, thus providing a number of advantages over existing test methods: (1) it will provide a real-time measure of emission rates, (2) it will assure facility operators, regulators, and the public that emissions control systems are working at peak efficiency, and (3) it will provide information as to the nature of the emitted mercury (elemental mercury or speciated compounds). This update presents an overview of the CEM and describes features of key components of the monitoring system--the mercury detector, a mercury species converter, and the analyzer calibration system.

  13. RURAL EXTENSION EPISTEMOLOGY AND THE TIME OF TOTAL EXTENSION

    Directory of Open Access Journals (Sweden)

    Silvio Calgaro Neto

    2016-09-01

    Full Text Available This article explores the field of knowledge related to rural extension. Three complementary perspectives are used as the theoretical strategy for presenting this epistemological study. The first seeks to accomplish a brief archaeology of rural extension, identifying its remarkable historical passages. The second examines some theoretical models through the modern epistemological platform. Finally, the third presents a methodological proposal that contemplates these epistemic characteristics, relating them to the contemporary transformations observed in knowledge construction and technology transfer for rural development. Keywords: Total institutions. University.

  14. The Effect of NUMA Tunings on CPU Performance

    Science.gov (United States)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software.
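A minimal sketch of the process-to-CPU pinning that numactl (and, automatically, numad) performs, using Python's os.sched_setaffinity. This is an illustration only: it pins the calling process to a single core and is Linux-specific, whereas numactl can additionally bind memory allocations to a NUMA node, which this sketch does not attempt.

```python
import os

def pin_to_cpu(cpu_id):
    """Pin the calling process to a single CPU, similar in spirit to
    `numactl --physcpubind=<id>`.  Linux-only: on platforms without
    sched_setaffinity the function simply returns None."""
    if not hasattr(os, "sched_setaffinity"):
        return None
    os.sched_setaffinity(0, {cpu_id})   # 0 = this process
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    print(pin_to_cpu(0))   # {0} on Linux, None elsewhere
```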

  15. The Effect of NUMA Tunings on CPU Performance

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-01-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software. (paper)

  16. CPU and cache efficient management of memory-resident databases

    NARCIS (Netherlands)

    Pirk, H.; Funke, F.; Grund, M.; Neumann, T.; Leser, U.; Manegold, S.; Kemper, A.; Kersten, M.L.

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementations,

  17. CPU and Cache Efficient Management of Memory-Resident Databases

    NARCIS (Netherlands)

    H. Pirk (Holger); F. Funke; M. Grund; T. Neumann (Thomas); U. Leser; S. Manegold (Stefan); A. Kemper (Alfons); M.L. Kersten (Martin)

    2013-01-01

    htmlabstractMemory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current

  18. Criterion-based laparoscopic training reduces total training time

    OpenAIRE

    Brinkman, Willem M.; Buzink, Sonja N.; Alevizos, Leonidas; de Hingh, Ignace H. J. T.; Jakimowicz, Jack J.

    2011-01-01

    Introduction The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these types of training based on training outcome and time efficiency. Methods During four training sessions within 1 week (one session per day) 34 medical interns (no laparoscopic experience) practiced on two basic tasks on the Simbionix LAP Mentor virtual-reality (VR) simulator: ‘clipping and grasping’ and ‘cutting’. Group C (criterion-based) (N...

  19. Promise of a low power mobile CPU based embedded system in artificial leg control.

    Science.gov (United States)

    Hernandez, Robert; Zhang, Fan; Zhang, Xiaorong; Huang, He; Yang, Qing

    2012-01-01

    This paper presents the design and implementation of a low power embedded system using mobile processor technology (Intel Atom™ Z530 Processor) specifically tailored for a neural-machine interface (NMI) for artificial limbs. This embedded system effectively performs our previously developed NMI algorithm based on neuromuscular-mechanical fusion and phase-dependent pattern classification. The analysis shows that the NMI embedded system can meet real-time constraints with high accuracy in recognizing the user's locomotion mode. Our implementation utilizes the mobile processor efficiently, allowing a power consumption of 2.2 watts and low CPU utilization (less than 4.3%) while executing the complex NMI algorithm. Our experiments have shown that the highly optimized C implementation on the embedded system has substantial advantages over existing PC-based MATLAB implementations. The study results suggest that a mobile-CPU-based embedded system is promising for implementing advanced control for powered lower limb prostheses.

  20. Minimizing total weighted completion time in a proportionate flow shop

    NARCIS (Netherlands)

    Shakhlevich, N.V.; Hoogeveen, J.A.; Pinedo, M.L.

    1998-01-01

    We study the special case of the m machine flow shop problem in which the processing time of each operation of job j is equal to pj; this variant of the flow shop problem is known as the proportionate flow shop problem. We show that for any number of machines and for any regular performance
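For the proportionate flow shop described above, the completion time of the j-th job in a sequence has a well-known closed form: C_[j] = sum_{i<=j} p_[i] + (m - 1) * max_{i<=j} p_[i]. The sketch below evaluates the total weighted completion time of a given sequence using this formula; the function name and example data are illustrative, not from the paper.

```python
def total_weighted_completion_time(p, w, m):
    """Total weighted completion time of a job sequence in an m-machine
    proportionate flow shop, using the closed form
        C_[j] = sum_{i<=j} p_[i] + (m - 1) * max_{i<=j} p_[i]."""
    total, prefix, running_max = 0, 0, 0
    for pj, wj in zip(p, w):
        prefix += pj
        running_max = max(running_max, pj)
        total += wj * (prefix + (m - 1) * running_max)
    return total

# Two jobs with p = [2, 1], unit weights, m = 3 machines:
# C_1 = 2 + 2*2 = 6, C_2 = 3 + 2*2 = 7, so the total is 13.
print(total_weighted_completion_time([2, 1], [1, 1], 3))   # 13
```

With m = 1 the formula collapses to the ordinary single-machine sum of weighted completion times.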

  1. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    Full Text Available The dynamic deployment technology of the virtual machine is one of the current cloud computing research focuses. Traditional methods react only after service performance has degraded, and therefore lag behind. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The new model provides a reference for the VM dynamic deployment process, which can then complete deployment before service performance degrades. This method not only ensures the quality of services but also improves server performance and resource utilization. The new CPU-utilization prediction method based on the ARIMA-BP neural network includes four parts: preprocessing the collected data, building the ARIMA-BP neural network prediction model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.
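A minimal sketch of the linear (ARIMA-like) stage only, reduced to a least-squares AR(1) fit of a utilization trace; the BP neural-network residual-correction stage of the hybrid model is omitted. All names and data are illustrative.

```python
def fit_ar1(x):
    """Least-squares estimate of phi in x_t = phi * x_{t-1} + e_t,
    a minimal stand-in for the ARIMA stage of the hybrid model (the
    BP residual-correction stage is omitted here)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def predict_next(x):
    """One-step-ahead forecast of the next utilization sample."""
    return fit_ar1(x) * x[-1]

# Synthetic CPU-utilization trace decaying by a factor of 0.5:
trace = [80.0, 40.0, 20.0, 10.0, 5.0]
print(fit_ar1(trace))        # 0.5
print(predict_next(trace))   # 2.5
```

In the full hybrid scheme, the residuals between this linear forecast and the observations would be fed to a BP network for nonlinear correction.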

  2. Working time intervals and total work time on nursing positions in Poland

    Directory of Open Access Journals (Sweden)

    Danuta Kunecka

    2015-06-01

    Full Text Available Background: For the last few years the topic of overwork in nursing positions has given rise to strong discussions. The author set herself the goal of answering the question of whether it results from a real overload in this particular profession or rather from the commonly assumed frustration of this professional group. The aim of this paper is to analyze working time on chosen nursing positions in relation to the time used as intervals between standard professional activities during one working day. Material and Methods: The research material consisted of documentation of working time at chosen nursing workplaces, compiled between 2007 and 2012 within the framework of a nursing course at the Pomeranian Medical University in Szczecin. A photograph of a working day was used as the method of measurement. Measurements were performed in institutions located in 6 voivodeships in Poland. Results: The results suggest that only 6.5% of the surveyed representatives of the nursing profession spend the proper amount of time (i.e., the time set by the applicable standards) on work intervals during a working day. Conclusions: The scale of the phenomenon indicates an excessive workload on nursing positions which, over a longer period and with longer working hours, may decrease work efficiency and cause a drop in the quality of provided services. Med Pr 2015;66(2):165–172

  3. Enhancing Leakage Power in CPU Cache Using Inverted Architecture

    OpenAIRE

    Bilal A. Shehada; Ahmed M. Serdah; Aiman Abu Samra

    2013-01-01

    Power consumption is an increasingly pressing problem in modern processor design. Since on-chip caches usually consume a significant amount of power, power and energy consumption have become among the most important design constraints, and caches are one of the most attractive targets for power reduction. This paper presents an approach to reducing the dynamic power consumption of the CPU cache using an inverted cache architecture. Our approach tries to reduce dynamic write power dissipatio...

  4. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    Science.gov (United States)

    Taft, James R.

    2000-01-01

    aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256 CPU Steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state-of-the-art plot of OVERFLOW-MLP performance on the 512 CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35-million-point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512 CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16 CPU C90 result and strongly demonstrates the viability of the new parallel systems in rapidly solving very large simulations in a production environment.

  5. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    Energy Technology Data Exchange (ETDEWEB)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo [Center for Molecular Imaging and Experimental Radiotherapy, Institut de Recherche Expérimentale et Clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve 1348 (Belgium); Sterpin, Edmond [Center for Molecular Imaging and Experimental Radiotherapy, Institut de Recherche Expérimentale et Clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and Department of Oncology, Katholieke Universiteit Leuven, O&N I, Herestraat 49, 3000 Leuven (Belgium)

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  6. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    International Nuclear Information System (INIS)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-01-01

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  7. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    Science.gov (United States)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  8. Acceleration of stereo-matching on multi-core CPU and GPU

    OpenAIRE

    Tian, Xu; Cockshott, Paul; Oehler, Susanne

    2014-01-01

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism-enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa 1 research project. This research project focuses on the conception of a new clothes folding robot with real-time and high resolution requirements for the vision system. The performance analysis shows th...

  9. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain method (FDTD) allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and time processing. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus highly tuned multi-core CPU as a function of the size simulation. In particular, the optimized CPU implementation takes advantage of the arithmetic and data transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wider range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
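A bare-bones illustration of the FDTD update the record describes, reduced to one dimension: interleaved electric and magnetic field updates stepped in time, with a soft Gaussian source. This is a textbook sketch in normalized units (Courant number S = 1), not the paper's SSE/OpenMP- or GPU-optimized implementation, and all parameters are illustrative.

```python
import math

def fdtd_1d(steps=200, size=400, src=100):
    """Minimal 1-D FDTD (Yee) update in normalized units with the
    'magic' time step S = 1 and a soft Gaussian source at cell `src`.
    Grid edges act as simple reflecting (PEC-like) boundaries."""
    ez = [0.0] * size           # electric field
    hy = [0.0] * size           # magnetic field
    for t in range(steps):
        for k in range(size - 1):
            hy[k] += ez[k + 1] - ez[k]
        for k in range(1, size):
            ez[k] += hy[k] - hy[k - 1]
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)   # soft source
    return ez

field = fdtd_1d()
print(max(abs(v) for v in field))   # finite amplitude of the propagating pulse
```

The paper's 2-D/3-D versions add absorbing boundaries and vectorized inner loops; the structure of the two staggered update sweeps is the same.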

  10. A combined PLC and CPU approach to multiprocessor control

    International Nuclear Information System (INIS)

    Harris, J.J.; Broesch, J.D.; Coon, R.M.

    1995-10-01

    A sophisticated multiprocessor control system has been developed for use in the E-Power Supply System Integrated Control (EPSSIC) on the DIII-D tokamak. EPSSIC provides control and interlocks for the ohmic heating coil power supply and its associated systems. Of particular interest is the architecture of this system: both a Programmable Logic Controller (PLC) and a Central Processor Unit (CPU) have been combined on a standard VME bus. The PLC and CPU input and output signals are routed through signal conditioning modules, which provide the necessary voltage and ground isolation. Additionally these modules adapt the signal levels to that of the VME I/O boards. One set of I/O signals is shared between the two processors. The resulting multiprocessor system provides a number of advantages: redundant operation for mission critical situations, flexible communications using conventional TCP/IP protocols, the simplicity of ladder logic programming for the majority of the control code, and an easily maintained and expandable non-proprietary system

  11. Liquid Cooling System for CPU by Electroconjugate Fluid

    Directory of Open Access Journals (Sweden)

    Yasuo Sakurai

    2014-06-01

    Full Text Available The dissipated power of CPUs in personal computers has increased as their performance has grown. Therefore, liquid cooling systems have been employed in some personal computers in order to improve their cooling performance. Electroconjugate fluid (ECF) is a functional fluid with a remarkable property: a strong jet flow is generated between electrodes when a high voltage is applied to the ECF through them. By using this strong jet flow, an ECF pump with a simple structure, no sliding parts, no noise, and no vibration appears feasible, and with it a new ECF-based liquid cooling system. In this study, to realize this system, an ECF pump is proposed and fabricated, and its basic characteristics are investigated experimentally. Next, a model of a liquid cooling system built around the ECF pump is manufactured, and experiments are carried out to investigate the performance of this system. As a result, the temperature of a 50 W heat source is kept at 60°C or less with this system; in general, a CPU is used at this temperature or less.

  12. Application of total care time and payment per unit time model for physician reimbursement for common general surgery operations.

    Science.gov (United States)

    Chatterjee, Abhishek; Holubar, Stefan D; Figy, Sean; Chen, Lilian; Montagne, Shirley A; Rosen, Joseph M; Desimone, Joseph P

    2012-06-01

    The relative value unit system relies on subjective measures of physician input in the care of patients. A payment per unit time model instead relates surgeon reimbursement to the total care time, comprising time spent in the operating room, postoperative in-house time, and clinic time. We aimed to compare common general surgery operations using the total care time and payment per unit time method in order to demonstrate a more objective measurement for physician reimbursement. The average total physician payment per case was obtained for 5 outpatient operations and 4 inpatient operations in general surgery. Total care time was defined as the sum of operative time, 30 minutes per hospital day, and 30 minutes per office visit for each operation. Payment per unit time was calculated by dividing the physician reimbursement per case by the total care time. Total care time, physician payment per case, and payment per unit time for each type of operation showed an average payment per time spent of $455.73 for inpatient operations and slightly more, $467.51, for outpatient operations. Partial colectomy with primary anastomosis had the longest total care time (8.98 hours) and the lowest payment per unit time ($188.52). Laparoscopic gastric bypass had the highest payment per time ($707.30). The total care time and payment per unit time method can be used as an adjunct to compare reimbursement among different operations at an institutional level as well as at a national level. Although many operations have similar payment trends based on time spent by the surgeon, payment differences using this methodology are seen and may be in need of further review. Copyright © 2012 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
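The definition above reduces to simple arithmetic, sketched below. The case figures are hypothetical, chosen for illustration; only the 30-minutes-per-hospital-day and 30-minutes-per-office-visit weighting comes from the abstract.

```python
def total_care_time(op_hours, hospital_days, office_visits):
    """Total care time in hours: operative time plus 30 minutes per
    hospital day plus 30 minutes per office visit, per the abstract."""
    return op_hours + 0.5 * hospital_days + 0.5 * office_visits

def payment_per_unit_time(payment, op_hours, hospital_days, office_visits):
    """Physician reimbursement per case divided by total care time."""
    return payment / total_care_time(op_hours, hospital_days, office_visits)

# Hypothetical case (figures illustrative, not from the study):
# 3 h in the OR, 4 hospital days, 2 office visits, $1,200 payment.
print(total_care_time(3.0, 4, 2))                # 6.0 hours
print(payment_per_unit_time(1200.0, 3.0, 4, 2))  # 200.0 dollars/hour
```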

  13. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU

    International Nuclear Information System (INIS)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A.; Vega C, H. R.; Alonso M, O. E.

    2016-10-01

    The computing power of personal computers has been increasing steadily; computers now have several cores in the CPU and, in addition, multiple CUDA cores in the graphics processing unit (GPU). Both systems can be used individually or combined to perform scientific computation without resorting to processor clusters or supercomputers. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and for measuring the associated spectrum. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). The counting rates obtained with each sphere are thus related to the neutron spectrum through the Fredholm equation in its discrete version. Reconstructing the spectrum involves a poorly conditioned system of equations with an infinite number of solutions; to find an appropriate solution, the use of artificial intelligence through neural networks on both CPU and GPU platforms has been proposed. (Author)
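The discrete Fredholm relation mentioned above is just a matrix-vector product in the forward direction, sketched below with toy numbers (the response matrix and spectrum are illustrative, not real Bonner-sphere data). Reconstruction is the hard, ill-conditioned inverse of this map, which is what the neural network is trained to approximate.

```python
def expected_counts(R, phi):
    """Forward model of the Bonner-sphere measurement: the count rate
    of sphere i is C_i = sum_j R[i][j] * phi[j], i.e. the discrete
    Fredholm equation relating spectrum phi to measured counts."""
    return [sum(r * f for r, f in zip(row, phi)) for row in R]

# Toy 2-sphere, 3-energy-group response matrix (illustrative values):
R = [[0.2, 0.5, 0.3],
     [0.6, 0.3, 0.1]]
phi = [1.0, 2.0, 4.0]            # toy spectrum
print(expected_counts(R, phi))   # approximately [2.4, 1.6]
```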

  14. Designing of Vague Logic Based 2-Layered Framework for CPU Scheduler

    Directory of Open Access Journals (Sweden)

    Supriya Raheja

    2016-01-01

    Full Text Available Fuzzy-based CPU schedulers have attracted great interest in operating systems because of their ability to handle the imprecise information associated with tasks. This paper extends the fuzzy-based round robin scheduler to a Vague Logic Based Round Robin (VBRR) scheduler. The VBRR scheduler works on a 2-layered framework. At the first layer, the scheduler has a vague inference system able to handle the impreciseness of tasks using vague logic. At the second layer, the VBRR scheduling algorithm schedules the tasks. The VBRR scheduler has a learning capability, based on which it intelligently adapts an optimum length for the time quantum. An optimum time quantum reduces the overhead on the scheduler by eliminating unnecessary context switches, which improves the overall performance of the system. The work is simulated using MATLAB and compared with the conventional round robin scheduler and two other fuzzy-based approaches to CPU scheduling. The simulation analysis and results demonstrate the effectiveness and efficiency of the VBRR scheduler.
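The conventional baseline the paper compares against can be sketched as a plain round-robin simulation with a fixed quantum; the completion times it produces are what an adaptive (fuzzy or vague-logic) quantum aims to improve. This is the textbook algorithm only, not the VBRR scheduler, and the burst times are illustrative.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate a plain round-robin scheduler with a fixed time
    quantum; returns {task: completion time}.  The VBRR scheduler
    in the paper additionally adapts the quantum at run time via
    vague logic; here the quantum is fixed, for illustration only."""
    queue = deque(bursts.items())
    remaining = dict(bursts)
    t, done = 0, {}
    while queue:
        name, _ = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = t
        else:
            queue.append((name, remaining[name]))
    return done

print(round_robin({"A": 5, "B": 3, "C": 8}, quantum=2))
# {'B': 9, 'A': 12, 'C': 16}
```

Re-running with different quantum values shows the trade-off the paper targets: a small quantum causes many context switches, while a very large one degenerates to first-come, first-served.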

  15. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte-Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on a NVIDIA Tesla C2050. Since CPUs can work on several hundreds of GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.

  16. A Bit String Content Aware Chunking Strategy for Reduced CPU Energy on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    2015-01-01

    Full Text Available In order to achieve energy savings and reduce the total cost of ownership, green storage has become a first priority for data centers. Detecting and deleting redundant data are key to reducing the energy consumption of the CPU, while a high-performance, stable chunking strategy provides the groundwork for detecting redundant data. Existing chunking algorithms greatly reduce system performance when confronted with big data, and they waste a lot of energy. Factors affecting chunking performance are analyzed and discussed in the paper, and a new fingerprint signature calculation is implemented. Furthermore, a Bit String Content Aware Chunking Strategy (BCCS) is put forward. This strategy reduces the cost of signature computation in the chunking process to improve system performance and cut down the energy consumption of the cloud storage data center. On the basis of the relevant test scenarios and test data of this paper, the advantages of the chunking strategy are verified.
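The general principle behind content-defined chunking can be sketched as follows: a cheap rolling fingerprint over a sliding byte window declares a chunk boundary whenever the fingerprint matches a bit pattern. This toy uses a rolling byte-sum as the fingerprint; the paper's BCCS replaces the fingerprint computation with a cheaper bit-string scheme, and all parameters below are illustrative.

```python
def chunk(data, mask=0x3F, window=8, min_size=16, max_size=256):
    """Toy content-defined chunking: a rolling sum over the last
    `window` bytes acts as the fingerprint, and a boundary is declared
    when (fingerprint & mask) == mask, subject to min/max chunk sizes.
    Illustrative only; real deduplicators use Rabin-style hashes."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h += b
        if i - start >= window:
            h -= data[i - window]       # slide the window
        size = i - start + 1
        if size >= max_size or (size >= min_size and (h & mask) == mask):
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])     # tail chunk
    return chunks

data = bytes(range(256)) * 4
parts = chunk(data)
assert b"".join(parts) == data          # the split is lossless
print(len(parts), max(len(p) for p in parts))
```

Because boundaries depend on content rather than fixed offsets, inserting bytes early in a stream shifts only nearby chunk boundaries, which is what makes duplicate detection robust.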

  17. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    Science.gov (United States)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm using OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing can manage multi-core CPU hardware resources competently and significantly improve the efficiency of spectrum reconstruction processing. If the technology is applied to workstations with more cores computing in parallel, it will be possible to complete real-time data processing for a Fourier transform imaging spectrometer with a single computer.
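
    The reconstruction step being parallelized is, at its core, a Fourier transform per image pixel. A minimal sketch, assuming a naive DFT and Python threads as a stand-in for the OpenMP loop (a real implementation would use an FFT library and true shared-memory threads):

```python
import cmath
import math
from concurrent.futures import ThreadPoolExecutor

def dft_magnitude(interferogram):
    """Naive DFT magnitude: the spectrum is the Fourier transform of the
    optical-path-difference samples recorded by the interferometer."""
    n = len(interferogram)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, x in enumerate(interferogram)))
            for k in range(n)]

def reconstruct_cube(pixels, workers=4):
    """Reconstruct many pixels' spectra concurrently -- the analogue of the
    OpenMP parallel loop over image pixels (Python threads stand in here)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(dft_magnitude, pixels))

# a pure cosine interferogram whose spectrum peaks at wavenumber index 5
n, k0 = 64, 5
pixel = [math.cos(2 * math.pi * k0 * i / n) for i in range(n)]
spectrum = reconstruct_cube([pixel])[0]
```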

  18. Single machine total completion time minimization scheduling with a time-dependent learning effect and deteriorating jobs

    Science.gov (United States)

    Wang, Ji-Bo; Wang, Ming-Zheng; Ji, Ping

    2012-05-01

    In this article, we consider a single machine scheduling problem with a time-dependent learning effect and deteriorating jobs. By the effects of time-dependent learning and deterioration, we mean that the processing time of a job is defined by a function of its starting time and the total normal processing time of the jobs in front of it in the sequence. The objective is to determine an optimal schedule so as to minimize the total completion time. This problem remains open for the case of -1 < a < 0, where a denotes the learning index; we show that an optimal schedule of the problem is V-shaped with respect to the jobs' normal processing times. Three heuristic algorithms utilising the V-shaped property are proposed, and computational experiments show that the last heuristic algorithm performs effectively and efficiently in obtaining near-optimal solutions.
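
    The paper's exact processing-time function is not given in the abstract; the sketch below uses an assumed model of the same shape (a learning exponent a on completed normal work, linear deterioration in the start time) to show how a sequence's total completion time and the V-shaped property can be evaluated:

```python
from itertools import permutations

def total_completion_time(seq, p, a=-0.3, b=0.1):
    """Assumed illustrative model (not the paper's exact formula): a job's
    actual processing time shrinks with the normal processing time already
    completed (learning exponent a < 0) and grows linearly with its start
    time (deterioration rate b)."""
    t = done = total = 0.0
    for j in seq:
        actual = p[j] * (1.0 + done) ** a + b * t
        t += actual                # completion time of job j
        done += p[j]               # normal work completed so far
        total += t
    return total

def is_v_shaped(seq, p):
    """Normal processing times first non-increasing, then non-decreasing."""
    times = [p[j] for j in seq]
    m = times.index(min(times))
    return (all(times[i] >= times[i + 1] for i in range(m)) and
            all(times[i] <= times[i + 1] for i in range(m, len(times) - 1)))

p = [5.0, 2.0, 8.0, 1.0, 4.0]
best = min(permutations(range(len(p))),
           key=lambda s: total_completion_time(s, p))
```

    For small instances, full enumeration like this gives a baseline against which V-shape-restricted heuristics can be compared.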

  19. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    Science.gov (United States)

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted graphics cards with Graphics Processing Units (GPUs) and the associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, using the GPU only to do the SW computations one by one. Hence, in this paper, we propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search that uses the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on the GPU, a procedure is applied on the CPU using the frequency distance filtration scheme (FDFS) to eliminate unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
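
    The two ingredients can be sketched directly: a cheap composition-based pre-filter followed by the standard SW recurrence (scoring values and the filter threshold are illustrative assumptions; the real CUDA-SWfr runs the recurrence on the GPU):

```python
def freq_distance(a: str, b: str, alphabet: str = "ACDEFGHIKLMNPQRSTVWY"):
    """FDFS-style pre-filter: the sum of absolute residue-count differences.
    Sequence pairs whose compositions differ too much cannot align well, so
    the expensive SW computation can be skipped for them."""
    return sum(abs(a.count(c) - b.count(c)) for c in alphabet)

def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1):
    """Textbook local-alignment score with a linear gap penalty."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def search(query, database, threshold=6):
    """Run SW only on the candidates that survive the frequency filter
    (the threshold value is an assumption, not taken from the paper)."""
    return {s: smith_waterman(query, s)
            for s in database if freq_distance(query, s) <= threshold}

hits = search("GATTACA", ["GATTACA", "WWWWWWW"])
```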

  20. Asymptotic behavior of total times for jobs that must start over if a failure occurs

    DEFF Research Database (Denmark)

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when...... including failures, for any failure distribution. In particular, we show that if the task time distribution has an unbounded support then the total time distribution H is always heavy-tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis

  1. Asymptotic behaviour of total times for jobs that must start over if a failure occurs

    DEFF Research Database (Denmark)

    Asmussen, Søren; Fiorini, Pierre; Lipsky, Lester

    2008-01-01

    the ready queue, or it may restart the task. The behavior of systems under the first two scenarios is well documented, but the third (RESTART) has resisted detailed analysis. In this paper we derive tight asymptotic relations between the distribution of task times without failures and the total time when...... including failures, for any failure distribution. In particular, we show that if the task-time distribution has an unbounded support, then the total-time distribution H is always heavy tailed. Asymptotic expressions are given for the tail of H in various scenarios. The key ingredients of the analysis...
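
    For the special case of Poisson failures and a fixed task length, the RESTART total time has the known closed form E[T] = (e^{λt} − 1)/λ, which a short simulation can be checked against (parameter values are illustrative):

```python
import math
import random

def restart_total_time(task, rate, rng):
    """Total time to finish a task of length `task` when failures arrive as a
    Poisson process of intensity `rate` and each failure forces a RESTART."""
    total = 0.0
    while True:
        failure = rng.expovariate(rate)   # time until the next failure
        if failure >= task:               # the task completes first
            return total + task
        total += failure                  # work lost; start over

rng = random.Random(42)
samples = [restart_total_time(1.0, 1.0, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
# closed form for exponential failures: E[T] = (exp(rate*task) - 1)/rate
```

    When the task length is itself random with unbounded support, the same mechanism produces the heavy-tailed total-time distribution described in the abstract.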

  2. Porting AMG2013 to Heterogeneous CPU+GPU Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Samfass, Philipp [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-26

    LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is quite straightforward: while GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for rather sequential operations. Yet, making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor for ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, which is a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook for future work (Section 5).

  3. Total Work, Gender and Social Norms in EU and US Time Use

    OpenAIRE

    Burda , Michael C; Hamermesh , Daniel S; Weil , Philippe

    2008-01-01

    Using time-diary data from 27 countries, we demonstrate a negative relationship between real GDP per capita and the female-male difference in total work time--the sum of work for pay and work at home. We also show that in rich non-Catholic countries on four continents men and women do the same amount of total work on average. Our survey results demonstrate that labor economists, macroeconomists, sociologists and the general public consistently believe that women perform more total work. The f...

  4. Total sleep time, alcohol consumption, and the duration and severity of alcohol hangover

    NARCIS (Netherlands)

    van Schrojenstein Lantman, Marith; Mackus, Marlou; Roth, Thomas; Verster, Joris C|info:eu-repo/dai/nl/241442702

    2017-01-01

    INTRODUCTION: An evening of alcohol consumption often occurs at the expense of sleep time. The aim of this study was to determine the relationship between total sleep time and the duration and severity of the alcohol hangover. METHODS: A survey was conducted among Dutch University students to

  5. Increased Total Anesthetic Time Leads to Higher Rates of Surgical Site Infections in Spinal Fusions.

    Science.gov (United States)

    Puffer, Ross C; Murphy, Meghan; Maloney, Patrick; Kor, Daryl; Nassr, Ahmad; Freedman, Brett; Fogelson, Jeremy; Bydon, Mohamad

    2017-06-01

    A retrospective review of a consecutive series of spinal fusions comparing patient and procedural characteristics of patients who developed surgical site infections (SSIs) after spinal fusion. It is known that increased surgical time (incision to closure) is associated with a higher rate of postoperative SSIs. We sought to determine whether increased total anesthetic time (intubation to extubation) is a factor in the development of SSIs as well. In spine surgery for deformity and degenerative disease, SSI has been associated with operative time, revealing a nearly 10-fold increase in SSI rates in prolonged surgery. Surgical time is associated with infections in other surgical disciplines as well. No studies have reported whether total anesthetic time (intubation to extubation) has an association with SSIs. Surgical records were searched in a retrospective fashion to identify all spine fusion procedures performed between January 2010 and July 2012. All SSIs during that timeframe were recorded and compared with the list of cases performed between 2010 and 2012 in a case-control design. There were 20 (1.7%) SSIs in this fusion cohort. On univariate analyses of operative factors, there was a significant association with total anesthetic time (infection 7.6 ± 0.5 h vs. no infection 6.0 ± 0.1 h) and with operative time (infection 5.5 ± 0.4 h vs. no infection 4.4 ± 0.06 h), as well as with BMI, whereas level of pathology and emergent surgery were not significant. On multivariate logistic analysis, BMI and total anesthetic time remained independent predictors of SSI whereas ASA status and operative time did not. Increasing BMI and total anesthetic time were independent predictors of SSIs in this cohort of over 1000 consecutive spinal fusions. Level of evidence: 3.

  6. Deployment of IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Babik, M.; Chudoba, J.; Dewhurst, A.; Finnern, T.; Froy, T.; Grigoras, C.; Hafeez, K.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Martelli, E.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Traynor, D.

    2017-10-01

    The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and becoming mature. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead when setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only. In doing so, they are also expediting the transition process towards its desired completion. While the LHC experiments accept there is a need to move to IPv6, it is currently not directly affecting their work. Sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 from WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirements of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group’s plan and progress so far to allow sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally the VO testing that has taken place on hosts connected via IPv6-only is reported.
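
    A site auditing its services for IPv6-only readiness might start with a check like the following, using only the standard `ipaddress` module (the address strings are illustrative, not real WLCG hosts):

```python
import ipaddress

def classify(addresses):
    """Split a host's configured addresses into IPv4 and IPv6 lists."""
    v4, v6 = [], []
    for addr in addresses:
        ip = ipaddress.ip_address(addr)
        (v6 if ip.version == 6 else v4).append(ip)
    return v4, v6

def ipv6_only_ready(addresses):
    """A service is reachable from IPv6-only clients only if it exposes at
    least one non-link-local, non-loopback IPv6 address."""
    _, v6 = classify(addresses)
    return any(not (a.is_link_local or a.is_loopback) for a in v6)

# a dual-stack host is ready; an IPv4-only host is not (addresses illustrative)
ready = ipv6_only_ready(["192.0.2.10", "2001:db8::10"])
```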

  7. Does antegrade JJ stenting affect the total operative time during laparoscopic pyeloplasty?

    Science.gov (United States)

    Bolat, Mustafa Suat; Çınar, Önder; Akdeniz, Ekrem

    2017-12-01

    We aimed to show the effect of retrograde JJ stenting and intraoperative antegrade JJ stenting techniques on operative time in patients who underwent laparoscopic pyeloplasty. A total of 34 patients (15 male and 19 female) with ureteropelvic junction obstruction were retrospectively investigated. Fifteen patients were retrogradely stented under local anesthesia at the beginning of the procedure, as a part of the surgery (Group 1), and 19 were antegradely stented during the procedure (Group 2). A transperitoneal dismembered pyeloplasty technique was performed in all patients. The two groups were retrospectively compared in terms of complications, mean total operative time, and mean stenting time. The mean ages of the patients were 31.5±15.5 and 33.2±15.5 years (p=0.09), and the mean body mass indexes were 25.8±5.6 and 26.2±8.4 kg/m² in Group 1 and Group 2, respectively. The mean total operative times were 128.9±38.9 min and 112.7±21.9 min (p=0.04); the mean stenting times were 12.6±5.4 min and 3.5±2.4 min (p=0.02); and the mean ratios of catheterization time to total surgery time were 0.1 and 0.03 (p=0.01) in Groups 1 and 2, respectively. The mean hospital stays and mean anastomosis times were similar between the two groups (p>0.05). Antegrade JJ stenting during laparoscopic pyeloplasty significantly decreased the total operative time.

  8. Objectively Measured Total and Occupational Sedentary Time in Three Work Settings

    Science.gov (United States)

    van Dommelen, Paula; Coffeng, Jennifer K.; van der Ploeg, Hidde P.; van der Beek, Allard J.; Boot, Cécile R. L.; Hendriksen, Ingrid J. M.

    2016-01-01

    Background Sedentary behaviour increases the risk for morbidity. Our primary aim is to determine the proportion and factors associated with objectively measured total and occupational sedentary time in three work settings. Secondary aim is to study the proportion of physical activity and prolonged sedentary bouts. Methods Data were obtained using ActiGraph accelerometers from employees of: 1) a financial service provider (n = 49 men, 31 women), 2) two research institutes (n = 30 men, 57 women), and 3) a construction company (n = 38 men). Total (over the whole day) and occupational sedentary time, physical activity and prolonged sedentary bouts (lasting ≥30 minutes) were calculated by work setting. Linear regression analyses were performed to examine general, health and work-related factors associated with sedentary time. Results The employees of the financial service provider and the research institutes spent 76–80% of their occupational time in sedentary behaviour, 18–20% in light intensity physical activity and 3–5% in moderate-to-vigorous intensity physical activity. Occupational time in prolonged sedentary bouts was 27–30%. Total time was less sedentary (64–70%), and had more light intensity physical activity (26–33%). The employees of the construction company spent 44% of their occupational time in sedentary behaviour, 49% in light, and 7% in moderate intensity physical activity, and spent 7% in sedentary bouts. Total time spent in sedentary behavior was 56%, 40% in light, and 4% in moderate intensity physical behaviour, and 12% in sedentary bouts. For women, low to intermediate education was the only factor that was negatively associated with occupational sedentary time. Conclusions Sedentary behaviour is high among white-collar employees, especially in highly educated women. A relatively small proportion of sedentary time was accrued in sedentary bouts. It is recommended that worksite health promotion efforts should focus on reducing sedentary

  9. Minimizing Total Completion Time For Preemptive Scheduling With Release Dates And Deadline Constraints

    Directory of Open Access Journals (Sweden)

    He Cheng

    2014-02-01

    Full Text Available It is known that the single machine preemptive scheduling problem of minimizing total completion time with release date and deadline constraints is NP-hard. Du and Leung solved some special cases by the generalized Baker's algorithm and the generalized Smith's algorithm in O(n²) time. In this paper we give an O(n²) algorithm for the special case where the processing times and deadlines are agreeable. Moreover, for the case where the processing times and deadlines are disagreeable, we present two properties which could enable us to reduce the range of the enumeration algorithm.
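
    Without deadline constraints, preemptive SRPT (shortest remaining processing time) minimizes total completion time with release dates; the algorithms discussed above add deadline feasibility checks on top of such a rule. A minimal SRPT sketch:

```python
import heapq

def srpt_total_completion_time(jobs):
    """Preemptive SRPT (shortest remaining processing time) with release
    dates; without deadline constraints it minimizes total completion time.
    The special cases in the paper add deadline feasibility on top of such a
    rule (omitted here)."""
    jobs = sorted(jobs)                       # (release_date, processing_time)
    heap, total, t, i, n = [], 0.0, 0.0, 0, len(jobs)
    while i < n or heap:
        if not heap:                          # idle until the next release
            t = max(t, jobs[i][0])
        while i < n and jobs[i][0] <= t:      # admit all released jobs
            heapq.heappush(heap, [float(jobs[i][1]), jobs[i][0]])
            i += 1
        horizon = jobs[i][0] if i < n else float("inf")
        run = min(heap[0][0], horizon - t)    # run shortest job until done/arrival
        t += run
        heap[0][0] -= run                     # shrinking the root keeps the heap valid
        if heap[0][0] <= 1e-12:
            heapq.heappop(heap)
            total += t                        # job completed at time t
    return total

print(srpt_total_completion_time([(0, 3), (1, 1)]))   # preempt at t=1: 2 + 4 = 6.0
```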

  10. Timing of Re-Transfusion Drain Removal Following Total Knee Replacement

    Science.gov (United States)

    Leeman, MF; Costa, ML; Costello, E; Edwards, D

    2006-01-01

    INTRODUCTION The use of postoperative drains following total knee replacement (TKR) has recently been modified by the use of re-transfusion drains. The aim of our study was to investigate the optimal time for removal of re-transfusion drains following TKR. PATIENTS AND METHODS The medical records of 66 patients who had a TKR performed between October 2003 and October 2004 were reviewed; blood drained before 6 h and the total volume of blood drained was recorded. RESULTS A total of 56 patients had complete records of postoperative drainage. The mean volume of blood collected in the drain in the first 6 h was 442 ml. The mean total volume of blood in the drain was 595 ml. Therefore, of the blood drained, 78% was available for transfusion. CONCLUSION Re-transfusion drains should be removed after 6 h, when no further re-transfusion is permissible. PMID:16551400

  11. A polynomial time algorithm for checking regularity of totally normed process algebra

    NARCIS (Netherlands)

    Yang, F.; Huang, H.

    2015-01-01

    A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n³ + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for

  12. Complexities of the storm-time characteristics of ionospheric total electron content

    International Nuclear Information System (INIS)

    Kane, R.P.

    1982-01-01

    The complexities of the storm-time variations of the ionospheric total electron content are briefly reviewed. It is suggested that large variations from storm to storm may be due to irregular flows from the auroral region towards the equator. A proper study of such flows needs an elaborate network of TEC measuring instruments. The need for planning and organizing such a network is emphasized

  13. Objectively measured total and occupational sedentary time in three work settings

    NARCIS (Netherlands)

    Dommelen, P. van; Coffeng, J. K.; Ploeg, H.P. van der; Beek, A.J. van der; Boot, C.R.; Hendriksen, I.J.

    2016-01-01

    Background. Sedentary behaviour increases the risk for morbidity. Our primary aim is to determine the proportion and factors associated with objectively measured total and occupational sedentary time in three work settings. Secondary aim is to study the proportion of physical activity and prolonged

  14. What are the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty?

    Science.gov (United States)

    Harato, Kengo; Maeno, Shinichi; Tanikawa, Hidenori; Kaneda, Kazuya; Morishige, Yutaro; Nomoto, So; Niki, Yasuo

    2016-08-01

    It was hypothesized that the surgical time of beginners would be much longer than that of experts. Our purpose was to investigate and clarify the important manoeuvres for beginners to minimize surgical time in primary total knee arthroplasty (TKA) as a multicentre study. A total of 300 knees in 248 patients (average age 74.6 years) were enrolled. All TKAs were done using the same instruments and the same measured resection technique at 14 facilities by 25 orthopaedic surgeons. Surgeons were divided into three groups (four experts, nine medium-volume surgeons and 12 beginners). The surgical technique was divided into five phases. Detailed surgical time and the ratio of the time in each phase to overall surgical time were recorded and compared among the groups in each phase. A total of 62, 119, and 119 TKAs were done by beginners, medium-volume surgeons, and experts, respectively. Significant differences in surgical time among the groups were seen in each phase. Concerning the ratio of the time, experts and medium-volume surgeons seemed cautious in fixation of the permanent component compared to other phases. Interestingly, even in ratio, beginners and medium-volume surgeons took more time in exposure of soft tissue compared to experts (0.14 in beginners, 0.13 in medium-volume surgeons, 0.11 in experts). Beginners and medium-volume surgeons also took more time in exposure and closure of soft tissue compared to experts. Improvement in basic technique is essential to minimize surgical time among beginners. First of all, surgical instructors should teach basic techniques in primary TKA to beginners. Therapeutic studies, Level IV.

  15. Batch Scheduling for Hybrid Assembly Differentiation Flow Shop to Minimize Total Actual Flow Time

    Science.gov (United States)

    Maulidya, R.; Suprayogi; Wangsaputra, R.; Halim, A. H.

    2018-03-01

    A hybrid assembly differentiation flow shop is a three-stage flow shop consisting of Machining, Assembly and Differentiation Stages and producing different types of products. In the machining stage, parts are processed in batches on different (unrelated) machines. In the assembly stage, the different parts are assembled into assembly products. Finally, the assembled products are further processed into different types of final products in the differentiation stage. In this paper, we develop a batch scheduling model for a hybrid assembly differentiation flow shop to minimize the total actual flow time, defined as the total time parts spend on the shop floor from their arrival times until their due dates. We also propose a heuristic algorithm for solving the problem. The proposed algorithm is tested using a set of hypothetical data. The solution shows that the algorithm can solve the problem effectively.

  16. Television viewing, computer use and total screen time in Canadian youth.

    Science.gov (United States)

    Mark, Amy E; Boyce, William F; Janssen, Ian

    2006-11-01

    Research has linked excessive television viewing and computer use in children and adolescents to a variety of health and social problems. Current recommendations are that screen time in children and adolescents should be limited to no more than 2 h per day. To determine the percentage of Canadian youth meeting the screen time guideline recommendations. The representative study sample consisted of 6942 Canadian youth in grades 6 to 10 who participated in the 2001/2002 World Health Organization Health Behaviour in School-Aged Children survey. Only 41% of girls and 34% of boys in grades 6 to 10 watched 2 h or less of television per day. Once the time of leisure computer use was included and total daily screen time was examined, only 18% of girls and 14% of boys met the guidelines. The prevalence of those meeting the screen time guidelines was higher in girls than boys. Fewer than 20% of Canadian youth in grades 6 to 10 met the total screen time guidelines, suggesting that increased public health interventions are needed to reduce the number of leisure time hours that Canadian youth spend watching television and using the computer.

  17. Different but Equal: Total Work, Gender and Social Norms in EU and US Time Use

    OpenAIRE

    Daniel S Hamermesh; Michael C Burda; Philippe Weil

    2008-01-01

    Using time-diary data from 27 countries, we demonstrate a negative relationship between real GDP per capita and the female-male difference in total work time—the sum of work for pay and work at home. We also show that in rich non-Catholic countries on four continents men and women do the same amount of total work on average. Our survey results demonstrate that labor economists, macroeconomists, sociologists and the general public consistently believe that women perform more tot...

  18. Physiotherapy Exercise After Fast-Track Total Hip and Knee Arthroplasty: Time for Reconsideration?

    DEFF Research Database (Denmark)

    Bandholm, Thomas; Kehlet, Henrik

    2012-01-01

    Bandholm T, Kehlet H. Physiotherapy exercise after fast-track total hip and knee arthroplasty: time for reconsideration? Major surgery, including total hip arthroplasty (THA) and total knee arthroplasty (TKA), is followed by a convalescence period, during which the loss of muscle strength......-track methodology or enhanced recovery programs. It is the nature of this methodology to systematically and scientifically optimize all perioperative care components, with the overall goal of enhancing recovery. This is also the case for the care component "physiotherapy exercise" after THA and TKA. The 2 latest...... meta-analyses on the effectiveness of physiotherapy exercise after THA and TKA generally conclude that physiotherapy exercise after THA and TKA either does not work or is not very effective. The reason for this may be that the "pill" of physiotherapy exercise typically offered after THA and TKA does...

  19. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Full Text Available Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.
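
    The leaky-integrate-and-fire model that NCS6 builds in is simple enough to sketch in a few lines; the parameter values below are illustrative, not NCS6 defaults:

```python
def simulate_lif(i_ext=1.5, t_total=100.0, dt=0.1, tau=10.0, r=1.0,
                 v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Forward-Euler leaky integrate-and-fire neuron: tau dV/dt =
    -(V - v_rest) + r*i_ext, with spike-and-reset at threshold.  Time is in
    ms, voltage in arbitrary units; all parameter values are illustrative."""
    v, spikes, t = v_rest, [], 0.0
    while t < t_total:
        v += dt * (-(v - v_rest) + r * i_ext) / tau
        if v >= v_th:              # threshold crossing -> spike, then reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

spikes = simulate_lif()
# r*i_ext = 1.5 exceeds v_th = 1.0, so the neuron fires regularly (~every 11 ms)
```

    Because each neuron's update is independent within a time step, loops like this map naturally onto the CPU/GPU clusters the paper targets.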

  20. Time-based analysis of total cost of patient episodes: a case study of hip replacement.

    Science.gov (United States)

    Peltokorpi, Antti; Kujala, Jaakko

    2006-01-01

    Healthcare in the public and private sectors is facing increasing pressure to become more cost-effective. Time-based competition and work-in-progress have been used successfully to measure and improve the efficiency of industrial manufacturing. Seeks to address this issue. Presents a framework for time based management of the total cost of a patient episode and apply it to the six sigma DMAIC-process development approach. The framework is used to analyse hip replacement patient episodes in Päijät-Häme Hospital District in Finland, which has a catchment area of 210,000 inhabitants and performs an average of 230 hip replacements per year. The work-in-progress concept is applicable to healthcare--notably that the DMAIC-process development approach can be used to analyse the total cost of patient episodes. Concludes that a framework, which combines the patient-in-process and the DMAIC development approach, can be used not only to analyse the total cost of patient episode but also to improve patient process efficiency. Presents a framework that combines patient-in-process and DMAIC-process development approaches, which can be used to analyse the total cost of a patient episode in order to improve patient process efficiency.

  1. Just In Time Value Chain Total Quality Management Part Of Technical Strategic Management Accounting

    Directory of Open Access Journals (Sweden)

    Lesi Hertati

    2015-08-01

    Full Text Available This article aims to examine Just-In-Time (JIT), the value chain, and Total Quality Management (TQM) as techniques in strategic management accounting. The aim of the Just-In-Time value chain and of Total Quality Management is long-term customer satisfaction, obtained from information. Quality information is the way to continuous improvement in order to increase the company's financial performance in the long term and to increase competitive advantage. The strategic management accounting process gathers competitor information and explores opportunities to reduce costs, integrating accounting with an emphasis on the company's strategic competitive position. An overall strategic plan is interrelated and serves as the basis for achieving future targets or goals.

  2. Estimation of total bacteria by real-time PCR in patients with periodontal disease.

    Science.gov (United States)

    Brajović, Gavrilo; Popović, Branka; Puletić, Miljan; Kostić, Marija; Milasin, Jelena

    2016-01-01

    Periodontal diseases are associated with the presence of elevated levels of bacteria within the gingival crevice. The aim of this study was to evaluate the total amount of bacteria in subgingival plaque samples in patients with a periodontal disease. A quantitative evaluation of the total bacterial amount using quantitative real-time polymerase chain reaction (qRT-PCR) was performed on 20 samples from patients with ulceronecrotic periodontitis and on 10 samples from healthy subjects. The estimation of the total bacterial amount was based on the 16S rRNA gene copy number, determined by comparison with the Ct values/gene copy numbers of the standard curve. A statistically significant difference between the average gene copy number of total bacteria in periodontal patients (2.55 x 10⁷) and healthy controls (2.37 x 10⁶) was found (p = 0.01). Also, a trend toward higher gene copy numbers in deeper periodontal lesions (> 7 mm) was confirmed by a positive coefficient of correlation (r = 0.073). The quantitative estimation of total bacteria based on gene copy number could be an important additional tool in diagnosing periodontitis.
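
    The standard-curve quantification described above (gene copies estimated from Ct values) can be sketched as a least-squares fit and its inversion; the slope, intercept and Ct values below are synthetic illustrations, not the study's data:

```python
import math

def fit_standard_curve(copies, cts):
    """Least-squares line Ct = slope*log10(copies) + intercept fitted to a
    dilution series of known standards."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def copies_from_ct(ct, slope, intercept):
    """Invert the fitted curve to estimate gene copies in an unknown sample."""
    return 10 ** ((ct - intercept) / slope)

# synthetic 10-fold dilution series; slope -3.32 corresponds to ~100% efficiency
standards = [1e3, 1e4, 1e5, 1e6, 1e7]
cts = [38.0 - 3.32 * (math.log10(c) - 3) for c in standards]
slope, intercept = fit_standard_curve(standards, cts)
```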

  3. Optimizing Ship Speed to Minimize Total Fuel Consumption with Multiple Time Windows

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2016-01-01

    Full Text Available We study the ship speed optimization problem with the objective of minimizing the total fuel consumption. We consider multiple time windows for each port call as constraints and formulate the problem as a nonlinear mixed integer program. We derive intrinsic properties of the problem and develop an exact algorithm based on the properties. Computational experiments show that the suggested algorithm is very efficient in finding an optimal solution.
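
    For a single leg the core property is easy to see: under the common cubic fuel-burn assumption, leg fuel k·v³·(d/(24v)) = k·d·v²/24 is increasing in speed v, so the cheapest feasible choice is the slowest speed that still meets the port's time window. A sketch under these assumptions (the paper's exact model and multi-leg algorithm are more general):

```python
def leg_plan(distance_nm, depart, window_open, window_close,
             v_min=10.0, v_max=22.0, k=0.005):
    """Plan one leg: choose the slowest feasible speed (knots) that reaches
    the port before its time window closes (times in days), then price the
    leg with the cubic daily fuel-burn assumption fuel_per_day = k*v**3."""
    slowest = distance_nm / (24.0 * (window_close - depart))
    v = min(max(slowest, v_min), v_max)
    arrival = depart + distance_nm / (24.0 * v)
    if arrival > window_close:
        raise ValueError("no feasible speed for this time window")
    arrival = max(arrival, window_open)     # wait outside the port if early
    fuel = k * v ** 3 * (distance_nm / (24.0 * v))
    return v, arrival, fuel

v, arr, fuel = leg_plan(2400.0, depart=0.0, window_open=6.0, window_close=8.0)
```

    With multiple legs and several windows per port the choices interact, which is why the paper formulates a nonlinear mixed integer program rather than applying this greedy rule per leg.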

  4. Objectively Measured Total and Occupational Sedentary Time in Three Work Settings

    OpenAIRE

    van Dommelen, Paula; Coffeng, Jennifer K.; van der Ploeg, Hidde P.; van der Beek, Allard J.; Boot, Cécile R. L.; Hendriksen, Ingrid J. M.

    2016-01-01

    Background. Sedentary behaviour increases the risk for morbidity. Our primary aim is to determine the proportion and factors associated with objectively measured total and occupational sedentary time in three work settings. Secondary aim is to study the proportion of physical activity and prolonged sedentary bouts. Methods. Data were obtained using ActiGraph accelerometers from employees of: 1) a financial service provider (n = 49 men, 31 women), 2) two research institutes (n = 30 men, 57 wom...

  5. [Determination of total and segmental colonic transit time in constipated children].

    Science.gov (United States)

    Zhang, Shu-cheng; Wang, Wei-lin; Bai, Yu-zuo; Yuan, Zheng-wei; Wang, Wei

    2003-03-01

    To determine the total and segmental colonic transit time of normal Chinese children and to explore its value in constipation in children. The subjects involved in this study were divided into 2 groups. One group was control, which had 33 healthy children (21 males and 12 females) aged 2 - 13 years (mean 5 years). The other was constipation group, which had 25 patients (15 males and 10 females) aged 3 - 14 years (mean 7 years) with constipation according to Benninga's criteria. Written informed consent was obtained from the parents of each subject. In this study the simplified method of radio opaque markers was used to determine the total gastrointestinal transit time and segmental colonic transit time of the normal and constipated children, and in part of these patients X-ray defecography was also used. The total gastrointestinal transit time (TGITT), right colonic transit time (RCTT), left colonic transit time (LCTT) and rectosigmoid colonic transit time (RSTT) of the normal children were 28.7 +/- 7.7 h, 7.5 +/- 3.2 h, 6.5 +/- 3.8 h and 13.4 +/- 5.6 h, respectively. In the constipated children, the TGITT, LCTT and RSTT were significantly longer than those in controls (92.2 +/- 55.5 h vs 28.7 +/- 7.7 h, P < 0.001; 16.9 +/- 12.6 h vs 6.5 +/- 3.8 h, P < 0.01; 61.5 +/- 29.0 h vs 13.4 +/- 5.6 h, P < 0.001), while the RCTT had no significant difference. X-ray defecography demonstrated one rectocele, one perineal descent syndrome and one puborectal muscle syndrome, respectively. The TGITT, RCTT, LCTT and RSTT of the normal children were 28.7 +/- 7.7 h, 7.5 +/- 3.2 h, 6.5 +/- 3.8 h and 13.4 +/- 5.6 h, respectively. With the segmental colonic transit time, constipation can be divided into four types: slow-transit constipation, outlet obstruction, mixed type and normal transit constipation. X-ray defecography can demonstrate the anatomical or dynamic abnormalities within the anorectal area, with which constipation can be further divided into different subtypes, and

  6. First passage times in homogeneous nucleation: Dependence on the total number of particles

    International Nuclear Information System (INIS)

    Yvinec, Romain; Bernard, Samuel; Pujo-Menjouet, Laurent; Hingant, Erwan

    2016-01-01

    Motivated by nucleation and molecular aggregation in physical, chemical, and biological settings, we present an extension to a thorough analysis of the stochastic self-assembly of a fixed number of identical particles in a finite volume. We study the statistics of times required for maximal clusters to be completed, starting from a pure-monomeric particle configuration. For finite volumes, we extend previous analytical approaches to the case of arbitrary size-dependent aggregation and fragmentation kinetic rates. For larger volumes, we develop a scaling framework to study the first assembly time behavior as a function of the total quantity of particles. We find that the mean time to first completion of a maximum-sized cluster may have a surprisingly weak dependence on the total number of particles. We highlight how higher statistics (variance, distribution) of the first passage time may nevertheless help to infer key parameters, such as the size of the maximum cluster. Finally, we present a framework to quantify formation of macroscopic sized clusters, which are (asymptotically) very unlikely and occur as a large deviation phenomenon from the mean-field limit. We argue that this framework is suitable to describe phase transition phenomena, as inherent infrequent stochastic processes, in contrast to classical nucleation theory
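
A minimal stochastic sketch of this setting (a hypothetical constant per-pair merge rate with Gillespie-style sampling, not the authors' size-dependent kinetics) records the first time all N monomers have assembled into a single maximal cluster:

```python
import random

def first_assembly_time(n_monomers, rate=1.0, seed=0):
    """Time until all monomers have merged into one cluster of size n_monomers."""
    rng = random.Random(seed)
    clusters = [1] * n_monomers          # start from a pure-monomeric configuration
    t = 0.0
    while len(clusters) > 1:
        pairs = len(clusters) * (len(clusters) - 1) / 2
        t += rng.expovariate(rate * pairs)   # exponential waiting time to next merge
        i, j = rng.sample(range(len(clusters)), 2)
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return t                              # first passage time to the maximal cluster
```

Repeating such runs over many seeds yields the mean, variance and full distribution of the first passage time, the statistics the abstract uses to infer parameters such as the maximum cluster size.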

  7. Empirical forecast of quiet time ionospheric Total Electron Content maps over Europe

    Science.gov (United States)

    Badeke, Ronny; Borries, Claudia; Hoque, Mainul M.; Minkwitz, David

    2018-06-01

    An accurate forecast of the atmospheric Total Electron Content (TEC) is helpful for investigating space weather influences on the ionosphere and for technical applications such as satellite-receiver radio links. The purpose of this work is to compare four empirical methods for a 24-h forecast of vertical TEC maps over Europe under geomagnetically quiet conditions. TEC map data are obtained from the Space Weather Application Center Ionosphere (SWACI) and the Universitat Politècnica de Catalunya (UPC). The time-series methods Standard Persistence Model (SPM), a 27-day median model (MediMod) and a Fourier Series Expansion are validated against maps for the entire year of 2015. As a representative of the climatological coefficient models, the forecast performance of the Global Neustrelitz TEC model (NTCM-GL) is also investigated. Time periods of magnetic storms, identified with the Dst index, are excluded from the validation. By calculating the TEC values from the most recent maps, the time-series methods perform slightly better than the coefficient model NTCM-GL; the benefit of NTCM-GL is its independence from observational TEC data. Amongst the time-series methods mentioned, MediMod delivers the best overall performance regarding accuracy and data gap handling. Quiet-time SWACI maps can be forecasted accurately and in real time by the MediMod time-series approach.
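
A MediMod-style forecast can be sketched as a per-cell median over the previous 27 days at the same universal time; this form is an assumption based on the abstract's description, not the authors' exact implementation:

```python
from statistics import median

def medimod_forecast(history):
    """history: 27 daily vertical-TEC maps (flattened lists of cell values, same UT).
    Returns the per-cell median as the forecast map for the next day."""
    assert len(history) == 27, "expects exactly the previous 27 days"
    n_cells = len(history[0])
    return [median(day[c] for day in history) for c in range(n_cells)]
```

A 27-day median is robust to single-day outliers and tolerates isolated missing days, which is consistent with the data-gap-handling advantage reported for MediMod.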

  8. Inhibition of CPU0213, a Dual Endothelin Receptor Antagonist, on Apoptosis via Nox4-Dependent ROS in HK-2 Cells

    Directory of Open Access Journals (Sweden)

    Qing Li

    2016-06-01

    Full Text Available Background/Aims: Our previous studies have indicated that the novel endothelin receptor antagonist CPU0213 effectively normalizes renal function in diabetic nephropathy. However, the molecular mechanisms mediating the nephroprotective role of CPU0213 remain unknown. Methods and Results: In the present study, we first examined the effect of CPU0213 on apoptosis in human renal tubular epithelial cells (HK-2). It was shown that high glucose significantly increased the protein expression of Bax and decreased Bcl-2 protein in HK-2 cells, which was reversed by CPU0213. The percentage of HK-2 cells showing Annexin V-FITC binding was markedly suppressed by CPU0213, which confirmed its inhibitory role on apoptosis. Given the regulatory relation between the endothelin (ET) system and oxidative stress, we determined the role of redox signaling in the effect of CPU0213 on apoptosis. It was demonstrated that the production of superoxide (O2·-) was substantially attenuated by CPU0213 treatment in HK-2 cells. We further found that CPU0213 dramatically inhibited the expression of Nox4 protein, and that Nox4 gene silencing mimicked the effect of CPU0213 on apoptosis under high glucose stimulation. We finally examined the effect of CPU0213 on ET-1 receptors and found that the high glucose-induced protein expression of endothelin A and B receptors was dramatically inhibited by CPU0213. Conclusion: Taken together, these results suggest that Nox4-dependent O2·- production is critical for the apoptosis of HK-2 cells in high glucose. The endothelin receptor antagonist CPU0213 exerts an anti-apoptotic effect through Nox4-dependent O2·- production, which addresses the nephroprotective role of CPU0213 in diabetic nephropathy.

  9. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU; Reconstruccion del espectro de neutrones usando una red neuronal artificial (RNA) en CPU y GPU

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A. [Universidad de Cordoba, 14002 Cordoba (Spain); Vega C, H. R.; Alonso M, O. E., E-mail: vic.mc68010@gmail.com [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    The computing power of personal computers has been increasing steadily; computers now have several processors in the CPU and, in addition, multiple CUDA cores in the graphics processing unit (GPU). Both systems can be used individually or combined to perform scientific computation without resorting to processor arrays or supercomputing. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and for measuring the associated spectrum. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). Thus, the counting rates obtained with each sphere are related to the neutron spectrum through the Fredholm equation in its discrete version. The reconstruction of the spectrum involves a poorly conditioned system of equations with an infinite number of solutions; to find an appropriate solution, the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)
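
The discrete Fredholm relation mentioned here says that the count rate of sphere i is c_i = Σ_j R_ij φ_j, i.e. c = Rφ. A toy forward model (the matrix and spectrum values below are purely illustrative, not a real Bonner-sphere response matrix) shows why unfolding is ill-posed: there are fewer spheres than energy bins.

```python
def forward_counts(R, phi):
    """Count rates from a response matrix R (spheres x energy bins) and spectrum phi."""
    return [sum(r * p for r, p in zip(row, phi)) for row in R]

# Hypothetical responses of 2 spheres over 3 energy bins: 2 equations, 3 unknowns.
R = [[0.9, 0.4, 0.1],
     [0.2, 0.5, 0.8]]
phi = [1.0, 2.0, 3.0]        # hypothetical fluence per energy bin
counts = forward_counts(R, phi)
```

With fewer equations than unknowns the inverse problem has infinitely many solutions, which is what motivates regularized or learned (neural-network) unfolding methods such as the one described above.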

  10. A heterogeneous CPU+GPU Poisson solver for space charge calculations in beam dynamics studies

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Dawei; Rienen, Ursula van [University of Rostock, Institute of General Electrical Engineering (Germany)

    2016-07-01

    In beam dynamics studies in accelerator physics, space charge plays a central role in the low energy regime of an accelerator. Numerical space charge calculations are required, both, in the design phase and in the operation of the machines as well. Due to its efficiency, mostly the Particle-In-Cell (PIC) method is chosen for the space charge calculation. Then, the solution of Poisson's equation for the charge distribution in the rest frame is the most prominent part within the solution process. The Poisson solver directly affects the accuracy of the self-field applied on the charged particles when the equation of motion is solved in the laboratory frame. As the Poisson solver consumes the major part of the computing time in most simulations it has to be as fast as possible since it has to be carried out once per time step. In this work, we demonstrate a novel heterogeneous CPU+GPU routine for the Poisson solver. The novel solver also benefits from our new research results on the utilization of a discrete cosine transform within the classical Hockney and Eastwood's convolution routine.

  11. Reduced Operating Time but Not Blood Loss With Cruciate Retaining Total Knee Arthroplasty

    Science.gov (United States)

    Vermesan, Dinu; Trocan, Ilie; Prejbeanu, Radu; Poenaru, Dan V; Haragus, Horia; Gratian, Damian; Marrelli, Massimo; Inchingolo, Francesco; Caprio, Monica; Cagiano, Raffaele; Tatullo, Marco

    2015-01-01

    Background There is no consensus regarding the use of cruciate-retaining or cruciate-substituting implants for patients with limited deformity who undergo a total knee replacement. The scope of this paper is to evaluate whether a cruciate-sparing total knee replacement could have a reduced operating time compared to a posterior stabilized implant. Methods For this purpose, we performed a randomized study on 50 subjects. All procedures were performed by a single surgeon under the same conditions to minimize bias, and only knees with less than 20° varus deviation and/or a maximum 15° fixed flexion contracture were included. Results Surgery time was significantly shorter with the cruciate retaining implant (P = 0.0037). The mean (SD) duration for the Vanguard implant was 68.9 (14.7) and for the NexGen II Legacy 80.2 (11.3). A higher range of motion, but no significant difference in Knee Society Scores, was observed at the 6-month follow-up. Conclusions In conclusion, both implants had the potential to assure great outcomes. However, if a decision has to be made, choosing a cruciate retaining procedure could significantly reduce the surgical time. When performed under tourniquet, this gain does not lead to reduced blood loss. PMID:25584102

  12. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
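
The two prediction strategies compared in the abstract can be contrasted in a few lines; the data below are made up for illustration, and the regression is reduced to a single predictor for brevity:

```python
def fixed_ratio_tpt(esct):
    """Fixed-ratio model from the abstract: TPT = 1.33 * eSCT."""
    return 1.33 * esct

def ols_fit(x, y):
    """Closed-form simple linear regression y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

esct = [60.0, 90.0, 120.0]    # hypothetical estimated surgeon-controlled times (min)
tpt = [85.0, 125.0, 165.0]    # hypothetical observed total procedure times (min)
a, b = ols_fit(esct, tpt)     # fitted intercept and slope
```

The fitted model can absorb a fixed overhead (its intercept) that the pure 1.33 ratio cannot, which is one reason a regression with additional predictors outperformed the fixed-ratio model in the study.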

  13. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs, accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT. We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT. TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related

  14. Time-dependent density functional theory description of total photoabsorption cross sections

    Science.gov (United States)

    Tenorio, Bruno Nunes Cabral; Nascimento, Marco Antonio Chaer; Rocha, Alexandre Braga

    2018-02-01

    The time-dependent version of density functional theory (TDDFT) has been used to calculate the total photoabsorption cross sections of a number of molecules, namely benzene, pyridine, furan, pyrrole, thiophene, phenol, naphthalene, and anthracene. The discrete electronic pseudo-spectra, obtained in an L2 basis-set calculation, were used in an analytic continuation procedure to obtain the photoabsorption cross sections. The ammonia molecule was chosen as a model system to compare the results obtained with TDDFT to those obtained with the linear response coupled cluster approach, in order to make a link with our previous work and establish benchmarks.

  15. Extending DIII-D Neutral Beam Modulated Operations with a Camac Based Total on Time Interlock

    International Nuclear Information System (INIS)

    Baggest, D.S.; Broesch, J.D.; Phillips, J.C.

    1999-01-01

    A new total-on-time interlock has increased the operational time limits of the Neutral Beam systems at DIII-D. The interlock, called the Neutral Beam On-Time-Limiter (NBOTL), is a custom built CAMAC module utilizing a Xilinx 9572 Complex Programmable Logic Device (CPLD) as its primary circuit. The Neutral Beam Injection Systems are the primary source of auxiliary heating for DIII-D plasma discharges and contain eight sources capable of delivering 20MW of power. The delivered power is typically limited to 3.5 s per source to protect beam-line components, while a DIII-D plasma discharge usually exceeds 5 s. Implemented as a hardware interlock within the neutral beam power supplies, the NBOTL limits the beam injection time. With a continuing emphasis on modulated beam injections, the NBOTL guards against command faults and allows the beam injection to be safely spread over a longer plasma discharge time. The NBOTL design is an example of incorporating modern circuit design techniques (CPLD) within an established format (CAMAC). The CPLD is the heart of the NBOTL and contains 90% of the circuitry, including a loadable, 1 MHz, 28 bit, BCD count down timer, buffers, and CAMAC communication circuitry. This paper discusses the circuit design and implementation. Of particular interest is the melding of flexible modern programmable logic devices with the CAMAC format

  16. Time delay and duration of ionospheric total electron content responses to geomagnetic disturbances

    Directory of Open Access Journals (Sweden)

    J. Liu

    2010-03-01

    Full Text Available Although positive and negative signatures of ionospheric storms have been reported many times, global characteristics such as the time of occurrence, time delay and duration as well as their relations to the intensity of the ionospheric storms have not received enough attention. The 10 years of global ionosphere maps (GIMs of total electron content (TEC retrieved at Jet Propulsion Laboratory (JPL were used to conduct a statistical study of the time delay of the ionospheric responses to geomagnetic disturbances. Our results show that the time delays between geomagnetic disturbances and TEC responses depend on season, magnetic local time and magnetic latitude. In the summer hemisphere at mid- and high latitudes, the negative storm effects can propagate to the low latitudes at post-midnight to the morning sector with a time delay of 4–7 h. As the earth rotates to the sunlight, negative phase retreats to higher latitudes and starts to extend to the lower latitude toward midnight sector. In the winter hemisphere during the daytime and after sunset at mid- and low latitudes, the negative phase appearance time is delayed from 1–10 h depending on the local time, latitude and storm intensity compared to the same area in the summer hemisphere. The quick response of positive phase can be observed at the auroral area in the night-side of the winter hemisphere. At the low latitudes during the dawn-noon sector, the ionospheric negative phase responses quickly with time delays of 5–7 h in both equinoctial and solsticial months.

    Our results also manifest that there is a positive correlation between the intensity of geomagnetic disturbances and the time duration of both the positive phase and negative phase. The durations of both negative phase and positive phase have clear latitudinal, seasonal and magnetic local time (MLT dependence. In the winter hemisphere, long durations for the positive phase are 8–11 h and 12–14 h during the daytime at

  18. Can a surgery-first orthognathic approach reduce the total treatment time?

    Science.gov (United States)

    Jeong, Woo Shik; Choi, Jong Woo; Kim, Do Yeon; Lee, Jang Yeol; Kwon, Soon Man

    2017-04-01

    Although pre-surgical orthodontic treatment has been accepted as a necessary process for stable orthognathic correction in the traditional orthognathic approach, recent advances in the application of miniscrews and in the pre-surgical simulation of orthodontic management using dental models have shown that it is possible to perform a surgery-first orthognathic approach without pre-surgical orthodontic treatment. This prospective study investigated the surgical outcomes of patients with diagnosed skeletal class III dentofacial deformities who underwent orthognathic surgery between December 2007 and December 2014. Cephalometric landmark data for patients undergoing the surgery-first approach were analyzed in terms of postoperative changes in vertical and horizontal skeletal pattern, dental pattern, and soft tissue profile. Forty-five consecutive Asian patients with skeletal class III dentofacial deformities who underwent surgery-first orthognathic surgery and 52 patients who underwent conventional two-jaw orthognathic surgery were included. The analysis revealed that the total treatment period for the surgery-first approach averaged 14.6 months, compared with 22.0 months for the orthodontics-first approach. Comparisons between the immediate postoperative and preoperative and between the postoperative and immediate postoperative cephalometric data revealed factors that correlated with the total treatment duration. The surgery-first orthognathic approach can dramatically reduce the total treatment time, with no major complications. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Time-driven Activity-based Cost of Fast-Track Total Hip and Knee Arthroplasty

    DEFF Research Database (Denmark)

    Andreasen, Signe E; Holm, Henriette B; Jørgensen, Mira

    2017-01-01

    this between 2 departments with different logistical set-ups. METHODS: Prospective data collection was analyzed using the time-driven activity-based costing method (TDABC) on time consumed by different staff members involved in patient treatment in the perioperative period of fast-track THA and TKA in 2 Danish...... orthopedic departments with standardized fast-track settings, but different logistical set-ups. RESULTS: Length of stay was median 2 days in both departments. TDABC revealed minor differences in the perioperative settings between departments, but the total cost excluding the prosthesis was similar at USD......-track methodology, the result could be a more cost-effective pathway altogether. As THA and TKA are potentially costly procedures and the numbers are increasing in an economical limited environment, the aim of this study is to present baseline detailed economical calculations of fast-track THA and TKA and compare...

  20. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Science.gov (United States)

    2014-01-01

    Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP), which aims to minimize total service time, and proposes an iterated greedy (IG) algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set. PMID:25295295

  1. Minimizing the Total Service Time of Discrete Dynamic Berth Allocation Problem by an Iterated Greedy Heuristic

    Directory of Open Access Journals (Sweden)

    Shih-Wei Lin

    2014-01-01

    Full Text Available Berth allocation is the forefront operation performed when ships arrive at a port and is a critical task in container port optimization. Minimizing the time ships spend at berths constitutes an important objective of berth allocation problems. This study focuses on the discrete dynamic berth allocation problem (discrete DBAP, which aims to minimize total service time, and proposes an iterated greedy (IG algorithm to solve it. The proposed IG algorithm is tested on three benchmark problem sets. Experimental results show that the proposed IG algorithm can obtain optimal solutions for all test instances of the first and second problem sets and outperforms the best-known solutions for 35 out of 90 test instances of the third problem set.
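
The destruct-and-reconstruct loop of an iterated greedy heuristic can be sketched on a toy single-berth version of the problem (the instance, destruction size and iteration count are assumptions, not the paper's benchmark setup):

```python
import random

def total_service_time(order, arrival, handling):
    """Sum of (departure - arrival) over ships served in the given order."""
    t, total = 0, 0
    for s in order:
        t = max(t, arrival[s]) + handling[s]   # berth becomes free at time t
        total += t - arrival[s]
    return total

def greedy_insert(partial, ships, arrival, handling):
    """Insert each ship at the position that minimizes total service time."""
    order = list(partial)
    for s in ships:
        best_i = min(range(len(order) + 1),
                     key=lambda i: total_service_time(order[:i] + [s] + order[i:],
                                                      arrival, handling))
        order.insert(best_i, s)
    return order

def iterated_greedy(arrival, handling, iters=200, d=2, seed=1):
    rng = random.Random(seed)
    best = greedy_insert([], list(range(len(arrival))), arrival, handling)
    for _ in range(iters):
        removed = rng.sample(best, d)                              # destruction
        partial = [s for s in best if s not in removed]
        cand = greedy_insert(partial, removed, arrival, handling)  # reconstruction
        if (total_service_time(cand, arrival, handling)
                <= total_service_time(best, arrival, handling)):
            best = cand
    return best
```

Serving quick jobs first when feasible reduces total flow time, and the repeated destruction/reconstruction lets the search escape the ordering the initial greedy pass commits to.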

  2. Decreasing Postanesthesia Care Unit to Floor Transfer Times to Facilitate Short Stay Total Joint Replacements.

    Science.gov (United States)

    Sibia, Udai S; Grover, Jennifer; Turcotte, Justin J; Seanger, Michelle L; England, Kimberly A; King, Jennifer L; King, Paul J

    2018-04-01

    We describe a process for studying and improving baseline postanesthesia care unit (PACU)-to-floor transfer times after total joint replacements. Quality improvement project using lean methodology. Phase I of the investigational process involved collection of baseline data. Phase II involved developing targeted solutions to improve throughput. Phase III involved measured project sustainability. Phase I investigations revealed that patients spent an additional 62 minutes waiting in the PACU after being designated ready for transfer. Five to 16 telephone calls were needed between the PACU and the unit to facilitate each patient transfer. The most common reason for delay was unavailability of the unit nurse who was attending to another patient (58%). Phase II interventions resulted in transfer times decreasing to 13 minutes (79% reduction, P care at other institutions. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  3. Time-gated scintillator imaging for real-time optical surface dosimetry in total skin electron therapy

    Science.gov (United States)

    Bruza, Petr; Gollub, Sarah L.; Andreozzi, Jacqueline M.; Tendler, Irwin I.; Williams, Benjamin B.; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2018-05-01

    The purpose of this study was to measure surface dose by remote time-gated imaging of plastic scintillators. A novel technique for time-gated, intensified-camera imaging of scintillator emission was demonstrated, and key parameters influencing the signal were analyzed, including distance, angle and thickness. A set of scintillator samples was calibrated using the thermo-luminescence detector response as reference. Examples of use in total skin electron therapy are described. The data showed excellent room-light rejection (signal-to-noise ratio of scintillation SNR ≈ 470), ideal scintillation dose-response linearity, and 2% dose rate error. Individual sample scintillation response varied by 7% due to sample preparation. Corrections for the inverse-square distance dependence and for lens throughput error (8% per meter) were needed. At scintillator-to-source angles and observation angles < 50°, the radiant energy fluence error was smaller than 1%. The achieved standard error of the scintillator cumulative dose measurement compared to the TLD dose was 5%. The results from this proof-of-concept study documented the first use of small scintillator targets for remote surface dosimetry in ambient room lighting. The measured dose accuracy renders our method comparable to thermo-luminescent detector dosimetry, with the ultimate realization of accuracy likely to be better than shown here. Once optimized, this approach to remote dosimetry may substantially reduce the time and effort required for surface dosimetry.
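
The two corrections the abstract reports (inverse-square distance falloff and an ~8%-per-meter lens throughput loss) can be sketched as below; the reference distance and the linear form of the lens term are illustrative assumptions, not the authors' published calibration:

```python
def corrected_signal(raw, distance_m, ref_distance_m=1.0, lens_loss_per_m=0.08):
    """Scale a raw camera signal back to the reference distance (assumed form)."""
    geometric = (distance_m / ref_distance_m) ** 2                  # undo 1/r^2 falloff
    lens = 1.0 / (1.0 - lens_loss_per_m * (distance_m - ref_distance_m))
    return raw * geometric * lens
```

At the reference distance both factors equal 1, so the correction leaves the calibration point untouched while boosting signals measured farther away.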

  4. Near-real-time Estimation and Forecast of Total Precipitable Water in Europe

    Science.gov (United States)

    Bartholy, J.; Kern, A.; Barcza, Z.; Pongracz, R.; Ihasz, I.; Kovacs, R.; Ferencz, C.

    2013-12-01

    Information about the amount and spatial distribution of atmospheric water vapor (or total precipitable water) is essential for understanding weather and the environment, including the greenhouse effect, the climate system with its feedbacks, and the hydrological cycle. Numerical weather prediction (NWP) models need accurate estimates of water vapor content to provide realistic forecasts, including the representation of clouds and precipitation. In the present study we introduce our research activity on the estimation and forecast of atmospheric water vapor in Central Europe using both observations and models. The Eötvös Loránd University (Hungary) has operated a polar-orbiting satellite receiving station in Budapest since 2002. This station receives Earth observation data from polar-orbiting satellites, including the MODerate resolution Imaging Spectroradiometer (MODIS) Direct Broadcast (DB) data stream from the satellites Terra and Aqua. The received DB MODIS data are automatically processed using freely distributed software packages. Using the IMAPP Level2 software, total precipitable water is calculated operationally with two different methods. The quality of the TPW estimations is crucial for further application of the results; thus, validation of the remotely sensed total precipitable water fields against radiosonde data is presented. In a current research project in Hungary we aim to compare different estimates of atmospheric water vapor content. Within the frame of the project we use an NWP model (DBCRAS; Direct Broadcast CIMSS Regional Assimilation System numerical weather prediction software developed by the University of Wisconsin, Madison) to forecast TPW. DBCRAS uses near-real-time Level2 products from the MODIS data processing chain. From the wide range of derived Level2 products, the MODIS TPW parameter found within the so-called mod07 results (Atmospheric Profiles Product) and the cloud top pressure and cloud effective emissivity parameters from the so

  5. Smoking is associated with earlier time to revision of total knee arthroplasty.

    Science.gov (United States)

    Lim, Chin Tat; Goodman, Stuart B; Huddleston, James I; Harris, Alex H S; Bhowmick, Subhrojyoti; Maloney, William J; Amanatullah, Derek F

    2017-10-01

    Smoking is associated with early postoperative complications, increased length of hospital stay, and an increased risk of revision after total knee arthroplasty (TKA). However, the effect of smoking on time to revision TKA is unknown. A total of 619 primary TKAs referred to an academic tertiary center for revision TKA were retrospectively stratified according to the patient smoking status. Smoking status was then analyzed for associations with time to revision TKA using a Chi square test. The association was also analyzed according to the indication for revision TKA. Smokers (37/41, 90%) have an increased risk of earlier revision for any reason compared to non-smokers (274/357, 77%, p=0.031). Smokers (37/41, 90%) have an increased risk of earlier revision for any reason compared to ex-smokers (168/221, 76%, p=0.028). Subgroup analysis did not reveal a difference in indication for revision TKA (p>0.05). Smokers are at increased risk of earlier revision TKA when compared to non-smokers and ex-smokers. The risk for ex-smokers was similar to that of non-smokers. Smoking appears to have an all-or-none effect on earlier revision TKA as patients who smoked more did not have higher risk of early revision TKA. These results highlight the need for clinicians to urge patients not to begin smoking and encourage smokers to quit smoking prior to primary TKA. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product (SpMV), the linear combination of vectors and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
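The three-kernel decomposition described in the abstract can be sketched in a few lines. The following is an illustrative NumPy/SciPy sketch under assumed shapes, with a made-up sparse operator standing in for the paper's mesh-based operators; it only shows how one explicit time-integration step reduces to SpMV, a linear combination of vectors (axpy) and a dot product.

```python
import numpy as np
from scipy.sparse import random as sparse_random

# Hypothetical discrete operator; sizes, density and values are illustrative only.
n = 1000
A = sparse_random(n, n, density=0.01, format="csr", random_state=0)
u = np.ones(n)          # current solution vector
dt = 1e-3               # time-step size (illustrative)

r = A @ u               # kernel 1: sparse matrix-vector product (SpMV)
res = float(r @ r)      # kernel 3: dot product (e.g. for a convergence/CFL check)
u = u + dt * r          # kernel 2: linear combination of vectors (axpy)
```

Because every step is expressed through these three kernels, porting to a new architecture only requires efficient implementations of SpMV, axpy and dot, which is the portability argument the abstract makes.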

  7. Time-driven activity based costing of total knee replacement surgery at a London teaching hospital.

    Science.gov (United States)

    Chen, Alvin; Sabharwal, Sanjeeve; Akhtar, Kashif; Makaram, Navnit; Gupte, Chinmay M

    2015-12-01

    The aim of this study was to conduct a time-driven activity based costing (TDABC) analysis of the clinical pathway for total knee replacement (TKR) and to determine where the major cost drivers lay. The in-patient pathway was prospectively mapped utilising a TDABC model, following 20 TKRs. The mean age for these patients was 73.4 years. All patients were ASA grade I or II and their mean BMI was 30.4. The 14 varus knees had a mean deformity of 5.32° and the six valgus knees had a mean deformity of 10.83°. Timings were prospectively collected as each patient was followed through the TKR pathway. Pre-operative costs, including pre-assessment and joint school, were £163. Total staff costs for admission and the operating theatre were £658. Consumables costs for the operating theatre were £1862. The average length of stay was 5.25 days at a total cost of £910. Trust overheads contributed £1651. The overall institutional cost of a 'noncomplex' TKR in patients without substantial medical co-morbidities was estimated to be £5422, representing a profit of £1065 based on a best practice tariff of £6487. The major cost drivers in the TKR pathway were determined to be theatre consumables, corporate overheads, overall ward cost and operating theatre staffing costs. Appropriate discounting of implant costs, reduction in length of stay by adopting an enhanced recovery programme, and control of corporate overheads through the use of elective orthopaedic treatment centres are proposed approaches for reducing the overall cost of treatment. Copyright © 2015 Elsevier B.V. All rights reserved.
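The TDABC principle underlying figures like these is simply cost = time consumed × capacity cost rate, summed over the activities in the pathway. The activity names, minutes and rates below are invented placeholders for illustration, not the study's data:

```python
# Time-driven activity based costing: each activity's cost is the time it
# consumes multiplied by the capacity cost rate of the resource used.
# All figures below are hypothetical, for illustration only.
activities = {
    # activity: (minutes consumed, cost rate in GBP per minute)
    "pre-assessment clinic": (45, 1.2),
    "operating theatre (staffed)": (110, 6.0),
    "ward stay": (5.25 * 24 * 60, 0.12),
}

pathway_cost = sum(minutes * rate for minutes, rate in activities.values())
```

Mapping the pathway then reduces to timing each activity and estimating one cost rate per resource, which is why the method scales to whole clinical pathways.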

  8. Total vaginectomy and urethral lengthening at time of neourethral prelamination in transgender men.

    Science.gov (United States)

    Medina, Carlos A; Fein, Lydia A; Salgado, Christopher J

    2017-11-29

    For transgender men (TGM), gender-affirmation surgery (GAS) is often the final stage of their gender transition. GAS involves creating a neophallus, typically using tissue remote from the genital region, such as radial forearm free-flap phalloplasty. Essential to this process is vaginectomy. Complexity of vaginal fascial attachments, atrophy due to testosterone use, and need to preserve integrity of the vaginal epithelium for tissue rearrangement add to the intricacy of the procedure during GAS. We designed the technique presented here to minimize complications and contribute to overall success of the phalloplasty procedure. After obtaining approval from the Institutional Review Board, our transgender (TG) database at the University of Miami Hospital was reviewed to identify cases with vaginectomy and urethral elongation performed at the time of radial forearm free-flap phalloplasty prelamination. Surgical technique for posterior vaginectomy and anterior vaginal wall-flap harvest with subsequent urethral lengthening is detailed. Six patients underwent total vaginectomy and urethral elongation at the time of radial forearm free-flap phalloplasty prelamination. Mean estimated blood loss (EBL) was 290 ± 199.4 ml for the vaginectomy and urethral elongation, and no one required transfusion. There were no intraoperative complications (cystotomy, ureteral obstruction, enterotomy, proctotomy, or neurological injury). One patient had a urologic complication (urethral stricture) in the neobulbar urethra. Total vaginectomy and urethral lengthening procedures at the time of GAS are relatively safe procedures, and using the described technique provides excellent tissue for urethral prelamination and a low complication rate in both the short and long term.

  9. Timing of urinary catheter removal after uncomplicated total abdominal hysterectomy: a prospective randomized trial.

    Science.gov (United States)

    Ahmed, Magdy R; Sayed Ahmed, Waleed A; Atwa, Khaled A; Metwally, Lobna

    2014-05-01

    To assess whether immediate (0h), intermediate (after 6h) or delayed (after 24h) removal of an indwelling urinary catheter after uncomplicated abdominal hysterectomy can affect the rate of re-catheterization due to urinary retention, rate of urinary tract infection, ambulation time and length of hospital stay. Prospective randomized controlled trial conducted at Suez Canal University Hospital, Egypt. Two hundred and twenty-one women underwent total abdominal hysterectomy for benign gynecological diseases and were randomly allocated into three groups. Women in group A (73 patients) had their urinary catheter removed immediately after surgery. Group B (81 patients) had the catheter removed 6h post-operatively while in group C (67 patients) the catheter was removed after 24h. The main outcome measures were the frequency of urinary retention, urinary tract infections, ambulation time and length of hospital stay. There was a significantly higher number of urinary retention episodes requiring re-catheterization in the immediate removal group compared to the intermediate and delayed removal groups (16.4% versus 2.5% and 0% respectively). Delayed urinary catheter removal was associated with a higher incidence of urinary tract infections (15%), delayed ambulation time (10.3h) and longer hospital stay (5.6 days) compared to the early (1.4%, 4.1h and 3.2 days respectively) and intermediate (3.7%, 6.8h and 3.4 days respectively) removal groups. Removal of the urinary catheter 6h postoperatively appears to be more advantageous than early or late removal in cases of uncomplicated total abdominal hysterectomy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Observed and simulated time evolution of HCl, ClONO2, and HF total column abundances

    Directory of Open Access Journals (Sweden)

    B.-M. Sinnhuber

    2012-04-01

    Full Text Available Time series of total column abundances of hydrogen chloride (HCl), chlorine nitrate (ClONO2), and hydrogen fluoride (HF) were determined from ground-based Fourier transform infrared (FTIR) spectra recorded at 17 sites belonging to the Network for the Detection of Atmospheric Composition Change (NDACC) and located between 80.05° N and 77.82° S. By providing such a near-global overview on ground-based measurements of the two major stratospheric chlorine reservoir species, HCl and ClONO2, the present study is able to confirm the decrease of the atmospheric inorganic chlorine abundance during the last few years. This decrease is expected following the 1987 Montreal Protocol and its amendments and adjustments, where restrictions and a subsequent phase-out of the prominent anthropogenic chlorine source gases (solvents, chlorofluorocarbons) were agreed upon to enable a stabilisation and recovery of the stratospheric ozone layer. The atmospheric fluorine content is expected to be influenced by the Montreal Protocol, too, because most of the banned anthropogenic gases also represent important fluorine sources. But many of the substitutes for the banned gases also contain fluorine, so that the HF total column abundance is expected to have continued to increase during the last few years. The measurements are compared with calculations from five different models: the two-dimensional Bremen model, the two chemistry-transport models KASIMA and SLIMCAT, and the two chemistry-climate models EMAC and SOCOL. Thereby, the ability of the models to reproduce the absolute total column amounts, the seasonal cycles, and the temporal evolution found in the FTIR measurements is investigated and inter-compared. This is especially interesting because the models have different architectures. The overall agreement between the measurements and models for the total column abundances and the seasonal cycles is good. Linear trends of HCl, ClONO2, and HF are calculated from both

  11. The influence of tourniquet use and operative time on the incidence of deep vein thrombosis in total knee arthroplasty.

    Science.gov (United States)

    Hernandez, Arnaldo José; Almeida, Adriano Marques de; Fávaro, Edmar; Sguizzato, Guilherme Turola

    2012-09-01

    To evaluate the association between tourniquet and total operative time during total knee arthroplasty and the occurrence of deep vein thrombosis. Seventy-eight consecutive patients from our institution underwent cemented total knee arthroplasty for degenerative knee disorders. The pneumatic tourniquet time and total operative time were recorded in minutes. Four categories were established for total tourniquet time: 120 minutes. Three categories were defined for operative time: 150 minutes. Between 7 and 12 days after surgery, the patients underwent ascending venography to evaluate the presence of distal or proximal deep vein thrombosis. We evaluated the association between the tourniquet time and total operative time and the occurrence of deep vein thrombosis after total knee arthroplasty. In total, 33 cases (42.3%) were positive for deep vein thrombosis; 13 (16.7%) cases involved the proximal type. We found no statistically significant difference in tourniquet time or operative time between patients with or without deep vein thrombosis. We did observe a higher frequency of proximal deep vein thrombosis in patients who underwent surgery lasting longer than 120 minutes. The mean total operative time was also higher in patients with proximal deep vein thrombosis. The tourniquet time did not significantly differ in these patients. We concluded that surgery lasting longer than 120 minutes increases the risk of proximal deep vein thrombosis.

  12. Wait time management strategies for total joint replacement surgery: sustainability and unintended consequences.

    Science.gov (United States)

    Pomey, Marie-Pascale; Clavel, Nathalie; Amar, Claudia; Sabogale-Olarte, Juan Carlos; Sanmartin, Claudia; De Coster, Carolyn; Noseworthy, Tom

    2017-09-07

    In Canada, long waiting times for core specialized services have consistently been identified as a key barrier to access. Governments and organizations have responded with strategies for better access management, notably for total joint replacement (TJR) of the hip and knee. While wait time management strategies (WTMS) are promising, the factors which influence their sustainable implementation at the organizational level are understudied. Consequently, this study examined organizational and systemic factors that made it possible to sustain waiting times for TJR within federally established limits and for at least 18 months or more. The research design is a multiple case study of WTMS implementation. Five cases were selected across five Canadian provinces. Three success levels were pre-defined: 1) the WTMS maintained compliance with requirements for more than 18 months; 2) the WTMS met requirements for 18 months but could not sustain the level thereafter; 3) the WTMS never met requirements. For each case, we collected documents and interviewed key informants. We analyzed systemic and organizational factors, with particular attention to governance and leadership, culture, resources, methods, and tools. We found that successful organizations had specific characteristics: 1) management of the whole care continuum, 2) strong clinical leadership; 3) dedicated committees to coordinate and sustain strategy; 4) a culture based on trust and innovation. All strategies led to relatively similar unintended consequences. The main negative consequence was an initial increase in waiting times for TJR and the main positive consequence was operational enhancement of other areas of specialization based on the TJR model. This study highlights important differences in factors which help to achieve and sustain waiting times. To be sustainable, a WTMS needs to generate greater synergies between contextual-level strategy (provincial or regional) and organizational objectives and

  13. A rapid infusion protocol is safe for total dose iron polymaltose: time for change.

    Science.gov (United States)

    Garg, M; Morrison, G; Friedman, A; Lau, A; Lau, D; Gibson, P R

    2011-07-01

    Intravenous correction of iron deficiency by total dose iron polymaltose is inexpensive and safe, but current protocols entail prolonged administration over more than 4 h. This results in reduced patient acceptance and hospital resource strain. We aimed to assess prospectively the safety of a rapid intravenous protocol and compare this with historical controls. Consecutive patients in whom intravenous iron replacement was indicated were invited to have up to 1.5 g iron polymaltose by a 58-min infusion protocol after an initial 15-min test dose without pre-medication. Infusion-related adverse events (AE) and delayed AE over the ensuing 5 days were also prospectively documented and graded as mild, moderate or severe. One hundred patients, 63 female, mean age 54 (range 18-85) years were studied. Thirty-four infusion-related AE to iron polymaltose occurred in a total of 24 patients--25 mild, 8 moderate and 1 severe; higher than previously reported for a slow protocol iron infusion. Thirty-one delayed AE occurred in 26 patients--26 mild, 3 moderate and 2 severe; similar to previously reported. All but five patients reported they would prefer iron replacement through the rapid protocol again. The presence of inflammatory bowel disease (IBD) predicted infusion-related reactions (54% vs 14% without IBD). The rapid protocol offers cost, resource utilization and time benefits for the patient and hospital system. © 2011 The Authors. Internal Medicine Journal © 2011 Royal Australasian College of Physicians.

  14. Optimum filters with time width constraints for liquid argon total-absorption detectors

    International Nuclear Information System (INIS)

    Gatti, E.; Radeka, V.

    1977-10-01

    Optimum filter responses are found for triangular current input pulses occurring in liquid argon ionization chambers used as total absorption detectors. The filters considered are subject to the following constraints: a finite width of the output pulse having a prescribed ratio to the width of the triangular input current pulse, and zero area of a bipolar antisymmetrical pulse or of a three-lobe pulse, as required for high event rates. The feasibility of pulse shaping giving an output equal to, or shorter than, the input one is demonstrated. It is shown that the signal-to-noise ratio remains constant for the chamber interelectrode gap which gives an input pulse width (i.e., electron drift time) greater than one third of the required output pulse width.

  15. Simplified neural networks for solving linear least squares and total least squares problems in real time.

    Science.gov (United States)

    Cichocki, A; Unbehauen, R

    1994-01-01

    In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection Kaczmarz algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
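As a point of reference for the learning rules the paper extends, the discrete-time LMS (Widrow-Hoff) update for a linear least squares problem is sketched below; the data, dimensions and step size are arbitrary illustrative choices, not the paper's analog-network formulation.

```python
import numpy as np

# Synthetic, noiseless regression problem: d = X @ w_true.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
d = X @ w_true

# LMS (Widrow-Hoff): for each sample (x, d), w <- w + mu * (d - w.x) * x,
# a stochastic gradient step on the instantaneous squared error.
w = np.zeros(3)
mu = 0.05               # step size; must be small enough for stability
for x_i, d_i in zip(X, d):
    e = d_i - w @ x_i   # instantaneous prediction error
    w = w + mu * e * x_i
```

With noiseless data the weight vector converges to `w_true`; the TLS and DLS variants discussed in the paper modify this error term to account for noise in the inputs as well.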

  16. Exact and Heuristic Solutions to Minimize Total Waiting Time in the Blood Products Distribution Problem

    Directory of Open Access Journals (Sweden)

    Amir Salehipour

    2012-01-01

    Full Text Available This paper presents a novel application of operations research to support decision making in blood distribution management. The rapidly and dynamically increasing demand, the criticality of the product, storage, handling and distribution requirements, and the different geographical locations of hospitals and medical centers have made blood distribution a complex and important problem. In this study, a real blood distribution problem involving 24 hospitals was tackled by the authors, and an exact approach was presented. The objective of the problem is to distribute blood and its products among hospitals and medical centers such that the total waiting time of those requiring the product is minimized. Following the exact solution, a hybrid heuristic algorithm is proposed. Computational experiments showed that optimal solutions could be obtained for medium-size instances, while for larger instances the proposed hybrid heuristic is very competitive.

  17. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. 
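The three-step buffering idea can be mimicked on the CPU with NumPy: step 1 writes (voxel index, dose) records sequentially into a buffer instead of scattering atomically into the volume, and step 3 accumulates the buffer into the dose volume. Array sizes and values below are invented for illustration; the real implementation performs step 1 on the GPU and overlaps the steps across streams.

```python
import numpy as np

# Step 1 (GPU in the paper): each dose sample is appended to a buffer as a
# (voxel index, dose) record -- a fully coalesced, atomic-free write pattern.
rng = np.random.default_rng(1)
n_voxels = 64
n_samples = 10_000
idx_buf = rng.integers(0, n_voxels, size=n_samples)  # buffered voxel indices
dose_buf = rng.random(n_samples)                     # buffered dose values

# Step 3 (CPU in the paper): build the dose volume from the buffer.
# np.add.at accumulates correctly even when indices repeat.
dose_volume = np.zeros(n_voxels)
np.add.at(dose_volume, idx_buf, dose_buf)
```

Since the buffered write, the transfer and the accumulation use different hardware resources (GPU, DMA, CPU), the paper pipelines them across streams to hide the extra transfer cost.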
This research was supported in part by NSF under Grants CCF

  18. Estimating time-based instantaneous total mortality rate based on the age-structured abundance index

    Science.gov (United States)

    Wang, Yingbin; Jiao, Yan

    2015-05-01

    The instantaneous total mortality rate (Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of constant Z throughout the entire time series; instead, the Z values in n continuous years are assumed constant, and then the Z values in different n continuous years are estimated using the age-based CPUE data within these years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod (Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
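The catch-curve relation behind such methods is that, under a constant Z, abundance (and hence CPUE) at age a is proportional to exp(-Z·a), so the slope of log(CPUE) against age estimates -Z. A minimal sketch with synthetic, noise-free data (not the paper's multi-cohort estimator):

```python
import numpy as np

# Illustrative data: CPUE declines exponentially with age at rate Z.
Z_true = 0.4
ages = np.arange(1, 9)
cpue = 1000.0 * np.exp(-Z_true * ages)

# Catch curve: linear fit of log(CPUE) vs. age; the slope estimates -Z.
slope, _intercept = np.polyfit(ages, np.log(cpue), 1)
Z_hat = -slope
```

The method in the abstract generalizes this by fitting within sliding windows of n continuous years, so that Z may change over time rather than being assumed constant throughout.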

  19. Total and domain-specific sitting time among employees in desk-based work settings in Australia.

    Science.gov (United States)

    Bennie, Jason A; Pedisic, Zeljko; Timperio, Anna; Crawford, David; Dunstan, David; Bauman, Adrian; van Uffelen, Jannique; Salmon, Jo

    2015-06-01

    To describe the total and domain-specific daily sitting time among a sample of Australian office-based employees. In April 2010, paper-based surveys were provided to desk-based employees (n=801) in Victoria, Australia. Total daily and domain-specific (work, leisure-time and transport-related) sitting time (minutes/day) were assessed by validated questionnaires. Differences in sitting time were examined across socio-demographic (age, sex, occupational status) and lifestyle characteristics (physical activity levels, body mass index [BMI]) using multiple linear regression analyses. The median (95% confidence interval [CI]) of total daily sitting time was 540 (531-557) minutes/day. Insufficiently active adults (median=578 minutes/day, [95%CI: 564-602]) and younger adults aged 18-29 years (median=561 minutes/day, [95%CI: 540-577]) reported the highest total daily sitting times. Occupational sitting time accounted for almost 60% of total daily sitting time. In multivariate analyses, total daily sitting time was negatively associated with age (unstandardised regression coefficient [B]=-1.58) and physical activity (minutes/week) (B=-0.03). Employees reported that more than half of their total daily sitting time was accrued in the work setting. Given the high contribution of occupational sitting to total daily sitting time among desk-based employees, interventions should focus on the work setting. © 2014 Public Health Association of Australia.

  20. Total variation regularization for a backward time-fractional diffusion problem

    International Nuclear Information System (INIS)

    Wang, Liyan; Liu, Jijun

    2013-01-01

    Consider a two-dimensional backward problem for a time-fractional diffusion process, which can be considered as image de-blurring where the blurring process is assumed to be slow diffusion. In order to avoid the over-smoothing effect for object images with edges and to construct a fast reconstruction scheme, the total variation regularizing term and the data residual error in the frequency domain are coupled to construct the cost functional. The well-posedness of this optimization problem is studied. The minimizer is sought approximately using the iteration process for a series of optimization problems with Bregman distance as a penalty term. This iteration reconstruction scheme is essentially a new regularizing scheme with the coupling parameter in the cost functional and the iteration stopping time as two regularizing parameters. We give the choice strategy for the regularizing parameters in terms of the noise level of the measurement data, which yields the optimal error estimate on the iterative solution. The series of optimization problems is solved by alternating iteration with explicit exact solutions, and therefore the amount of computation is greatly reduced. Numerical implementations are given to support our theoretical analysis on the convergence rate and to show the significant reconstruction improvements. (paper)
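In generic notation (the symbols here are assumptions for illustration, not the paper's: u the restored image, f the blurred data, A the forward slow-diffusion operator, a hat for the Fourier transform and λ the coupling parameter), a cost functional of the kind described, coupling the frequency-domain data residual with a total variation penalty, takes the form

```latex
\min_{u}\; J(u) \;=\; \frac{\lambda}{2}\,\bigl\| \widehat{A u} - \widehat{f} \bigr\|_{2}^{2}
\;+\; \int_{\Omega} \lvert \nabla u \rvert \, \mathrm{d}x ,
```

with λ and the iteration stopping time acting as the two regularizing parameters, chosen according to the noise level of the data.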

  1. Finite difference numerical method for the superlattice Boltzmann transport equation and case comparison of CPU(C) and GPU(CUDA) implementations

    International Nuclear Information System (INIS)

    Priimak, Dmitri

    2014-01-01

    We present a finite difference numerical algorithm for solving two dimensional spatially homogeneous Boltzmann transport equation which describes electron transport in a semiconductor superlattice subject to crossed time dependent electric and constant magnetic fields. The algorithm is implemented both in C language targeted to CPU and in CUDA C language targeted to commodity NVidia GPU. We compare performances and merits of one implementation versus another and discuss various software optimisation techniques

  2. Finite difference numerical method for the superlattice Boltzmann transport equation and case comparison of CPU(C) and GPU(CUDA) implementations

    Energy Technology Data Exchange (ETDEWEB)

    Priimak, Dmitri

    2014-12-01

    We present a finite difference numerical algorithm for solving two dimensional spatially homogeneous Boltzmann transport equation which describes electron transport in a semiconductor superlattice subject to crossed time dependent electric and constant magnetic fields. The algorithm is implemented both in C language targeted to CPU and in CUDA C language targeted to commodity NVidia GPU. We compare performances and merits of one implementation versus another and discuss various software optimisation techniques.

  3. A PC based multi-CPU severe accident simulation trainer

    International Nuclear Information System (INIS)

    Jankowski, M.W.; Bienarz, P.P.; Sartmadjiev, A.D.

    2004-01-01

    MELSIM Severe Accident Simulation Trainer is a personal computer based system being developed by the International Atomic Energy Agency and Risk Management Associates, Inc. for the purpose of training the operators of nuclear power stations. It also serves for evaluating accident management strategies and for assessing complex interfaces between emergency operating procedures and accident management guidelines. The system is being developed for the Soviet-designed WWER-440/Model 213 reactor and it is plant specific. The Bohunice V2 power station in the Slovak Republic has been selected for trial operation of the system. The trainer utilizes several CPUs working simultaneously on different areas of the simulation. Detailed plant operation displays are provided on colour monitor mimic screens which show changing plant conditions in approximate real-time. Up to 28 000 curves can be plotted on a separate monitor as the MELSIM program proceeds. These plots proceed concurrently with the program, and time-specific segments can be recalled for review. A benchmarking (limited in scope) against well-validated thermal-hydraulic codes and selected plant accident data (WWER-440/213 Rovno NPP, Ukraine) has been initiated. Preliminary results are presented and discussed. (author)

  4. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  5. High performance technique for database applicationsusing a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency result- ing from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm

  6. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Full Text Available Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI) plays an important role in network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability. Either the central processing unit (CPU) or the graphics processing unit (GPU) was involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.
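The core dispatch idea of a length-bounded hybrid matcher can be sketched as follows. The bound value and names are hypothetical, and the actual LHPMA batches payloads and runs the matching kernels on the GPU rather than in Python; the sketch only shows the length-based routing that keeps short payloads off the GPU, where transfer overhead would dominate.

```python
# Hypothetical length bound separating CPU-matched from GPU-matched payloads.
LENGTH_BOUND = 256

def dispatch(packets):
    """Route each payload to the CPU or GPU matcher by its length."""
    cpu_batch = [p for p in packets if len(p) <= LENGTH_BOUND]  # short: match on CPU
    gpu_batch = [p for p in packets if len(p) > LENGTH_BOUND]   # long: batch for GPU
    return cpu_batch, gpu_batch

cpu_batch, gpu_batch = dispatch([b"a" * 100, b"b" * 300, b"c" * 256])
```

Tuning the bound trades CPU load against GPU batching efficiency, which is the cooperation the abstract describes.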

  7. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and actual performances of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box, to test and compare several machines in terms of CPU performance and report the different benchmarking scores (e.g. by processing step) at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.
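The idea of reporting scores broken down by processing step can be sketched with a generic timing harness. This is a hypothetical stand-in, not the CMSSW suite's API; step names and the averaging policy are made up.

```python
import time

def benchmark(steps, repeats=3):
    """Report per-step wall-clock scores, averaged over a few repeats,
    the way a suite can break results down by processing step."""
    scores = {}
    for name, fn in steps.items():
        t0 = time.perf_counter()
        for _ in range(repeats):
            fn()
        scores[name] = (time.perf_counter() - t0) / repeats
    return scores
```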

  8. DSM vs. NSM: CPU Performance Tradeoffs in Block-Oriented Query Processing

    NARCIS (Netherlands)

    M. Zukowski (Marcin); N.J. Nes (Niels); P.A. Boncz (Peter)

    2008-01-01

    Comparisons between the merits of row-wise storage (NSM) and columnar storage (DSM) are typically made with respect to the persistent storage layer of database systems. In this paper, however, we focus on the CPU efficiency tradeoffs of tuple representations inside the query
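The tradeoff can be made concrete with a toy representation of the same three tuples in both layouts (illustrative only): NSM keeps each tuple together, DSM keeps each attribute together, so a scan over a single column walks one tight array in the DSM case instead of touching every tuple.

```python
# NSM (row-wise): each tuple stored contiguously.
nsm = [(1, "a", 10.0), (2, "b", 20.0), (3, "c", 30.0)]

# DSM (columnar): each attribute stored contiguously.
dsm = {"id": [1, 2, 3], "tag": ["a", "b", "c"], "val": [10.0, 20.0, 30.0]}

def sum_val_nsm(rows):
    return sum(r[2] for r in rows)   # touches every tuple

def sum_val_dsm(cols):
    return sum(cols["val"])          # scans one tight array
```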

  9. Determination of time-dependent uncertainty of the total solar irradiance records from 1978 to present

    Directory of Open Access Journals (Sweden)

    Fröhlich Claus

    2016-01-01

    Full Text Available Aims. The existing records of total solar irradiance (TSI) since 1978 differ not only in absolute values, but also show different trends. For the study of TSI variability these records need to be combined and three composites have been devised; however, the results depend on the choice of the records and the way they are combined. A new composite should be based on all existing records with an individual qualification. It is proposed to use a time-dependent uncertainty for weighting of the individual records. Methods. The determination of the time-dependent deviation of the TSI records is performed by comparison with the square root of the sunspot number (SSN). However, this correlation is only valid for timescales of the order of a year or more because TSI and SSN react quite differently to solar activity changes on shorter timescales. Hence the results concern only periods longer than the one-year low-pass filter used in the analysis. Results. Besides the main objective to determine an investigator-independent uncertainty, the comparison of TSI with √SSN turns out to be a powerful tool for the study of the TSI long-term changes. The correlation of √SSN with TSI replicates very well the TSI minima, especially the very low value of the recent minimum. The results of the uncertainty determination confirm not only the need for adequate corrections for degradation, but also show that a rather detailed analysis is needed. The daily average of all TSI values available on that day, weighted with the correspondingly determined uncertainty, is used to construct a “new” composite, which, overall, compares well with the Physikalisch-Meteorologisches Observatorium Davos (PMOD) composite. Finally, the TSI − √SSN comparison proves to be an important diagnostic tool not only for estimating uncertainties of observations, but also for a better understanding of the long-term variability of TSI.
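The weighting step described above amounts to an inverse-variance weighted daily average. A minimal sketch under that assumption (the function name and the numbers used below are illustrative, not the paper's code or data):

```python
def weighted_daily_mean(values, sigmas):
    """Inverse-variance weighted average of the TSI values available on
    one day, with weights 1/sigma^2 from each record's uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)
```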

  10. Enhanced responses to tumor immunization following total body irradiation are time-dependent.

    Directory of Open Access Journals (Sweden)

    Adi Diab

    Full Text Available The development of successful cancer vaccines is contingent on the ability to induce effective and persistent anti-tumor immunity against self-antigens that do not typically elicit immune responses. In this study, we examine the effects of a non-myeloablative dose of total body irradiation on the ability of tumor-naïve mice to respond to DNA vaccines against melanoma. We demonstrate that irradiation followed by lymphocyte infusion results in a dramatic increase in responsiveness to tumor vaccination, with augmentation of T cell responses to tumor antigens and tumor eradication. In irradiated mice, infused CD8(+) T cells expand in an environment that is relatively depleted in regulatory T cells, and this correlates with improved CD8(+) T cell functionality. We also observe an increase in the frequency of dendritic cells displaying an activated phenotype within lymphoid organs in the first 24 hours after irradiation. Intriguingly, both the relative decrease in regulatory T cells and increase in activated dendritic cells correspond with a brief window of augmented responsiveness to immunization. After this 24 hour window, the numbers of dendritic cells decline, as does the ability of mice to respond to immunizations. When immunizations are initiated within the period of augmented dendritic cell activation, mice develop anti-tumor responses that show increased durability as well as magnitude, and this approach leads to improved survival in experiments with mice bearing established tumors as well as in a spontaneous melanoma model. We conclude that irradiation can produce potent immune adjuvant effects independent of its ability to induce tumor ablation, and that the timing of immunization and lymphocyte infusion in the irradiated host are crucial for generating optimal anti-tumor immunity. Clinical strategies using these approaches must therefore optimize such parameters, as the correct timing of infusion and vaccination may mean the difference

  11. Parallel algorithm of real-time infrared image restoration based on total variation theory

    Science.gov (United States)

    Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei

    2015-10-01

    Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. It converts the restoration process to an optimization problem of a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration technology with the TV-L1 model exploits the remote sensing data obtained sufficiently and preserves information at edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can be easily implemented in parallel. Therefore a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality compared to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.
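The variational formulation (an L1 fidelity term plus a TV regularization term) can be illustrated with a tiny 1-D gradient-descent version of a smoothed TV-L1 energy. This is a didactic sketch under assumed parameters, not the paper's 2-D parallel filter; the smoothing constant `eps` and step sizes are arbitrary choices.

```python
import math

def tv_denoise_1d(y, lam=1.0, step=0.01, iters=300, eps=1e-2):
    """Gradient descent on a smoothed TV-L1 energy for a 1-D signal:
    sum sqrt((x_i - y_i)^2 + eps)  +  lam * sum sqrt((x_{i+1} - x_i)^2 + eps)."""
    x = list(y)
    for _ in range(iters):
        g = []
        for i in range(len(x)):
            # gradient of the smoothed L1 fidelity term
            gi = (x[i] - y[i]) / math.sqrt((x[i] - y[i]) ** 2 + eps)
            # gradient of the smoothed TV terms involving x[i]
            if i > 0:
                d = x[i] - x[i - 1]
                gi += lam * d / math.sqrt(d * d + eps)
            if i < len(x) - 1:
                d = x[i] - x[i + 1]
                gi += lam * d / math.sqrt(d * d + eps)
            g.append(gi)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

Running it on an alternating 0/1 signal flattens the oscillation: the TV term pulls neighboring samples together until its (saturating) gradient balances the fidelity term.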

  12. Brake response time before and after total knee arthroplasty: a prospective cohort study

    Directory of Open Access Journals (Sweden)

    Niederseer David

    2010-11-01

    Full Text Available Abstract Background Although the numbers of total knee arthroplasty (TKA) are increasing, there is only a small number of studies investigating driving safety after TKA. The parameter 'Brake Response Time' (BRT) is one of the most important criteria for driving safety and was therefore chosen for investigation. The present study was conducted to test the hypotheses that patients with right- or left-sided TKA show a significant increase in BRT from pre-operative (pre-op, 1 day before surgery) to post-operative (post-op, 2 weeks post surgery), and a significant decrease in BRT from post-op to the follow-up investigation (FU, 8 weeks post surgery). Additionally, it was hypothesized that the BRT of patients after TKA is significantly higher than that of healthy controls. Methods 31 of 70 consecutive patients (mean age 65.7 +/- 10.2 years) receiving TKA were tested for their BRT pre-op, post-op and at FU. BRT was assessed using a custom-made driving simulator. We used normative BRT data from 31 healthy controls for comparison. Results There were no significant increases between pre-op and post-op BRT values for patients who had undergone left- or right-sided TKA. Even the proportion of patients above a BRT threshold of 700 ms was not significantly increased post-op. Controls had a BRT which was significantly better than the BRT of patients with right- or left-sided TKA at all three time points. Conclusion The present study showed a small and insignificant postoperative increase in the BRT of patients who had undergone right- or left-sided TKA. Therefore, we believe it is not justified to impair the patient's quality of social and occupational life post-surgery by imposing restrictions on driving motor vehicles beyond an interval of two weeks after surgery.

  13. Modulation of Total Sleep Time by Transcranial Direct Current Stimulation (tDCS).

    Science.gov (United States)

    Frase, Lukas; Piosczyk, Hannah; Zittel, Sulamith; Jahn, Friederike; Selhausen, Peter; Krone, Lukas; Feige, Bernd; Mainberger, Florian; Maier, Jonathan G; Kuhn, Marion; Klöppel, Stefan; Normann, Claus; Sterr, Annette; Spiegelhalder, Kai; Riemann, Dieter; Nitsche, Michael A; Nissen, Christoph

    2016-09-01

    Arousal and sleep are fundamental physiological processes, and their modulation is of high clinical significance. This study tested the hypothesis that total sleep time (TST) in humans can be modulated by the non-invasive brain stimulation technique transcranial direct current stimulation (tDCS) targeting a 'top-down' cortico-thalamic pathway of sleep-wake regulation. Nineteen healthy participants underwent a within-subject, repeated-measures protocol across five nights in the sleep laboratory with polysomnographic monitoring (adaptation, baseline, three experimental nights). tDCS was delivered via bi-frontal target electrodes and bi-parietal return electrodes before sleep (anodal 'activation', cathodal 'deactivation', and sham stimulation). Bi-frontal anodal stimulation significantly decreased TST, compared with cathodal and sham stimulation. This effect was location specific. Bi-frontal cathodal stimulation did not significantly increase TST, potentially due to ceiling effects in good sleepers. Exploratory resting-state EEG analyses before and after the tDCS protocols were consistent with the notion of increased cortical arousal after anodal stimulation and decreased cortical arousal after cathodal stimulation. The study provides proof-of-concept that TST can be decreased by non-invasive bi-frontal anodal tDCS in healthy humans. Further elucidating the 'top-down' pathway of sleep-wake regulation is expected to increase knowledge on the fundamentals of sleep-wake regulation and to contribute to the development of novel treatments for clinical conditions of disturbed arousal and sleep.

  14. On the Laws of Total Local Times for -Paths and Bridges of Symmetric Lévy Processes

    Directory of Open Access Journals (Sweden)

    Masafumi Hayashi

    2013-01-01

    Full Text Available The joint law of the total local times at two levels for -paths of symmetric Lévy processes is shown to admit an explicit representation in terms of the laws of the squared Bessel processes of dimensions two and zero. The law of the total local time at a single level for bridges is also discussed.

  15. Conserved-peptide upstream open reading frames (CPuORFs) are associated with regulatory genes in angiosperms

    Directory of Open Access Journals (Sweden)

    Richard A Jorgensen

    2012-08-01

    Full Text Available Upstream open reading frames (uORFs) are common in eukaryotic transcripts, but those that encode conserved peptides (CPuORFs) occur in less than 1% of transcripts. The peptides encoded by three plant CPuORF families are known to control translation of the downstream ORF in response to a small signal molecule (sucrose, polyamines and phosphocholine). In flowering plants, transcription factors are statistically over-represented among genes that possess CPuORFs, and in general it appeared that many CPuORF genes also had other regulatory functions, though the significance of this suggestion was uncertain (Hayden and Jorgensen, 2007). Five years later the literature provides much more information on the functions of many CPuORF genes. Here we reassess the functions of 27 known CPuORF gene families and find that 22 of these families play a variety of different regulatory roles, from transcriptional control to protein turnover, and from small signal molecules to signal transduction kinases. Clearly then, there is indeed a strong association of CPuORFs with regulatory genes. In addition, 16 of these families play key roles in a variety of different biological processes. Most strikingly, the core sucrose response network includes three different CPuORFs, creating the potential for sophisticated balancing of the network in response to three different molecular inputs. We propose that the function of most CPuORFs is to modulate translation of a downstream major ORF (mORF) in response to a signal molecule recognized by the conserved peptide, and that because the mORFs of CPuORF genes generally encode regulatory proteins, many of them centrally important in the biology of plants, CPuORFs play key roles in balancing such regulatory networks.

  16. The relationship among CPU utilization, temperature, and thermal power for waste heat utilization

    International Nuclear Information System (INIS)

    Haywood, Anna M.; Sherbeck, Jon; Phelan, Patrick; Varsamopoulos, Georgios; Gupta, Sandeep K.S.

    2015-01-01

    Highlights: • This work graphs a triad relationship among CPU utilization, temperature and power. • Using a custom-built cold plate, we were able to capture CPU-generated high quality heat. • The work undertakes a radical approach using mineral oil to directly cool CPUs. • We found that it is possible to use CPU waste energy to power an absorption chiller. - Abstract: This work addresses significant datacenter issues of growth in numbers of computer servers and subsequent electricity expenditure by proposing, analyzing and testing a unique idea of recycling the highest quality waste heat generated by datacenter servers. The aim was to provide a renewable and sustainable energy source for use in cooling the datacenter. The work incorporates novel approaches in waste heat usage, graphing CPU temperature, power and utilization simultaneously, and a mineral oil experimental design and implementation. The work presented investigates and illustrates the quantity and quality of heat that can be captured from a variably tasked liquid-cooled microprocessor on a datacenter server blade. It undertakes a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Results indicate that 123 servers encapsulated in mineral oil can power a 10-ton chiller with a design point of 50.2 kWth. Compared with water-cooling experiments, the mineral oil experiment mitigated the temperature drop between the heat source and discharge line by up to 81%. In addition, due to this reduction in temperature drop, the heat quality in the oil discharge line was up to 12.3 °C higher on average than for water-cooled experiments. Furthermore, mineral oil cooling holds the potential to eliminate the 50% cooling expenditure which initially motivated this project.
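The headline figure implies roughly 0.41 kWth of usable heat per oil-cooled server, and the sizing arithmetic can be written down directly. The function name is illustrative; the 50.2 kWth and 123-server figures come from the abstract.

```python
import math

def servers_needed(design_kw_th, per_server_kw_th):
    """Servers required to drive a chiller of the given thermal design
    point, assuming each server contributes a fixed usable thermal power."""
    return math.ceil(design_kw_th / per_server_kw_th)

# Figures from the abstract: a 50.2 kWth chiller driven by 123 servers,
# i.e. about 0.41 kWth of usable heat per server.
per_server = 50.2 / 123
```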

  17. Surgical time and complications of total transvaginal (total-NOTES, single-port laparoscopic-assisted and conventional ovariohysterectomy in bitches

    Directory of Open Access Journals (Sweden)

    M.A.M. Silva

    2015-06-01

    Full Text Available The recently developed minimally invasive techniques of ovariohysterectomy (OVH) have been studied in dogs in order to optimize their benefits and decrease risks to the patients. The purpose of this study was to compare surgical time, complications and technical difficulties of transvaginal total-NOTES, single-port laparoscopic-assisted and conventional OVH in bitches. Twelve bitches were submitted to total-NOTES (NOTES group), while 13 underwent single-port laparoscopic-assisted (SPLA group) and 15 were submitted to conventional OVH (OPEN group). The intra-operative period was divided into 7 stages: (1) access to abdominal cavity; (2) pneumoperitoneum; approach to the right (3) and left (4) ovarian pedicle and uterine body (5); (6) abdominal or vaginal synthesis, performed in 6 out of 12 patients of NOTES; (7) inoperative time. Overall and per-stage operative times, intra- and postoperative complications and technical difficulties were compared among groups. Mean overall surgical time in the NOTES (25.7±6.8 minutes) and SPLA (23.1±4.0 minutes) groups was shorter than in the OPEN group (34.0±6.4 minutes) (P<0.05). The intraoperative stage that required the longest time was the approach to the uterine body in the NOTES group and abdominal and cutaneous sutures in the OPEN group. There was no difference regarding the rates of complications. Major complications included postoperative bleeding requiring reoperation in a bitch in the OPEN group, while minor complications included mild vaginal discharge in four patients in the NOTES group and seroma in three bitches in the SPLA group. In conclusion, total-NOTES and SPLA OVH were less time-consuming than conventional OVH in bitches. All techniques presented complications, which were properly managed.

  18. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    Science.gov (United States)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times more than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.
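An architecture description of the kind ArchGen consumes (component types, counts, connectivity) can be sketched as a small validated data structure. The field names and checks below are assumptions for illustration, not ArchGen's actual schema.

```python
def validate_arch(spec):
    """Sanity-check an architecture description: unique component names
    and links that only reference declared components."""
    names = {c["name"] for c in spec["components"]}
    assert len(names) == len(spec["components"]), "duplicate component names"
    for a, b in spec["links"]:
        assert a in names and b in names, "link references unknown component"
    return True

# Hypothetical hybrid CPU/FPGA description, loosely modeled on the
# quad ARM A53 + FPGA testbed mentioned in the abstract.
hybrid = {
    "components": [
        {"name": "arm_a53", "kind": "cpu",  "count": 4},
        {"name": "fpga0",   "kind": "fpga", "count": 1},
    ],
    "links": [("arm_a53", "fpga0")],
}
```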

  19. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease.

    Science.gov (United States)

    Shamonin, Denis P; Bron, Esther E; Lelieveldt, Boudewijn P F; Smits, Marion; Klein, Stefan; Staring, Marius

    2013-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.
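A 4-5x acceleration on an 8-core machine is what Amdahl's law predicts when a serial fraction remains. A quick sketch (the 90% parallel fraction below is an assumed figure for illustration, not one reported by the paper):

```python
def amdahl(parallel_fraction, speedup):
    """Overall speedup when only a fraction of the runtime is accelerated
    by the given factor; the serial remainder caps the total gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / speedup)
```

With 90% of the work spread over 8 cores, the overall speedup is about 4.7x, in line with the reported 4-5x.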

  20. Fast Parallel Image Registration on CPU and GPU for Diagnostic Classification of Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Denis P Shamonin

    2014-01-01

    Full Text Available Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e. for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of ~2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.

  1. Joint association of physical activity in leisure and total sitting time with metabolic syndrome amongst 15,235 Danish adults

    DEFF Research Database (Denmark)

    Petersen, Christina Bjørk; Nielsen, Asser Jon; Bauman, Adrian

    2014-01-01

    BACKGROUND: Recent studies suggest that physical inactivity as well as sitting time are associated with metabolic syndrome. Our aim was to examine joint associations of leisure time physical activity and total daily sitting time with metabolic syndrome. METHODS: Leisure time physical activity and total daily sitting time were assessed by self-report in 15,235 men and women in the Danish Health Examination Survey 2007-2008. Associations between leisure time physical activity, total sitting time and metabolic syndrome were investigated in logistic regression analysis. RESULTS: Adjusted odds ratios (OR) for metabolic syndrome were 2.14 (95% CI: 1.88-2.43) amongst participants who were inactive in leisure time compared to the most active, and 1.42 (95% CI: 1.26-1.61) amongst those who sat for ≥10 h/day compared to ...

  2. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2015-01-01

    Full Text Available The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphics card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
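The filtration idea can be sketched as a cheap count-based bound computed on the CPU before any GPU alignment: sequences whose character-frequency vectors are too far apart cannot align well, so they are dropped early. Names, the threshold, and the exact distance are illustrative assumptions, an approximation of FDFS rather than the paper's formula.

```python
from collections import Counter

def frequency_distance(a, b):
    """Half the L1 distance between character-frequency vectors: a cheap
    approximate lower bound on the number of edits between two sequences."""
    ca, cb = Counter(a), Counter(b)
    return sum(abs(ca[ch] - cb[ch]) for ch in set(ca) | set(cb)) // 2

def filter_candidates(query, db, max_fd=3):
    """Run the cheap filter on the CPU; only survivors would be sent to
    the GPU for the full Smith-Waterman pass (threshold is illustrative)."""
    return [s for s in db if frequency_distance(query, s) <= max_fd]
```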

  3. Observed and simulated time evolution of HCl, ClONO2, and HF total columns

    Science.gov (United States)

    Ruhnke, Roland; Geomon, Ndacc Infrared, Modelling Working Group

    2010-05-01

    Total column abundances of HCl and ClONO2, the primary components of the stratospheric inorganic chlorine (Cly) budget, and of HF have been retrieved from ground-based, high-resolution infrared solar absorption spectra recorded at 17 sites of the Network for the Detection of Atmospheric Composition Change (NDACC) located at latitudes between 80.05°N and 77.82°S. These data extend over more than 20 years (through 2007), during a period when the growth in atmospheric halogen loading has slowed in response to the Montreal Protocol (and amendments). These observed time series are interpreted with calculations performed with a 2-D model, the 3-D chemistry-transport models (CTMs) KASIMA and SLIMCAT, and the 3-D chemistry-climate models (CCMs) EMAC and SOCOLv2.0. The observed Cly and in particular HCl column abundances have decreased significantly since the end of the nineties at all stations, which is consistent with the observed changes in the halocarbon source gases, with an increasing rate in the last years. In contrast to Cly, the trend values for total column HF at the different stations show a less consistent behaviour, pointing to the fact that the time development of the HF columns is peaking. There is a good overall qualitative agreement regarding trends between models and data. With respect to the CTMs, the agreement improves if simulation results for measurement days only are used in the trend analysis instead of simulation results for each day.

  4. Storm-time total electron content and its response to penetration electric fields over South America

    Directory of Open Access Journals (Sweden)

    P. M. de Siqueira

    2011-10-01

    Full Text Available In this work the response of the ionosphere due to the severe magnetic storm of 7–10 November 2004 is investigated by analyzing GPS Total Electron Content (TEC) maps constructed for the South America sector. In order to verify the disturbed zonal electric fields in South America during the superstorm, ionospheric vertical drift data obtained from modeling results are used in the analysis. The vertical drifts were inferred from ΔH magnetometer data (Jicamarca-Piura) following the methodology presented by Anderson et al. (2004). Also used were vertical drifts measured by the Jicamarca ISR. Data from a digisonde located at São Luís, Brazil (2.33° S, 44.2° W, dip latitude 0.25°) are presented to complement the Jicamarca equatorial data. Penetration electric fields were observed by the comparison between the equatorial vertical drifts and the Interplanetary Electric Field (IEF). The TEC maps obtained from GPS data reflect the ionospheric response over the South America low-latitude and equatorial region. They reveal unexpected plasma distributions and TEC levels during the main phase of the superstorm on 7 November, which is coincident with the local post-sunset hours. At this time an increase in the pre-reversal enhancement was expected to develop the Equatorial Ionization Anomaly (EIA) but we observed the absence of EIA. The results also reveal well known characteristics of the plasma distributions on 8, 9, and 10 November. The emphasized features are the expansion and intensification of EIA due to prompt penetration electric fields on 9 November and the inhibition of EIA during post-sunset hours on 7, 8, and 10 November. One important result is that the TEC maps provided a bi-dimensional view of the ionospheric changes offering a spatial description of the electrodynamics involved, which is an advantage over TEC measured by isolated GPS receivers.

  5. When is it safe to resume driving after total hip and total knee arthroplasty? a meta-analysis of literature on post-operative brake reaction times.

    Science.gov (United States)

    van der Velden, C A; Tolk, J J; Janssen, R P A; Reijman, M

    2017-05-01

    The aim of this study was to assess the current available evidence about when patients might resume driving after elective, primary total hip (THA) or total knee arthroplasty (TKA) undertaken for osteoarthritis (OA). In February 2016, EMBASE, MEDLINE, Web of Science, Scopus, Cochrane, PubMed Publisher, CINAHL, EBSCO and Google Scholar were searched for clinical studies reporting on 'THA', 'TKA', 'car driving', 'reaction time' and 'brake response time'. Two researchers (CAV and JJT) independently screened the titles and abstracts for eligibility and assessed the risk of bias. Both fixed and random effects were used to pool data and calculate mean differences (MD) and 95% confidence intervals (CI) between pre- and post-operative total brake response time (TBRT). A total of 19 studies were included. The assessment of the risk of bias showed that one study was at high risk, six studies at moderate risk and 12 studies at low risk. Meta-analysis of TBRT showed a MD decrease of 25.54 ms (95% CI -32.02 to 83.09) two weeks after right-sided THA, and of 18.19 ms (95% CI -6.13 to 42.50) four weeks after a right-sided TKA, when compared with the pre-operative value. The TBRT returned to baseline two weeks after a right-sided THA and four weeks after a right-sided TKA. These results may serve as guidelines for orthopaedic surgeons when advising patients when to resume driving. However, the advice should be individualised. Cite this article: Bone Joint J 2017;99-B:566-76. ©2017 The British Editorial Society of Bone & Joint Surgery.

  6. Leveraging the checkpoint-restart technique for optimizing CPU efficiency of ATLAS production applications on opportunistic platforms

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2017-01-01

    Data processing applications of the ATLAS experiment, such as event simulation and reconstruction, spend considerable amount of time in the initialization phase. This phase includes loading a large number of shared libraries, reading detector geometry and condition data from external databases, building a transient representation of the detector geometry and initializing various algorithms and services. In some cases the initialization step can take as long as 10-15 minutes. Such slow initialization, being inherently serial, has a significant negative impact on overall CPU efficiency of the production job, especially when the job is executed on opportunistic, often short-lived, resources such as commercial clouds or volunteer computing. In order to improve this situation, we can take advantage of the fact that ATLAS runs large numbers of production jobs with similar configuration parameters (e.g. jobs within the same production task). This allows us to checkpoint one job at the end of its configuration step a...
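The efficiency penalty from a serial initialization phase is easy to quantify. A back-of-envelope sketch (the job durations are illustrative, not from the record):

```python
# Illustrative estimate of how a serial initialization phase erodes
# CPU efficiency, especially on short-lived opportunistic resources.

def cpu_efficiency(init_min, payload_min):
    """Fraction of wall-clock time spent on useful payload work."""
    return payload_min / (init_min + payload_min)

# A 12-minute init hurts a 1-hour opportunistic slot far more
# than a 12-hour batch job:
short = cpu_efficiency(12, 60)   # ~0.83
long = cpu_efficiency(12, 720)   # ~0.98
print(f"short job: {short:.2f}, long job: {long:.2f}")
```

Checkpointing one configured job and restarting clones from the checkpoint effectively moves the init cost out of every subsequent job.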

  7. The ATLAS LVL2 trigger with FPGA processors: development, construction and proof of function of the hybrid FPGA/CPU-based processor system ATLANTIS

    CERN Document Server

    Singpiel, Holger

    2000-01-01

    This thesis describes the conception and implementation of the hybrid FPGA/CPU based processing system ATLANTIS as a trigger processor for the proposed ATLAS experiment at CERN. CompactPCI provides the close coupling of a multi-FPGA system and a standard CPU. The system is scalable in computing power and flexible in use due to its partitioning into dedicated FPGA boards for computation, I/O tasks and a private communication. The research activities based on the ATLANTIS system focus on two areas in the second level trigger (LVL2). First, the acceleration of time-critical B physics trigger algorithms is the major aim. The execution of the full scan TRT algorithm on ATLANTIS, which has been used as a demonstrator, results in a speedup of 5.6 compared to a standard CPU. Second, the ATLANTIS system is used as a hardware platform for research work in conjunction with the ATLAS readout systems. For further studies a permanent installation of the ATLANTIS system in the LVL2 application testbed is f...

  8. High performance technique for database applicationsusing a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-28

    Many database applications, such as sequence comparison, sequence searching, and sequence matching, process large database sequences. We introduce a novel and efficient technique to improve the performance of database applications by using a hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm. The experimental results show that our hybrid GPU/CPU technique improves the average performance by a factor of 2.2, and improves the peak performance by a factor of 2.8 when compared to earlier implementations. Copyright © 2011 by ASME.
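The record benchmarks against Smith-Waterman local alignment. As a hedged illustration of the computation being accelerated (a minimal pure-Python scoring kernel, not the paper's GPU/CPU implementation):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b
    using the Smith-Waterman dynamic-programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # 12
```

The O(len(a)·len(b)) cell grid is what maps naturally onto GPU threads; the paper's contribution is scheduling short sequences so the GPU is not underutilized.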

  9. Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms

    Directory of Open Access Journals (Sweden)

    Valeria Cardellini

    2014-01-01

    Full Text Available We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.
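The sparse matrix-vector product being benchmarked is, at its core, a traversal of a compressed-row storage structure. A minimal sketch in plain Python (illustrative, not PSBLAS or the GPU kernels the record describes):

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a sparse matrix A stored in CSR format:
    data holds nonzeros row by row, indices their column numbers,
    and indptr[r]:indptr[r+1] delimits row r."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 matrix [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
data = [1.0, 2.0, 3.0, 4.0, 5.0]
indices = [0, 2, 1, 0, 2]
indptr = [0, 2, 3, 5]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Each row's loop is independent, which is why this kernel parallelizes well on both multi-core CPUs and GPUs.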

  10. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Directory of Open Access Journals (Sweden)

    Michał Marks

    2012-01-01

    Full Text Available This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: Intel processor with NVIDIA graphics processing unit and AMD processor with AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large scale computation and data processing.

  11. Turbo Charge CPU Utilization in Fork/Join Using the ManagedBlocker

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Fork/Join is a framework for parallelizing calculations using recursive decomposition, also called divide and conquer. These algorithms occasionally end up duplicating work, especially at the beginning of the run. We can reduce wasted CPU cycles by implementing a reserved caching scheme. Before a task starts its calculation, it tries to reserve an entry in the shared map. If it is successful, it immediately begins. If not, it blocks until the other thread has finished its calculation. Unfortunately this might result in a significant number of blocked threads, decreasing CPU utilization. In this talk we will demonstrate this issue and offer a solution in the form of the ManagedBlocker. Combined with the Fork/Join, it can keep parallelism at the desired level.

  12. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility, however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  13. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores

    Directory of Open Access Journals (Sweden)

    Wang Kai

    2011-05-01

    Full Text Available Abstract Background Gene-gene interaction in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) also have hundreds of cores and have been recently used to implement faster scientific software. However, currently there are no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis for binary traits. Findings Here we present a novel software package, GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: 1) the interaction of SNPs within it in parallel, and 2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode run. Conclusions GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
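The partitioning scheme described, fragments of non-overlapping SNPs with within-fragment and between-fragment pair enumeration, can be sketched as follows (illustrative Python, not GENIE's actual code):

```python
def snp_fragments(snps, size):
    """Partition a SNP list into non-overlapping fragments."""
    return [snps[i:i + size] for i in range(0, len(snps), size)]

def interaction_pairs(fragments):
    """Yield every unordered SNP pair exactly once:
    first pairs within each fragment, then pairs between fragments.
    Each group is an independent unit of parallel work."""
    for fi, frag in enumerate(fragments):
        for i in range(len(frag)):                # within-fragment pairs
            for j in range(i + 1, len(frag)):
                yield frag[i], frag[j]
        for other in fragments[fi + 1:]:          # between-fragment pairs
            for a in frag:
                for b in other:
                    yield a, b

frags = snp_fragments(["rs1", "rs2", "rs3", "rs4", "rs5"], size=2)
pairs = list(interaction_pairs(frags))
print(len(pairs))  # C(5, 2) = 10 unordered pairs
```

Because the fragment-vs-fragment blocks are disjoint, they can be dispatched to separate GPU or CPU cores without synchronization.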

  14. A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU

    Science.gov (United States)

    Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha

    2018-03-01

    Since Graphic Processing Unit (GPU) has a strong ability of floating-point computation and memory bandwidth for data parallelism, it has been widely used in the areas of common computing such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of compute unified device architecture (CUDA), which reduces the complexity of compiling program, brings the great opportunities to CFD. There are three different modes for parallel solution of NS equations: parallel solver based on CPU, parallel solver based on GPU and heterogeneous parallel solver based on collaborating CPU and GPU. As we can see, GPUs are relatively rich in compute capacity but poor in memory capacity and the CPUs do the opposite. We need to make full use of the GPUs and CPUs, so a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver’s computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experiment results, which demonstrate that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow, it decreases for turbulent flow, but it still can reach more than 20. What’s more, the speedup increases as the grid size becomes larger.

  15. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application performing the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters, …). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  16. Bulk metal concentrations versus total suspended solids in rivers: Time-invariant & catchment-specific relationships.

    Science.gov (United States)

    Nasrabadi, Touraj; Ruegner, Hermann; Schwientek, Marc; Bennett, Jeremy; Fazel Valipour, Shahin; Grathwohl, Peter

    2018-01-01

    Suspended particles in rivers can act as carriers of potentially bioavailable metal species and are thus an emerging area of interest in river system monitoring. The delineation of bulk metal concentrations in river water into dissolved and particulate components is also important for risk assessment. Linear relationships between bulk metal concentrations in water (CW,tot) and total suspended solids (TSS) in water can be used to easily evaluate dissolved (CW, intercept) and particle-bound metal fluxes (CSUS, slope) in streams (CW,tot = CW + CSUS × TSS). In this study, we apply this principle to catchments in Iran (Haraz) and Germany (Ammer, Goldersbach, and Steinlach) that show differences in geology, geochemistry, land use and hydrological characteristics. For each catchment, particle-bound and dissolved concentrations for a suite of metals in water were calculated based on linear regressions of total suspended solids and total metal concentrations. Results were replicable across sampling campaigns in different years and seasons (between 2013 and 2016) and could be reproduced in a laboratory sedimentation experiment. CSUS values generally showed little variability in different catchments and agree well with soil background values for some metals (e.g. lead and nickel), while other metals (e.g. copper) indicate anthropogenic influences. CW was elevated in the Haraz (Iran) catchment, indicating higher bioavailability and potential human and ecological health concerns (where higher values of CSUS/CW are considered as a risk indicator).
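The delineation described, dissolved concentration as intercept and particle-bound concentration as slope, is an ordinary least-squares fit of CW,tot against TSS. A sketch with synthetic values (the numbers are illustrative, not the study's data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthetic samples generated from CW = 0.5 (dissolved, intercept)
# and CSUS = 0.02 (particle-bound, slope): CW,tot = CW + CSUS * TSS
tss = [10.0, 40.0, 80.0, 150.0]
cw_tot = [0.5 + 0.02 * t for t in tss]
cw, csus = fit_line(tss, cw_tot)
print(round(cw, 3), round(csus, 4))  # ≈ 0.5 0.02
```

With real field samples the points scatter around the line, and the regression separates the dissolved and particle-bound components from bulk measurements alone.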

  17. Total factor productivity (TFP) growth agriculture in pakistan: trends in different time horizons

    International Nuclear Information System (INIS)

    Ali, A.; Mushtaq, K.; Ashfaq, M.

    2008-01-01

    The present study estimated total factor productivity (TFP) growth of agriculture sector of Pakistan for the period 1971-2006 by employing Tornqvist-Theil (T-T) index number methodology. Most of the conventional inputs were used in constructing the input index. The output index includes major crops, minor crops, important fruits and vegetables and four categories of livestock products. The study estimated TFP growth rates for different decades. The results showed that TFP growth rate was lowest during the decade of 70s (0.96 percent) and highest during the last six years of the study period (2.86 percent). The decade of 80s and 90s registered TFP growth rate of 2.24 percent and 2.46 percent, respectively. The results also explained that TFP growth contributed about 33 percent to total agricultural output growth during the decade of 70s and this contribution increased up to 83 percent during the last six years of the study period. The contribution of TFP growth to total agricultural output growth was 53 and 81 percent during the decades of 80s and 90s, respectively. The study observed that macro level government policies, institutional factors and weather conditions are the major key factors that influenced TFP growth. (author)
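The Tornqvist-Theil index aggregates growth in individual outputs (or inputs) using value shares averaged across the two periods as weights. A minimal sketch with made-up quantities and prices (not the study's data):

```python
from math import exp, log

def tornqvist_index(q0, q1, p0, p1):
    """Tornqvist-Theil quantity index Q1/Q0:
    ln(Q1/Q0) = sum_i 0.5*(s_i0 + s_i1) * ln(q_i1/q_i0),
    where s_it are value shares p_it*q_it / total value in period t."""
    v0 = [p * q for p, q in zip(p0, q0)]
    v1 = [p * q for p, q in zip(p1, q1)]
    s0 = [v / sum(v0) for v in v0]
    s1 = [v / sum(v1) for v in v1]
    return exp(sum(0.5 * (a + b) * log(y / x)
                   for a, b, x, y in zip(s0, s1, q0, q1)))

# Two output categories (e.g. crops, livestock) growing 4% and 2%:
growth = tornqvist_index([100, 50], [104, 51], [1.0, 2.0], [1.0, 2.0])
print(round(growth, 4))
```

TFP growth is then the output index divided by an input index built the same way; the ~3% result here lies between the two category growth rates, weighted by value shares.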

  18. Critical Care Admissions following Total Laryngectomy: Is It Time to Change Our Practice?

    Science.gov (United States)

    Walijee, Hussein; Morgan, Alexandria; Gibson, Bethan; Berry, Sandeep; Jaffery, Ali

    2016-01-01

    Critical Care Unit (CCU) beds are a limited resource and in increasing demand. Studies have shown that complex head and neck patients can be safely managed on a ward setting given the appropriate staffing and support. This retrospective case series aims to quantify the CCU care received by patients following total laryngectomy (TL) at a District General Hospital (DGH) and compare patient outcomes in an attempt to inform current practice. Data relating to TL were collected over a 5-year period from 1st January 2010 to 31st December 2015. A total of 22 patients were included. All patients were admitted to CCU postoperatively for an average length of stay of 25.5 hours. 95% of these patients were admitted to CCU for the purpose of close monitoring only, not requiring any active treatment prior to discharge to the ward. 73% of total complications were encountered after the first 24 hours postoperatively at which point patients had been stepped down to ward care. Avoiding the use of CCU beds and instead providing the appropriate level of care on the ward would result in a potential cost saving of approximately £8,000 with no influence on patient morbidity and mortality.

  19. Correlates of occupational, leisure and total sitting time in working adults: results from the Singapore multi-ethnic cohort.

    Science.gov (United States)

    Uijtdewilligen, Léonie; Yin, Jason Dean-Chen; van der Ploeg, Hidde P; Müller-Riemenschneider, Falk

    2017-12-13

    Evidence on the health risks of sitting is accumulating. However, research identifying factors influencing sitting time in adults is limited, especially in Asian populations. This study aimed to identify socio-demographic and lifestyle correlates of occupational, leisure and total sitting time in a sample of Singapore working adults. Data were collected between 2004 and 2010 from participants of the Singapore Multi Ethnic Cohort (MEC). Medical exclusion criteria for cohort participation were cancer, heart disease, stroke, renal failure and serious mental illness. Participants who were not working over the past 12 months and without data on sitting time were excluded from the analyses. Multivariable regression analyses were used to examine cross-sectional associations of self-reported age, gender, ethnicity, marital status, education, smoking, caloric intake and moderate-to-vigorous leisure time physical activity (LTPA) with self-reported occupational, leisure and total sitting time. Correlates were also studied separately for Chinese, Malays and Indians. The final sample comprised 9384 participants (54.8% male): 50.5% were Chinese, 24.0% Malay, and 25.5% Indian. For the total sample, mean occupational sitting time was 2.71 h/day, mean leisure sitting time was 2.77 h/day and mean total sitting time was 5.48 h/day. Sitting time in all domains was highest among Chinese. Age, gender, education, and caloric intake were associated with higher occupational sitting time, while ethnicity, marital status and smoking were associated with lower occupational sitting time. Marital status, smoking, caloric intake and LTPA were associated with higher leisure sitting time, while age, gender and ethnicity were associated with lower leisure sitting time. Gender, marital status, education, caloric intake and LTPA were associated with higher total sitting time, while ethnicity was associated with lower total sitting time. Stratified analyses revealed different associations within

  20. Total donor ischemic time: relationship to early hemodynamics and intensive care morbidity in pediatric cardiac transplant recipients.

    Science.gov (United States)

    Rodrigues, Warren; Carr, Michelle; Ridout, Deborah; Carter, Katherine; Hulme, Sara Louise; Simmonds, Jacob; Elliott, Martin; Hoskote, Aparna; Burch, Michael; Brown, Kate L

    2011-11-01

    Single-center studies have failed to link modest increases in total donor ischemic time to mortality after pediatric orthotopic heart transplant. We aimed to investigate whether prolonged total donor ischemic time is linked to pediatric intensive care morbidity after orthotopic heart transplant. Retrospective cohort review. Tertiary pediatric transplant center in the United Kingdom. Ninety-three pediatric orthotopic heart transplants between 2002 and 2006. Total donor ischemic time was investigated for association with early post-orthotopic heart transplant hemodynamics and intensive care unit morbidities. Of 43 males and 50 females with median age 7.2 (interquartile range 2.2, 13.0) yrs, 62 (68%) had dilated cardiomyopathy, 20 (22%) had congenital heart disease, and nine (10%) had restrictive cardiomyopathy. The mean total donor ischemic time was 225.9 (sd 65.6) mins. In the first 24 hrs after orthotopic heart transplant, age-adjusted mean arterial blood pressure increased (p total donor ischemic time was significantly associated with lower mean arterial blood pressure (p care unit (p = .004), and longer post-orthotopic heart transplant stay in hospital (p = .02). Total donor ischemic time was not related to levels of mean pulmonary arterial pressure (p = .62), left atrial pressure (p = .38), or central venous pressure (p = .76) early after orthotopic heart transplant. Prolonged total donor ischemic time has an adverse effect on the donor organ, contributing to lower mean arterial blood pressure, as well as more prolonged ventilation and intensive care unit and hospital stays post-orthotopic heart transplant, reflecting increased morbidity.

  1. Imageless navigation total hip arthroplasty – an evaluation of operative time

    Directory of Open Access Journals (Sweden)

    Valsamis Epaminondas Markos

    2018-01-01

    Discussion: This is the first study that demonstrates no added operative time when using imageless navigation in THA, achieved with an improved workflow. The results also demonstrate a very reasonable learning curve.

  2. 20 Years of Total and Tropical Ozone Time Series Based on European Satellite Observations

    Science.gov (United States)

    Loyola, D. G.; Heue, K. P.; Coldewey-Egbers, M.

    2016-12-01

    Ozone is an important trace gas in the atmosphere: while the stratospheric ozone layer protects the earth's surface from incident UV radiation, tropospheric ozone acts as a greenhouse gas and causes health damage as well as crop loss. The total ozone column is dominated by the stratospheric column; the tropospheric column contributes only about 10% to the total column. The ozone column data from the European satellite instruments GOME, SCIAMACHY, OMI, GOME-2A and GOME-2B are available within the ESA Climate Change Initiative project with a high degree of inter-sensor consistency. The tropospheric ozone columns are based on the convective cloud differential algorithm. The datasets encompass a period of more than 20 years between 1995 and 2015; for the trend analysis the data sets were harmonized relative to one of the instruments. For the tropics we found an increase in the tropospheric ozone column of 0.75 ± 0.12 DU decade^{-1}, with local variations between 1.8 and -0.8. The largest trends were observed over southern Africa and the Atlantic Ocean. A seasonal trend analysis led to the assumption that the increase is caused by additional forest fires. The trend for the total column was less certain; based on model-predicted trend data and the measurement uncertainty, we estimated that another 10 to 15 years of observations will be required to observe a statistically significant trend. In the mid latitudes the trends are currently hidden in the large variability, and for the tropics the modelled trends are low. Also the possibility of diverging trends at different altitudes must be considered; an increase in tropospheric ozone might be accompanied by decreasing stratospheric ozone. The European satellite data record will be extended over the next two decades with the atmospheric satellite missions Sentinel 5 Precursor (launch end of 2016), Sentinel 4 and Sentinel 5.

  3. Billing the CPU Time Used by System Components on Behalf of VMs

    OpenAIRE

    Djomgwe Teabe , Boris; Tchana , Alain-Bouzaïde; Hagimont , Daniel

    2016-01-01

    International audience; Nowadays, virtualization is present in almost all cloud infrastructures. In virtualized cloud, virtual machines (VMs) are the basis for allocating resources. A VM is launched with a fixed allocated computing capacity that should be strictly provided by the hosting system scheduler. Unfortunately, this allocated capacity is not always respected, due to mechanisms provided by the virtual machine monitoring system (also known as hypervisor). For instance, we observe that ...

  4. Billing the CPU Time Used by System Components on Behalf of VMs

    OpenAIRE

    Djomgwe Teabe, Boris; Tchana, Alain-Bouzaïde; Hagimont, Daniel

    2016-01-01

    Nowadays, virtualization is present in almost all cloud infrastructures. In virtualized cloud, virtual machines (VMs) are the basis for allocating resources. A VM is launched with a fixed allocated computing capacity that should be strictly provided by the hosting system scheduler. Unfortunately, this allocated capacity is not always respected, due to mechanisms provided by the virtual machine monitoring system (also known as hypervisor). For instance, we observe that a significant amount of ...

  5. Total sitting time, leisure time physical activity and risk of hospitalization due to low back pain: The Danish Health Examination Survey cohort 2007-2008.

    Science.gov (United States)

    Balling, Mie; Holmberg, Teresa; Petersen, Christina B; Aadahl, Mette; Meyrowitsch, Dan W; Tolstrup, Janne S

    2018-02-01

    This study aimed to test the hypotheses that a high total sitting time and vigorous physical activity in leisure time increase the risk of low back pain and herniated lumbar disc disease. A total of 76,438 adults answered questions regarding their total sitting time and physical activity during leisure time in the Danish Health Examination Survey 2007-2008. Information on low back pain diagnoses up to 10 September 2015 was obtained from The National Patient Register. The mean follow-up time was 7.4 years. Data were analysed using Cox regression analysis with adjustment for potential confounders. Multiple imputations were performed for missing values. During the follow-up period, 1796 individuals were diagnosed with low back pain, of whom 479 were diagnosed with herniated lumbar disc disease. Total sitting time was not associated with low back pain or herniated lumbar disc disease. However, moderate or vigorous physical activity, as compared to light physical activity, was associated with increased risk of low back pain (HR = 1.16, 95% CI: 1.03-1.30 and HR = 1.45, 95% CI: 1.15-1.83). Moderate, but not vigorous physical activity was associated with increased risk of herniated lumbar disc disease. The results suggest that total sitting time is not associated with low back pain, but moderate and vigorous physical activity is associated with increased risk of low back pain compared with light physical activity.

  6. Assessment of Tandem Measurements of pH and Total Gut Transit Time in Healthy Volunteers

    OpenAIRE

    Mikolajczyk, Adam E; Watson, Sydeaka; Surma, Bonnie L; Rubin, David T

    2015-01-01

    Objectives: The variation of luminal pH and transit time in an individual is unknown, yet is necessary to interpret single measurements. This study aimed to assess the intrasubject variability of gut pH and transit time in healthy volunteers using SmartPill devices (Covidien, Minneapolis, MN). Methods: Each subject (n=10) ingested two SmartPill devices separated by 24 h. Mean pH values were calculated for 30 min after gastric emptying (AGE), before the ileocecal (BIC) valve, after the ileocec...

  7. Determinantal Representation of the Time-Dependent Stationary Correlation Function for the Totally Asymmetric Simple Exclusion Model

    Directory of Open Access Journals (Sweden)

    Nikolay M. Bogoliubov

    2009-04-01

    Full Text Available The basic model of non-equilibrium low-dimensional physics, the so-called totally asymmetric exclusion process, is related to the 'crystalline limit' (q → ∞) of the SU_q(2) quantum algebra. Using the quantum inverse scattering method we obtain the exact expression for the time-dependent stationary correlation function of the totally asymmetric simple exclusion process on a one-dimensional lattice with periodic boundary conditions.

  8. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements. Catalogue identifier: AFBT_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPLv3 No. of lines in distributed program, including test data, etc.: 913552 No. of bytes in distributed program, including test data, etc.: 270876249 Distribution format: tar.gz Programming language: CUDA/C, MATLAB. Computer: Intel x64 CPU, GPU supporting CUDA technology. Operating system: 64-bit Windows 7 Professional. Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized. RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes Classification: 6.5, 18. Nature of problem: Speed-up of data processing in optical coherence microscopy Solution method: Utilization of GPU for massively parallel data processing Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data) Running time: 1.8 s for one B-scan (150 × faster in comparison to the CPU

  9. Total time on test processes and applications to failure data analysis

    International Nuclear Information System (INIS)

    Barlow, R.E.; Campo, R.

    1975-01-01

    This paper describes a new method for analyzing data. The method applies to non-negative observations such as times to failure of devices and survival times of biological organisms and involves a plot of the data. These plots are useful in choosing a probabilistic model to represent the failure behavior of the data. They also furnish information about the failure rate function and aid in its estimation. An important feature of these data plots is that incomplete data can be analyzed. The underlying random variables are, however, assumed to be independent and identically distributed. The plots have a theoretical basis, and converge to a transform of the underlying probability distribution as the sample size increases

  10. Effect of temperature, time, and milling process on yield, flavonoid, and total phenolic content of Zingiber officinale water extract

    Science.gov (United States)

    Andriyani, R.; Kosasih, W.; Ningrum, D. R.; Pudjiraharti, S.

    2017-03-01

    Several parameters, such as temperature, extraction time, and simplicia particle size, play a significant role in medicinal herb extraction. This study investigated the effect of these parameters on extract yield, flavonoid content, and total phenolic content in water extracts of Zingiber officinale. The temperatures used were 50, 70 and 90°C, and the extraction times were 30, 60 and 90 min. Z. officinale in the form of powder and chips was used to study the effect of the milling treatment. The correlations among these variables were analysed using two-way ANOVA without replication. The results showed that time and temperature did not influence the extract yield of powdered simplicia, whereas extraction time did influence the yield of simplicia prepared without milling. Flavonoid and total phenolic content were not influenced by temperature, time, or milling treatment.

  11. Time- and dose-dependent effects of total-body ionizing radiation on muscle stem cells

    Science.gov (United States)

    Masuda, Shinya; Hisamatsu, Tsubasa; Seko, Daiki; Urata, Yoshishige; Goto, Shinji; Li, Tao-Sheng; Ono, Yusuke

    2015-01-01

    Exposure to high levels of genotoxic stress, such as high-dose ionizing radiation, increases both cancer and noncancer risks. However, it remains debatable whether low-dose ionizing radiation reduces cellular function, or rather induces hormetic health benefits. Here, we investigated the effects of total-body γ-ray radiation on muscle stem cells, called satellite cells. Adult C57BL/6 mice were exposed to γ-radiation at low- to high-dose rates (low, 2 or 10 mGy/day; moderate, 50 mGy/day; high, 250 mGy/day) for 30 days. No hormetic responses in proliferation, differentiation, or self-renewal of satellite cells were observed in low-dose radiation-exposed mice at the acute phase. However, at the chronic phase, population expansion of satellite cell-derived progeny was slightly decreased in mice exposed to low-dose radiation. Taken together, low-dose ionizing irradiation may suppress satellite cell function, rather than induce hormetic health benefits, in skeletal muscle in adult mice. PMID:25869487

  12. Fall Risk Score at the Time of Discharge Predicts Readmission Following Total Joint Arthroplasty.

    Science.gov (United States)

    Ravi, Bheeshma; Nan, Zhang; Schwartz, Adam J; Clarke, Henry D

    2017-07-01

    Readmission among Medicare recipients is a leading driver of healthcare expenditure. To date, most predictive tools are too coarse for direct clinical application. Our objective in this study was to determine whether a pre-existing tool for identifying patients at increased risk of inpatient falls, the Hendrich Fall Risk Score, could be used to accurately identify Medicare patients at increased risk of readmission following arthroplasty, regardless of whether the readmission was due to a fall. This is a retrospective cohort study. We identified 2437 Medicare patients who underwent a primary elective total joint arthroplasty (TJA) of the hip or knee for osteoarthritis between 2011 and 2014. The Hendrich Fall Risk Score was recorded for each patient preoperatively and postoperatively. Our main outcome measure was hospital readmission within 30 days of discharge. Of 2437 eligible TJA recipients, 226 (9.3%) patients had a score ≥6. These patients were more likely to have an unplanned readmission (unadjusted odds ratio 2.84, 95% confidence interval 1.70-4.76, P < .0001), were more likely to have a length of stay >3 days (49.6% vs 36.6%, P = .0001), and were less likely to be sent home after discharge (20.8% vs 35.8%, P < .0001). An elevated fall risk score after TJA is strongly associated with unplanned readmission. Application of this tool will allow hospitals to identify these patients and plan their discharge. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Saddlepoint approximation to the distribution of the total distance of the continuous time random walk

    Science.gov (United States)

    Gatto, Riccardo

    2017-12-01

    This article considers the random walk over R^p, with p ≥ 2, where a given particle starts at the origin and moves stepwise with uniformly distributed step directions and step lengths following a common distribution. Step directions and step lengths are independent. The case where the number of steps of the particle is fixed and the more general case where it follows an independent continuous time inhomogeneous counting process are considered. Saddlepoint approximations to the distribution of the distance from the position of the particle to the origin are provided. Despite the p-dimensional nature of the random walk, the computations of the saddlepoint approximations are one-dimensional and thus simple. Explicit formulae are derived with dimension p = 3: for uniformly and exponentially distributed step lengths, for fixed and for Poisson distributed number of steps. In these situations, the high accuracy of the saddlepoint approximations is illustrated by numerical comparisons with Monte Carlo simulation. Contribution to the "Topical Issue: Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
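
    The Monte Carlo comparison mentioned in the abstract is straightforward to reproduce. Below is a hedged Python sketch (not the article's code) of the p = 3 setting with exponentially distributed unit-mean step lengths and a fixed number of steps, estimating the distribution of the particle's distance from the origin:

```python
import math
import random

def random_walk_distance(n_steps, rng):
    """Distance from the origin after n_steps of a 3D random walk with
    uniformly distributed step directions and exponential step lengths."""
    x = y = z = 0.0
    for _ in range(n_steps):
        # Uniform direction on the unit sphere via normalized Gaussians
        gx, gy, gz = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        step = rng.expovariate(1.0)  # exponentially distributed step length
        x += step * gx / norm
        y += step * gy / norm
        z += step * gz / norm
    return math.sqrt(x * x + y * y + z * z)

rng = random.Random(0)
distances = [random_walk_distance(10, rng) for _ in range(20000)]
mean_dist = sum(distances) / len(distances)
print(f"mean distance after 10 steps: {mean_dist:.3f}")
```

Such a simulated sample is exactly what a saddlepoint approximation to the distance distribution would be checked against.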

  14. A closed-form solution to predict the total melting time of an ablating slab in contact with a plasma

    International Nuclear Information System (INIS)

    Yeh, F.-B.

    2007-01-01

    An exact melt-through time is derived for a one-dimensional heated slab in contact with a plasma when the melted material is immediately removed. The plasma is composed of a collisionless presheath and sheath on a slab, which partially reflects and secondarily emits ions and electrons. The energy transport from plasma to the surface accounting for the presheath and sheath is determined from the kinetic analysis. This work proposes a semi-analytical model to calculate the total melting time of a slab based on a direct integration of the unsteady heat conduction equation, and provides quantitative results applicable to controlling the total melting time of the slab. The total melting time as a function of plasma parameters and thermophysical properties of the slab is obtained. The predicted energy transmission factor as a function of dimensionless wall potential agrees well with the experimental data. The effects of reflectivities of the ions and electrons on the wall, electron-to-ion source temperature ratio at the presheath edge, charge number, ion-to-electron mass ratio, ionization energy, plasma flow work-to-heat conduction ratios, Stefan number, melting temperature, Biot number and bias voltage on the total melting time of the slab are quantitatively provided in this work

  15. Does the brake response time of the right leg change after left total knee arthroplasty? A prospective study.

    Science.gov (United States)

    Marques, Carlos J; Barreiros, João; Cabri, Jan; Carita, Ana I; Friesecke, Christian; Loehr, Jochen F

    2008-08-01

    Patients undergoing total knee arthroplasty often ask when they can safely resume car driving. There is little evidence available on which physicians can rely when advising patients on this issue. In a prospective study we assessed the brake response time of 24 patients admitted to the clinic for left total knee arthroplasty preoperatively and then 10 days after surgery. On each measurement day the patients performed two tasks, a simple and a complex brake response time task in a car simulator. Ten days after left TKA the brake response time for the simple task had decreased by 3.6% (p=0.24), the reaction time by 3.1% (p=0.34) and the movement time by 6.6% (p=0.07). However, the performance improvement was not statistically significant. Task complexity increased brake response time at both time points. A 5.8% increase was significant (p=0.01) at 10 days after surgery. Based on our results, we suggest that patients who have undergone left total knee arthroplasty may resume car driving 10 days after surgery as long as they drive a car with automatic transmission.

  16. Productive Large Scale Personal Computing: Fast Multipole Methods on GPU/CPU Systems, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — To be used naturally in design optimization, parametric study and achieve quick total time-to-solution, simulation must naturally and personally be available to the...

  17. VMware vSphere performance designing CPU, memory, storage, and networking for performance-intensive workloads

    CERN Document Server

    Liebowitz, Matt; Spies, Rynardt

    2014-01-01

    Covering the latest VMware vSphere software, this essential book is aimed at solving vSphere performance problems before they happen. VMware vSphere is the industry's most widely deployed virtualization solution. However, if you improperly deploy vSphere, performance problems occur. Aimed at VMware administrators and engineers and written by a team of VMware experts, this resource provides guidance on common CPU, memory, storage, and network-related problems. Plus, step-by-step instructions walk you through techniques for solving problems and shed light on possible causes behind the problems.

  18. Simulation of small-angle scattering patterns using a CPU-efficient algorithm

    Science.gov (United States)

    Anitas, E. M.

    2017-12-01

    Small-angle scattering (of neutrons, X-rays or light; SAS) is a well-established experimental technique for the structural analysis of disordered systems at the nano and micro scales. For complex systems, such as supramolecular assemblies or protein molecules, analytic solutions for the SAS intensity are generally not available. Thus, a frequent approach to simulating the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in the Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and of pentaflakes, obtained from the chaos game representation.
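
    The Debye formula referred to above is standard: for N identical point scatterers the orientation-averaged intensity is I(q) = Σ_ij sin(q r_ij)/(q r_ij). A minimal pure-Python sketch (not the DALAI algorithm; unit form factors are a simplifying assumption) makes the naive O(N²) cost that CPU-efficient variants try to avoid explicit:

```python
import math

def debye_intensity(q, points):
    """Debye formula I(q) = sum_ij sinc(q * r_ij) for identical point
    scatterers at the given 3D coordinates (form factors set to 1)."""
    n = len(points)
    total = 0.0
    for i in range(n):          # naive O(N^2) double loop
        for j in range(n):
            r = math.dist(points[i], points[j])
            x = q * r
            total += 1.0 if x == 0.0 else math.sin(x) / x
    return total

# Toy arrangement: four scatterers at the corners of a unit square
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
print(debye_intensity(0.0, pts))  # forward scattering -> N^2 = 16
```

At q = 0 every sinc term equals 1, so the forward intensity is N², a useful sanity check for any faster implementation.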

  19. [Total quality management in times of crisis. The case of Argentina].

    Science.gov (United States)

    Larroca, Norberto

    2003-01-01

    Healthcare organizations were faced with so great a challenge following the financial slump that they were forced to 'sharpen their wits' in order to survive. Integrated Quality Management (in Spanish, GIC) proved the ideal instrument. GIC rests on four foundational elements, the application of which allows for successful management of crisis situations: training of human resources, evaluation of healthcare institutions, self-evaluation by institutions, and quality accreditation of institutions. All our organizations have the appropriate tools to carry out these activities, which form the basis of our project: CAES (Argentinean Chamber of Healthcare Institutions) for training, CIDCAM (Inter-institutional Committee for Quality Development in Medical Care) for evaluation and self-evaluation, and CENAS (Specialist Centre for Standardization and Accreditation in Health Care) for accreditation. In times of crisis we play an active part: instead of withdrawing our efforts, we do our best to achieve the most adequate objective in order to meet the needs of the population through Integrated Quality Management. Eventually, when the results are examined, medical care that meets the best quality standards is found to be, after all, the most economical (best results, greater satisfaction of healthcare users and providers, and fewer mistakes).

  20. Total testosterone levels are often more than three times elevated in patients with androgen-secreting tumours

    DEFF Research Database (Denmark)

    Glintborg, Dorte; Lambaa Altinok, Magda; Petersen, Kresten Rubeck

    2015-01-01

    surgery. Terminal hair growth on lip and chin gradually increases after menopause, which complicates distinction from normal physiological variation. Precise testosterone assays have just recently become available in the daily clinic. We present three women diagnosed with testosterone-producing tumours...... when total testosterone levels are above three times the upper reference limit....

  1. Independent and combined associations of total sedentary time and television viewing time with food intake patterns of 9- to 11-year-old Canadian children.

    Science.gov (United States)

    Borghese, Michael M; Tremblay, Mark S; Leduc, Genevieve; Boyer, Charles; Bélanger, Priscilla; LeBlanc, Allana G; Francis, Claire; Chaput, Jean-Philippe

    2014-08-01

    The relationships among sedentary time, television viewing time, and dietary patterns in children are not fully understood. The aim of this paper was to determine which of self-reported television viewing time or objectively measured sedentary time is a better correlate of the frequency of consumption of healthy and unhealthy foods. A cross-sectional study was conducted of 9- to 11-year-old children (n = 523; 57.1% female) from Ottawa, Ontario, Canada. Accelerometers were used to determine total sedentary time, and questionnaires were used to determine the number of hours of television watching and the frequency of consumption of foods per week. Television viewing was negatively associated with the frequency of consumption of fruits, vegetables, and green vegetables, and positively associated with the frequency of consumption of sweets, soft drinks, diet soft drinks, pastries, potato chips, French fries, fruit juices, ice cream, fried foods, and fast food. Except for diet soft drinks and fruit juices, these associations were independent of covariates, including sedentary time. Total sedentary time was negatively associated with the frequency of consumption of sports drinks, independent of covariates, including television viewing. In combined sedentary time and television viewing analyses, children watching >2 h of television per day consumed several unhealthy food items more frequently than did children watching ≤2 h of television, regardless of sedentary time. In conclusion, this paper provides evidence to suggest that television viewing time is more strongly associated with unhealthy dietary patterns than is total sedentary time. Future research should focus on reducing television viewing time, as a means of improving dietary patterns and potentially reducing childhood obesity.

  2. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, computational efficiency remains the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). We then design an imaging-point parallel strategy to achieve optimal parallel computing performance, and adopt an asynchronous double-buffering scheme for multi-stream GPU/CPU parallel computing. Moreover, several key optimization strategies for computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.
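
    The asynchronous double-buffering idea can be illustrated outside CUDA. The following Python sketch (threads and a bounded queue standing in for streams and device buffers; all names are hypothetical, not the paper's code) overlaps "transfer" of the next data chunk with "compute" on the current one:

```python
import threading
import queue

def double_buffered_pipeline(chunks, transfer, compute):
    """Overlap host-to-device 'transfer' and 'compute' using two buffers,
    mimicking an asynchronous double-buffering scheme with CUDA streams
    (plain Python threads here, purely for illustration)."""
    ready = queue.Queue(maxsize=2)  # at most two buffers in flight
    results = []

    def producer():
        for chunk in chunks:
            ready.put(transfer(chunk))   # stage the next chunk into a buffer
        ready.put(None)                  # sentinel: no more chunks

    t = threading.Thread(target=producer)
    t.start()
    while True:
        buf = ready.get()
        if buf is None:
            break
        results.append(compute(buf))     # process while the next transfer runs
    t.join()
    return results

out = double_buffered_pipeline(
    range(4),
    transfer=lambda c: [c] * 3,          # stand-in for a memcpy
    compute=lambda buf: sum(buf),        # stand-in for a kernel launch
)
print(out)  # [0, 3, 6, 9]
```

The bounded queue is the point: it caps memory at two buffers while keeping the "device" busy whenever a staged chunk is available.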

  3. Influence of different maceration time and temperatures on total phenols, colour and sensory properties of Cabernet Sauvignon wines.

    Science.gov (United States)

    Şener, Hasan; Yildirim, Hatice Kalkan

    2013-12-01

    Maceration and fermentation time and temperature are important factors affecting wine quality. In this study, different maceration times (3 and 6 days) and temperatures (15 °C and 25 °C) during the production of red wine (Vitis vinifera L. Cabernet Sauvignon) were investigated. In all wines, standard chemical parameters and some specific parameters such as total phenols, tartaric esters, total flavonols and colour parameters (CD, CI, T, dA%, %Y, %R, %B, CIELAB values) were determined. Sensory evaluation was performed by descriptive sensory analysis. The results demonstrated not only the importance of skin contact time and temperature during maceration but also the effects of transition temperatures (different maceration and fermentation temperatures) on wine quality as a whole. The results of the sensory descriptive analyses revealed that temperature significantly affected the aroma and flavour attributes of the wines. The highest scores for 'cassis', 'clove', 'fresh fruity' and 'rose' characters were obtained in wines produced with low-temperature (15 °C) maceration (6 days) and fermentation.

  4. Brake response time is significantly impaired after total knee arthroplasty: investigation of performing an emergency stop while driving a car.

    Science.gov (United States)

    Jordan, Maurice; Hofmann, Ulf-Krister; Rondak, Ina; Götze, Marco; Kluba, Torsten; Ipach, Ingmar

    2015-09-01

    The objective of this study was to investigate whether total knee arthroplasty (TKA) impairs the ability to perform an emergency stop. An automatic transmission brake simulator was developed to evaluate total brake response time. A prospective repeated-measures design was used. Forty patients (20 left/20 right) were measured 8 days and 6, 12, and 52 weeks after surgery. Eight days postoperatively, total brake response time had increased significantly, by 30%, after right TKA and nonsignificantly, by 2%, after left TKA. Brake force decreased significantly during this period, by 35% after right TKA and by 25% after left TKA. Baseline values were reached at week 12 after right TKA; the impairment of the outcome measures, however, was no longer significant at week 6 compared with preoperative values. Total brake response time and brake force after left TKA fell below baseline values at weeks 6 and 12. Brake force was the only outcome measure significantly impaired 8 days after left TKA. This study highlights that no categorical recommendation can be given. Its findings on driving with an automatic transmission suggest that right TKA patients may resume driving 6 weeks postoperatively. Fitness to drive after left TKA is not fully recovered 8 days postoperatively. If testing is not available, patients should refrain from driving until they return from rehabilitation.

  5. The “Chimera”: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform

    Directory of Open Access Journals (Sweden)

    Ra Inta

    2012-01-01

    The nature of modern astronomy means that a number of interesting problems exhibit a substantial computational bound, and this situation is gradually worsening. Scientists, fighting for valuable resources on conventional high-performance computing (HPC) facilities (often with a limited customizable user environment), are increasingly looking to hardware acceleration solutions. We describe here a heterogeneous CPU/GPGPU/FPGA desktop computing system (the “Chimera”), built with commercial off-the-shelf components. We show that this platform may be a viable alternative solution to many common computationally bound problems found in astronomy, though not without significant challenges. The most significant bottleneck in pipelines involving real data is most likely to be the interconnect (in this case the PCI Express bus residing on the CPU motherboard). Finally, we speculate on the merits of our Chimera system within the entire landscape of parallel computing, through the analysis of representative problems from UC Berkeley's “Thirteen Dwarves.”

  6. Associations of Total and Domain-Specific Sedentary Time With Type 2 Diabetes in Taiwanese Older Adults

    Directory of Open Access Journals (Sweden)

    Ming-Chun Hsueh

    2016-07-01

    Background: The increasing prevalence of type 2 diabetes in older adults has become a public health concern. We investigated the associations of total and domain-specific sedentary time with the risk of type 2 diabetes in older adults. Methods: The sample comprised 1046 older people (aged ≥65 years). Analyses were performed using cross-sectional data collected via computer-assisted telephone interviews in 2014. Data on six self-reported domains of sedentary time (Measure of Older Adults' Sedentary Time), type 2 diabetes status, and sociodemographic variables were included in the study. Binary logistic regression analysis was performed to calculate adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for total and individual sedentary behavior components and the likelihood of type 2 diabetes. Results: A total of 17.5% of the participants reported type 2 diabetes. No significant associations were found between total sitting time and risk of type 2 diabetes after controlling for confounding factors. After total sedentary behavior was stratified into six domains, only watching television for more than 2 hours per day was associated with higher odds of type 2 diabetes (OR 1.56; 95% CI, 1.10-2.21); no significant associations were found between the other domains of sedentary behavior (computer use, reading, socializing, transport, and hobbies) and risk of type 2 diabetes. Conclusions: These findings suggest that, among domain-specific sedentary behaviors, excessive television viewing might increase the risk of type 2 diabetes among older adults more than other forms of sedentary behavior.
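
    As a side note on the statistics reported here, an unadjusted odds ratio with a Wald 95% confidence interval can be computed directly from a 2×2 exposure-outcome table. A sketch with hypothetical counts (not this study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed with/without outcome, c/d = unexposed with/without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only
print(odds_ratio_ci(30, 70, 40, 260))
```

An adjusted OR, as reported in the abstract, would instead come from a fitted logistic regression model with the confounders as covariates.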

  7. Lot-Order Assignment Applying Priority Rules for the Single-Machine Total Tardiness Scheduling with Nonnegative Time-Dependent Processing Times

    Directory of Open Access Journals (Sweden)

    Jae-Gon Kim

    2015-01-01

    Lot-order assignment assigns items in lots being processed to orders in order to fulfill those orders. It is usually performed periodically to meet the due dates of orders, especially in manufacturing industries with long production cycle times such as semiconductor manufacturing. In this paper, we consider the lot-order assignment problem (LOAP) with the objective of minimizing the total tardiness of orders with distinct due dates. We show that the LOAP can be solved optimally by finding an optimal sequence for the single-machine total tardiness scheduling problem with nonnegative time-dependent processing times (SMTTSP-NNTDPT). We also address how the priority rules for the SMTTSP can be modified into rules for the SMTTSP-NNTDPT to solve the LOAP. In computational experiments, we discuss the performance of the suggested priority rules and show that the proposed approach outperforms a commercial optimization software package.
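
    A minimal sketch of the objective being minimized: total tardiness of a single-machine sequence whose processing times depend on the job's start time, here evaluated under the classical earliest-due-date (EDD) priority rule. The instance is made up for illustration and is not from the paper:

```python
def total_tardiness(sequence, proc_time, due):
    """Total tardiness of a single-machine sequence where each job's
    processing time may depend on its start time (time-dependent case)."""
    t, tardiness = 0.0, 0.0
    for j in sequence:
        t += proc_time(j, t)              # completion time of job j
        tardiness += max(0.0, t - due[j])
    return tardiness

# Hypothetical instance: 3 jobs with linearly deteriorating processing times
due = {0: 4.0, 1: 2.0, 2: 6.0}
base = {0: 2.0, 1: 1.0, 2: 3.0}
proc = lambda j, t: base[j] + 0.1 * t     # p_j(t) = a_j + b * t, nonnegative

edd = sorted(due, key=due.get)            # earliest-due-date priority rule
print(edd, total_tardiness(edd, proc, due))
```

EDD is only a heuristic for total tardiness; the paper's contribution is adapting such rules so they remain sensible when processing times grow with start time.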

  8. Terahertz time-domain attenuated total reflection spectroscopy applied to the rapid discrimination of the botanical origin of honeys

    Science.gov (United States)

    Liu, Wen; Zhang, Yuying; Yang, Si; Han, Donghai

    2018-05-01

    A new technique to identify the floral origin of honeys is needed. Terahertz time-domain attenuated total reflection spectroscopy combined with chemometric methods was applied to discriminate different categories (Medlar honey, Vitex honey, and Acacia honey). Principal component analysis (PCA), cluster analysis (CA) and partial least squares-discriminant analysis (PLS-DA) were used to extract information on the botanical origins of the honeys. The spectral range was also varied to increase the precision of the PLS-DA model; an accuracy of 88.46% on the validation set was obtained using the PLS-DA model in the 0.5-1.5 THz range. This work indicates that terahertz time-domain attenuated total reflection spectroscopy is a viable approach for rapidly evaluating the quality of honey.

  9. Total and segmental colon transit time in constipated children assessed by scintigraphy with 111In-DTPA given orally.

    Science.gov (United States)

    Vattimo, A; Burroni, L; Bertelli, P; Messina, M; Meucci, D; Tota, G

    1993-12-01

    Serial colon scintigraphy using 111In-DTPA (2 MBq) given orally was performed in 39 children referred for constipation, and the total and segmental colon transit times were measured. The bowel movements during the study were recorded and the intervals between defecations (ID) were calculated. This method proved able to identify children with normal colon morphology (no. = 32) and those with dolichocolon (no. = 7). Normal children were not included for ethical reasons, and we used the normal range determined by others using X-ray methods (29 +/- 4 hours). Total and segmental colon transit times were found to be prolonged in all children with dolichocolon (TC: 113.55 +/- 41.20 hours; RC: 39.85 +/- 26.39 hours; LC: 43.05 +/- 18.30 hours; RS: 30.66 +/- 26.89 hours). In the group of children with a normal colon shape, 13 presented total and segmental colon transit times within the referred normal values (TC: 27.79 +/- 4.10 hours; RC: 9.11 +/- 2.53 hours; LC: 9.80 +/- 3.50 hours; RS: 8.88 +/- 4.09 hours) and normal bowel function (ID: 23.37 +/- 5.93 hours). Of the remaining children, 5 presented prolonged retention in the rectum (RS: 53.36 +/- 29.66 hours) and 14 a prolonged transit time in all segments. A good correlation was found between transit time and bowel function. From the point of view of radiation dosimetry, the most heavily irradiated organs were the lower large intestine and the ovaries, and the level of radiation burden depended on the colon transit time. We conclude that the described method is safe, accurate and fully diagnostic.

  10. Objectively measured physical environmental neighbourhood factors are not associated with accelerometer-determined total sedentary time in adults

    OpenAIRE

    Compernolle, Sofie; De Cocker, Katrien; Mackenbach, Joreintje D.; Van Nassau, Femke; Lakerveld, Jeroen; Cardon, Greet; De Bourdeaudhuij, Ilse

    2017-01-01

    Background: The physical neighbourhood environment may influence adults' sedentary behaviour. Yet, most studies examining the association between the physical neighbourhood environment and sedentary behaviour rely on self-reported data of either the physical neighbourhood environment and/or sedentary behaviour. The aim of this study was to investigate the associations between objectively measured physical environmental neighbourhood factors and accelerometer-determined total sedentary time in...

  11. Minimizing total weighted tardiness for the single machine scheduling problem with dependent setup time and precedence constraints

    Directory of Open Access Journals (Sweden)

    Hamidreza Haddad

    2012-04-01

    This paper tackles the single-machine scheduling problem with dependent setup times and precedence constraints. The primary objective is the minimization of total weighted tardiness. Since the resulting problem is NP-hard, we use a metaheuristic to solve it. The proposed model uses a genetic algorithm (GA) to solve the problem in a reasonable amount of time. Because the GA is highly sensitive to the initial values of its parameters, a Taguchi approach is presented to calibrate them. Computational experiments validate the effectiveness and capability of the proposed method.
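
    The objective function a GA would evaluate here is easy to state in code. A sketch of total weighted tardiness with sequence-dependent setup times (an assumption about what "dependent setup time" means), on a made-up two-job instance; the paper's precedence constraints are omitted for brevity:

```python
def weighted_tardiness(seq, proc, setup, due, weight):
    """Total weighted tardiness of a single-machine sequence with
    sequence-dependent setup times between consecutive jobs."""
    t, prev, total = 0.0, None, 0.0
    for j in seq:
        t += (setup[prev][j] if prev is not None else 0.0) + proc[j]
        total += weight[j] * max(0.0, t - due[j])
        prev = j
    return total

# Hypothetical two-job instance
proc = {0: 2.0, 1: 3.0}
setup = {0: {1: 1.0}, 1: {0: 2.0}}          # setup[i][j]: time between i and j
due = {0: 2.0, 1: 4.0}
weight = {0: 1.0, 1: 2.0}

print(weighted_tardiness([0, 1], proc, setup, due, weight))
print(weighted_tardiness([1, 0], proc, setup, due, weight))
```

In a GA, each chromosome encodes one such sequence and this function serves as the fitness to minimize.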

  12. Clinical responses after a single total-body irradiation with γ-rays exceeding the permissible dose

    International Nuclear Information System (INIS)

    Jiang Benrong; Wang Guilin; Liu Huilan; Tang Xingsheng; Ai Huisheng

    1990-01-01

    The clinical responses of patients after total-body γ-ray irradiation exceeding the permissible dose were observed and analysed. The results showed that when the dose was above 5 cGy there was some immunological depression, but no significant change in hematopoietic function. Five cases showed transient ECG changes, perhaps due to vagotonia caused by psychological imbalance. One case vomited 3-4 times after 28 cGy irradiation; this suggests that a few episodes of vomiting alone are not significant for estimating the received dose, and that the whole clinical picture must be analysed concretely

  13. Comparison of the CPU and memory performance of StatPatternRecognitions (SPR) and Toolkit for MultiVariate Analysis (TMVA)

    International Nuclear Information System (INIS)

    Palombo, G.

    2012-01-01

    High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA), focusing on how the CPU time and memory usage of the learning process scale with data set size. As classifiers, we consider only Random Forests, Boosted Decision Trees and Neural Networks, each with specific settings. For our tests, we employ the “Threenorm” data set, widely used in the machine learning community, as well as data tailored for testing various edge cases. For each data set, we steadily increase its size and check the CPU time and memory needed to build the classifiers implemented in SPR and TMVA. We show that SPR is often significantly faster and consumes significantly less memory. For example, the SPR implementation of Random Forest is an order of magnitude faster and consumes an order of magnitude less memory than TMVA on the Threenorm data.
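
    The scaling methodology described (measuring CPU time and memory while growing the data set) can be sketched with the Python standard library alone. The "classifier" below is a trivial stand-in, not SPR or TMVA; only the measurement harness is the point:

```python
import time
import tracemalloc

def train_mean_classifier(data):
    """Trivial stand-in 'classifier': per-class feature means
    (just a workload to measure, not a real learning algorithm)."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def measure(train, n):
    """CPU time and peak memory for building and training on n events."""
    tracemalloc.start()
    t0 = time.process_time()
    data = [(float(i % 100), i % 2) for i in range(n)]  # synthetic events
    model = train(data)
    cpu = time.process_time() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return cpu, peak, model

for n in (10_000, 100_000):  # grow the data set and watch the scaling
    cpu, peak, _ = measure(train_mean_classifier, n)
    print(f"n={n:>7}  cpu={cpu:.4f} s  peak={peak / 1e6:.2f} MB")
```

Plotting cpu and peak against n over several decades of data set size is exactly the comparison the study performs for SPR and TMVA.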

  14. Deployment of 464XLAT (RFC6877) alongside IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Froy, T. S.; Traynor, D. P.; Walker, C. J.

    2017-10-01

    IPv4 is now officially deprecated by the IETF. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group on testing dual-stacked hosts and IPv6-only CPU resources. Dual-stack adds complexity and administrative overhead to sites that may already be starved of resources. This has resulted in a very slow uptake of IPv6 by WLCG sites. 464XLAT (RFC6877) is intended for IPv6 single-stack environments that require the ability to communicate with IPv4-only endpoints. This paper presents a deployment strategy for 464XLAT, operational experience of using 464XLAT in production at a WLCG site, and important information to consider prior to deploying 464XLAT.

  15. Optimization of scan time in MRI for total hip prostheses. SEMAC tailoring for prosthetic implants containing different types of metals

    Energy Technology Data Exchange (ETDEWEB)

    Deligianni, X. [University of Basel Hospital, Basel (Switzerland). Div. of Radiological Physics; Merian Iselin Klinik, Basel (Switzerland). Inst. of Radiology; Bieri, O. [University of Basel Hospital, Basel (Switzerland). Div. of Radiological Physics; Elke, R. [Orthomerian, Basel (Switzerland); Wischer, T.; Egelhof, T. [Merian Iselin Klinik, Basel (Switzerland). Inst. of Radiology

    2015-12-15

    Magnetic resonance imaging (MRI) of soft tissues after total hip arthroplasty is of clinical interest for the diagnosis of various pathologies that are usually invisible with other imaging modalities. As a result, considerable effort has been put into the development of metal artifact reduction MRI strategies, such as slice encoding for metal artifact correction (SEMAC). Generally, the degree of metal artifact reduction with SEMAC directly relates to the overall time spent for acquisition, but there is no specific consensus about the most efficient sequence setup depending on the implant material. The aim of this article is to suggest material-tailored SEMAC protocol settings. Five of the most common total hip prostheses (1. Revision prosthesis (S-Rom), 2. Titanium alloy, 3. Mueller type (CoNiCRMo alloy), 4. Old Charnley prosthesis (Exeter/Stryker), 5. MS-30 stem (stainless-steel)) were scanned on a 1.5 T MRI clinical scanner with a SEMAC sequence with a range of artifact-resolving slice encoding steps (SES: 2 - 23) along the slice direction (yielding a total variable scan time ranging from 1 to 10 min). The reduction of the artifact volume in comparison with maximal artifact suppression was evaluated both quantitatively and qualitatively in order to establish a recommended number of steps for each case. The number of SES that reduced the artifact volume below approximately 300 mm³ ranged from 3 to 13, depending on the material. Our results showed that although 3 SES steps can be sufficient for artifact reduction for titanium prostheses, at least 11 SES should be used for prostheses made of materials such as certain alloys of stainless steel. Tailoring SES to the implant material and to the desired degree of metal artifact reduction represents a simple tool for workflow optimization of SEMAC imaging near total hip arthroplasty in a clinical setting.

  16. Optimization of scan time in MRI for total hip prostheses. SEMAC tailoring for prosthetic implants containing different types of metals

    International Nuclear Information System (INIS)

    Deligianni, X.; Wischer, T.; Egelhof, T.

    2015-01-01

    Magnetic resonance imaging (MRI) of soft tissues after total hip arthroplasty is of clinical interest for the diagnosis of various pathologies that are usually invisible with other imaging modalities. As a result, considerable effort has been put into the development of metal artifact reduction MRI strategies, such as slice encoding for metal artifact correction (SEMAC). Generally, the degree of metal artifact reduction with SEMAC directly relates to the overall time spent for acquisition, but there is no specific consensus about the most efficient sequence setup depending on the implant material. The aim of this article is to suggest material-tailored SEMAC protocol settings. Five of the most common total hip prostheses (1. Revision prosthesis (S-Rom), 2. Titanium alloy, 3. Mueller type (CoNiCRMo alloy), 4. Old Charnley prosthesis (Exeter/Stryker), 5. MS-30 stem (stainless-steel)) were scanned on a 1.5 T MRI clinical scanner with a SEMAC sequence with a range of artifact-resolving slice encoding steps (SES: 2 - 23) along the slice direction (yielding a total variable scan time ranging from 1 to 10 min). The reduction of the artifact volume in comparison with maximal artifact suppression was evaluated both quantitatively and qualitatively in order to establish a recommended number of steps for each case. The number of SES that reduced the artifact volume below approximately 300 mm³ ranged from 3 to 13, depending on the material. Our results showed that although 3 SES steps can be sufficient for artifact reduction for titanium prostheses, at least 11 SES should be used for prostheses made of materials such as certain alloys of stainless steel. Tailoring SES to the implant material and to the desired degree of metal artifact reduction represents a simple tool for workflow optimization of SEMAC imaging near total hip arthroplasty in a clinical setting.

  17. Impact of operative time on early joint infection and deep vein thrombosis in primary total hip arthroplasty.

    Science.gov (United States)

    Wills, B W; Sheppard, E D; Smith, W R; Staggers, J R; Li, P; Shah, A; Lee, S R; Naranje, S M

    2018-03-22

    Infections and deep vein thrombosis (DVT) after total hip arthroplasty (THA) are challenging problems for both the patient and surgeon. Previous studies have identified numerous risk factors for infections and DVT after THA but have often been limited by sample size. We aimed to evaluate the effect of operative time on early postoperative infection as well as DVT rates following THA. We hypothesized that an increase in operative time would result in increased odds of acquiring an infection as well as a DVT. We conducted a retrospective analysis of prospectively collected data using the American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database from 2006 to 2015 for all patients undergoing primary THA. Associations between operative time and infection or DVT were evaluated with multivariable logistic regressions controlling for demographics and several known risk factors for infection. Three different types of infections were evaluated: (1) superficial surgical site infection (SSI), an infection involving the skin or subcutaneous tissue, (2) deep SSI, an infection involving the muscle or fascial layers beneath the subcutaneous tissue, and (3) organ/space infection, an infection involving any part of the anatomy manipulated during surgery other than the incisional components. In total, 103,044 patients who underwent THA were included in our study. Our results suggested a significant association between superficial SSIs and operative time. Specifically, the adjusted odds of suffering a superficial SSI increased by 6% (CI=1.04-1.08) for every 10-minute increase in operative time. When operative time was dichotomized at 90 minutes, the adjusted odds of suffering a superficial SSI were 56% higher for patients with prolonged operative time (CI=1.05-2.32, p=0.0277). The adjusted odds of suffering a deep SSI increased by 7% for every 10-minute increase in operative time (CI=1.01-1.14, p=0.0335). No significant associations were detected between organ/space infection, wound
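The effect sizes reported above are multiplicative on the odds scale: a per-10-minute adjusted odds ratio of about 1.06 compounds over longer cases. A small sketch of that arithmetic (illustrative only; the odds ratio is taken from the abstract, and the function name is ours):

```python
def compounded_odds_ratio(or_per_increment, minutes, increment=10.0):
    """Odds ratio implied over `minutes` by a per-increment odds ratio.

    Assumes log-odds are linear in operative time, as in the logistic
    regressions described in the abstract.
    """
    return or_per_increment ** (minutes / increment)
```

For example, under this model an extra 30 minutes of operative time at OR 1.06 per 10 minutes corresponds to 1.06³ ≈ 1.19, i.e. roughly 19% higher odds of a superficial SSI.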

  18. A combined time-of-flight and depth-of-interaction detector for total-body positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Berg, Eric, E-mail: eberg@ucdavis.edu; Roncali, Emilie; Du, Junwei; Cherry, Simon R. [Department of Biomedical Engineering, University of California, Davis, One Shields Avenue, Davis, California 95616 (United States); Kapusta, Maciej [Molecular Imaging, Siemens Healthcare, Knoxville, Tennessee 37932 (United States)

    2016-02-15

    Purpose: In support of a project to build a total-body PET scanner with an axial field-of-view of 2 m, the authors are developing simple, cost-effective block detectors with combined time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. Methods: This work focuses on investigating the potential of phosphor-coated crystals with conventional PMT-based block detector readout to provide DOI information while preserving timing resolution. The authors explored a variety of phosphor-coating configurations with single crystals and crystal arrays. Several pulse shape discrimination techniques were investigated, including decay time, delayed charge integration (DCI), and average signal shapes. Results: Pulse shape discrimination based on DCI provided the lowest DOI positioning error: 2 mm DOI positioning error was obtained with single phosphor-coated crystals while 3–3.5 mm DOI error was measured with the block detector module. Minimal timing resolution degradation was observed with single phosphor-coated crystals compared to uncoated crystals, and a timing resolution of 442 ps was obtained with phosphor-coated crystals in the block detector compared to 404 ps without phosphor coating. Flood maps showed a slight degradation in crystal resolvability with phosphor-coated crystals; however, all crystals could be resolved. Energy resolution was degraded by 3%–7% with phosphor-coated crystals compared to uncoated crystals. Conclusions: These results demonstrate the feasibility of obtaining TOF–DOI capabilities with simple block detector readout using phosphor-coated crystals.

  19. A combined time-of-flight and depth-of-interaction detector for total-body positron emission tomography

    International Nuclear Information System (INIS)

    Berg, Eric; Roncali, Emilie; Du, Junwei; Cherry, Simon R.; Kapusta, Maciej

    2016-01-01

    Purpose: In support of a project to build a total-body PET scanner with an axial field-of-view of 2 m, the authors are developing simple, cost-effective block detectors with combined time-of-flight (TOF) and depth-of-interaction (DOI) capabilities. Methods: This work focuses on investigating the potential of phosphor-coated crystals with conventional PMT-based block detector readout to provide DOI information while preserving timing resolution. The authors explored a variety of phosphor-coating configurations with single crystals and crystal arrays. Several pulse shape discrimination techniques were investigated, including decay time, delayed charge integration (DCI), and average signal shapes. Results: Pulse shape discrimination based on DCI provided the lowest DOI positioning error: 2 mm DOI positioning error was obtained with single phosphor-coated crystals while 3–3.5 mm DOI error was measured with the block detector module. Minimal timing resolution degradation was observed with single phosphor-coated crystals compared to uncoated crystals, and a timing resolution of 442 ps was obtained with phosphor-coated crystals in the block detector compared to 404 ps without phosphor coating. Flood maps showed a slight degradation in crystal resolvability with phosphor-coated crystals; however, all crystals could be resolved. Energy resolution was degraded by 3%–7% with phosphor-coated crystals compared to uncoated crystals. Conclusions: These results demonstrate the feasibility of obtaining TOF–DOI capabilities with simple block detector readout using phosphor-coated crystals

  20. Real-time fusion of coronary CT angiography with x-ray fluoroscopy during chronic total occlusion PCI.

    Science.gov (United States)

    Ghoshhajra, Brian B; Takx, Richard A P; Stone, Luke L; Girard, Erin E; Brilakis, Emmanouil S; Lombardi, William L; Yeh, Robert W; Jaffer, Farouc A

    2017-06-01

    The purpose of this study was to demonstrate the feasibility of real-time fusion of coronary computed tomography angiography (CTA) centreline and arterial wall calcification with x-ray fluoroscopy during chronic total occlusion (CTO) percutaneous coronary intervention (PCI). Patients undergoing CTO PCI were prospectively enrolled. Pre-procedural CT scans were integrated with conventional coronary fluoroscopy using prototype software. We enrolled 24 patients who underwent CTO PCI using the prototype CT fusion software, and 24 consecutive CTO PCI patients without CT guidance served as a control group. Mean age was 66 ± 11 years, and 43/48 patients were men. Real-time CTA fusion during CTO PCI provided additional information regarding coronary arterial calcification and tortuosity that generated new insights into antegrade wiring, antegrade dissection/reentry, and retrograde wiring during CTO PCI. Overall CTO success rates and procedural outcomes remained similar between the two groups, despite a trend toward higher complexity in the fusion CTA group. This study demonstrates that real-time automated co-registration of coronary CTA centreline and calcification onto live fluoroscopic images is feasible and provides new insights into CTO PCI, and in particular, antegrade dissection reentry-based CTO PCI. • Real-time semi-automated fusion of CTA/fluoroscopy is feasible during CTO PCI. • CTA fusion data can be toggled on/off as desired during CTO PCI • Real-time CT calcium and centreline overlay could benefit antegrade dissection/reentry-based CTO PCI.

  1. The Time Course of Knee Swelling Post Total Knee Arthroplasty and Its Associations with Quadriceps Strength and Gait Speed.

    Science.gov (United States)

    Pua, Yong-Hao

    2015-07-01

    This study examines the time course of knee swelling post total knee arthroplasty (TKA) and its associations with quadriceps strength and gait speed. Eighty-five patients with unilateral TKA participated. Preoperatively and on post-operative days (PODs) 1, 4, 14, and 90, knee swelling was measured using bioimpedance spectrometry. Preoperatively and on PODs 14 and 90, quadriceps strength was measured using isokinetic dynamometry while fast gait speed was measured using the timed 10-meter walk. On POD1, knee swelling increased ~35% from preoperative levels; thereafter it decreased but remained ~11% above preoperative levels on POD90. In longitudinal, multivariable analyses, knee swelling was associated with quadriceps weakness (P<0.01) and slower gait speed (P=0.03). Interventions to reduce post-TKA knee swelling may be indicated to improve quadriceps strength and gait speed. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. The total length of myocytes and capillaries, and total number of myocyte nuclei in the rat heart are time-dependently increased by growth hormone

    DEFF Research Database (Denmark)

    Brüel, Annemarie; Oxlund, Hans; Nyengaard, Jens Randel

    2005-01-01

    /kg/day) or vehicle for 5, 10, 20, 40, or 80 days. From the left ventricle (LV) histological sections were made and stereological methods applied. Linear regression showed that GH time-dependently increased: LV volume (r=0.96, P

  3. GPScheDVS: A New Paradigm of the Autonomous CPU Speed Control for Commodity-OS-based General-Purpose Mobile Computers with a DVS-friendly Task Scheduling

    OpenAIRE

    Kim, Sookyoung

    2008-01-01

    This dissertation studies the problem of increasing battery life-time and reducing CPU heat dissipation without degrading system performance in commodity-OS-based general-purpose (GP) mobile computers using the dynamic voltage scaling (DVS) function of modern CPUs. The dissertation especially focuses on the impact of task scheduling on the effectiveness of DVS in achieving this goal. The task scheduling mechanism used in most contemporary general-purpose operating systems (GPOS) prioritizes t...

  4. IMU-based Real-time Pose Measurement system for Anterior Pelvic Plane in Total Hip Replacement Surgeries.

    Science.gov (United States)

    Zhe Cao; Shaojie Su; Hao Tang; Yixin Zhou; Zhihua Wang; Hong Chen

    2017-07-01

    With the aging of the population, the number of Total Hip Replacement (THR) surgeries has increased year by year. In THR, inaccurate positioning of the implanted prosthesis may lead to failure of the operation. In order to reduce the failure rate and acquire the real-time pose of the Anterior Pelvic Plane (APP), we propose a measurement system in this paper. The measurement system includes two parts: the Initial Pose Measurement Instrument (IPMI) and the Real-time Pose Measurement Instrument (RPMI). IPMI is used to acquire the initial pose of the APP, and RPMI is used to estimate the real-time pose of the APP. Both are composed of an Inertial Measurement Unit (IMU) and magnetometer sensors. To estimate the attitude of the measurement system, the Extended Kalman Filter (EKF) is adopted. The real-time pose of the APP can then be acquired with the algorithm designed in this paper. The experiment results show that the Root Mean Square Error (RMSE) is within 1.6 degrees, which meets the requirement of THR operations.
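The abstract does not give the filter equations, but the two-step structure of an EKF can be shown with a minimal scalar sketch (our own illustration, not the authors' implementation; in the pose-measurement setting the state would be an attitude parameterization such as a quaternion, updated with accelerometer and magnetometer readings):

```python
def ekf_predict(x, P, f, F, Q):
    """Propagate state estimate x and variance P through the motion
    model f with Jacobian F and process noise Q."""
    return f(x), F * P * F + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z, observation model h,
    Jacobian H and measurement noise R."""
    y = z - h(x)          # innovation
    S = H * P * H + R     # innovation variance
    K = P * H / S         # Kalman gain
    return x + K * y, (1.0 - K * H) * P
```

The update step always shrinks the estimate variance (here from P = 1 to P = 0.5 with H = R = 1), which is what makes fusing IMU and magnetometer data worthwhile.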

  5. Reconstruction of MODIS total suspended matter time series maps by DINEOF and validation with autonomous platform data

    Science.gov (United States)

    Nechad, Bouchra; Alvera-Azcaràte, Aida; Ruddick, Kevin; Greenwood, Naomi

    2011-08-01

    In situ measurements of total suspended matter (TSM) over the period 2003-2006, collected with two autonomous platforms from the Centre for Environment, Fisheries and Aquatic Sciences (Cefas) measuring the optical backscatter (OBS) in the southern North Sea, are used to assess the accuracy of TSM time series extracted from satellite data. Since there are gaps in the remote sensing (RS) data, due mainly to cloud cover, the Data Interpolating Empirical Orthogonal Functions (DINEOF) is used to fill in the TSM time series and build a continuous daily "recoloured" dataset. The RS datasets consist of TSM maps derived from MODIS imagery using the bio-optical model of Nechad et al. (Rem Sens Environ 114: 854-866, 2010). In this study, the DINEOF time series are compared to the in situ OBS measured in moderately to very turbid waters respectively in West Gabbard and Warp Anchorage, in the southern North Sea. The discrepancies between instantaneous RS, DINEOF-filled RS data and Cefas data are analysed in terms of TSM algorithm uncertainties, space-time variability and DINEOF reconstruction uncertainty.
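DINEOF fills cloud gaps by alternating between a truncated EOF (SVD) decomposition of the data matrix and re-insertion of the reconstructed values at the missing points until convergence. The sketch below implements that loop at rank 1 in pure Python; it is a toy illustration only (the operational DINEOF retains many modes and chooses their number by cross-validation).

```python
import math

def rank1_approx(X, iters=60):
    """Best rank-1 approximation of matrix X (list of rows) via power iteration."""
    m, n = len(X), len(X[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = math.sqrt(sum(x * x for x in u)) or 1.0
        u = [x / nu for x in u]                      # left singular vector
        v = [sum(X[i][j] * u[i] for i in range(m)) for j in range(n)]
        sigma = math.sqrt(sum(x * x for x in v)) or 1.0
        v = [x / sigma for x in v]                   # right singular vector
    return [[sigma * u[i] * v[j] for j in range(n)] for i in range(m)]

def dineof_fill(X, missing, outer=100):
    """EOF-based gap filling: seed gaps with column means of observed data,
    then iterate truncated reconstruction over the missing points."""
    m = len(X)
    for (i, j) in missing:
        observed = [X[r][j] for r in range(m) if (r, j) not in missing]
        X[i][j] = sum(observed) / len(observed)
    for _ in range(outer):
        A = rank1_approx(X)
        for (i, j) in missing:
            X[i][j] = A[i][j]
    return X
```

On data that are genuinely low-rank (here an exactly rank-1 "TSM field" with one entry removed) the iteration recovers the missing value.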

  6. Design of a Message Passing Model for Use in a Heterogeneous CPU-NFP Framework for Network Analytics

    CSIR Research Space (South Africa)

    Pennefather, S

    2017-09-01

    Full Text Available of applications written in the Go programming language to be executed on a Network Flow Processor (NFP) for enhanced performance. This paper explores the need and feasibility of implementing a message passing model for data transmission between the NFP and CPU...

  7. Overtaking CPU DBMSes with a GPU in whole-query analytic processing with parallelism-friendly execution plan optimization

    NARCIS (Netherlands)

    A. Agbaria (Adnan); D. Minor (David); N. Peterfreund (Natan); E. Rozenberg (Eyal); O. Rosenberg (Ofer); Huawei Research

    2016-01-01

    textabstractExisting work on accelerating analytic DB query processing with (discrete) GPUs fails to fully realize their potential for speedup through parallelism: Published results do not achieve significant speedup over more performant CPU-only DBMSes when processing complete queries. This

  8. Real-time fusion of coronary CT angiography with X-ray fluoroscopy during chronic total occlusion PCI

    Energy Technology Data Exchange (ETDEWEB)

    Ghoshhajra, Brian B.; Takx, Richard A.P. [Harvard Medical School, Cardiac MR PET CT Program, Massachusetts General Hospital, Department of Radiology and Division of Cardiology, Boston, MA (United States); Stone, Luke L.; Yeh, Robert W.; Jaffer, Farouc A. [Harvard Medical School, Cardiac Cathetrization Laboratory, Cardiology Division, Massachusetts General Hospital, Boston, MA (United States); Girard, Erin E. [Siemens Healthcare, Princeton, NJ (United States); Brilakis, Emmanouil S. [Cardiology Division, Dallas VA Medical Center and UT Southwestern Medical Center, Dallas, TX (United States); Lombardi, William L. [University of Washington, Cardiology Division, Seattle, WA (United States)

    2017-06-15

    The purpose of this study was to demonstrate the feasibility of real-time fusion of coronary computed tomography angiography (CTA) centreline and arterial wall calcification with X-ray fluoroscopy during chronic total occlusion (CTO) percutaneous coronary intervention (PCI). Patients undergoing CTO PCI were prospectively enrolled. Pre-procedural CT scans were integrated with conventional coronary fluoroscopy using prototype software. We enrolled 24 patients who underwent CTO PCI using the prototype CT fusion software, and 24 consecutive CTO PCI patients without CT guidance served as a control group. Mean age was 66 ± 11 years, and 43/48 patients were men. Real-time CTA fusion during CTO PCI provided additional information regarding coronary arterial calcification and tortuosity that generated new insights into antegrade wiring, antegrade dissection/reentry, and retrograde wiring during CTO PCI. Overall CTO success rates and procedural outcomes remained similar between the two groups, despite a trend toward higher complexity in the fusion CTA group. This study demonstrates that real-time automated co-registration of coronary CTA centreline and calcification onto live fluoroscopic images is feasible and provides new insights into CTO PCI, and in particular, antegrade dissection reentry-based CTO PCI. (orig.)

  9. A Hybrid Metaheuristic Approach for Minimizing the Total Flow Time in A Flow Shop Sequence Dependent Group Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Antonio Costa

    2014-07-01

    Full Text Available Production processes in Cellular Manufacturing Systems (CMS) often involve groups of parts sharing the same technological requirements in terms of tooling and setup. The issue of scheduling such parts through a flow-shop production layout is known as the Flow-Shop Group Scheduling (FSGS) problem or, when setup times are sequence-dependent, the Flow-Shop Sequence-Dependent Group Scheduling (FSDGS) problem. This paper addresses the FSDGS issue, proposing a hybrid metaheuristic procedure integrating features from Genetic Algorithms (GAs) and Biased Random Sampling (BRS) search techniques with the aim of minimizing the total flow time, i.e., the sum of completion times of all jobs. A well-known benchmark of test cases, entailing problems with two, three, and six machines, is employed both for tuning the relevant parameters of the developed procedure and for assessing its performance against two metaheuristic algorithms recently presented in the literature. The obtained results and a properly arranged ANOVA analysis highlight the superiority of the proposed approach in tackling the scheduling problem under investigation.
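The objective minimized above, total flow time, is the sum over all jobs of their completion times on the last machine. For a fixed job sequence in a permutation flow shop it follows the standard recurrence C(j, m) = max(C(j−1, m), C(j, m−1)) + p(j, m). A minimal sketch of the objective evaluation (illustrative only; it ignores the sequence-dependent setup times handled by the paper's metaheuristic):

```python
def total_flow_time(proc, order):
    """Sum of job completion times for a permutation flow shop.

    proc[j][m] is the processing time of job j on machine m;
    order is the job sequence. Setup times are ignored, so this is
    the plain flow-shop objective, not the FSDGS variant."""
    n_machines = len(proc[0])
    prev = [0.0] * n_machines   # completion times of the previous job
    total = 0.0
    for j in order:
        row = [0.0] * n_machines
        for m in range(n_machines):
            before = row[m - 1] if m > 0 else 0.0
            row[m] = max(prev[m], before) + proc[j][m]
        prev = row
        total += row[-1]        # completion time on the last machine
    return total
```

Even on a 2-job, 2-machine instance the objective depends on the sequence, which is exactly what the metaheuristics search over.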

  10. A Flexible Job Shop Scheduling Problem with Controllable Processing Times to Optimize Total Cost of Delay and Processing

    Directory of Open Access Journals (Sweden)

    Hadi Mokhtari

    2015-11-01

    Full Text Available In this paper, the flexible job shop scheduling problem with machine flexibility and controllable process times is studied. The main idea is that the processing times of operations may be controlled by consumption of additional resources. The purpose of this paper is to find the best trade-off between processing cost and delay cost in order to minimize the total costs. The proposed model, flexible job shop scheduling with controllable processing times (FJCPT), is formulated as an integer non-linear programming (INLP) model and then converted into an integer linear programming (ILP) model. Due to the NP-hardness of FJCPT, conventional analytic optimization methods are not efficient. Hence, in order to solve the problem, a Scatter Search (SS), an efficient metaheuristic method, is developed. To show the effectiveness of the proposed method, numerical experiments are conducted. The efficiency of the proposed algorithm is compared with that of a genetic algorithm (GA) available in the literature for solving the FJSP problem. The results showed that the proposed SS provides better solutions than the existing GA.

  11. Dissociated time course between peak torque and total work recovery following bench press training in resistance trained men.

    Science.gov (United States)

    Ferreira, Diogo V; Gentil, Paulo; Ferreira-Junior, João B; Soares, Saulo R S; Brown, Lee E; Bottaro, Martim

    2017-10-01

    To evaluate the time course of peak torque and total work recovery after a resistance training session involving the bench press exercise. Repeated measures with a within subject design. Twenty-six resistance-trained men (age: 23.7±3.7years; height: 176.0±5.7cm; mass: 79.65±7.61kg) performed one session involving eight sets of the bench press exercise performed to momentary muscle failure with 2-min rest between sets. Shoulder horizontal adductors peak torque (PT), total work (TW), delayed onset muscle soreness (DOMS) and subjective physical fitness were measured pre, immediately post, 24, 48, 72 and 96h following exercise. The exercise protocol resulted in significant pectoralis major DOMS that lasted for 72h. Immediately after exercise, the reduction in shoulder horizontal adductors TW (25%) was greater than PT (17%). TW, as a percentage of baseline values, was also less than PT at 24, 48 and 96h after exercise. Additionally, PT returned to baseline at 96h, while TW did not. Resistance trained men presented dissimilar PT and TW recovery following free weight bench press exercise. This indicates that recovery of maximal voluntary contraction does not reflect the capability to perform multiple contractions. Strength and conditioning professionals should be cautious when evaluating muscle recovery by peak torque, since it can lead to the repetition of a training session sooner than recommended. Copyright © 2017. Published by Elsevier Inc.

  12. Total neutron-counting plutonium inventory measurement systems (PIMS) and their potential application to near real time materials accountancy (NRTMA)

    International Nuclear Information System (INIS)

    Driscall, I.; Fox, G.H.; Orr, C.H.; Whitehouse, K.R.

    1988-01-01

    A radiometric method of determining the inventory of an operating plutonium plant is described. An array of total neutron counters distributed across the plant is used to estimate hold-up at each plant item. Corrections for the sensitivity of detectors to plutonium in adjacent plant items are achieved through a matrix approach. This paper describes our experience in design, calibration and operation of a Plutonium Inventory Measurement System (PIMS) on an oxalate precipitation plutonium finishing line. Data from a recent trial of Near-Real-Time Materials Accounting (NRTMA) using the PIMS are presented and used to illustrate its present performance and problem areas. The reader is asked to consider what role PIMS might have in future accountancy systems
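The matrix correction described above amounts to solving a linear system: if A[i][j] is the sensitivity of detector i to plutonium held up at plant item j, the observed net count rates c satisfy c = A·h, and the hold-up vector h is recovered by inverting A. A two-detector sketch with hypothetical sensitivity values (illustrative only, not calibration data from the paper):

```python
def holdup_from_counts(A, c):
    """Solve the 2x2 system c = A·h for hold-up h via Cramer's rule.

    A[i][j]: sensitivity of detector i to material at item j
             (e.g. counts/s per gram, hypothetical values);
    c[i]:    net neutron count rate at detector i."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("sensitivity matrix is singular")
    h0 = (c[0] * A[1][1] - A[0][1] * c[1]) / det
    h1 = (A[0][0] * c[1] - c[0] * A[1][0]) / det
    return [h0, h1]
```

The off-diagonal terms of A are what encode the cross-talk from adjacent plant items; with them set to zero the method degenerates to naive per-detector calibration.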

  13. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    Directory of Open Access Journals (Sweden)

    Yu Wang

    Full Text Available Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, there is a very large amount of brain imaging data that have been collected, and there are very high requirements for the computational capabilities that are used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between any pair of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graphic properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network cost 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states.
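The first stage of the pipeline, pairwise Pearson correlation followed by thresholding into an unweighted graph, can be sketched as follows (a toy serial version of the voxel-wise computation, not the authors' CPU-GPU code; at 58 k nodes the all-pairs step is what motivates the GPU acceleration):

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def build_network(series, threshold):
    """Unweighted, undirected graph: connect node pairs whose time
    series correlate at or above the threshold."""
    edges = set()
    for i, j in combinations(sorted(series), 2):
        if pearson(series[i], series[j]) >= threshold:
            edges.add((i, j))
    return edges
```

In practice the threshold is chosen to hit a target sparsity (0.02%–0.17% in the study) rather than a fixed correlation value.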

  14. Extending total parenteral nutrition hang time in the neonatal intensive care unit: is it safe and cost effective?

    Science.gov (United States)

    Balegar V, Kiran Kumar; Azeem, Mohammad Irfan; Spence, Kaye; Badawi, Nadia

    2013-01-01

    To investigate the effects of prolonging hang time of total parenteral nutrition (TPN) fluid on central line-associated blood stream infection (CLABSI), TPN-related cost and nursing workload. A before-after observational study comparing the practice of hanging TPN bags for 48 h (6 February 2009-5 February 2010) versus 24 h (6 February 2008-5 February 2009) in a tertiary neonatal intensive care unit was conducted. The main outcome measures were CLABSI, TPN-related expenses and nursing workload. One hundred thirty-six infants received 24-h TPN bags and 124 received 48-h TPN bags. Median (inter-quartile range) gestation (37 weeks (33,39) vs. 36 weeks (33,39)), mean (±standard deviation) admission weight of 2442 g (±101) versus 2476 g (±104) and TPN duration (9.7 days (±12.7) vs. 9.9 days (±13.4)) were similar (P > 0.05) between the 24- and 48-h TPN groups. There was no increase in CLABSI with longer hang time (0.8 vs. 0.4 per 1000 line days in the 24-h vs. 48-h group; P > 0.05). Annual cost saving using 48-h TPN was AUD 97,603.00. By using 48-h TPN, 68.3% of nurses indicated that their workload decreased and 80.5% indicated that time spent changing TPN reduced. Extending TPN hang time from 24 to 48 h did not alter CLABSI rate and was associated with a reduced TPN-related cost and perceived nursing workload. Larger randomised controlled trials are needed to more clearly delineate these effects. © 2012 The Authors. Journal of Paediatrics and Child Health © 2012 Paediatrics and Child Health Division (Royal Australasian College of Physicians).

  15. Discrepancy Between Clinician and Research Assistant in TIMI Score Calculation (TRIAGED CPU

    Directory of Open Access Journals (Sweden)

    Taylor, Brian T.

    2014-11-01

    Full Text Available Introduction: Several studies have attempted to demonstrate that the Thrombolysis in Myocardial Infarction (TIMI) risk score has the ability to risk stratify emergency department (ED) patients with potential acute coronary syndromes (ACS). Most of the studies we reviewed relied on trained research investigators to determine TIMI risk scores rather than ED providers functioning in their normal work capacity. We assessed whether TIMI risk scores obtained by ED providers in the setting of a busy ED differed from those obtained by trained research investigators. Methods: This was an ED-based prospective observational cohort study comparing TIMI scores obtained by 49 ED providers admitting patients to an ED chest pain unit (CPU) to scores generated by a team of trained research investigators. We examined provider type, patient gender, and TIMI elements for their effects on TIMI risk score discrepancy. Results: Of the 501 adult patients enrolled in the study, 29.3% of TIMI risk scores determined by ED providers and trained research investigators were generated using identical TIMI risk score variables. In our low-risk population the majority of TIMI risk score differences were small; however, 12% of TIMI risk scores differed by two or more points. Conclusion: TIMI risk scores determined by ED providers in the setting of a busy ED frequently differ from scores generated by trained research investigators who complete them while not under the same pressure of an ED provider. [West J Emerg Med. 2015;16(1):24–33.]
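For reference, the score both groups were computing, the TIMI risk score for UA/NSTEMI, is a simple sum of seven one-point elements, which is why small disagreements about individual elements translate directly into point differences. A sketch (the variable names are ours):

```python
# The seven one-point elements of the TIMI risk score for UA/NSTEMI.
TIMI_ELEMENTS = (
    "age_65_or_older",
    "three_or_more_cad_risk_factors",
    "known_cad_stenosis_50_pct_or_more",
    "aspirin_use_in_prior_7_days",
    "severe_angina_2_or_more_episodes_24h",
    "st_deviation_0_5_mm_or_more",
    "elevated_cardiac_markers",
)

def timi_score(findings):
    """TIMI risk score: one point per element present (range 0-7).

    `findings` maps element names to booleans; absent keys count as False.
    """
    return sum(1 for e in TIMI_ELEMENTS if findings.get(e, False))
```

Disagreement on any two elements between a provider and a research investigator is enough to produce the two-or-more-point discrepancies the study reports.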

  16. A Programming Framework for Scientific Applications on CPU-GPU Systems

    Energy Technology Data Exchange (ETDEWEB)

    Owens, John

    2013-03-24

    At a high level, my research interests center around designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been a move to commodity parallel architectures. This sea change is motivated by the industry’s inability to continue to profitably increase performance on a single processor and instead to move to multiple parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver superior performance on a broad range of problems than their CPU counterparts, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.

  17. hybridMANTIS: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators

    Science.gov (United States)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-01

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, such as on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by the optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result

  18. Latitude-Time Total Electron Content Anomalies as Precursors to Japan's Large Earthquakes Associated with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jyh-Woei Lin

    2011-01-01

    Full Text Available The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake-associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M ≥ 6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months when all 18 earthquakes occurred. After reviewing months when no M ≥ 6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake-associated TEC anomalies. Previously, PCA has been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that a statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of principal eigenvalues to determine earthquake-related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.
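    The principal-eigenvalue criterion can be illustrated with a toy computation. This is a minimal sketch with invented data, not the study's code: it forms the 2x2 covariance matrix of two hypothetical latitude bands of TEC and checks whether the normalised principal eigenvalue dominates the total variance, the kind of signature PCA uses to flag an anomaly.

```python
import math

def principal_eigenvalue_2x2(xs, ys):
    """Largest eigenvalue of the 2x2 covariance matrix of two TEC series,
    returned together with the total variance (trace) for normalisation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    cyy = sum((y - my) ** 2 for y in ys) / (n - 1)
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    tr, det = cxx + cyy, cxx * cyy - cxy ** 2
    lam = (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0
    return lam, tr

# Two hypothetical latitude bands of monthly TEC values (TECU):
# one quiet, one containing a strong localised enhancement.
quiet = [10.0, 10.2, 9.9, 10.1, 10.0]
anomalous = [10.0, 10.1, 25.0, 10.2, 9.9]
lam, total_var = principal_eigenvalue_2x2(quiet, anomalous)
dominance = lam / total_var  # close to 1.0 when one component dominates
```

A dominance ratio near 1.0 indicates that a single principal component carries almost all of the variance, the situation the abstract associates with a maximal principal eigenvalue.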

  19. Time-series MODIS image-based retrieval and distribution analysis of total suspended matter concentrations in Lake Taihu (China).

    Science.gov (United States)

    Zhang, Yuchao; Lin, Shan; Liu, Jianping; Qian, Xin; Ge, Yi

    2010-09-01

    Although there has been considerable effort to use remotely sensed images to provide synoptic maps of total suspended matter (TSM), there are limited studies on universal TSM retrieval models. In this paper, we have developed a TSM retrieval model for Lake Taihu using TSM concentrations measured in situ and a time series of quasi-synchronous MODIS 250 m images from 2005. After simple geometric and atmospheric correction, we found a significant relationship (R = 0.8736, N = 166) between in situ measured TSM concentrations and MODIS band normalization difference of band 3 and band 1. From this, we retrieved TSM concentrations in eight regions of Lake Taihu in 2007 and analyzed the characteristic distribution and variation of TSM. Synoptic maps of model-estimated TSM of 2007 showed clear geographical and seasonal variations. TSM in Central Lake and Southern Lakeshore were consistently higher than in other regions, while TSM in East Taihu was generally the lowest among the regions throughout the year. Furthermore, a wide range of TSM concentrations appeared from winter to summer. TSM in winter could be several times that in summer.
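    The band-ratio retrieval described above amounts to regressing in situ TSM against the normalised difference of MODIS bands 3 and 1. A minimal sketch with hypothetical calibration pairs; the reflectances and TSM values below are invented for illustration and are not the paper's data:

```python
def band_ndi(b3, b1):
    """Normalised difference of MODIS band 3 and band 1 reflectances."""
    return (b3 - b1) / (b3 + b1)

def fit_linear(xs, ys):
    """Closed-form ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration pairs: (band3, band1) reflectances vs in situ TSM (mg/L)
pairs = [(0.30, 0.20), (0.35, 0.20), (0.40, 0.20), (0.45, 0.20)]
ndi = [band_ndi(b3, b1) for b3, b1 in pairs]
tsm = [30.0, 45.0, 60.0, 73.0]
a, b = fit_linear(ndi, tsm)
est = a * band_ndi(0.38, 0.20) + b  # retrieved TSM for a new pixel
```

Once calibrated, the same (a, b) pair can be applied pixel-by-pixel to a corrected image to produce the synoptic TSM maps the abstract describes.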

  20. A Novel Ant Colony Algorithm for the Single-Machine Total Weighted Tardiness Problem with Sequence Dependent Setup Times

    Directory of Open Access Journals (Sweden)

    Fardin Ahmadizar

    2011-08-01

    Full Text Available This paper deals with the NP-hard single-machine total weighted tardiness problem with sequence dependent setup times. Incorporating fuzzy sets and genetic operators, a novel ant colony optimization algorithm is developed for the problem. In the proposed algorithm, artificial ants construct solutions as orders of jobs based on the heuristic information as well as pheromone trails. To calculate the heuristic information, three well-known priority rules are adopted as fuzzy sets and then aggregated. When all artificial ants have terminated their constructions, genetic operators such as crossover and mutation are applied to generate new regions of the solution space. A local search is then performed to improve the quality of some of the solutions found. Moreover, at run-time the pheromone trails are locally as well as globally updated, and limited between lower and upper bounds. The proposed algorithm is evaluated on a set of benchmark problems from the literature and compared with other metaheuristics.
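    A single solution-construction step of such an ant colony algorithm can be sketched as follows. This is a generic ACO roulette-wheel selection, not the authors' implementation; the pheromone trails and heuristic values are placeholders:

```python
import random

def construct_sequence(n_jobs, pheromone, heuristic, alpha=1.0, beta=2.0, seed=0):
    """Build a job order: after the last scheduled job i, pick job j with
    probability proportional to pheromone[i][j]**alpha * heuristic[j]**beta."""
    rng = random.Random(seed)
    unscheduled = list(range(n_jobs))
    seq = [unscheduled.pop(rng.randrange(n_jobs))]
    while unscheduled:
        last = seq[-1]
        weights = [pheromone[last][j] ** alpha * heuristic[j] ** beta
                   for j in unscheduled]
        r, acc = rng.random() * sum(weights), 0.0
        chosen = unscheduled[-1]  # fallback guards against float round-off
        for j, w in zip(unscheduled, weights):
            acc += w
            if acc >= r:
                chosen = j
                break
        seq.append(chosen)
        unscheduled.remove(chosen)
    return seq

# Placeholder trails and (aggregated) heuristic values for a 5-job instance
tau = [[1.0] * 5 for _ in range(5)]
eta = [1.0, 2.0, 3.0, 4.0, 5.0]
order = construct_sequence(5, tau, eta)
```

In the paper's scheme the heuristic values would come from the aggregated fuzzy priority rules, and the trails would be updated locally and globally after each iteration.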

  1. Optimisation of high-quality total ribonucleic acid isolation from cartilaginous tissues for real-time polymerase chain reaction analysis.

    Science.gov (United States)

    Peeters, M; Huang, C L; Vonk, L A; Lu, Z F; Bank, R A; Helder, M N; Doulabi, B Zandieh

    2016-11-01

    Studies which consider the molecular mechanisms of degeneration and regeneration of cartilaginous tissues are seriously hampered by problematic ribonucleic acid (RNA) isolations due to low cell density and the dense, proteoglycan-rich extracellular matrix of cartilage. Proteoglycans tend to co-purify with RNA, they can absorb the full spectrum of UV light and they are potent inhibitors of polymerase chain reaction (PCR). Therefore, the objective of the present study is to compare and optimise different homogenisation methods and RNA isolation kits for an array of cartilaginous tissues. Tissue samples such as the nucleus pulposus (NP), annulus fibrosus (AF), articular cartilage (AC) and meniscus were collected from goats and homogenised by either the MagNA Lyser or Freezer Mill. RNA of duplicate samples was subsequently isolated by either TRIzol (benchmark), or the RNeasy Lipid Tissue, RNeasy Fibrous Tissue, or Aurum Total RNA Fatty and Fibrous Tissue kits. RNA yield, purity, and integrity were determined and gene expression levels of type II collagen and aggrecan were measured by real-time PCR. No differences between the two homogenisation methods were found. RNA isolation using the RNeasy Fibrous and Lipid kits resulted in the purest RNA (A260/A280 ratio), whereas TRIzol isolations resulted in RNA that was less pure and showed a larger difference in gene expression between duplicate samples compared with both RNeasy kits. The Aurum kit showed low reproducibility. For the extraction of high-quality RNA from cartilaginous structures, we suggest homogenisation of the samples by the MagNA Lyser. For AC, NP and AF we recommend the RNeasy Fibrous kit, whereas for the meniscus the RNeasy Lipid kit is advised. Cite this article: M. Peeters, C. L. Huang, L. A. Vonk, Z. F. Lu, R. A. Bank, M. N. Helder, B. Zandieh Doulabi. Optimisation of high-quality total ribonucleic acid isolation from cartilaginous tissues for real-time polymerase chain reaction analysis. Bone Joint Res 2016

  2. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes WCNS and HDCS that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×, while the collaborative approach improves the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations
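    The CPU-GPU load-balancing idea, giving the "store-poor" GPU a share of the grid blocks proportional to its relative speed while respecting its memory limit, can be sketched as follows. The partition heuristic and all numbers are illustrative assumptions, not HOSTA's actual scheme:

```python
def partition_blocks(block_cells, gpu_speedup, gpu_capacity_cells):
    """Split grid blocks between GPU and CPU.

    Target GPU share = speedup / (speedup + 1) of total cells, capped by the
    GPU's memory capacity; blocks are considered largest-first."""
    total = sum(block_cells)
    target = min(total * gpu_speedup / (gpu_speedup + 1.0), gpu_capacity_cells)
    gpu, cpu, loaded = [], [], 0
    for i, cells in sorted(enumerate(block_cells), key=lambda t: -t[1]):
        if loaded + cells <= target:
            gpu.append(i)
            loaded += cells
        else:
            cpu.append(i)
    return gpu, cpu

# Hypothetical multi-block grid (cell counts in millions) on one node
blocks = [40, 30, 20, 10]
gpu, cpu = partition_blocks(blocks, gpu_speedup=1.3, gpu_capacity_cells=60)
```

With a 1.3x GPU and a 60-unit memory cap, the sketch sends the largest and smallest blocks to the GPU and leaves the rest on the CPU, approximating the speed-proportional split while never exceeding GPU memory.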

  3. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes WCNS and HDCS that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×, while the collaborative approach improves the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  4. Five times sit-to-stand test in subjects with total knee replacement: Reliability and relationship with functional mobility tests.

    Science.gov (United States)

    Medina-Mirapeix, Francesc; Vivo-Fernández, Iván; López-Cañizares, Juan; García-Vidal, José A; Benítez-Martínez, Josep Carles; Del Baño-Aledo, María Elena

    2018-01-01

    The objective was to determine the inter-observer and test-retest reliability of the "Five-repetition sit-to-stand" (5STS) test in patients with total knee replacement (TKR), and to explore the correlation between 5STS and two mobility tests. A reliability study was conducted among 24 (mean age 72.13, S.D. 10.67; 50% were women) outpatients with TKR. They were recruited from a traumatology unit of a public hospital via convenience sampling. A physiotherapist and a trauma physician assessed each patient at the same time. The same physiotherapist performed a second 5STS measurement 45-60 min after the first one. Reliability was assessed with intraclass correlation coefficients (ICCs) and Bland-Altman plots. The Pearson coefficient was calculated to assess the correlation between the 5STS, the timed up and go test (TUG) and four-metre gait speed (4MGS). ICCs for inter-observer and test-retest reliability of the 5STS were 0.998 (95% confidence interval [CI], 0.995-0.999) and 0.982 (95% CI, 0.959-0.992). The inter-observer Bland-Altman plot showed limits between -0.82 and 1.06 with a mean of 0.11 and no heteroscedasticity within the data. The test-retest Bland-Altman plot showed limits between -1.76 and 4.16, a mean of 1.20 and heteroscedasticity within the data. The Pearson correlation coefficient revealed a significant correlation between the 5STS and TUG (r = 0.7, p < 0.05). The 5STS shows excellent inter-observer and test-retest reliability when it is used in people with TKR, as well as significant correlation with other functional mobility tests. These findings support the use of the 5STS as an outcome measure in the TKR population. Copyright © 2017 Elsevier B.V. All rights reserved.
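    The Bland-Altman limits of agreement used in the study are simply the mean inter-observer difference ± 1.96 standard deviations of the differences. A minimal sketch with invented paired 5STS times (not the study's data):

```python
import math

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between two raters."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

# Hypothetical paired 5STS times (seconds) from two observers
obs1 = [12.1, 15.4, 10.8, 18.2, 14.0]
obs2 = [12.0, 15.2, 11.0, 18.0, 13.8]
mean_diff, lo, hi = bland_altman(obs1, obs2)
```

Narrow limits that bracket zero, as in the study's inter-observer plot (-0.82 to 1.06), indicate close agreement between raters.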

  5. Performance evaluation of linear time-series ionospheric Total Electron Content model over low latitude Indian GPS stations

    Science.gov (United States)

    Dabbakuti, J. R. K. Kumar; Venkata Ratnam, D.

    2017-10-01

    Precise modeling of the ionospheric Total Electron Content (TEC) is a critical aspect of Positioning, Navigation, and Timing (PNT) services intended for the Global Navigation Satellite Systems (GNSS) applications as well as Earth Observation System (EOS), satellite communication, and space weather forecasting applications. In this paper, linear time series modeling has been carried out on ionospheric TEC at two different locations at Koneru Lakshmaiah University (KLU), Guntur (geographic 16.44° N, 80.62° E; geomagnetic 7.55° N) and Bangalore (geographic 12.97° N, 77.59° E; geomagnetic 4.53° N) at the northern low-latitude region, for the year 2013 in the 24th solar cycle. The impact of the solar and geomagnetic activity on periodic oscillations of TEC has been investigated. Results confirm that the correlation coefficient of the estimated TEC from the linear model TEC and the observed GPS-TEC is around 93%. Solar activity is the key component that influences ionospheric daily averaged TEC while periodic component reveals the seasonal dependency of TEC. Furthermore, it is observed that the influence of geomagnetic activity component on TEC is different at both the latitudes. The accuracy of the model has been assessed by comparing the International Reference Ionosphere (IRI) 2012 model TEC and TEC measurements. Moreover, the absence of winter anomaly is remarkable, as determined by the Root Mean Square Error (RMSE) between the linear model TEC and GPS-TEC. On the contrary, the IRI2012 model TEC evidently failed to predict the absence of winter anomaly in the Equatorial Ionization Anomaly (EIA) crest region. The outcome of this work will be useful for improving the ionospheric now-casting models under various geophysical conditions.

  6. Characterizing the hydration state of L-threonine in solution using terahertz time-domain attenuated total reflection spectroscopy

    Science.gov (United States)

    Huang, Huachuan; Liu, Qiao; Zhu, Liguo; Li, Zeren

    2018-01-01

    The hydration of biomolecules is closely related to the dynamic processes underlying their functional expression; characterizing hydration phenomena is therefore a subject of keen interest. However, direct measurements of the global hydration state of biomolecules cannot be acquired using traditional techniques such as thermodynamics, ultrasound, microwave spectroscopy or viscosity. In order to characterize the global hydration of an amino acid such as L-threonine, terahertz time-domain attenuated total reflection spectroscopy (THz-TDS-ATR) was adopted in this paper. By measuring the complex permittivity of L-threonine solutions of various concentrations in the THz region, the hydration state and its concentration dependence were obtained, indicating that the number of hydration water molecules decreased with increasing concentration. The hydration number was evaluated to be 17.8 when the molar concentration of L-threonine was 0.34 mol/L, and dropped to 13.2 when the molar concentration increased to 0.84 mol/L, when global hydration was taken into account. Based on these direct measurements, the THz-TDS-ATR technique is believed to be a powerful tool for studying the picosecond molecular dynamics of amino acid solutions.

  7. Cross-sectional associations of total sitting and leisure screen time with cardiometabolic risk in adults. Results from the HUNT Study, Norway

    NARCIS (Netherlands)

    Chau, J.Y.; Grunseit, A.; Midthjell, K.; Holmen, J.; Holmen, T.L.; Bauman, A.E.; van der Ploeg, H.P.

    2014-01-01

    Objectives: To examine associations of total sitting time, TV-viewing and leisure-time computer use with cardiometabolic risk biomarkers in adults. Design: Population based cross-sectional study. Methods: Waist circumference, BMI, total cholesterol, HDL cholesterol, blood pressure, non-fasting

  8. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user’s quality of experience (QoE. Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with the existing algorithms.
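    The core trade-off in the joint CPU/networking control, amortising the radio's fixed tail energy over larger download batches while paying a per-chunk transfer and decode cost, can be captured in a toy model. All energy figures below are invented for illustration; this is not the paper's formulation:

```python
import math

def batch_energy(total_chunks, batch_size, tail_energy_j=6.0, chunk_energy_j=0.5):
    """Toy energy model: one radio wake-up (tail energy) per batch of chunks,
    plus a fixed transfer/decode cost per chunk."""
    wakeups = math.ceil(total_chunks / batch_size)
    return wakeups * tail_energy_j + total_chunks * chunk_energy_j

def best_batch_size(total_chunks, max_batch):
    """Pick the batch size (chunks downloaded per wake-up) minimising energy."""
    return min(range(1, max_batch + 1),
               key=lambda n: batch_energy(total_chunks, n))

n = best_batch_size(total_chunks=60, max_batch=10)
```

Under this model, larger batches always save radio energy; a fuller formulation like the paper's would also weigh buffer limits and CPU frequency scaling, which bound how many chunks can be fetched and decoded per wake-up.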

  9. Winter-time size distribution and source apportionment of total suspended particulate matter and associated metals in Delhi

    Science.gov (United States)

    Srivastava, Arun; Gupta, Sandeep; Jain, V. K.

    2009-03-01

    A study of the winter-time size distribution and source apportionment of total suspended particulate matter (TSPM) and associated heavy metal concentrations has been carried out for the city of Delhi. This study is important from the point of view of the implementation of compressed natural gas (CNG) as an alternative to diesel fuel in the public transport system in 2001 to reduce the pollution level. TSPM was collected using a five-stage cascade impactor at six sites in the winters of 2005-06. The results of the size distribution indicate that a major portion (~40%) of the TSPM concentration is in the form of PM0.7, and heavy metals were associated with the various size fractions of TSPM. A very good correlation between the coarse and fine size fractions of TSPM was observed. It was also observed that metals associated with coarse particles have a greater chance of correlating with other metals than those associated with fine particles. Source apportionment was carried out separately in the coarse and fine size modes of TSPM by the Chemical Mass Balance Receptor Model (CMB8) as well as by Principal Component Analysis (PCA) in SPSS. Source apportionment by PCA reveals that there are two major sources (possibly vehicular and crustal re-suspension) in both the coarse and fine size fractions. Results obtained by CMB8 show the dominance of vehicular pollutants and crustal dust in the fine and coarse size modes, respectively. Noticeably, the dominance of vehicular pollutants is now confined to the fine size only, whilst during the pre-CNG era it dominated both the coarse and fine size modes. An increase of 42.5, 44.4, 48.2, 38.6 and 38.9% in the concentrations of TSPM, PM10.9, coarse particles, fine particles and lead, respectively, was observed from the pre-CNG (2001) to the post-CNG (2005-06) period.

  10. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    Science.gov (United States)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double precision alternating direction implicit (ADI) solver for three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software on heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove a lot of redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern MPI-OpenMP-CUDA that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain speedups of 6.0 for the ADI solver on one Tesla M2050 GPU in contrast to two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platform.

  11. iFLOOD: A Real Time Flood Forecast System for Total Water Modeling in the National Capital Region

    Science.gov (United States)

    Sumi, S. J.; Ferreira, C.

    2017-12-01

    Extreme flood events are the costliest natural hazards impacting the US and frequently cause extensive damages to infrastructure, disruption to economy and loss of lives. In 2016, Hurricane Matthew brought severe damage to South Carolina and demonstrated the importance of accurate flood hazard predictions that requires the integration of riverine and coastal model forecasts for total water prediction in coastal and tidal areas. The National Weather Service (NWS) and the National Ocean Service (NOS) provide flood forecasts for almost the entire US, still there are service-gap areas in tidal regions where no official flood forecast is available. The National capital region is vulnerable to multi-flood hazards including high flows from annual inland precipitation events and surge driven coastal inundation along the tidal Potomac River. Predicting flood levels on such tidal areas in river-estuarine zone is extremely challenging. The main objective of this study is to develop the next generation of flood forecast systems capable of providing accurate and timely information to support emergency management and response in areas impacted by multi-flood hazards. This forecast system is capable of simulating flood levels in the Potomac and Anacostia River incorporating the effects of riverine flooding from the upstream basins, urban storm water and tidal oscillations from the Chesapeake Bay. Flood forecast models developed so far have been using riverine data to simulate water levels for Potomac River. Therefore, the idea is to use forecasted storm surge data from a coastal model as boundary condition of this system. Final output of this validated model will capture the water behavior in river-estuary transition zone far better than the one with riverine data only. The challenge for this iFLOOD forecast system is to understand the complex dynamics of multi-flood hazards caused by storm surges, riverine flow, tidal oscillation and urban storm water. Automated system

  12. The Impact of Total Ischemic Time, Donor Age and the Pathway of Donor Death on Graft Outcomes After Deceased Donor Kidney Transplantation.

    Science.gov (United States)

    Wong, Germaine; Teixeira-Pinto, Armando; Chapman, Jeremy R; Craig, Jonathan C; Pleass, Henry; McDonald, Stephen; Lim, Wai H

    2017-06-01

    Prolonged ischemia is a known risk factor for delayed graft function (DGF) and its interaction with donor characteristics, the pathways of donor death, and graft outcomes may have important implications for allocation policies. Using data from the Australian and New Zealand Dialysis and Transplant registry (1994-2013), we examined the relationship between total ischemic time and graft outcomes among recipients who received their first deceased donor kidney transplants. Total ischemic time (in hours) was defined as the time from the interruption of the donor renal artery or aortic clamping until the release of the clamp on the renal artery in the recipient. A total of 7542 recipients were followed up over a median follow-up time of 5.3 years (interquartile range of 8.2 years). Of these, 1823 (24.6%) experienced DGF and 2553 (33.9%) experienced allograft loss. Recipients with total ischemic time of 14 hours or longer experienced increased odds of DGF compared with those with total ischemic time less than 14 hours. This effect was most marked among those with older donors (P value for interaction = 0.01). There was a significant interaction between total ischemic time, donor age, and graft loss (P value for interaction = 0.03). There was, on average, a 9% increase in the overall risk of graft loss per hour increase in the total ischemic time (adjusted hazard ratio, 1.09; 95% confidence interval, 1.01-1.18; P = 0.02) in recipients with older donation after circulatory death grafts. There is a clinically important interaction between donor age, the pathway of donor death, and total ischemic time on graft outcomes, such that the duration of ischemic time has the greatest impact on graft survival in recipients with older donation after circulatory death kidneys.
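    A per-hour hazard ratio compounds multiplicatively, so the reported adjusted HR of 1.09 per hour implies, for example, that eight additional hours of ischemia roughly doubles the hazard in the affected subgroup. A one-line sketch of the arithmetic:

```python
def cumulative_hazard_ratio(per_hour_hr, extra_hours):
    """Multiplicative compounding of a constant per-hour hazard ratio."""
    return per_hour_hr ** extra_hours

hr_8h = cumulative_hazard_ratio(1.09, 8)  # about 1.99, i.e. ~doubled hazard
```

The same compounding applied to the confidence bounds (1.01 and 1.18 per hour) shows how wide the uncertainty becomes over longer ischemic times.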

  13. Cross-sectional associations of total sitting and leisure screen time with cardiometabolic risk in adults. Results from the HUNT Study, Norway.

    Science.gov (United States)

    Chau, Josephine Y; Grunseit, Anne; Midthjell, Kristian; Holmen, Jostein; Holmen, Turid L; Bauman, Adrian E; van der Ploeg, Hidde P

    2014-01-01

    To examine associations of total sitting time, TV-viewing and leisure-time computer use with cardiometabolic risk biomarkers in adults. Population based cross-sectional study. Waist circumference, BMI, total cholesterol, HDL cholesterol, blood pressure, non-fasting glucose, gamma glutamyltransferase (GGT) and triglycerides were measured in 48,882 adults aged 20 years or older from the Nord-Trøndelag Health Study 2006-2008 (HUNT3). Adjusted multiple regression models were used to test for associations between these biomarkers and self-reported total sitting time, TV-viewing and leisure-time computer use in the whole sample and by cardiometabolic disease status sub-groups. In the whole sample, reporting total sitting time ≥10 h/day was associated with poorer BMI, waist circumference, total cholesterol, HDL cholesterol, diastolic blood pressure, systolic blood pressure, non-fasting glucose, GGT and triglyceride levels compared to those reporting total sitting time <10 h/day. Leisure-time computer use ≥1 h/day was associated with poorer BMI, total cholesterol, diastolic blood pressure, GGT and triglycerides compared with those reporting no leisure-time computing. Sub-group analyses by cardiometabolic disease status showed similar patterns in participants free of cardiometabolic disease, while similar albeit non-significant patterns were observed in those with cardiometabolic disease. Total sitting time, TV-viewing and leisure-time computer use are associated with poorer cardiometabolic risk profiles in adults. Reducing sedentary behaviour throughout the day and limiting TV-viewing and leisure-time computer use may have health benefits. Copyright © 2013 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  14. 38 CFR 3.22 - DIC benefits for survivors of certain veterans rated totally disabled at time of death.

    Science.gov (United States)

    2010-07-01

    ... benefits under paragraph (a) of this section receives any money or property pursuant to a judicial... amount of money received and the fair market value of the property received. The provisions of this... veteran. The amount to be reported is the total of the amount of money received and the fair market value...

  15. A real-time PCR method for quantification of the total and major variant strains of the deformed wing virus.

    Directory of Open Access Journals (Sweden)

    Emma L Bradford

    Full Text Available European honey bees (Apis mellifera) are critically important to global food production by virtue of their pollination services but are severely threatened by deformed wing virus (DWV), especially in the presence of the external parasite Varroa destructor. DWV exists as many viral strains, with the two major variants (DWV-A and DWV-B) varying in virulence. A single plasmid standard was constructed containing three sections for the specific determination of DWV-A (VP2 capsid region), DWV-B (IRES) and a conserved region suitable for total DWV (helicase region). The assays were confirmed as specific and discriminatory, with limits of detection of 25, 25 and 50 genome equivalents for DWV-A, DWV-B and total DWV, respectively. The methods were successfully tested on Apis mellifera and V. destructor samples with varying DWV profiles. The new method determined a more accurate total DWV titre in samples with substantial DWV-B than the method currently described in the COLOSS Beebook. The proposed assays could be utilized for the screening of large quantities of bee material, both for a total DWV load overview and for more detailed investigations into DWV-A and DWV-B profiles.

  16. Total internal reflection fluorescence (TIRF) microscopy for real-time imaging of nanoparticle-cell plasma membrane interaction

    DEFF Research Database (Denmark)

    Parhamifar, Ladan; Moghimi, Seyed Moien

    2012-01-01

    Nanoparticulate systems are widely used for site-specific drug and gene delivery as well as for medical imaging. The mode of nanoparticle-cell interaction may have a significant effect on the pathway of nanoparticle internalization and subsequent intracellular trafficking. Total internal reflection...

  17. CPU0213, a novel endothelin type A and type B receptor antagonist, protects against myocardial ischemia/reperfusion injury in rats

    Directory of Open Access Journals (Sweden)

    Z.Y. Wang

    2011-11-01

    Full Text Available The efficacy of endothelin receptor antagonists in protecting against myocardial ischemia/reperfusion (I/R) injury is controversial, and the mechanisms remain unclear. The aim of this study was to investigate the effects of CPU0213, a novel endothelin type A and type B receptor antagonist, on myocardial I/R injury and to explore the mechanisms involved. Male Sprague-Dawley rats weighing 200-250 g were randomized to three groups (6-7 per group): group 1, Sham; group 2, I/R + vehicle, in which rats were subjected to in vivo myocardial I/R injury by ligation of the left anterior descending coronary artery and 0.5% sodium carboxymethyl cellulose (1 mL/kg) was injected intraperitoneally immediately prior to coronary occlusion; and group 3, I/R + CPU0213, in which rats were subjected to identical surgical procedures and CPU0213 (30 mg/kg) was injected intraperitoneally immediately prior to coronary occlusion. Infarct size, cardiac function and biochemical changes were measured. CPU0213 pretreatment reduced infarct size as a percentage of the ischemic area by 44.5% (I/R + vehicle: 61.3 ± 3.2 vs I/R + CPU0213: 34.0 ± 5.5%, P < 0.05) and improved ejection fraction by 17.2% (I/R + vehicle: 58.4 ± 2.8 vs I/R + CPU0213: 68.5 ± 2.2%, P < 0.05) compared to vehicle-treated animals. This protection was associated with inhibition of myocardial inflammation and oxidative stress. Moreover, the reduction in Akt (protein kinase B) and endothelial nitric oxide synthase (eNOS) phosphorylation induced by myocardial I/R injury was limited by CPU0213 (P < 0.05). These data suggest that CPU0213, a non-selective antagonist, has protective effects against myocardial I/R injury in rats, which may be related to the Akt/eNOS pathway.
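    The relative-change percentages quoted above can be reproduced directly from the reported group means; a quick check in Python (values taken from the abstract, the helper name is hypothetical):

    ```python
    # Quick arithmetic check of the relative changes quoted in the abstract.
    # `relative_change` is a hypothetical helper, not from the paper.

    def relative_change(control: float, treated: float) -> float:
        """Relative difference of `treated` vs `control`, as a fraction of control."""
        return (treated - control) / control

    # Infarct size (% of ischemic area): 61.3 (vehicle) vs 34.0 (CPU0213).
    infarct_reduction = -relative_change(61.3, 34.0)  # ~0.445, the reported 44.5% reduction
    # Ejection fraction (%): 58.4 (vehicle) vs 68.5 (CPU0213).
    ef_improvement = relative_change(58.4, 68.5)      # ~0.17, the reported ~17% improvement

    print(f"infarct: {infarct_reduction:.3f}, EF: {ef_improvement:.3f}")
    ```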

  18. An evaluation of total disintegration time for three different doses of sublingual fentanyl tablets in patients with breakthrough pain.

    Science.gov (United States)

    Nalamachu, Srinivas

    2013-12-01

    Breakthrough pain is common among patients with cancer and presents challenges to effective pain management. Breakthrough pain is characterized by rapid onset, severe intensity, and typically short duration. This study evaluated the total disintegration time of three different doses of sublingual fentanyl tablets in opioid-tolerant patients. This was a single-center, non-randomized, open-label study. Opioid-tolerant adult patients (N = 30) with chronic pain were assigned to one of three dose groups and self-administered a single 100, 200, or 300 μg sublingual fentanyl tablet (Abstral®, Galena Biopharma, Portland, OR, USA). Time to complete disintegration was measured by each patient with a stopwatch and independently verified by study personnel. Disintegration time (mean ± SD) for sublingual fentanyl tablets (all doses) was 88.2 ± 55.1 s. Mean disintegration times tended to be slightly longer for the 200 μg (96.7 ± 57.9 s) and 300 μg doses (98.6 ± 64.8 s) compared to the 100 μg dose (69.5 ± 40.5 s). Differences were not statistically significant. Disintegration time was not significantly different between men and women and was not affected by age. Sublingual fentanyl tablets dissolved rapidly (average time <2 min) in all patients, with the higher doses taking slightly more time to dissolve.

  19. [High time for a total ban on smoking in the hotel, restaurant and catering industry: the arguments are mounting].

    Science.gov (United States)

    Hassink, R J; Franke, L J A

    2007-02-24

    Active and passive smoking are well-known causes of disease, including respiratory and cardiovascular disease and cancer. In 2004 the Dutch government introduced new legislation to regulate smoking in the workplace. However, smoking is still allowed in hotels, bars and restaurants, despite the fact that two-thirds of the Dutch population support a total ban on smoking in public places. Several other European countries and American states have banned smoking in public places. Studies performed in these regions show that the new smoking regulations have had no negative economic effects. Moreover, various studies have shown that smoking bans have a positive impact on public health, even in the short term, including a significant decrease in respiratory and cardiovascular disease. There is therefore no reason to continue to exclude hotels, bars and restaurants from the smoking ban in all public places in The Netherlands.

  20. Simulated effect of timing and Pt quantity injected on On-line NobleChem application on total fuel liftoff

    International Nuclear Information System (INIS)

    Pop, M.G.; Riddle, J.M.; Lamanna, L.S.; Gregorich, C.; Hoornik, A.

    2015-01-01

    Total liftoff is a measure of fuel performance and a risk indicator for fuel reliability. Fuel operability and license limits are directly related to the expected total lifetime liftoff. AREVA's continued commitment to zero fuel failure is expressed, among other efforts, in the continued development and improvement of its fuel cladding corrosion and crud risk assessment tools. The AREVA models used to assess and predict crud deposition on BWR cores over their lifespan have been refined by the development and incorporation of the PEZOG tool in response to the industry's move to the On-Line NobleChem™ (OLNC) technology. PEZOG models the platinum-enhanced zirconium oxide growth of fuel cladding when exposed to platinum during operation. Depending on the local chemistry and radiation conditions, noble metals act as catalysts for many reactions, including but not limited to hydrogen oxidation and oxygen reduction. OLNC's intention is to catalyze the hydrogen and oxygen recombination reaction for core internals protection. However, research has indicated that noble metals catalyze the oxygen reduction under the chemistry and radiation conditions experienced in the pores of crud deposits, and hence can increase the corrosion rate of zirconium alloy cladding. The developed PEZOG module calculates the oxide thickness as a function of platinum injection strategy. The stratified nature of oxide and crud layers formed on fuel cladding surfaces is reflected in the calculations, as are the different platinum interactions in each of the layers. This paper presents examples of the evaluation of various aspects of the platinum injection strategies and their influence on the oxide growth enhancement as applied to conditions of a U.S. plant. (authors)

  1. Effect of the ionizing radiation and aging time on total flavonoids contents in Brazilian sugarcane spirit composed with green propolis

    International Nuclear Information System (INIS)

    Baptista, Antonio S.; Alencar, Severino M. de; Tiveron, Ana P.; Prado, Adna; Bergamaschi, Keityane B.; Veiga, Lucimara F. da; Aguiar, Claudio L. de; Baptista, Aparecido S.; Horii, Jorge

    2009-01-01

    Propolis is a natural product of vegetable origin that is generally collected in beehives. It is widely known for the health benefits attributed to its biological properties. Brazilian sugarcane spirit, 'cachaca', is in turn an alcoholic beverage of increasing importance in many markets around the world. This study therefore evaluated the addition of propolis to cachaca and the effect of ionizing radiation on propolis compounds with biological properties. Samples used in the irradiation experiments were prepared from cachaca (40 deg GL) composed with propolis (0.1%). Eight treatments, with four repetitions each, were considered in this study. Three doses of ionizing energy from electron beam and gamma radiation from ⁶⁰Co were applied to the cachaca samples, i.e. 0.5, 1.0, and 2.0 kGy, with the goal of accelerating the aging of the cachaca. The sugarcane spirit samples were stored for two periods (analyzed immediately after the radiation treatment and 30 months after the treatments) and their flavonoid contents were determined. Flavonoid contents in the sugarcane spirit differed statistically between the two storage times. The samples of cachaca treated with electron beam at 2.0 kGy presented the highest reduction in flavonoid contents, approximately 30.0% relative to the first analysis time. In conclusion, both storage time and ionizing radiation reduced the flavonoid contents, mainly in the first period of storage. (author)

  2. Effect of the ionizing radiation and aging time on total flavonoids contents in Brazilian sugarcane spirit composed with green propolis

    Energy Technology Data Exchange (ETDEWEB)

    Baptista, Antonio S.; Alencar, Severino M. de; Tiveron, Ana P.; Prado, Adna; Bergamaschi, Keityane B.; Veiga, Lucimara F. da; Aguiar, Claudio L. de; Baptista, Aparecido S.; Horii, Jorge [Escola Superior de Agricultura Luiz de Queiroz (ESALQ/USP), Piracicaba, SP (Brazil). Dept. de Agroindustria, Alimentos e Nutricao], e-mail: asbaptis@esalq.usp.br, e-mail: alencar@esalq.usp.br, e-mail: anptiver@esalq.usp.br, e-mail: adprado@esalq.usp.br, e-mail: kbergamas@esalq.usp.br, e-mail: lcfernan@esalq.usp.br, e-mail: claguiar@esalq.usp.br, e-mail: pmatao@gmail.com, e-mail: jhorii@esalq.usp.br; Arthur, Valter [Centro de Energia Nuclear na Agricultura (CENA/USP), Piracicaba, SP (Brazil)], e-mail: arthur@cena.usp.br

    2009-07-01

    Propolis is a natural product of vegetable origin that is generally collected in beehives. It is widely known for the health benefits attributed to its biological properties. Brazilian sugarcane spirit, 'cachaca', is in turn an alcoholic beverage of increasing importance in many markets around the world. This study therefore evaluated the addition of propolis to cachaca and the effect of ionizing radiation on propolis compounds with biological properties. Samples used in the irradiation experiments were prepared from cachaca (40 deg GL) composed with propolis (0.1%). Eight treatments, with four repetitions each, were considered in this study. Three doses of ionizing energy from electron beam and gamma radiation from ⁶⁰Co were applied to the cachaca samples, i.e. 0.5, 1.0, and 2.0 kGy, with the goal of accelerating the aging of the cachaca. The sugarcane spirit samples were stored for two periods (analyzed immediately after the radiation treatment and 30 months after the treatments) and their flavonoid contents were determined. Flavonoid contents in the sugarcane spirit differed statistically between the two storage times. The samples of cachaca treated with electron beam at 2.0 kGy presented the highest reduction in flavonoid contents, approximately 30.0% relative to the first analysis time. In conclusion, both storage time and ionizing radiation reduced the flavonoid contents, mainly in the first period of storage. (author)

  3. Linear and nonlinear attributes of ultrasonic time series recorded from experimentally loaded rock samples and total failure prediction

    Czech Academy of Sciences Publication Activity Database

    Rudajev, Vladimír; Číž, R.

    2007-01-01

    Vol. 44, No. 3 (2007), pp. 457-467 ISSN 1365-1609 R&D Projects: GA ČR GA205/06/0906 Institutional research plan: CEZ:AV0Z30130516; CEZ:AV0Z30460519 Keywords: ultrasonic emission * microfracturing * time series * autocorrelation * fractal dimensions * neural networks Subject RIV: DC - Seismology, Volcanology, Earth Structure Impact factor: 0.735, year: 2007

  4. Distributed Task Rescheduling With Time Constraints for the Optimization of Total Task Allocations in a Multirobot System.

    Science.gov (United States)

    Turner, Joanna; Meng, Qinggang; Schaefer, Gerald; Whitbrook, Amanda; Soltoggio, Andrea

    2017-09-28

    This paper considers the problem of maximizing the number of task allocations in a distributed multirobot system under strict time constraints, where other optimization objectives must also be considered. It builds upon existing distributed task allocation algorithms, extending them with a novel method for maximizing the number of task assignments. The fundamental idea is that a task assignment to a robot has a high cost if its reassignment to another robot creates a feasible time slot for unallocated tasks. Multiple reassignments among networked robots may be required to create a feasible time slot, and an upper limit on this number of reassignments can be adjusted according to performance requirements. A simulated rescue scenario with task deadlines and fuel limits is used to demonstrate the performance of the proposed method compared with existing methods, the consensus-based bundle algorithm and the performance impact (PI) algorithm. Starting from existing (PI-generated) solutions, results show up to a 20% increase in task allocations using the proposed method.
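    The reassignment test at the heart of the idea can be sketched as a toy model; the data structures and names below are hypothetical illustrations (not the PI algorithm from the paper), assuming tasks run back-to-back in deadline order:

    ```python
    # Toy illustration of the reassignment test: an assignment is "costly" if
    # moving it to another robot frees a feasible slot for an unallocated task.
    # Data model and names are hypothetical, not the paper's PI algorithm.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        duration: float
        deadline: float

    def feasible(schedule):
        """True if every task meets its deadline when run back-to-back from t=0."""
        t = 0.0
        for task in schedule:
            t += task.duration
            if t > task.deadline:
                return False
        return True

    def frees_slot(schedule, task, unallocated):
        """Would reassigning `task` elsewhere make `unallocated` feasible here?"""
        reduced = [t for t in schedule if t is not task]
        candidate = sorted(reduced + [unallocated], key=lambda t: t.deadline)
        return feasible(candidate)

    a = Task("a", duration=2, deadline=2)
    b = Task("b", duration=3, deadline=6)
    u = Task("u", duration=2, deadline=5)   # currently unallocated
    robot = [a, b]                          # [a, b, u] together would miss b's deadline
    print(frees_slot(robot, b, u))          # → True: moving b away creates room for u
    ```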

  5. Integer batch scheduling problems for a single-machine with simultaneous effect of learning and forgetting to minimize total actual flow time

    Directory of Open Access Journals (Sweden)

    Rinto Yusriski

    2015-09-01

    Full Text Available This research discusses an integer batch scheduling problem for a single machine with position-dependent batch processing time due to the simultaneous effect of learning and forgetting. The decision variables are the number of batches, the batch sizes, and the sequence of the resulting batches. The objective is to minimize total actual flow time, defined as the total interval time between the arrival times of parts in all respective batches and their common due date. Two algorithms are proposed to solve the problem. The first, developed using the Integer Composition method, produces an optimal solution. Since the first algorithm has a worst-case time complexity of O(n·2ⁿ⁻¹), this research also proposes a second, heuristic algorithm based on the Lagrange Relaxation method. Numerical experiments show that the heuristic algorithm gives outstanding results.
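    The O(n·2ⁿ⁻¹) worst case reflects the number of integer compositions of n: each of the n-1 gaps between consecutive jobs either starts a new batch or does not, giving 2ⁿ⁻¹ candidate splits. A minimal enumeration sketch (an illustrative helper, not the paper's full scheduling algorithm):

    ```python
    # Enumerate integer compositions of n: every choice of "cut / no cut" in the
    # n-1 gaps between consecutive jobs yields one batch split, hence 2**(n-1)
    # candidates, consistent with the O(n*2**(n-1)) worst case quoted above.
    # (Illustrative helper; this is not the paper's full scheduling algorithm.)

    def compositions(n):
        """Yield every ordered split of n jobs into consecutive batch sizes."""
        for bits in range(2 ** (n - 1)):
            batch, result = 1, []
            for gap in range(n - 1):
                if bits >> gap & 1:       # cut here: close the current batch
                    result.append(batch)
                    batch = 1
                else:                     # no cut: grow the current batch
                    batch += 1
            result.append(batch)
            yield result

    for split in compositions(4):
        print(split)                      # 8 splits: [4], [1, 3], ..., [1, 1, 1, 1]
    ```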

  6. Characterizing Methane Emissions at Local Scales with a 20 Year Total Hydrocarbon Time Series, Imaging Spectrometry, and Web Facilitated Analysis

    Science.gov (United States)

    Bradley, Eliza Swan

    Methane is an important greenhouse gas for which uncertainty in local emission strengths necessitates improved source characterizations. Although CH4 plume mapping did not motivate the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) design and municipal air quality monitoring stations were not intended for studying marine geological seepage, these assets have capabilities that can make them viable for studying concentrated (high flux, highly heterogeneous) CH4 sources, such as the Coal Oil Point (COP) seep field (~0.015 Tg CH4 yr⁻¹) offshore Santa Barbara, California. Hourly total hydrocarbon (THC) data, spanning 1990 to 2008 from an air pollution station located near COP, were analyzed and showed geologic CH4 emissions as the dominant local source. A band ratio approach was developed and applied to high glint AVIRIS data over COP, resulting in local-scale mapping of natural atmospheric CH4 plumes. A Cluster-Tuned Matched Filter (CTMF) technique was applied to Gulf of Mexico AVIRIS data to detect CH4 venting from offshore platforms. Review of 744 platform-centered CTMF subsets was facilitated through a flexible PHP-based web portal. This dissertation demonstrates the value of investigating municipal air quality data and imaging spectrometry for gathering insight into concentrated methane source emissions and highlights how flexible web-based solutions can help facilitate remote sensing research.

  7. Ichnotaxa for bite traces of tetrapods: A new area of research or a total waste of time?

    DEFF Research Database (Denmark)

    Jacobsen, Aase Roland; Bromley, Richard Granville

    to the naming of biting trace fossils in bone substrates. Study of tetrapod bite trace fossils has revealed feeding behaviour, jaw mechanism, face-biting behaviour, social behaviour etc., as well as palaeoenvironmental conditions. But should naming of scratches and holes produced by teeth be considered...... a worthless waste of time? Is naming of this group of trace fossils considered a productive move? We have extended this work, suggesting new ichnotaxa for bite traces to focus on their potential value for identifying the tracemaker and thereby feeding behaviour. Bite traces also have a great potential...

  8. SAFARI digital processing unit: performance analysis of the SpaceWire links in case of a LEON3-FT based CPU

    Science.gov (United States)

    Giusi, Giovanni; Liu, Scige J.; Di Giorgio, Anna M.; Galli, Emanuele; Pezzuto, Stefano; Farina, Maria; Spinoglio, Luigi

    2014-08-01

    SAFARI (SpicA FAR infrared Instrument) is a far-infrared imaging Fourier Transform Spectrometer for the SPICA mission. The Digital Processing Unit (DPU) of the instrument implements the functions of controlling the overall instrument and implementing the science data compression and packing. The DPU design is based on the use of a LEON family processor. In SAFARI, all instrument components are connected to the central DPU via SpaceWire links. On these links science data, housekeeping and commands flows are in some cases multiplexed, therefore the interface control shall be able to cope with variable throughput needs. The effective data transfer workload can be an issue for the overall system performances and becomes a critical parameter for the on-board software design, both at application layer level and at lower, and more HW related, levels. To analyze the system behavior in presence of the expected SAFARI demanding science data flow, we carried out a series of performance tests using the standard GR-CPCI-UT699 LEON3-FT Development Board, provided by Aeroflex/Gaisler, connected to the emulator of the SAFARI science data links, in a point-to-point topology. Two different communication protocols have been used in the tests, the ECSS-E-ST-50-52C RMAP protocol and an internally defined one, the SAFARI internal data handling protocol. An incremental approach has been adopted to measure the system performances at different levels of the communication protocol complexity. In all cases the performance has been evaluated by measuring the CPU workload and the bus latencies. The tests have been executed initially in a custom low-level execution environment and finally using the Real-Time Executive for Multiprocessor Systems (RTEMS), which has been selected as the operating system to be used onboard SAFARI. The preliminary results of the performance analysis confirmed the possibility of using a LEON3 CPU processor in the SAFARI DPU, but pointed out, in agreement

  9. MID-VASTUS VS MEDIAL PARA-PATELLAR APPROACH IN TOTAL KNEE REPLACEMENT—TIME TO DISCHARGE

    Science.gov (United States)

    Mukherjee, P.; Press, J.; Hockings, M.

    2009-01-01

    Background It has been shown before that when compared with the medial para-patellar approach, the mid-vastus approach for TKR results in less post-operative pain for patients and more rapid recovery of straight leg raise. As far as we are aware, the post-operative length of stay of the two groups of patients has not been compared. We postulated that the reduced pain and more rapid recovery of straight leg raise would translate into an earlier, safe, discharge home for the mid-vastus patients compared with those who underwent a traditional medial para-patellar approach. Methods Twenty patients operated on by each of five established knee arthroplasty surgeons were evaluated prospectively with regard to their pre- and post-operative range of movement, time to achieve straight leg raise post-operatively and length of post-operative hospital stay. Only one of the surgeons performed the mid-vastus approach, and the measurements were recorded by physiotherapists who were blinded as to the approach used on each patient. Results The results were analysed using a standard statistical software package, and although the mean length of stay was lower for the mid-vastus patients, the difference did not reach a level of significance (p = 0.13). The time taken to achieve straight leg raise post-operatively was significantly less in the mid-vastus group (p < 0.001). Conclusion Although this study confirms previous findings that the mid-vastus approach reduces the time taken for patients to achieve straight leg raise, when compared with the medial para-patellar approach, on its own it does not translate into a significantly shorter length of hospital stay. In order to reduce the length of post-operative hospital stay with an accelerated rehabilitation program for TKR, a multi-disciplinary approach is required. Patient expectations, GP support, physiotherapists and nursing staff all have a role to play and the mid-vastus approach, in permitting earlier straight leg raising

  10. Effect of ionizing radiation and aging time on total phenolics in Brazilian sugarcane spirit with green propolis

    International Nuclear Information System (INIS)

    Aguiar, Claudio L. de; Baptista, Antonio S.; Alencar, Severino M. de; Tiveron, Ana P.; Prado, Adna; Bergamaschi, Keityane B.; Veiga, Lucimara F. da; Baptista, Aparecido S.; Horii, Jorge

    2009-01-01

    Propolis is a natural product of vegetable origin that is generally collected from beehives. It is well known for the health benefits attributed to its biological properties. Brazilian sugarcane spirit, cachaca, in turn shows increasing interest and importance in the alcoholic beverage segment in many markets around the world. This study therefore evaluated the addition of propolis to cachaca and the effect of ionizing radiation on propolis compounds with biological activity. Samples used in the irradiation experiments were prepared from cachaca (40 deg GL) composed with propolis (0.1%). Eight treatments, with four repetitions each, were carried out in this study. Three doses of ionizing radiation from electron beam and gamma radiation from ⁶⁰Co were applied to the cachaca samples, i.e. 0.5, 1.0, and 2.0 kGy, aiming to accelerate the aging of the cachaca samples. The spirit samples were stored for two periods (analyzed immediately after the radiation treatment and 30 months after the treatments) and their phenolic compound contents were determined. Phenolic compound contents differed statistically between the two storage times of the cachaca. The samples treated with electron beam at 2.0 kGy presented the highest reduction in phenolic compound contents, approximately 6% in the first analysis and 11% in the second. In conclusion, storage time reduced the phenolic compounds, and electron-beam irradiation affected the contents of these compounds more than gamma radiation. (author)

  11. Effect of ionizing radiation and aging time on total phenolics in Brazilian sugarcane spirit with green propolis

    Energy Technology Data Exchange (ETDEWEB)

    Aguiar, Claudio L. de; Baptista, Antonio S.; Alencar, Severino M. de; Tiveron, Ana P.; Prado, Adna; Bergamaschi, Keityane B.; Veiga, Lucimara F. da; Baptista, Aparecido S.; Horii, Jorge [Escola Superior de Agricultura Luiz de Queiroz (ESALQ/USP), Piracicaba, SP (Brazil). Dept. de Agroindustria, Alimentos e Nutricao], e-mail: claguiar@esalq.usp.br, e-mail: asbaptis@esalq.usp.br, e-mail: alencar@esalq.usp.br, e-mail: anptiver@esalq.usp.br, e-mail: adprado@esalq.usp.br, e-mail: kbergamas@esalq.usp.br, e-mail: lcfernan@esalq.usp.br, e-mail: pmatao@gmail.com, e-mail: jhorii@esalq.usp.br; Arthur, Valter [Centro de Energia Nuclear na Agricultura (CENA/USP), Piracicaba, SP (Brazil)], e-mail: arthur@cena.usp.br

    2009-07-01

    Propolis is a natural product of vegetable origin that is generally collected from beehives. It is well known for the health benefits attributed to its biological properties. Brazilian sugarcane spirit, cachaca, in turn shows increasing interest and importance in the alcoholic beverage segment in many markets around the world. This study therefore evaluated the addition of propolis to cachaca and the effect of ionizing radiation on propolis compounds with biological activity. Samples used in the irradiation experiments were prepared from cachaca (40 deg GL) composed with propolis (0.1%). Eight treatments, with four repetitions each, were carried out in this study. Three doses of ionizing radiation from electron beam and gamma radiation from ⁶⁰Co were applied to the cachaca samples, i.e. 0.5, 1.0, and 2.0 kGy, aiming to accelerate the aging of the cachaca samples. The spirit samples were stored for two periods (analyzed immediately after the radiation treatment and 30 months after the treatments) and their phenolic compound contents were determined. Phenolic compound contents differed statistically between the two storage times of the cachaca. The samples treated with electron beam at 2.0 kGy presented the highest reduction in phenolic compound contents, approximately 6% in the first analysis and 11% in the second. In conclusion, storage time reduced the phenolic compounds, and electron-beam irradiation affected the contents of these compounds more than gamma radiation. (author)

  12. Gender differences in total cholesterol levels in patients with acute heart failure and its importance for short and long time prognosis.

    Science.gov (United States)

    Spinarova, Lenka; Spinar, Jindrich; Vitovec, Jiri; Linhart, Ales; Widimsky, Petr; Fedorco, Marian; Malek, Filip; Cihalik, Cestmir; Miklik, Roman; Dusek, Ladislav; Zidova, Klaudia; Jarkovsky, Jiri; Littnerova, Simona; Parenica, Jiri

    2012-03-01

    The purpose of this study was to evaluate whether there are gender differences in total cholesterol levels in patients with acute heart failure and whether this parameter is associated with short- and long-term mortality. The AHEAD MAIN registry is a database conducted in 7 university hospitals, all with 24-h cath lab service, in 4 cities in the Czech Republic. The database included 4,153 patients hospitalised for acute heart failure in the period 2006-2009, of whom 2,384 had a complete record of their total cholesterol levels; 946 females and 1,437 males were included in this analysis. According to admission total cholesterol levels, patients were divided into 5 groups, from the lowest category (group A) up to >6.0 mmol/l (group E). The median total cholesterol levels were 4.24 mmol/l in males and 4.60 mmol/l in females. A higher percentage of women than men had total cholesterol levels above 6 mmol/l, and a lower percentage were in the group below 4.5 mmol/l. In all total cholesterol categories, women were older than men. Total cholesterol levels are important for in-hospital mortality and long-term survival of patients admitted for acute heart failure.

  13. An Experimental Evaluation of Real-Time DVFS Scheduling Algorithms

    OpenAIRE

    Saha, Sonal

    2011-01-01

    Dynamic voltage and frequency scaling (DVFS) is an extensively studied energy management technique, which aims to reduce the energy consumption of computing platforms by dynamically scaling the CPU frequency. Real-Time DVFS (RT-DVFS) is a branch of DVFS which reduces CPU energy consumption through DVFS while at the same time ensuring that task time constraints are satisfied by constructing appropriate real-time task schedules. The literature presents numerous RT-DVFS schedul...
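    The core RT-DVFS idea can be illustrated with the classic static scaling rule for EDF scheduling: run the CPU at the lowest frequency at which total task utilization stays at or below 1. This is only a minimal sketch of the principle, not one of the specific algorithms evaluated in the thesis:

    ```python
    # Minimal sketch of the classic static RT-DVFS rule for EDF scheduling:
    # run the CPU at the lowest frequency at which total utilization stays <= 1.
    # (Illustrative only; the thesis evaluates several more elaborate algorithms.)

    def min_edf_frequency(tasks, f_max):
        """Lowest frequency keeping an EDF periodic task set schedulable.

        `tasks` is a list of (wcet_at_f_max, period) pairs in seconds; WCET is
        assumed to scale inversely with frequency. The result is capped at f_max.
        """
        utilization = sum(wcet / period for wcet, period in tasks)
        return min(f_max, utilization * f_max)

    # Two tasks using 62.5% of the CPU at full speed: run at 62.5% of f_max.
    print(min_edf_frequency([(1, 4), (3, 8)], f_max=1e9))  # → 625000000.0
    ```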

  14. Coupled Heuristic Prediction of Long Lead-Time Accumulated Total Inflow of a Reservoir during Typhoons Using Deterministic Recurrent and Fuzzy Inference-Based Neural Network

    Directory of Open Access Journals (Sweden)

    Chien-Lin Huang

    2015-11-01

    Full Text Available This study applies a Real-Time Recurrent Learning Neural Network (RTRLNN) and an Adaptive Network-based Fuzzy Inference System (ANFIS) with novel heuristic techniques to develop an advanced prediction model of the accumulated total inflow of a reservoir, in order to cope with the highly varied uncertainty of future long lead-times during typhoon attacks while using a real-time forecast. To promote the temporal-spatial forecast precision, the following original specialized heuristic inputs were coupled: observed-predicted inflow increase/decrease (OPIID) rate, total precipitation, and duration from the current time to the time of maximum precipitation and direct runoff ending (DRE). This study also investigated the temporal-spatial forecast error feature to assess the feasibility of the developed models, and analyzed the output sensitivity of both single and combined heuristic inputs to determine whether the heuristic model is susceptible to the impact of future forecast uncertainty/errors. Validation results showed that the long lead-time predicted accuracy and stability of the RTRLNN-based accumulated total inflow model are better than those of the ANFIS-based model because of the real-time recurrent deterministic routing mechanism of RTRLNN. Simulations show that the RTRLNN-based model with coupled heuristic inputs (RTRLNN-CHI; average error percentage (AEP)/average forecast lead-time (AFLT): 6.3%/49 h) can achieve better prediction than the models with non-heuristic inputs (AEP of RTRLNN-NHI and ANFIS-NHI: 15.2%/31.8%) because of the full consideration of real-time hydrological initial/boundary conditions. Besides, the RTRLNN-CHI model can extend the forecast lead-time beyond 49 h with less than 10% AEP, overcoming the previous forecast limits of 6-h AFLT with 20%-40% AEP.

  15. Total protein

    Science.gov (United States)

The total protein test measures the total amount of two classes ...

  16. A new system of computer-assisted navigation leading to reduction in operating time in uncemented total hip replacement in a matched population.

    Science.gov (United States)

    Chaudhry, Fouad A; Ismail, Sanaa Z; Davis, Edward T

    2018-05-01

Computer-assisted navigation techniques are used to optimise component placement and alignment in total hip replacement. The technology has developed over the last 10 years, but despite its advantages only 0.3% of all total hip replacements in England and Wales are performed using computer navigation. One of the reasons for this is that computer-assisted technology increases operative time. A new method of pelvic registration has been developed without the need to register the anterior pelvic plane (BrainLab hip 6.0), which has been shown to improve the accuracy of THR. The purpose of this study was to find out whether the new method reduces the operating time. This was a retrospective analysis comparing operating time in computer-navigated primary uncemented total hip replacement using two methods of registration. Group 1 included 128 cases performed using BrainLab versions 2.1-5.1. This version relied on the acquisition of the anterior pelvic plane for registration. Group 2 included 128 cases performed using the newest navigation software, BrainLab hip 6.0 (registration possible with the patient in the lateral decubitus position). The operating time was 65.79 (40-98) minutes using the old method of registration and 50.87 (33-74) minutes using the new method. This difference was statistically significant. The body mass index (BMI) was comparable in both groups. The study supports the use of the new method of registration to improve operating time in computer-navigated primary uncemented total hip replacement.

  17. MonetDB/X100 - A DBMS in the CPU cache

    NARCIS (Netherlands)

    M. Zukowski (Marcin); P.A. Boncz (Peter); N.J. Nes (Niels); S. Héman (Sándor)

    2005-01-01

X100 is a new execution engine for the MonetDB system that improves execution speed and overcomes its main memory limitation. It introduces the concept of in-cache vectorized processing that strikes a balance between the existing column-at-a-time MIL execution primitives of MonetDB and

  18. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease

    NARCIS (Netherlands)

    Shamonin, D.P.; Bron, E.E.; Lelieveldt, B.P.F.; Smits, M.; Klein, S.; Staring, M.

    2014-01-01

Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial.

  19. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease

    NARCIS (Netherlands)

    D.P. Shamonin (Denis); E.E. Bron (Esther); B.P.F. Lelieveldt (Boudewijn); M. Smits (Marion); S. Klein (Stefan); M. Staring (Marius)

    2014-01-01

Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be

  20. Neither pre-operative education or a minimally invasive procedure have any influence on the recovery time after total hip replacement.

    Science.gov (United States)

    Biau, David Jean; Porcher, Raphael; Roren, Alexandra; Babinet, Antoine; Rosencher, Nadia; Chevret, Sylvie; Poiraudeau, Serge; Anract, Philippe

    2015-08-01

The purpose of this study was to evaluate pre-operative education versus no education, and mini-invasive surgery versus standard surgery, with respect to the time to reach complete independence. We conducted a four-arm randomized controlled trial of 209 patients. The primary outcome criterion was the time to reach complete functional independence. Secondary outcomes included the operative time, the estimated total blood loss, the pain level, the dose of morphine, and the time to discharge. There was no significant effect of either education (HR: 1.1; P = 0.77) or mini-invasive surgery (HR: 1.0; P = 0.96) on the time to reach complete independence. Mini-invasive surgery significantly reduced the total estimated blood loss (P = 0.0035) and decreased the dose of morphine necessary for titration in recovery (P = 0.035). Neither pre-operative education nor mini-invasive surgery reduces the time to reach complete functional independence. Mini-invasive surgery significantly reduces blood loss and the need for morphine.

  1. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    Science.gov (United States)

    Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.

    2016-02-01

In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, the integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%.

  2. An Integer Batch Scheduling Model for a Single Machine with Simultaneous Learning and Deterioration Effects to Minimize Total Actual Flow Time

    International Nuclear Information System (INIS)

    Yusriski, R; Sukoyo; Samadhi, T M A A; Halim, A H

    2016-01-01

In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as the time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, the integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on Lagrange relaxation. The effectiveness of the proposed algorithm is determined by comparing its solutions to the respective optimal solutions obtained from the enumeration method. Numerical experiments show that the average difference between the solutions is 0.05%. (paper)
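
As a rough illustration of the objective in these two records, the sketch below computes a toy "total actual flow time" for a given batch sequence: batches are scheduled backward from the common due date, and each part's actual flow time is counted from its batch's start (taken as its arrival) to the due date. This is a deliberate, hypothetical simplification with a constant per-unit processing time; it ignores the learning and deterioration effects that the paper's model captures.

```python
from typing import List

def total_actual_flow_time(batch_sizes: List[int], unit_time: float,
                           setup: float, due_date: float) -> float:
    """Toy 'total actual flow time' under a common due date.

    Batches are scheduled backward from the due date (batch_sizes[0] is the
    batch that completes exactly at the due date); each part's actual flow
    time runs from its batch's start, assumed to be its arrival, to the
    due date. Simplification: fixed per-unit time, no learning/deterioration.
    """
    flow = 0.0
    finish = due_date
    for size in batch_sizes:
        start = finish - size * unit_time        # batch processed as one block
        flow += size * (due_date - start)        # every part waits until due date
        finish = start - setup                   # preceding batch ends before setup
    return flow

# Example: two batches of sizes 2 and 3, unit time 1, setup 1, due date 10.
print(total_actual_flow_time([2, 3], 1.0, 1.0, 10.0))  # 22.0
```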

  3. Multi-CPU plasma fluid turbulence calculations on a CRAY Y-MP C90

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Leboeuf, J.N.; Curtis, B.C.; Troutman, R.L.

    1993-01-01

    Significant improvements in real-time efficiency have been obtained for plasma fluid turbulence calculations by microtasking the nonlinear fluid code KITE in which they are implemented on the CRAY Y-MP C90 at the National Energy Research Supercomputer Center (NERSC). The number of processors accessed concurrently scales linearly with problem size. Close to six concurrent processors have so far been obtained with a three-dimensional nonlinear production calculation at the currently allowed memory size of 80 Mword. With a calculation size corresponding to the maximum allowed memory of 200 Mword in the next system configuration, they expect to be able to access close to ten processors of the C90 concurrently with a commensurate improvement in real-time efficiency. These improvements in performance are comparable to those expected from a massively parallel implementation of the same calculations on the Intel Paragon

  4. Multi-CPU plasma fluid turbulence calculations on a CRAY Y-MP C90

    International Nuclear Information System (INIS)

    Lynch, V.E.; Carreras, B.A.; Leboeuf, J.N.; Curtis, B.C.; Troutman, R.L.

    1993-01-01

    Significant improvements in real-time efficiency have been obtained for plasma fluid turbulence calculations by microtasking the nonlinear fluid code KITE in which they are implemented on the CRAY Y-MP C90 at the National Energy Research Supercomputer Center (NERSC). The number of processors accessed concurrently scales linearly with problem size. Close to six concurrent processors have so far been obtained with a three-dimensional nonlinear production calculation at the currently allowed memory size of 80 Mword. With a calculation size corresponding to the maximum allowed memory of 200 Mword in the next system configuration, we expect to be able to access close to nine processors of the C90 concurrently with a commensurate improvement in real-time efficiency. These improvements in performance are comparable to those expected from a massively parallel implementation of the same calculations on the Intel Paragon

  5. A Real Time PCR Platform for the Simultaneous Quantification of Total and Extrachromosomal HIV DNA Forms in Blood of HIV-1 Infected Patients

    Science.gov (United States)

    Canovari, Benedetta; Scotti, Maddalena; Acetoso, Marcello; Valentini, Massimo; Petrelli, Enzo; Magnani, Mauro

    2014-01-01

Background The quantitative measurement of various HIV-1 DNA forms including total, unintegrated and integrated provirus plays an increasingly important role in HIV-1 infection monitoring and treatment-related research. We report the development and validation of a SYBR Green real time PCR (TotUFsys platform) for the simultaneous quantification of total and extrachromosomal HIV-1 DNA forms in patients. This innovative technique makes it possible to obtain both measurements in a single PCR run starting from frozen blood, employing the same primers and standard curve. Moreover, due to identical amplification efficiency, it allows indirect estimation of the integrated level. To specifically detect 2-LTR, a qPCR method was also developed. Methodology/Findings Primers used for total HIV-1 DNA quantification spanning a highly conserved region were selected and found to detect all HIV-1 clades of group M and the unintegrated forms of the same. A total of 195 samples from HIV-1 patients in a wide range of clinical conditions were analyzed with a 100% success rate, even in patients with suppressed plasma viremia, regardless of CD4+ or therapy. No significant correlation was observed between the two current prognostic markers, CD4+ and plasma viremia, while a moderate or high inverse correlation was found between CD4+ and total HIV DNA, with strong values for unintegrated HIV DNA. Conclusions/Significance Taken together, the results support the use of HIV DNA as another tool, in addition to traditional assays, which can be used to estimate the state of viral infection, the risk of disease progression and to monitor the effects of ART. The TotUFsys platform allowed us to obtain a final result, expressed as the total and unintegrated HIV DNA copy number per microgram of DNA or 10^4 CD4+, for 12 patients within two working days. PMID:25364909
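
The indirect estimation mentioned in the record is simple arithmetic once both measurements are on the same copy-number scale. The sketch below converts a qPCR threshold cycle to copies via a generic log-linear standard curve and subtracts the extrachromosomal from the total measurement; the slope and intercept values are illustrative placeholders, not parameters of the TotUFsys assay.

```python
def copies_from_ct(ct: float, slope: float = -3.32, intercept: float = 38.0) -> float:
    """Convert a threshold cycle to copy number via a standard curve
    Ct = slope * log10(copies) + intercept.
    The slope/intercept defaults are hypothetical illustration values."""
    return 10 ** ((ct - intercept) / slope)

def integrated_estimate(total_copies: float, extrachromosomal_copies: float) -> float:
    """Because total and unintegrated forms amplify with identical efficiency,
    integrated HIV DNA can be estimated indirectly as their difference
    (clamped at zero to absorb measurement noise)."""
    return max(total_copies - extrachromosomal_copies, 0.0)

print(integrated_estimate(1000.0, 400.0))  # 600.0 copies attributed to integrated provirus
```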

  6. Dynamic modelling of a 3-CPU parallel robot via screw theory

    Directory of Open Access Journals (Sweden)

    L. Carbonari

    2013-04-01

Full Text Available The article describes the dynamic modelling of I.Ca.Ro., a novel Cartesian parallel robot recently designed and prototyped by the robotics research group of the Polytechnic University of Marche. By means of screw theory and the virtual work principle, a computationally efficient model has been built, with the final aim of realising advanced model-based controllers. A dynamic analysis has then been performed in order to point out possible model simplifications that could lead to a more efficient run-time implementation.

  7. The Attributable Proportion of Specific Leisure-Time Physical Activities to Total Leisure Activity Volume Among US Adults, National Health and Nutrition Examination Survey 1999-2006.

    Science.gov (United States)

    Watson, Kathleen Bachtel; Dai, Shifan; Paul, Prabasaj; Carlson, Susan A; Carroll, Dianna D; Fulton, Janet

    2016-11-01

    Previous studies have examined participation in specific leisure-time physical activities (PA) among US adults. The purpose of this study was to identify specific activities that contribute substantially to total volume of leisure-time PA in US adults. Proportion of total volume of leisure-time PA moderate-equivalent minutes attributable to 9 specific types of activities was estimated using self-reported data from 21,685 adult participants (≥ 18 years) in the National Health and Nutrition Examination Survey 1999-2006. Overall, walking (28%), sports (22%), and dancing (9%) contributed most to PA volume. Attributable proportion was higher among men than women for sports (30% vs. 11%) and higher among women than men for walking (36% vs. 23%), dancing (16% vs. 4%), and conditioning exercises (10% vs. 5%). The proportion was lower for walking, but higher for sports, among active adults than those insufficiently active and increased with age for walking. Compared with other racial/ethnic groups, the proportion was lower for sports among non-Hispanic white men and for dancing among non-Hispanic white women. Walking, sports, and dance account for the most activity time among US adults overall, yet some demographic variations exist. Strategies for PA promotion should be tailored to differences across population subgroups.

  8. Multipurpose assessment for the quantification of Vibrio spp. and total bacteria in fish and seawater using multiplex real-time polymerase chain reaction

    Science.gov (United States)

    Kim, Ji Yeun; Lee, Jung-Lim

    2014-01-01

Background This study describes the first multiplex real-time polymerase chain reaction assay developed, as a multipurpose assessment, for the simultaneous quantification of total bacteria and three Vibrio spp. (V. parahaemolyticus, V. vulnificus and V. anguillarum) in fish and seawater. The consumption of raw finfish as sushi or sashimi has been increasing the chance of Vibrio outbreaks in consumers. Freshness and quality of fishery products also depend on the total bacterial populations present. Results The detection sensitivity of the specific targets for the multiplex assay was 1 CFU mL−1 in pure culture and seawater, and 10 CFU g−1 in fish. While total bacterial counts by the multiplex assay were similar to those obtained by cultural methods, the levels of Vibrio detected by the multiplex assay were generally higher than those obtained by cultural methods for the same populations. Among the natural samples without Vibrio spp. inoculation, eight out of 10 seawater and three out of 20 fish samples were determined to contain Vibrio spp. Conclusion Our data demonstrate that this multiplex assay could be useful for the rapid detection and quantification of Vibrio spp. and total bacteria, as a multipurpose tool for surveillance of fish and water quality as well as a diagnostic method. © 2014 The Authors. Journal of the Science of Food and Agriculture published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:24752974

  9. Time- and radiation-dose dependent changes in the plasma proteome after total body irradiation of non-human primates: Implications for biomarker selection.

    Directory of Open Access Journals (Sweden)

    Stephanie D Byrum

Full Text Available Acute radiation syndrome (ARS) is a complex multi-organ disease resulting from total body exposure to high doses of radiation. Individuals can be exposed to total body irradiation (TBI) in a number of ways, including terrorist radiological weapons or nuclear accidents. In order to determine whether an individual has been exposed to high doses of radiation and needs countermeasure treatment, robust biomarkers are needed to estimate radiation exposure from biospecimens such as blood or urine. In order to identify such candidate biomarkers of radiation exposure, high-resolution proteomics was used to analyze plasma from non-human primates following whole body irradiation (Co-60) at 6.7 Gy and 7.4 Gy with a twelve-day observation period. A total of 663 proteins were evaluated from the plasma proteome analysis. A panel of plasma proteins with characteristic time- and dose-dependent changes was identified. In addition to the plasma proteomics study reported here, we recently identified candidate biomarkers using urine from these same non-human primates. From the proteomic analysis of both plasma and urine, we identified ten overlapping proteins that significantly differentiate both time and dose variables. These shared plasma and urine proteins represent optimal candidate biomarkers of radiation exposure.

  10. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  11. Total and isoform-specific quantitative assessment of circulating Fibulin-1 using selected reaction monitoring mass spectrometry and time-resolved immunofluorometry

    DEFF Research Database (Denmark)

    Overgaard, Martin; Cangemi, Claudia; Jensen, Martin L

    2015-01-01

PURPOSE: Targeted proteomics using SRM-MS combined with stable isotope dilution has emerged as a promising quantitative technique for the study of circulating protein biomarkers. The purpose of this study was to develop and characterize robust quantitative assays for the emerging cardiovascular biomarker fibulin-1 and its circulating isoforms in human plasma. EXPERIMENTAL DESIGN: We used bioinformatics analysis to predict total and isoform-specific tryptic peptides for absolute quantitation using SRM-MS. Fibulin-1 was quantitated in plasma by nanoflow-LC-SRM-MS in undepleted plasma and time-resolved immunofluorometric assay (TRIFMA). Both methods were validated and compared to a commercial ELISA (CircuLex). Molecular size determination was performed under native conditions by SEC analysis coupled to SRM-MS and TRIFMA. RESULTS: Absolute quantitation of total fibulin-1, isoforms -1C and -1D was performed by SRM

  12. Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.

    Science.gov (United States)

    Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin

    2014-10-01

High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and Intel Xeon Phi (MIC). These processors have made a tremendous amount of computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also show experimentally that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
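
The performance-aware allocation described in this record can be caricatured with a greedy earliest-finish-time rule in the spirit of HEFT: each task goes to whichever device (CPU, GPU, or MIC) would finish it soonest given its per-device run time. This is a minimal sketch that ignores task dependencies and data-transfer costs; it is not the paper's scheduler.

```python
from typing import Dict, List, Tuple

def greedy_eft_schedule(task_costs: List[Dict[str, float]],
                        device_free: Dict[str, float]) -> Tuple[List[str], Dict[str, float]]:
    """Assign each task to the device with the earliest finish time.

    task_costs:  one dict per task mapping device name -> run time on it.
    device_free: device name -> time at which the device becomes available.
    Returns the per-task device assignment and the updated availability times.
    """
    assignment = []
    for costs in task_costs:
        # Earliest finish = time the device frees up + the task's cost on it.
        dev = min(costs, key=lambda d: device_free[d] + costs[d])
        device_free[dev] += costs[dev]
        assignment.append(dev)
    return assignment, device_free

# Two GPU-friendly tasks and one CPU-friendly task on an idle hybrid node.
tasks = [{"cpu": 4.0, "gpu": 1.0}, {"cpu": 4.0, "gpu": 1.0}, {"cpu": 2.0, "gpu": 3.0}]
print(greedy_eft_schedule(tasks, {"cpu": 0.0, "gpu": 0.0}))
# (['gpu', 'gpu', 'cpu'], {'cpu': 2.0, 'gpu': 2.0})
```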

  13. Preoperative management of surgical patients by "shortened fasting time": a study on the amount of total body water by multi-frequency impedance method.

    Science.gov (United States)

    Taniguchi, Hideki; Sasaki, Toshio; Fujita, Hisae

    2012-01-01

Preoperative fasting is an established procedure to be practiced by patients before surgery, but the optimal preoperative fasting time still remains controversial. The aim of this study was to investigate the effect of a "shortened preoperative fasting time" on the change in the amount of total body water (TBW) in elective surgical patients. TBW was measured by the multi-frequency impedance method. The patients, who were scheduled to undergo surgery for stomach cancer, were divided into two groups of 15 patients each. Before surgery, patients in the control group were managed with conventional preoperative fasting time, while patients in the "enhanced recovery after surgery (ERAS)" group were managed with "shortened preoperative fasting time" and "reduced laxative medication." TBW was measured on the day before surgery and on the day of surgery before entering the operating room. Defecation times and anesthesia-related vomiting and aspiration were monitored. TBW values on the day of surgery changed in both groups compared with those on the day before surgery, but the rate of change was smaller in the ERAS group than in the control group (2.4±6.8% [12 patients] vs. -10.6±4.6% [14 patients]). These results suggest that preoperative management combining a "shortened preoperative fasting time" with "reduced administration of laxatives" is effective in the maintenance of TBW in elective surgical patients.

  14. An Integrated Pipeline of Open Source Software Adapted for Multi-CPU Architectures: Use in the Large-Scale Identification of Single Nucleotide Polymorphisms

    Directory of Open Access Journals (Sweden)

    B. Jayashree

    2007-01-01

Full Text Available The large amounts of EST sequence data available from a single species of an organism, as well as for several species within a genus, provide an easy source of identification of intra- and interspecies single nucleotide polymorphisms (SNPs). In the case of model organisms, the data available are numerous, given the degree of redundancy in the deposited EST data. There are several available bioinformatics tools that can be used to mine this data; however, using them requires a certain level of expertise: the tools have to be used sequentially with accompanying format conversion, and steps like clustering and assembly of sequences become time-intensive jobs even for moderately sized datasets. We report here a pipeline of open source software extended to run on multiple CPU architectures that can be used to mine large EST datasets for SNPs and identify restriction sites for assaying the SNPs, so that cost-effective CAPS assays can be developed for SNP genotyping in genetics and breeding applications. At the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), the pipeline has been implemented to run on a Paracel high-performance system consisting of four dual AMD Opteron processors running Linux with MPICH. The pipeline can be accessed through user-friendly web interfaces at http://hpc.icrisat.cgiar.org/PBSWeb and is available on request for academic use. We have validated the developed pipeline by mining chickpea ESTs for interspecies SNPs, developing CAPS assays for SNP genotyping, and confirming the restriction digestion pattern at the sequence level.
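
The CAPS idea underlying the record's restriction-site step reduces to a simple predicate: a SNP can be genotyped as a CAPS marker when a restriction enzyme's recognition site is present in one allele's sequence but absent from the other, so digestion yields distinguishable fragment patterns. A toy check, with EcoRI's GAATTC site chosen purely for illustration:

```python
def caps_candidate(allele_a: str, allele_b: str, site: str = "GAATTC") -> bool:
    """True when the recognition site occurs in exactly one of the two
    allele sequences, i.e. the SNP creates or destroys the cut site and
    can be assayed as a CAPS marker. `site` defaults to EcoRI's GAATTC
    as an illustrative choice."""
    return (site in allele_a) != (site in allele_b)

# SNP A>C inside the site: allele A is cut by EcoRI, allele C is not.
print(caps_candidate("TTGAATTCAA", "TTGACTTCAA"))  # True
```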

  15. High-frequency Total Focusing Method (TFM) imaging in strongly attenuating materials with the decomposition of the time reversal operator associated with orthogonal coded excitations

    Science.gov (United States)

    Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire

    2017-02-01

In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with high electronic noise. In order to improve the image quality, Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with conventional TFM.
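
The TFM itself is a delay-and-sum algorithm over full-matrix-capture (FMC) data: for every image pixel, the travel times from each transmitting element and back to each receiving element are converted to sample indices, and the corresponding amplitudes are summed. The sketch below shows that core loop only; it omits sub-sample interpolation, apodization, and the DORT filtering and coded excitations that this work adds.

```python
from math import hypot

def tfm_image(fmc, c, fs, elem_x, grid_x, grid_z):
    """Delay-and-sum TFM on FMC data.

    fmc[tx][rx][t]: sampled signal for each transmit/receive element pair.
    c: wave speed, fs: sampling rate, elem_x: element x-positions at z = 0.
    Returns |sum of delayed amplitudes| at each (z, x) grid point.
    Minimal sketch: nearest-sample lookup, no interpolation or apodization.
    """
    n = len(elem_x)
    nt = len(fmc[0][0])
    img = [[0.0] * len(grid_x) for _ in grid_z]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = [hypot(x - ex, z) for ex in elem_x]   # element-to-pixel distances
            acc = 0.0
            for tx in range(n):
                for rx in range(n):
                    t = int(round((d[tx] + d[rx]) / c * fs))
                    if t < nt:
                        acc += fmc[tx][rx][t]
            img[iz][ix] = abs(acc)
    return img

# One element at x=0, impulse at sample 2: a scatterer at depth 1 (round trip 2)
# focuses at grid point (x=0, z=1) but not at z=2.
fmc = [[[0.0, 0.0, 1.0, 0.0, 0.0]]]
print(tfm_image(fmc, c=1.0, fs=1.0, elem_x=[0.0], grid_x=[0.0], grid_z=[1.0, 2.0]))
```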

  16. A real-time monitoring and assessment method for calculation of total amounts of indoor air pollutants emitted in subway stations.

    Science.gov (United States)

    Oh, TaeSeok; Kim, MinJeong; Lim, JungJin; Kang, OnYu; Shetty, K Vidya; SankaraRao, B; Yoo, ChangKyoo; Park, Jae Hyung; Kim, Jeong Tai

    2012-05-01

Subway systems are a main public transportation facility in developed countries. The time people spend indoors, such as in underground spaces, subway stations, and buildings, has gradually increased in the recent past. Operators and elderly people who stay in indoor environments for more than 15 hr per day are especially affected by indoor air pollutants. Hence, regulations on indoor air pollutants are needed to ensure good health. Therefore, in this study, a new cumulative calculation method for estimating the total amounts of indoor air pollutants emitted inside a subway station is proposed, based on integrating the measured pollutant concentrations over time. The minimum concentration of each air pollutant, which exists naturally in the indoor space, is taken as the base concentration and can be found from the collected data. After subtracting the base concentration from each data point of an indoor air pollutant data set, the primary quantity of the emitted pollutant is obtained. After integrating these values, adding the base concentration to the integrated quantity gives the total amount of the indoor air pollutant emitted. Moreover, values of a new index for cumulative indoor air quality over one day are calculated from the cumulative air quality index (CAI). A cumulative comprehensive indoor air quality index (CCIAI) is also proposed to compare cumulative concentrations of indoor air pollutants. The results show that the cumulative assessment approach to indoor air quality (IAQ) is useful for monitoring the total amounts of indoor air pollutants emitted in cases of long-term exposure. The values of the CCIAI are influenced most by the NO2 concentration, which rises with the use of air conditioners and the combustion of fuel. The results obtained in
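
The cumulative recipe described in this record (take the minimum observed concentration as the base level, subtract it from every sample, integrate the excess over time, then add the base back) can be sketched directly. Units and sampling details are left abstract; this is an illustration of the calculation steps, not the paper's implementation.

```python
from typing import Sequence

def cumulative_emitted(concentrations: Sequence[float], dt: float) -> float:
    """Cumulative indoor-pollutant estimate following the described recipe:
    base = minimum observed concentration (assumed natural background),
    integrate the excess above base over time (trapezoidal rule),
    then add the base back to the integrated quantity."""
    base = min(concentrations)
    excess = [c - base for c in concentrations]
    integral = sum((excess[i] + excess[i + 1]) / 2.0 * dt
                   for i in range(len(excess) - 1))
    return integral + base

# Three samples at 1-time-unit spacing: base is 2, excess integrates to 2.
print(cumulative_emitted([2.0, 4.0, 2.0], dt=1.0))  # 4.0
```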

  17. Accelerating the SCE-UA Global Optimization Method Based on Multi-Core CPU and Many-Core GPU

    Directory of Open Access Journals (Sweden)

    Guangyuan Kan

    2016-01-01

Full Text Available The famous global optimization SCE-UA method, which has been widely used in the field of environmental model parameter calibration, is an effective and robust method. However, the SCE-UA method has a high computational load, which prohibits its application to high-dimensional and complex problems. In recent years, computer hardware, such as multi-core CPUs and many-core GPUs, has improved significantly. This much more powerful new hardware and its software ecosystems provide an opportunity to accelerate the SCE-UA method. In this paper, we propose two parallel SCE-UA methods and implement them on an Intel multi-core CPU and an NVIDIA many-core GPU by OpenMP and CUDA Fortran, respectively. The Griewank benchmark function was adopted to test and compare the performance of the serial and parallel SCE-UA methods. Based on the results of the comparison, some advice is given on how to properly use the parallel SCE-UA methods.
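
The step that parallelizes naturally in SCE-UA is the independent evaluation of candidate points, shown here with the Griewank benchmark the record mentions. This Python sketch only illustrates that idea; the paper's implementations use OpenMP and CUDA Fortran, not Python.

```python
from math import cos, sqrt
from multiprocessing import Pool

def griewank(x):
    """Griewank benchmark: 1 + sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))).
    Global minimum f = 0 at x = 0."""
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for i, v in enumerate(x, start=1):
        p *= cos(v / sqrt(i))
    return 1.0 + s - p

def evaluate_population(pop, workers=4):
    """Evaluate a complex of candidate points in parallel: the
    embarrassingly parallel step that the OpenMP/CUDA versions accelerate."""
    with Pool(workers) as pool:
        return pool.map(griewank, pop)

if __name__ == "__main__":
    print(griewank([0.0, 0.0, 0.0]))  # 0.0 at the global optimum
```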

  18. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios to identify the less dangerous one for the operator. Therefore, it is mandatory to dispose of an efficient dosimetry simulation code with accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. In the case of 3D industrial scenes composed of meshes, the computation cost resides in the fast computation of all of the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed, that enable dosimetry calculation for a huge scene (800000 rays, 800000 triangles) in a fraction of second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In worst case scenario, this can lead to a computed dose rate dramatically inferior to the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with very small loss of performance (less than 5 % overhead), and without any need for scene-dependent tuning. (author)
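
The hybrid adaptive-precision idea can be sketched with a Möller-Trumbore ray-triangle test that falls back to exact rational arithmetic whenever the floating-point determinant is too small to trust, instead of returning a possibly wrong answer. This illustrates the principle only; it is not the paper's GPU-CPU implementation, which manages adaptive precision on the GPU.

```python
from fractions import Fraction

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _mt(orig, direc, v0, v1, v2):
    """Core Moller-Trumbore test; works for floats and Fractions alike.
    Returns (hit, determinant)."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    pvec = _cross(direc, e2)
    det = _dot(e1, pvec)
    if det == 0:                       # ray parallel to triangle plane
        return False, det
    tvec = _sub(orig, v0)
    u = _dot(tvec, pvec) / det
    if u < 0 or u > 1:
        return False, det
    qvec = _cross(tvec, e1)
    v = _dot(direc, qvec) / det
    if v < 0 or u + v > 1:
        return False, det
    t = _dot(e2, qvec) / det
    return t >= 0, det

def ray_hits_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Float test first; if |det| < eps the float result is untrustworthy,
    so redo the whole test in exact arithmetic with fractions.Fraction."""
    hit, det = _mt(orig, direc, v0, v1, v2)
    if abs(det) >= eps:
        return hit
    as_frac = lambda v: tuple(Fraction(x) for x in v)
    hit, _ = _mt(as_frac(orig), as_frac(direc),
                 as_frac(v0), as_frac(v1), as_frac(v2))
    return hit

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(ray_hits_triangle((0.25, 0.25, -1.0), (0.0, 0.0, 1.0), *tri))  # True
```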

  19. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    International Nuclear Information System (INIS)

    Leggett, C; Jackson, K; Tatarkhanov, M; Yao, Y; Binet, S; Levinthal, D

    2011-01-01

Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with a zero-overhead context switch, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.

  20. The effect of education and supervised exercise vs. education alone on the time to total hip replacement in patients with severe hip osteoarthritis. A randomized clinical trial protocol.

    Science.gov (United States)

    Jensen, Carsten; Roos, Ewa M; Kjærsgaard-Andersen, Per; Overgaard, Søren

    2013-01-14

    The age- and gender-specific incidence of total hip replacement surgery has increased over the last two decades in all age groups. Recent studies indicate that non-surgical interventions are effective in reducing pain and disability, even at later stages of the disease when joint replacement is considered. We hypothesize that the time to hip replacement can be postponed in patients with severe hip osteoarthritis following participation in a patient education and supervised exercise program when compared to patients receiving patient education alone. A prospective, blinded, parallel-group multi-center trial (2 sites), with balanced randomization [1:1]. Patients with hip osteoarthritis and an indication for hip replacement surgery, aged 40 years and above, will be consecutively recruited and randomized into two treatment groups. The active treatment group will receive 3 months of supervised exercise consisting of 12 sessions of individualized, goal-based neuromuscular training, and 12 sessions of intensive resistance training plus patient education (3 sessions). The control group will receive only patient education (3 sessions). The primary end-point for assessing the effectiveness of the intervention is 12 months after baseline. However, follow-ups will also be performed once a year for at least 5 years. The primary outcome measure is the time to hip replacement surgery measured on a Kaplan-Meier survival curve from time of inclusion. Secondary outcome measures are the five subscales of the Hip disability and Osteoarthritis Outcome Score, physical activity level (UCLA activity score), and patient's global perceived effect. Other measures include pain after exercise, joint-specific adverse events, exercise adherence, general health status (EQ-5D-5L), mechanical muscle strength and performance in physical tests. A cost-effectiveness analysis will also be performed. To our knowledge, this is the first randomized clinical trial comparing a patient education plus

  1. Influence of overall treatment time in a fractionated total lymphoid irradiation as an immunosuppressive therapy in allogeneic bone marrow transplantation in mice

    International Nuclear Information System (INIS)

    Waer, M.; Ang, K.K.; Vandeputte, M.; Van der Schueren, E.

    1982-01-01

    Three groups of C57BL/Ka mice received total lymphoid irradiation (TLI) in a total dose of 34 Gy in three different fractionation schedules. The tolerance of all different schedules was excellent. No difference in the peripheral white blood cell and lymphocyte counts, nor in the degree of immunosuppression as measured by phytohaemagglutinin- or concanavalin A-induced blastogenesis and mixed lymphocyte reaction, was observed at the end of the treatment and up to 200 days. When bone marrow transplantation was performed one day after the end of each schedule, chimerism without signs of graft versus host disease was induced in all the groups. However, from the results in a limited number of animals it seems that concentrated schedules were less effective for chimerism induction. It has been demonstrated that it is possible to reduce drastically the overall treatment time for TLI before bone marrow transplantation. Further investigations are necessary in order to determine the optimal time-dose-fractionation factors and the different parameters involved in the transplantation.

  2. Effects on mortality, treatment, and time management as a result of routine use of total body computed tomography in blunt high-energy trauma patients.

    Science.gov (United States)

    van Vugt, Raoul; Kool, Digna R; Deunk, Jaap; Edwards, Michael J R

    2012-03-01

    Currently, total body computed tomography (TBCT) is rapidly being implemented in the evaluation of trauma patients. With this review, we aim to evaluate the clinical implications (mortality, change in treatment, and time management) of the routine use of TBCT in adult blunt high-energy trauma patients compared with a conservative approach with the use of conventional radiography, ultrasound, and selective computed tomography. A literature search for original studies on TBCT in blunt high-energy trauma patients was performed. Two independent observers included studies concerning mortality, change of treatment, and/or time management as outcome measures. For each article, relevant data were extracted and analyzed. In addition, the quality according to the Oxford levels of evidence was assessed. From 183 articles initially identified, the observers included nine original studies in consensus. One of three studies described a significant difference in mortality; four described a change of treatment in 2% to 27% of patients because of the use of TBCT. Five studies found a gain in time with the use of immediate routine TBCT. Eight studies scored a level of evidence of 2b and one of 3b. Current literature has a predominantly suboptimal design to prove definitively that the routine use of TBCT results in improved survival of blunt high-energy trauma patients. TBCT can lead to a change of treatment and improves time intervals in the emergency department as compared with its selective use.

  3. Appropriate control time constant in relation to characteristics of the baroreflex vascular system in 1/R control of the total artificial heart.

    Science.gov (United States)

    Mizuta, Sora; Saito, Itsuro; Isoyama, Takashi; Hara, Shintaro; Yurimoto, Terumi; Li, Xinyang; Murakami, Haruka; Ono, Toshiya; Mabuchi, Kunihiko; Abe, Yusuke

    2017-09-01

    1/R control is a physiological control method for the total artificial heart (TAH) with which long-term survival was obtained in animal experiments. However, 1/R control occasionally diverged in the undulation pump TAH (UPTAH) animal experiments. To improve the stability of 1/R control, the appropriate control time constant in relation to the characteristics of the baroreflex vascular system was investigated with frequency analysis and numerical simulation. In the frequency analysis, data from five goats in which the UPTAH was implanted were analyzed with the fast Fourier transform technique to examine the vasomotion frequency. The numerical simulation was carried out repeatedly, changing the baroreflex parameters and the control time constant, using the elements-expanded Windkessel model. Results of the frequency analysis showed that 1/R control tended to diverge when the very low frequency band, an indication of the vasomotion frequency, was relatively high. In the numerical simulation, divergence of 1/R control could be reproduced, and the boundary curves between divergence and convergence of 1/R control varied depending on the control time constant. These results suggested that 1/R control tended to be unstable when the TAH recipient had a high reflex speed in the baroreflex vascular system. Therefore, the control time constant should be adjusted appropriately to the individual vasomotion frequency.

  4. Totally James

    Science.gov (United States)

    Owens, Tom

    2006-01-01

    This article presents an interview with James Howe, author of "The Misfits" and "Totally Joe". In this interview, Howe discusses tolerance, diversity and the parallels between his own life and his literature. Howe's four books in addition to "The Misfits" and "Totally Joe" and his list of recommended books with lesbian, gay, bisexual, transgender,…

  5. Comparison of turnaround time and total cost of HIV testing before and after implementation of the 2014 CDC/APHL Laboratory Testing Algorithm for diagnosis of HIV infection.

    Science.gov (United States)

    Chen, Derrick J; Yao, Joseph D

    2017-06-01

    Updated recommendations for HIV diagnostic laboratory testing published by the Centers for Disease Control and Prevention and the Association of Public Health Laboratories incorporate 4th generation HIV immunoassays, which are capable of identifying HIV infection prior to seroconversion. The purpose of this study was to compare turnaround time and cost between 3rd and 4th generation HIV immunoassay-based testing algorithms for initially reactive results. The clinical microbiology laboratory database at Mayo Clinic, Rochester, MN was queried for 3rd generation (from November 2012 to May 2014) and 4th generation (from May 2014 to November 2015) HIV immunoassay results. All results from downstream supplemental testing were recorded. Turnaround time (defined as the time of initial sample receipt in the laboratory to the time the final supplemental test in the algorithm was resulted) and cost (based on 2016 Medicare reimbursement rates) were assessed. A total of 76,454 and 78,998 initial tests were performed during the study period using the 3rd generation and 4th generation HIV immunoassays, respectively. There were 516 (0.7%) and 581 (0.7%) total initially reactive results, respectively. Of these, 304 (58.9%) and 457 (78.7%) were positive by supplemental testing. There were 10 (0.01%) cases of acute HIV infection identified with the 4th generation algorithm. The most frequent tests performed to confirm an HIV-positive case using the 3rd generation algorithm, which were reactive initial immunoassay and positive HIV-1 Western blot, took a median time of 1.1 days to complete at a cost of $45.00. In contrast, the most frequent tests performed to confirm an HIV-positive case using the 4th generation algorithm, which included a reactive initial immunoassay and positive HIV-1/-2 antibody differentiation immunoassay for HIV-1, took a median time of 0.4 days and cost $63.25. 
    Overall median turnaround time was 2.2 and 1.5 days, and overall median cost was $63.90 and $72.50 for

  6. Trace analysis of total naphthenic acids in aqueous environmental matrices by liquid chromatography/mass spectrometry-quadrupole time of flight mass spectrometry direct injection.

    Science.gov (United States)

    Brunswick, Pamela; Shang, Dayue; van Aggelen, Graham; Hindle, Ralph; Hewitt, L Mark; Frank, Richard A; Haberl, Maxine; Kim, Marcus

    2015-07-31

    A rapid and sensitive liquid chromatography quadrupole time of flight method has been established for the determination of total naphthenic acid concentrations in aqueous samples. This is the first methodology that has been adopted for routine, high resolution, high throughput analysis of total naphthenic acids at trace levels in unprocessed samples. A calibration range from 0.02 to 1.0 μg mL(-1) total Merichem naphthenic acids was validated and demonstrated excellent accuracy (97-111% recovery) and precision (1.9% RSD at 0.02 μg mL(-1)). Quantitative validation was also demonstrated in a non-commercial oil sands process water (OSPW) acid extractable organics (AEOs) fraction containing a higher percentage of polycarboxylic acid isomers than the Merichem technical mix. The chromatographic method showed good calibration linearity of ≥0.999 RSQ down to 0.005 μg mL(-1) total naphthenic acids, with a precision of <3.1% RSD and a calculated detection limit of 0.0004 μg mL(-1), employing Merichem technical mix reference material. The method is well suited to monitoring naturally occurring and industrially derived naphthenic acids (and other AEOs) present in surface and ground waters in the vicinity of mining developments. The advantage of the current method is its direct application to unprocessed environmental samples and to examine natural naphthenic acid isomer profiles. It is noted that where the isomer profile of samples differs from that of the reference material, results should be considered semi-quantitative due to the lack of matching isomer content. The fingerprint profile of naphthenic acids is known to be transitory during aging and the present method has the ability to adapt to monitoring of these changes in naphthenic acid content. The method's total ion scan approach allows for data previously collected to be examined retrospectively for specific analyte mass ions of interest. A list of potential naphthenic acid isomers that decrease in response with aging is proposed.

  7. Time series models for prediction the total and dissolved heavy metals concentration in road runoff and soil solution of roadside embankments

    Science.gov (United States)

    Aljoumani, Basem; Kluge, Björn; sanchez, Josep; Wessolek, Gerd

    2017-04-01

    Highways and main roads are potential sources of contamination for the surrounding environment. High traffic rates result in elevated heavy metal concentrations in road runoff, soil, and water seepage, which has attracted much attention in the recent past. Prediction of heavy metal transfer near the roadside into deeper soil layers is very important to prevent groundwater pollution. This study was carried out on data from a number of lysimeters installed along the A115 highway (Germany), with a mean traffic of 90,000 vehicles per day. Three polyethylene (PE) lysimeters were installed at the A115 highway, with the following dimensions: length 150 cm, width 100 cm, height 60 cm. The lysimeters were filled with different soil materials which were recently used for embankment construction in Germany. With the obtained data, we will develop a time series analysis model to predict total and dissolved metal concentrations in road runoff and in the soil solution of the roadside embankments. The time series consisted of monthly measurements of heavy metals and was transformed to a stationary situation. Subsequently, the transformed data will be used to conduct analyses in the time domain in order to obtain the parameters of a seasonal autoregressive integrated moving average (ARIMA) model. A four-phase approach for identifying and fitting ARIMA models will be used: identification, parameter estimation, diagnostic checking, and forecasting. An automatic selection criterion, such as the Akaike information criterion, will be used to enhance this flexible approach to model building.
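
The four-phase workflow can be sketched on a toy series. This is a hand-rolled illustration (a plain AR(1) fitted to first differences, with invented data), not the seasonal ARIMA with AIC-based selection that the study proposes:

```python
# The four ARIMA phases on a toy monthly series: identify (difference to
# stationarity), estimate (least-squares AR(1) on the differences), check
# (residuals centred on zero), forecast (one step ahead, undoing the
# differencing). The data and model order are invented for illustration.

def fit_ar1(x):
    """Least-squares fit of x[t] = c + phi * x[t-1]."""
    prev, cur = x[:-1], x[1:]
    n = len(prev)
    mp, mc = sum(prev) / n, sum(cur) / n
    cov = sum((p - mp) * (q - mc) for p, q in zip(prev, cur))
    var = sum((p - mp) ** 2 for p in prev)
    phi = cov / var
    return phi, mc - phi * mp

series = [10.0, 11.0, 13.0, 14.0, 16.0, 17.0, 19.0, 20.0]  # invented data

# 1. Identification: the level trends upward, so take first differences.
diff = [b - a for a, b in zip(series, series[1:])]
# 2. Parameter estimation: AR(1) on the differenced series.
phi, c = fit_ar1(diff)
# 3. Diagnostic checking: one-step residuals should be centred on zero.
resid = [q - (c + phi * p) for p, q in zip(diff, diff[1:])]
bias = sum(resid) / len(resid)
# 4. Forecasting: predict the next difference, then undo the differencing.
forecast = series[-1] + (c + phi * diff[-1])
```

A real analysis of monthly heavy-metal data would extend this to a seasonal ARIMA (extra lag-12 terms) and compare candidate orders by the Akaike information criterion, as the abstract describes.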

  8. Iterated greedy algorithms to minimize the total family flow time for job-shop scheduling with job families and sequence-dependent set-ups

    Science.gov (United States)

    Kim, Ji-Su; Park, Jung-Hyeon; Lee, Dong-Ho

    2017-10-01

    This study addresses a variant of job-shop scheduling in which jobs are grouped into job families, but they are processed individually. The problem can be found in various industrial systems, especially in reprocessing shops of remanufacturing systems. If the reprocessing shop is a job-shop type and has the component-matching requirements, it can be regarded as a job shop with job families since the components of a product constitute a job family. In particular, sequence-dependent set-ups in which set-up time depends on the job just completed and the next job to be processed are also considered. The objective is to minimize the total family flow time, i.e. the maximum among the completion times of the jobs within a job family. A mixed-integer programming model is developed and two iterated greedy algorithms with different local search methods are proposed. Computational experiments were conducted on modified benchmark instances and the results are reported.
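
The destruction-construction loop at the core of an iterated greedy algorithm can be sketched as follows. As a simplified stand-in for the paper's job-shop setting, the toy objective is total flow time on a single machine with sequence-dependent set-ups; the job names, processing times, and set-up matrix are all invented:

```python
# Iterated greedy (IG) sketch: destruction removes a few jobs at random,
# construction greedily re-inserts each at its best position, and the new
# sequence is accepted when it improves the objective. The single-machine
# instance below is invented for illustration.
import random

jobs = ["a", "b", "c", "d"]
proc = {"a": 4, "b": 1, "c": 3, "d": 2}          # processing times (invented)
# set-up time from job p to job j; None means the machine starts idle
setup = {p: {j: 0 if p is None else abs(ord(p) - ord(j)) % 3 for j in jobs}
         for p in [None] + jobs}

def total_flow_time(seq, proc, setup):
    """Sum of completion times under sequence-dependent set-ups."""
    t, total, prev = 0, 0, None
    for j in seq:
        t += setup[prev][j] + proc[j]
        total += t
        prev = j
    return total

def iterated_greedy(jobs, proc, setup, d=2, iters=200, seed=7):
    rng = random.Random(seed)
    best = sorted(jobs, key=lambda j: proc[j])        # SPT starting sequence
    best_cost = total_flow_time(best, proc, setup)
    for _ in range(iters):
        seq = best[:]
        # destruction: remove d jobs at random
        removed = [seq.pop(rng.randrange(len(seq))) for _ in range(d)]
        # construction: greedily re-insert each job at its best position
        for j in removed:
            cands = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
            seq = min(cands, key=lambda s: total_flow_time(s, proc, setup))
        cost = total_flow_time(seq, proc, setup)
        if cost < best_cost:                           # acceptance test
            best, best_cost = seq, cost
    return best, best_cost

best, best_cost = iterated_greedy(jobs, proc, setup)
```

The paper's variants differ in the local search applied after construction and in the family-based objective, but the destruct-reinsert-accept skeleton is the same.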

  9. An Enhanced Discrete Artificial Bee Colony Algorithm to Minimize the Total Flow Time in Permutation Flow Shop Scheduling with Limited Buffers

    Directory of Open Access Journals (Sweden)

    Guanlong Deng

    2016-01-01

    This paper presents an enhanced discrete artificial bee colony algorithm for minimizing the total flow time in the flow shop scheduling problem with buffer capacity. First, the solution in the algorithm is represented as a discrete job permutation to directly convert to an active schedule. Then, we present a simple and effective scheme called best insertion for the employed bee and onlooker bee and introduce a combined local search exploring both insertion and swap neighborhoods. To validate the performance of the presented algorithm, a computational campaign is carried out on the Taillard benchmark instances, and computations and comparisons show that the proposed algorithm is not only capable of solving the benchmark set better than the existing discrete differential evolution algorithm and iterated greedy algorithm, but also capable of performing better than two recently proposed discrete artificial bee colony algorithms.
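
The combined insertion-and-swap local search mentioned above can be sketched on a toy permutation flow shop. This version shows only the neighbourhood exploration: the buffer limits and the bee-colony framework are omitted, and the instance is invented:

```python
# Best-improvement local search over the union of the insertion and swap
# neighbourhoods of a job permutation, minimizing total flow time in a
# permutation flow shop (buffer limits are ignored in this toy version).

def total_flow_time(perm, ptimes):
    """ptimes[j][m] = processing time of job j on machine m."""
    finish = [0.0] * len(ptimes[0])
    total = 0.0
    for j in perm:
        for m in range(len(finish)):
            # job j starts on machine m when both the machine is free and
            # the job has finished on the previous machine
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + ptimes[j][m]
        total += finish[-1]          # completion time of job j
    return total

def insertion_swap_search(perm, ptimes):
    best, best_cost = perm[:], total_flow_time(perm, ptimes)
    improved = True
    while improved:
        improved = False
        n = len(best)
        neighbours = []
        for i in range(n):
            for k in range(n):
                if i == k:
                    continue
                rest = best[:i] + best[i + 1:]                 # insertion move
                neighbours.append(rest[:k] + [best[i]] + rest[k:])
                if k > i:                                      # swap move
                    sw = best[:]
                    sw[i], sw[k] = sw[k], sw[i]
                    neighbours.append(sw)
        cand = min(neighbours, key=lambda p: total_flow_time(p, ptimes))
        cand_cost = total_flow_time(cand, ptimes)
        if cand_cost < best_cost:
            best, best_cost = cand, cand_cost
            improved = True
    return best, best_cost

ptimes = [[3, 2], [1, 4], [2, 2], [4, 1]]   # 4 invented jobs, 2 machines
perm, cost = insertion_swap_search([0, 1, 2, 3], ptimes)
```

Exploring both neighbourhoods per step, as the paper's combined local search does, typically escapes local optima that either neighbourhood alone would stall in.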

  10. Potential and limitation of mid-infrared attenuated total reflectance spectroscopy for real time analysis of raw milk in milking lines.

    Science.gov (United States)

    Linker, Raphael; Etzion, Yael

    2009-02-01

    Real-time information about milk composition would be very useful for managing the milking process. Mid-infrared spectroscopy, which relies on fundamental modes of molecular vibrations, is routinely used for off-line analysis of milk and the purpose of the present study was to investigate the potential of attenuated total reflectance mid-infrared spectroscopy for real-time analysis of milk in milking lines. The study was conducted with 189 samples from over 70 cows that were collected during an 18 months period. Principal component analysis, wavelets and neural networks were used to develop various models for predicting protein and fat concentration. Although reasonable protein models were obtained for some seasonal sub-datasets (determination errors protein), the models lacked robustness and it was not possible to develop a model suitable for all the data. Determination of fat concentration proved even more problematic and the determination errors remained unacceptably large regardless of the sub-dataset analyzed or of the spectral intervals used. These poor results can be explained by the limited penetration depth of the mid-infrared radiation that causes the spectra to be very sensitive to the presence of fat globules or fat biofilms in the boundary layer that forms at the interface between the milk and the crystal that serves both as radiation waveguide and sensing element. Since manipulations such as homogenisation are not permissible for in-line analysis, these results show that the potential of mid-infrared attenuated total reflectance spectroscopy for in-line milk analysis is indeed quite limited.

  11. Total body propofol clearance (TBPC) after living-donor liver transplantation (LDLT) surgery is decreased in patients with a long warm ischemic time.

    Science.gov (United States)

    Al-Jahdari, Wael S; Kunimoto, Fumio; Saito, Shigeru; Yamamoto, Koujirou; Koyama, Hiroshi; Horiuchi, Ryuya; Goto, Fumio

    2006-01-01

    Metabolic capacity after liver transplant surgery may be affected by the graft size and by hepatic injury during the surgery. This study was carried out to investigate the postoperative total body propofol clearance (TBPC) in living-donor liver transplantation (LDLT) patients and to investigate the major factors that contribute to decreased postoperative TBPC in LDLT patients. Fourteen patients scheduled for LDLT were included in this study. Propofol was administered at a rate of 2.0 mg.kg(-1).h(-1) as a sedative in the intensive care unit (ICU) setting. To calculate TBPC, propofol arterial blood concentration was measured by HPLC. Five variables were selected as factors affecting postoperative TBPC; bleeding volume (BLD), warm ischemic time (WIT), cold ischemic time (CIT), graft weight/standard liver volume ratio (GW/SLV), and portal blood flow after surgery (PBF). After factor analysis of six variables, including TBPC, varimax rotation was carried out, and this yielded three interpretable factors that accounted for 75.5% of the total variance in the data set. TBPC, WIT, CIT, and BLD were loaded on the first factor, PBF on the second factor, and GW/SLV on the third factor. The adjusted correlation coefficient between TBPC and WIT showed the highest value (r = -0.61) in the first factor. The LDLT patients were divided into two groups according to WIT; group A (WIT > 100 min) and group B (WIT < 100 min). Mean TBPC values in group A and group B were 14.6 +/- 2.1 and 28.5 +/- 4.1 ml.kg(-1).min(-1), respectively (P < 0.0001). These data suggest that LDLT patients with a long WIT have a risk of deteriorated drug metabolism.

  12. Tranexamic acid reduces intraoperative occult blood loss and tourniquet time in obese knee osteoarthritis patients undergoing total knee arthroplasty: a prospective cohort study.

    Science.gov (United States)

    Meng, Yutong; Li, Zhirui; Gong, Ke; An, Xiao; Dong, Jiyuan; Tang, Peifu

    2018-01-01

    Obesity can result in increased blood loss, which is correlated with poor prognosis in total knee arthroplasty (TKA). Clinical application of tranexamic acid is effective in reducing blood loss in TKA. However, most previous studies focused on the effect of tranexamic acid in the whole population, neglecting patients with specific health conditions, such as obesity. We hypothesized that tranexamic acid would reduce blood loss to a greater extent in obese patients than in those of normal weight. A total of 304 patients with knee osteoarthritis treated with TKA from October 2013 to March 2015 were separated into tranexamic, non-tranexamic, obese, and non-obese groups. The demographic characteristics, surgical indices, and hematological indices were all recorded. We first investigated the ability of intravenous tranexamic acid to reduce intraoperative blood loss in knee osteoarthritis patients undergoing unilateral TKA. Second, we performed subgroup analysis to compare the effects of tranexamic acid between obese and non-obese patients separately. Of the 304 patients, 146 (52.0%) received tranexamic acid and 130 (42.8%) were obese. In the analysis of the whole group, both the actual and occult blood loss volume were lower in the tranexamic acid group (both P tranexamic acid group ( P tranexamic acid was shown to reduce theoretical and actual blood loss in both the obese and non-obese groups ( P Tranexamic acid reduced occult blood loss and tourniquet time in the obese group ( P 0.05). Tranexamic acid can reduce occult blood loss and tourniquet time in obese patients to a greater extent than in patients of normal weight. Therefore, obese knee osteoarthritis patients undergoing TKA can benefit more from tranexamic acid.

  13. Real-time web-based assessment of total population risk of future emergency department utilization: statewide prospective active case finding study.

    Science.gov (United States)

    Hu, Zhongkai; Jin, Bo; Shin, Andrew Y; Zhu, Chunqing; Zhao, Yifan; Hao, Shiying; Zheng, Le; Fu, Changlin; Wen, Qiaojun; Ji, Jun; Li, Zhen; Wang, Yong; Zheng, Xiaolin; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Rogow, Todd; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-01-13

    An easily accessible real-time Web-based utility to assess patient risks of future emergency department (ED) visits can help the health care provider guide the allocation of resources to better manage higher-risk patient populations and thereby reduce unnecessary use of EDs. Our main objective was to develop a Health Information Exchange-based, next 6-month ED risk surveillance system in the state of Maine. Data on electronic medical record (EMR) encounters integrated by HealthInfoNet (HIN), Maine's Health Information Exchange, were used to develop the Web-based surveillance system for a population ED future 6-month risk prediction. To model, a retrospective cohort of 829,641 patients with comprehensive clinical histories from January 1 to December 31, 2012 was used for training and then tested with a prospective cohort of 875,979 patients from July 1, 2012, to June 30, 2013. The multivariate statistical analysis identified 101 variables predictive of future defined 6-month risk of ED visit: 4 age groups, history of 8 different encounter types, history of 17 primary and 8 secondary diagnoses, 8 specific chronic diseases, 28 laboratory test results, history of 3 radiographic tests, and history of 25 outpatient prescription medications. The c-statistics for the retrospective and prospective cohorts were 0.739 and 0.732 respectively. Integration of our method into the HIN secure statewide data system in real time prospectively validated its performance. Cluster analysis in both the retrospective and prospective analyses revealed discrete subpopulations of high-risk patients, grouped around multiple "anchoring" demographics and chronic conditions. With the Web-based population risk-monitoring enterprise dashboards, the effectiveness of the active case finding algorithm has been validated by clinicians and caregivers in Maine. The active case finding model and associated real-time Web-based app were designed to track the evolving nature of total population risk, in a

  14. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    Science.gov (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  15. Muscle-invasive bladder cancer treated with external beam radiation: influence of total dose, overall treatment time, and treatment interruption on local control

    International Nuclear Information System (INIS)

    Moonen, L.; Voet, H. van der; Nijs, R. de; Horenblas, S.; Hart, A.A.M.; Bartelink, H.

    1998-01-01

    Purpose: To evaluate and eventually quantify a possible influence of tumor proliferation during the external radiation course on local control in muscle-invasive bladder cancer. Methods and Materials: The influence of total dose, overall treatment time, and treatment interruption has retrospectively been analyzed in a series of 379 patients with nonmetastasized, muscle-invasive transitional cell carcinoma of the urinary bladder. All patients received external beam radiotherapy at the Netherlands Cancer Institute between 1977 and 1990. Total dose varied between 50 and 75 Gy with a mean of 60.5 Gy and a median of 60.4 Gy. Overall treatment time varied between 20 and 270 days with a mean of 49 days and a median of 41 days. Number of fractions varied between 17 and 36 with a mean of 27 and a median of 26. Two hundred and forty-four patients had a continuous radiation course, whereas 135 had an intended split course or an unintended treatment interruption. Median follow-up was 22 months for all patients and 82 months for the 30 patients still alive at last follow-up. A stepwise procedure using proportional hazard regression has been used to identify prognostic treatment factors with respect to local recurrence as sole first recurrence. Results: One hundred and thirty-six patients experienced a local recurrence and 120 of these occurred before regional or distant metastases. The actuarial local control rate was 40.3% at 5 years and 32.3% at 10 years. In a multivariate analysis, total dose showed a significant association with local control (p = 0.0039), however in a markedly nonlinear way. In fact, only those patients treated with a dose below 57.5 Gy had a significantly higher bladder relapse rate, whereas no difference in relapse rate was found among patients treated with doses above 57.5 Gy. This remained the case even after adjustment for overall treatment time and all significant tumor and patient characteristics. The Normalized Tumor Dose (NTD) (α/β = 10) and NTD (

  16. The effects of feeding time on milk production, total-tract digestibility, and daily rhythms of feeding behavior and plasma metabolites and hormones in dairy cows.

    Science.gov (United States)

    Niu, M; Ying, Y; Bartell, P A; Harvatine, K J

    2014-12-01

    The timing of feed intake entrains circadian rhythms regulated by internal clocks in many mammals. The objective of this study was to determine if the timing of feeding entrains daily rhythms in dairy cows. Nine Holstein cows were used in a replicated 3 × 3 Latin square design with 14-d periods. An automated system recorded the timing of feed intake over the last 7 d of each period. Treatments were feeding 1×/d at 0830 h (AM) or 2030 h (PM) and feeding 2×/d in equal amounts at 0830 and 2030 h. All treatments were fed at 110% of daily intake. Cows were milked 2×/d at 0500 and 1700 h. Milk yield and composition were not changed by treatment. Daily intake did not differ, but twice-daily feeding tended to decrease total-tract digestibility of organic matter and neutral detergent fiber (NDF). A treatment by time of day interaction was observed for feeding behavior. The amount of feed consumed in the first 2h after feeding was 70% greater for PM compared with AM feeding. A low rate of intake overnight (2400 to 0500 h; 2.2 ± 0.74% daily intake/h, mean ± SD) and a moderate rate of intake in the afternoon (1200 to 1700 h; 4.8 ± 1.1% daily intake/h) was noted for all treatments, although PM slightly reduced the rate during the afternoon period compared with AM. A treatment by time of day interaction was seen for fecal NDF and indigestible NDF (iNDF) concentration, blood urea nitrogen, plasma glucose and insulin concentrations, body temperature, and lying behavior. Specifically, insulin increased and glucose decreased more after evening feeding than after morning feeding. A cosine function within a 24-h period was used to characterize daily rhythms using a random regression. Rate of feed intake during spontaneous feeding, fecal NDF and iNDF concentration, plasma glucose, insulin, NEFA, body temperature, and lying behavior fit a cosine function within a 24-h period that was modified by treatment. In conclusion, feeding time can reset the daily rhythms of feeding and
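
The "cosine function within a 24-h period" used to characterize the daily rhythms is a cosinor regression, which becomes linear in its coefficients once the cosine is expanded. A minimal least-squares sketch on synthetic hourly data (the temperature values and the 14:00 peak are invented for illustration):

```python
# Cosinor fit: y(t) = mesor + a*cos(2*pi*t/24) + b*sin(2*pi*t/24), linear in
# (mesor, a, b) and solved here via the 3x3 normal equations. Amplitude and
# acrophase (time of peak) are recovered from (a, b).
import math

def cosinor_fit(times_h, values):
    rows = [(1.0,
             math.cos(2 * math.pi * t / 24),
             math.sin(2 * math.pi * t / 24)) for t in times_h]
    # normal equations X'X beta = X'y for the 3 coefficients
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, values)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, 3):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, 3):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, 3))) / xtx[r][r]
    mesor, a, b = beta
    amplitude = math.hypot(a, b)
    acrophase_h = (math.atan2(b, a) * 24 / (2 * math.pi)) % 24  # peak time
    return mesor, amplitude, acrophase_h

# Hourly body-temperature-like data with a known 24-h rhythm peaking at 14:00:
times = list(range(24))
data = [38.5 + 0.4 * math.cos(2 * math.pi * (t - 14) / 24) for t in times]
mesor, amp, peak = cosinor_fit(times, data)
```

Because the model is linear in its coefficients, the same fit can be embedded in a random regression with treatment effects, which is how the study tests whether feeding time shifts the rhythm.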

  17. First Zenith Total Delay and Integrated Water Vapour Estimates from the Near Real-Time GNSS Data Processing Systems at the University of Luxembourg

    Science.gov (United States)

    Ahmed, F.; Teferle, F. N.; Bingley, R. M.

    2012-04-01

    Since September 2011 the University of Luxembourg in collaboration with the University of Nottingham has been setting up two near real-time processing systems for ground-based GNSS data for the provision of zenith total delay (ZTD) and integrated water vapour (IWV) estimates. Both systems are based on Bernese v5.0, use the double-differenced network processing strategy and operate with a 1-hour (NRT1h) and 15-minutes (NRT15m) update cycle. Furthermore, the systems follow the approach of the E-GVAP METO and IES2 systems in that the normal equations for the latest data are combined with those from the previous four updates during the estimation of the ZTDs. NRT1h currently takes the hourly data from over 130 GNSS stations in Europe whereas NRT15m is primarily using the real-time streams of EUREF-IP. Both networks include additional GNSS stations in Luxembourg, Belgium and France. The a priori station coordinates for all of these stem from a moving average computed over the last 20 to 50 days and are based on the precise point positioning processing strategy. In this study we present the first ZTD and IWV estimates obtained from the NRT1h and NRT15m systems in development at the University of Luxembourg. In a preliminary evaluation we compare their performance to the IES2 system at the University of Nottingham and find the IWV estimates to agree at the sub-millimetre level.

  18. Sleep restriction therapy for insomnia is associated with reduced objective total sleep time, increased daytime somnolence, and objectively impaired vigilance: implications for the clinical management of insomnia disorder.

    Science.gov (United States)

    Kyle, Simon D; Miller, Christopher B; Rogers, Zoe; Siriwardena, A Niroshan; Macmahon, Kenneth M; Espie, Colin A

    2014-02-01

    To investigate whether sleep restriction therapy (SRT) is associated with reduced objective total sleep time (TST), increased daytime somnolence, and impaired vigilance. Within-subject, noncontrolled treatment investigation. Sleep research laboratory. Sixteen patients [10 female, mean age = 47.1 (10.8) y] with well-defined psychophysiological insomnia (PI), reporting TST ≤ 6 h. Patients were treated with single-component SRT over a 4-w protocol, sleeping in the laboratory for 2 nights prior to treatment initiation and for 3 nights (SRT nights 1, 8, and 22) during the acute interventional phase. The psychomotor vigilance task (PVT) was completed at seven defined time points [day 0 (baseline); days 1, 7, 8, 21, and 22 (acute treatment); and day 84 (3 mo)]. The Epworth Sleepiness Scale (ESS) was completed at baseline, w 1-4, and 3 mo. Subjective sleep outcomes and global insomnia severity improved significantly from before to after SRT. There was, however, a robust decrease in PSG-defined TST during acute implementation of SRT, by an average of 91 min on night 1, 78 min on night 8, and 69 min on night 22, relative to baseline (P insomnia.

  19. The utility of the Philips SRI-100 real time portal imaging device in a case of postoperative irradiation for prevention of heterotopic bone formation following total hip replacement

    International Nuclear Information System (INIS)

    Kiffer, J.D.; Quong, G.; Lawlor, M.; Schumer, W.; Aitken, L.; Wallace, A.

    1994-01-01

    The new Radiation Oncology Department at the Heidelberg Repatriation Hospital in Melbourne, Australia commenced operation in June 1992. As part of quality control, the Philips SL-15 linear accelerator was fitted with the Philips SRI-100 Real Time Portal Imaging Device (RTPID), the first such apparatus in Australia. One of its major advantages over older systems is its ability to provide a permanent hard copy of the image of the field treated. The computer image can be immediately manipulated and enhanced on the screen (with respect to such qualities as brightness and contrast) prior to the printing of the hard copy. This is a significant improvement over the more cumbersome older port films that required developing time, without any pre-assessment of the image quality. The utility of the Philips SRI-100 RTPID is demonstrated in the case of a patient irradiated soon after total hip replacement, as prophylaxis against heterotopic bone formation (HBF). The rapidity and quality of image production is a major advantage in these patients, where postoperative pain may result in positional change between film exposure and image production. Extremely accurate shielding block position is essential to shield the prosthesis (and allow bone ingrowth for fixation) whilst avoiding inadvertent shielding of the areas at risk for HBF. A review of the literature on this topic is provided. 14 refs., 4 figs

  20. Application of fiber-optic attenuated total reflection-FT-IR methods for in situ characterization of protein delivery systems in real time.

    Science.gov (United States)

    McFearin, Cathryn L; Sankaranarayanan, Jagadis; Almutairi, Adah

    2011-05-15

    A fiber-optic coupled attenuated total reflection (ATR)-FT-IR spectroscopy technique was applied to the study of two different therapeutic delivery systems, acid degradable hydrogels and nanoparticles. Real time exponential release of a model protein, human serum albumin (HSA), was observed from two different polymeric hydrogels formulated with a pH sensitive cross-linker. Spectroscopic examination of nanoparticles formulated with an acid degradable polymer shell and encapsulated HSA exhibited vibrational signatures characteristic of both particle and payload when exposed to lowered pH conditions, demonstrating the ability of this methodology to simultaneously measure phenomena arising from a system with a mixture of components. In addition, thorough characterization of these pH sensitive delivery vehicles without encapsulated protein was also accomplished in order to separate the effects of the payload during degradation. By providing in situ, real-time detection together with the ability to identify specific components in a mixture, without involved sample preparation and with minimal sample disturbance, this methodology demonstrates its versatility and suitability for research in the pharmaceutical field.
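
The "exponential release" reported in this record is commonly summarized by a first-order model, M(t)/M∞ = 1 − exp(−kt). A minimal sketch of estimating the release constant k by linearization (illustrative names only, not the authors' analysis):

```python
import numpy as np

def release_rate(t, frac_released):
    """Estimate first-order release constant k from M(t)/Minf = 1 - exp(-k*t)."""
    t = np.asarray(t, dtype=float)
    f = np.asarray(frac_released, dtype=float)
    # Linearize: log(1 - f) = -k * t; fit the slope through the origin.
    y = np.log(1.0 - f)
    return -(t @ y) / (t @ t)
```

Given time-resolved ATR-FT-IR band intensities converted to fractional release, this yields a single rate constant characterizing how fast the payload escapes.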

  1. A real time biofeedback using Kinect and Wii to improve gait for post-total knee replacement rehabilitation: a case study report.

    Science.gov (United States)

    Levinger, Pazit; Zeina, Daniel; Teshome, Assefa K; Skinner, Elizabeth; Begg, Rezaul; Abbott, John Haxby

    2016-01-01

    This study aimed to develop a low-cost real-time biofeedback system to assist with rehabilitation for patients following total knee replacement (TKR) and to assess its feasibility of use in a post-TKR patient case study design with a comparison group. The biofeedback system consisted of a Microsoft Kinect(TM) and a Nintendo Wii balance board with dedicated software. A six-week inpatient rehabilitation program was augmented by biofeedback and tested in a single patient following TKR. Three patients underwent six weeks of standard rehabilitation with no biofeedback and served as a control group. Gait, function, and pain were assessed and compared before and after the rehabilitation. The biofeedback software incorporated real-time visual feedback to correct limb alignment, movement pattern, and weight distribution. Improvements in pain, function, and quality of life were observed in both groups. The strong improvement in the knee moment pattern demonstrated in the case study indicates feasibility of the biofeedback-augmented intervention. This novel biofeedback software used simple, commercially accessible equipment that can feasibly be incorporated to augment a post-TKR rehabilitation program. Our preliminary results indicate the potential of this biofeedback-assisted rehabilitation to improve knee function during gait. Research is required to test this hypothesis. Implications for Rehabilitation The real-time biofeedback system developed integrated custom-made software and simple low-cost commercially accessible equipment such as the Kinect and Wii board to provide augmented information during rehabilitation following TKR. The software incorporated key rehabilitation principles and visual feedback to correct alignment of the lower legs, pelvis, and trunk as well as providing feedback on limb weight distribution. The case study patient demonstrated greater improvement in their knee function where a more normal biphasic knee moment was achieved following the six

  2. Near real-time estimation of ionosphere vertical total electron content from GNSS satellites using B-splines in a Kalman filter

    Science.gov (United States)

    Erdogan, Eren; Schmidt, Michael; Seitz, Florian; Durmaz, Murat

    2017-02-01

    Although the number of terrestrial global navigation satellite system (GNSS) receivers supported by the International GNSS Service (IGS) is rapidly growing, the worldwide rather inhomogeneously distributed observation sites do not allow the generation of high-resolution global ionosphere products. Conversely, with the regionally enormous increase in highly precise GNSS data, the demands on (near) real-time ionosphere products, necessary in many applications such as navigation, are growing very fast. Consequently, many analysis centers accepted the responsibility of generating such products. In this regard, the primary objective of our work is to develop a near real-time processing framework for the estimation of the vertical total electron content (VTEC) of the ionosphere using proper models that are capable of a global representation adapted to the real data distribution. The global VTEC representation developed in this work is based on a series expansion in terms of compactly supported B-spline functions, which allow for an appropriate handling of the heterogeneous data distribution, including data gaps. The corresponding series coefficients and additional parameters such as differential code biases of the GNSS satellites and receivers constitute the set of unknown parameters. The Kalman filter (KF), as a popular recursive estimator, allows processing of the data immediately after acquisition and paves the way of sequential (near) real-time estimation of the unknown parameters. To exploit the advantages of the chosen data representation and the estimation procedure, the B-spline model is incorporated into the KF under the consideration of necessary constraints. Based on a preprocessing strategy, the developed approach utilizes hourly batches of GPS and GLONASS observations provided by the IGS data centers with a latency of 1 h in its current realization. Two methods for validation of the results are performed, namely the self-consistency analysis and a comparison
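
The sequential estimation this record describes rests on the standard Kalman filter predict/update cycle. As a minimal illustration (a generic random-walk state model, not the authors' constrained B-spline filter; all names are illustrative):

```python
import numpy as np

def kalman_step(x, P, z, H, R, Q):
    """One predict/update cycle with random-walk dynamics (state transition F = I)."""
    # Predict: state unchanged, uncertainty grows by the process noise Q.
    P = P + Q
    # Update with measurement z = H @ x + v, where v ~ N(0, R).
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In a VTEC setting, x would hold the B-spline coefficients and code biases, H the mapping from coefficients to slant observations; each hourly (or 15-minute) batch triggers one such cycle.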

  3. Periodicity and time trends in the prevalence of total births and conceptions with congenital malformations among Jews and Muslims in Israel, 1999-2006: a time series study of 823,966 births.

    Science.gov (United States)

    Agay-Shay, Keren; Friger, Michael; Linn, Shai; Peled, Ammatzia; Amitai, Yona; Peretz, Chava

    2012-06-01

    BACKGROUND Congenital malformations (CMs) are a leading cause of infant disability. Geophysical patterns such as 2-year, yearly, half-year, 3-month, and lunar cycles regulate much of the temporal biology of all life on Earth and may affect birth and birth outcomes in humans. Therefore, the aim of this study was to evaluate and compare trends and periodicity in total births and CM conceptions in two Israeli populations. METHODS Poisson nonlinear models (polynomial) were applied to study and compare trends and geophysical periodicity cycles of weekly births and weekly prevalence rate of CM (CMPR), in a time-series design of conception date within and between Jews and Muslims. The population included all live births and stillbirths (n = 823,966) and CM (three anatomic systems, eight CM groups [n = 2193]) in Israel during 2000 to 2006. Data were obtained from the Ministry of Health. RESULTS We describe the trend and periodicity cycles for total birth conceptions. Of eight groups of CM, periodicity cycles were statistically significant in four CM groups for either Jews or Muslims. Lunar month and biennial periodicity cycles not previously investigated in the literature were found to be statistically significant. Biennial cycle was significant in total births (Jews and Muslims) and syndactyly (Muslims), whereas lunar month cycle was significant in total births (Muslims) and atresia of small intestine (Jews). CONCLUSION We encourage others to use the method we describe as an important tool to investigate the effects of different geophysical cycles on human health and pregnancy outcomes, especially CM, and to compare between populations. Copyright © 2012 Wiley Periodicals, Inc.
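
A periodicity analysis of count data of the kind this record describes can be sketched as a Poisson regression with harmonic terms, fit by iteratively reweighted least squares. This is a simplified single-cycle illustration (one yearly harmonic, NumPy-only IRLS), not the authors' polynomial models:

```python
import numpy as np

def poisson_harmonic_fit(t, counts, period=52.0, iters=50):
    """Fit log E[counts] = b0 + b1*cos(w*t) + b2*sin(w*t) by Fisher scoring (log link)."""
    t = np.asarray(t, dtype=float)
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)                  # current fitted means
        z = X @ beta + (counts - mu) / mu      # working response
        # Weighted least squares with weights mu (the Poisson variance)
        beta = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (mu * z))
    return beta
```

A significant (b1, b2) pair corresponds to a significant cycle at the chosen period; fitting several periods (biennial, yearly, lunar month) just adds more harmonic column pairs.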

  4. Effects of solar eclipse on the electrodynamical processes of the equatorial ionosphere: a case study during 11 August 1999 dusk time total solar eclipse over India

    Directory of Open Access Journals (Sweden)

    R. Sridharan

    Full Text Available The effects on the electrodynamics of the equatorial E- and F-regions of the ionosphere, due to the occurrence of the solar eclipse during sunset hours on 11 August 1999, were investigated in a unique observational campaign involving ground-based ionosondes, VHF and HF radars from the equatorial location of Trivandrum (8.5° N; 77° E; dip lat. 0.5° N), India. The study revealed the nature of changes brought about by the eclipse in the evening time E- and F-regions in terms of (i) the sudden intensification of a weak blanketing ES-layer and the associated large enhancement of the VHF backscattered returns, (ii) significant increase in h'F immediately following the eclipse and (iii) distinctly different spatial and temporal structures in the spread-F irregularity drift velocities as observed by the HF radar. The significantly large enhancement of the backscattered returns from the E-region coincident with the onset of the eclipse is attributed to the generation of steep electron density gradients associated with the blanketing ES, possibly triggered by the eclipse phenomena. The increase in F-region base height immediately after the eclipse is explained as due to the reduction in the conductivity of the conjugate E-region in the path of totality connected to the F-region over the equator along the magnetic field lines, and this, with the peculiar local and regional conditions, seems to have reduced the E-region loading of the F-region dynamo, resulting in a larger post-sunset F-region height (h'F) rise. These aspects of E- and F-region behaviour on the eclipse day are discussed in relation to those observed on the control day.

    Key words. Ionosphere (electric fields and currents; equatorial ionosphere; ionospheric irregularities)

  5. Effects of solar eclipse on the electrodynamical processes of the equatorial ionosphere: a case study during 11 August 1999 dusk time total solar eclipse over India

    Directory of Open Access Journals (Sweden)

    R. Sridharan

    2002-12-01

    Full Text Available The effects on the electrodynamics of the equatorial E- and F-regions of the ionosphere, due to the occurrence of the solar eclipse during sunset hours on 11 August 1999, were investigated in a unique observational campaign involving ground-based ionosondes, VHF and HF radars from the equatorial location of Trivandrum (8.5° N; 77° E; dip lat. 0.5° N), India. The study revealed the nature of changes brought about by the eclipse in the evening time E- and F-regions in terms of (i) the sudden intensification of a weak blanketing ES-layer and the associated large enhancement of the VHF backscattered returns, (ii) significant increase in h'F immediately following the eclipse and (iii) distinctly different spatial and temporal structures in the spread-F irregularity drift velocities as observed by the HF radar. The significantly large enhancement of the backscattered returns from the E-region coincident with the onset of the eclipse is attributed to the generation of steep electron density gradients associated with the blanketing ES, possibly triggered by the eclipse phenomena. The increase in F-region base height immediately after the eclipse is explained as due to the reduction in the conductivity of the conjugate E-region in the path of totality connected to the F-region over the equator along the magnetic field lines, and this, with the peculiar local and regional conditions, seems to have reduced the E-region loading of the F-region dynamo, resulting in a larger post-sunset F-region height (h'F) rise. These aspects of E- and F-region behaviour on the eclipse day are discussed in relation to those observed on the control day. Key words. Ionosphere (electric fields and currents; equatorial ionosphere; ionospheric irregularities)

  6. Total and cause-specific mortality before and after the onset of the Greek economic crisis: an interrupted time-series analysis.

    Science.gov (United States)

    Laliotis, Ioannis; Ioannidis, John P A; Stavropoulou, Charitini

    2016-12-01

    Greece was one of the countries hit the hardest by the 2008 financial crisis in Europe. Yet, evidence on the effect of the crisis on total and cause-specific mortality remains unclear. We explored whether the economic crisis affected the trend of overall and cause-specific mortality rates. We used regional panel data from the Hellenic Statistical Authority to assess mortality trends by age, sex, region, and cause in Greece between January, 2001, and December, 2013. We used Eurostat data to calculate monthly age-standardised mortality rates per 100 000 inhabitants for each region. Data were divided into two subperiods: before the crisis (January, 2001, to August, 2008) and after the onset of the crisis (September, 2008, to December, 2013). We tested for changes in the slope of mortality by doing an interrupted time-series analysis. Overall mortality continued to decline after the onset of the financial crisis (-0·065, 95% CI -0·080 to -0·049), but at a slower pace than before the crisis (-0·13, -0·15 to -0·10; trend difference 0·062, 95% CI 0·041 to 0·083). Comparing the period after the onset of the crisis with extrapolated values based on the period before the crisis, we estimate that an extra 242 deaths per month occurred after the onset of the crisis. Mortality trends were interrupted after the onset of the crisis compared with before it, but changes vary by age, sex, and cause of death. The increase in deaths due to adverse events during medical treatment might reflect the effects of deterioration in quality of care during economic recessions. None. Copyright © 2016 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY license. Published by Elsevier Ltd. All rights reserved.
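
The interrupted time-series design this record uses is commonly implemented as a segmented regression with a level change and a slope change at the interruption. A minimal sketch (ordinary least squares, ignoring seasonality and autocorrelation; all names illustrative):

```python
import numpy as np

def its_fit(y, onset):
    """Segmented OLS: returns [intercept, pre-trend, level change, slope change]."""
    t = np.arange(len(y), dtype=float)
    post = (t >= onset).astype(float)
    # Columns: baseline level, pre-interruption slope, step at onset, extra slope after onset
    X = np.column_stack([np.ones_like(t), t, post, post * (t - onset)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta
```

The slope-change coefficient is exactly the "trend difference" reported in such studies, and extrapolating the pre-onset segment past the interruption gives the counterfactual used to count excess deaths.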

  7. Total Thyroidectomy

    Directory of Open Access Journals (Sweden)

    Lopez Moris E

    2016-06-01

    Full Text Available Total thyroidectomy is a surgery that removes all the thyroid tissue from the patient. Suspicion of cancer in a thyroid nodule is the most frequent indication, presumed when a previous fine-needle puncture is positive or when a goiter shows a significant increase in volume or symptoms. Less frequent indications are hyperthyroidism, when it is refractory to treatment with iodine-131 or such treatment is contraindicated, and cases of symptomatic thyroiditis. The thyroid gland has an important anatomic relationship with the inferior laryngeal nerve and the parathyroid glands; for this reason it is imperative to perform extremely meticulous dissection to recognize each one of these elements and ensure their preservation. It is also essential to maintain strict hemostasis, in order to avoid any postoperative bleeding that could lead to a suffocating neck hematoma, a feared complication that represents a surgical emergency and endangers the patient's life. It is essential to follow a formal technique, without skipping steps, and to maintain the prudence and patience that should rule any surgical act.

  8. Associations Between Sedentary Time, Physical Activity, and Dual-Energy X-ray Absorptiometry Measures of Total Body, Android, and Gynoid Fat Mass in Children.

    Science.gov (United States)

    McCormack, Lacey; Meendering, Jessica; Specker, Bonny; Binkley, Teresa

    2016-01-01

    Negative health outcomes are associated with excess body fat, low levels of physical activity (PA), and high sedentary time (ST). Relationships between PA, ST, and body fat distribution, including android and gynoid fat, assessed using dual-energy X-ray absorptiometry (DXA) have not been measured in children. The purpose of this study was to test associations between levels of activity and body composition in children and to evaluate if levels of activity predict body composition by DXA and by body mass index percentile in a similar manner. PA, ST, and body composition from 87 children (8.8-11.8 yr, grades 3-5, 44 boys) were used to test the association among study variables. Accelerometers measured PA and ST. Body composition measured by DXA included bone mineral content (BMC) and fat and lean mass of the total body (TB, less head), android, and gynoid regions. ST (range: 409-685 min/wk) was positively associated with TB percent fat (0.03, 95% confidence interval [CI]: 0.00-0.05) and android fat mass (1.5 g, 95% CI: 0.4-3.0), and inversely associated with the lean mass of the TB (-10.7 g, 95% CI: -20.8 to -0.63) and gynoid regions (-2.2 g, 95% CI: -4.3 to -0.2), and with BMC (-0.43 g, 95% CI: -0.77 to -0.09). Moderate-to-vigorous PA was associated with lower TB (-53 g, 95% CI: -87 to -18), android (-5 g, 95% CI: -8 to -2), and gynoid fat (-6 g, 95% CI: -11 to -0.5). Vigorous activity results were similar. Light PA was associated with increased TB (17.1 g, 95% CI: 3.0-31.3) and gynoid lean mass (3.9 g, 95% CI: 1.0-6.8) and BMC (0.59 g, 95% CI: 0.10-1.07). In boys, there were significant associations between activity and DXA percent body fat measures that were not found with the body mass index percentile. Objective measures of PA were inversely associated with TB, android, and gynoid fat, whereas ST was directly associated with TB percent fat and, in particular, android fat. Activity levels predict body composition measures by DXA and, in

  9. Correlates of Total Sedentary Time and Screen Time in 9-11 Year-Old Children around the World: The International Study of Childhood Obesity, Lifestyle and the Environment.

    Science.gov (United States)

    LeBlanc, Allana G; Katzmarzyk, Peter T; Barreira, Tiago V; Broyles, Stephanie T; Chaput, Jean-Philippe; Church, Timothy S; Fogelholm, Mikael; Harrington, Deirdre M; Hu, Gang; Kuriyan, Rebecca; Kurpad, Anura; Lambert, Estelle V; Maher, Carol; Maia, José; Matsudo, Victor; Olds, Timothy; Onywera, Vincent; Sarmiento, Olga L; Standage, Martyn; Tudor-Locke, Catrine; Zhao, Pei; Tremblay, Mark S

    2015-01-01

    Previously, studies examining correlates of sedentary behavior have been limited by small sample size, restricted geographic area, and little socio-cultural variability. Further, few studies have examined correlates of total sedentary time (SED) and screen time (ST) in the same population. This study aimed to investigate correlates of SED and ST in children around the world. The sample included 5,844 children (45.6% boys, mean age = 10.4 years) from study sites in Australia, Brazil, Canada, China, Colombia, Finland, India, Kenya, Portugal, South Africa, the United Kingdom, and the United States. Child- and parent-reported behavioral, household, and neighborhood characteristics and directly measured anthropometric and accelerometer data were obtained. Twenty-one potential correlates of SED and ST were examined using multilevel models, adjusting for sex, age, and highest parental education, with school and study site as random effects. Variables that were moderately associated with SED and/or ST in univariate analyses (p<0.10) were included in the final models, and variables that remained significant in the final models (p<0.05) were considered correlates of SED and/or ST. Children averaged 8.6 hours of daily SED, and 54.2% of children failed to meet ST guidelines. Common correlates of higher SED and ST included poor weight status, not meeting physical activity guidelines, and having a TV or a computer in the bedroom. In this global sample many common correlates of SED and ST were identified, some of which are easily modifiable (e.g., removing the TV from the bedroom), and others that may require more intense behavioral interventions (e.g., increasing physical activity). Future work should incorporate these findings into the development of culturally meaningful public health messages.

  10. Performance of the postwash total motile sperm count as a predictor of pregnancy at the time of intrauterine insemination: a meta-analysis

    NARCIS (Netherlands)

    van Weert, Janne-Meije; Repping, Sjoerd; van Voorhis, Bradley J.; van der Veen, Fulco; Bossuyt, Patrick M. M.; Mol, Ben W. J.

    2004-01-01

    Objective: To assess the performance and clinical value of the postwash total motile sperm count (postwash TMC) as a test to predict intrauterine insemination (IUI) outcome. Design: Meta-analysis of diagnostic tests. Setting: Tertiary fertility center. Patient(s): Patients undergoing IUI.

  11. Impact of total PSA, PSA doubling time and PSA velocity on detection rates of 11C-Choline positron emission tomography in recurrent prostate cancer

    NARCIS (Netherlands)

    Rybalov, Maxim; Breeuwsma, Anthonius J.; Leliveld, Anna M.; Pruim, Jan; Dierckx, Rudi A.; de Jong, Igle J.

    PURPOSE: To evaluate the effect of total PSA (tPSA) and PSA kinetics on the detection rates of (11)C-Choline PET in patients with biochemical recurrence (BCR) after radical prostatectomy (RP) or external beam radiotherapy (EBRT). METHODS: We included 185 patients with BCR after RP (PSA >0.2 ng/ml)

  12. Relative performance of priority rules for hybrid flow shop scheduling with setup times

    Directory of Open Access Journals (Sweden)

    Helio Yochihiro Fuchigami

    2015-12-01

    Full Text Available This paper focuses on the hybrid flow shop scheduling problem with explicit and sequence-independent setup times. This production environment is a multistage system with unidirectional flow of jobs, wherein each stage may contain multiple machines available for processing. The optimized measure was the total time to complete the schedule (makespan). The aim was to propose new priority rules to support the schedule and to evaluate their relative performance in the production system considered, by the percentage of success, relative deviation, standard deviation of relative deviation, and average CPU time. Computational experiments indicated that the rules using ascending order of the sum of processing and setup times of the first stage (SPT1 and SPT1_ERD) performed better, together reaching more than 56% success.
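
A priority rule such as SPT1 amounts to list scheduling: order jobs by the sum of stage-1 setup and processing times, then assign each job to the earliest-available parallel machine at every stage. A minimal sketch (hypothetical instance format, permutation schedule; not the paper's implementation):

```python
def spt1_makespan(jobs, machines_per_stage):
    """Makespan of a hybrid flow shop under the SPT1 priority rule.
    jobs: list of per-stage (setup, processing) pairs, e.g. [((s1, p1), (s2, p2)), ...]
    machines_per_stage: number of identical parallel machines at each stage."""
    # SPT1: ascending sum of setup + processing time at the first stage.
    order = sorted(jobs, key=lambda j: j[0][0] + j[0][1])
    ready = [0.0] * len(order)              # completion time at the previous stage
    for stage, m in enumerate(machines_per_stage):
        free = [0.0] * m                    # next-available time of each machine
        done = []
        for job, r in zip(order, ready):
            k = min(range(m), key=lambda i: free[i])   # earliest-free machine
            start = max(free[k], r)
            free[k] = start + job[stage][0] + job[stage][1]  # setup + processing
            done.append(free[k])
        ready = done
    return max(ready)
```

Running this for each candidate rule on a benchmark set and comparing makespans is how percentage of success and relative deviation are computed.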

  13. As relações entre estratégia de produção, TQM (Total Quality Management ou Gestão da Qualidade Total e JIT (Just-In-Time: estudos de caso em uma empresa do setor automobilístico e em dois de seus fornecedores The relationships between production strategy, TQM (Total Quality Management and JIT (Just-In-Time: a case study in a automobile company and in two of its suppliers

    Directory of Open Access Journals (Sweden)

    Aline Lamon Cerra

    2000-12-01

    Full Text Available This paper discusses the relationships between Production Strategy and the TQM (Total Quality Management) and JIT (Just-In-Time) programs, highlighting the importance of integrating these programs with companies' production strategies. The study of this integration showed that, although TQM and JIT can work separately, they are complementary and should be aligned with the production strategy in order to promote improvements in the production function. It is also examined how the automobile company conditions the diffusion of its adopted strategies and programs throughout its supply chain. The basic concepts and theoretical issues related to Production Strategy, TQM, and JIT are presented, together with a case study of three companies: an automobile manufacturer and two of its suppliers, all multinationals located in the state of São Paulo.

  14. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDE) in very large scale integrated circuits (VLSI) is presented. The consumption current of the devices is measured while the functional parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and the mechanism of TDEs in VLSI proposed. Experimental results of 60Co γ TDE tests are given for SRAMs, EEPROMs, FLASH ROMs and a kind of CPU

  15. OpenMP GNU and Intel Fortran programs for solving the time-dependent Gross-Pitaevskii equation

    Science.gov (United States)

    Young-S., Luis E.; Muruganandam, Paulsamy; Adhikari, Sadhan K.; Lončar, Vladimir; Vudragović, Dušan; Balaž, Antun

    2017-11-01

    reduce the execution time cannot be overemphasized. To address this issue, we provide here such OpenMP Fortran programs, optimized for both Intel and GNU Fortran compilers and capable of using all available CPU cores, which can significantly reduce the execution time. Summary of revisions: Previous Fortran programs [1] for solving the time-dependent GP equation in 1d, 2d, and 3d with different trap symmetries have been parallelized using the OpenMP interface to reduce the execution time on multi-core processors. There are six different trap symmetries considered, resulting in six programs for imaginary-time propagation and six for real-time propagation, totaling 12 programs included in the BEC-GP-OMP-FOR software package. All input data (number of atoms, scattering length, harmonic oscillator trap length, trap anisotropy, etc.) are conveniently placed at the beginning of each program, as before [2]. The present programs introduce a new input parameter, designated Number_of_Threads, which defines the number of CPU cores of the processor to be used in the calculation. If one sets the value 0 for this parameter, all available CPU cores will be used. For the most efficient calculation it is advisable to leave one CPU core unused for the system's background jobs. For example, on a machine with 20 CPU cores, such as the one we used for testing, it is advisable to use up to 19 CPU cores. The total number of used CPU cores can, however, be divided among more than one job. For instance, one can run three simulations simultaneously using 10, 4, and 5 CPU cores, respectively, for a total of 19 CPU cores in use on a 20-core computer. The Fortran source programs are located in the directory src and can be compiled by the make command using the makefile in the root directory BEC-GP-OMP-FOR of the software package. Examples of produced output files can be found in the directory output, although some large density files are omitted to save space. The programs calculate the values of
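
The Number_of_Threads convention described above (0 means "use all cores", with one core best left free for background jobs) can be mimicked when launching any OpenMP program via the standard OMP_NUM_THREADS environment variable. A hedged sketch (the helper name is illustrative, not part of the package):

```python
import multiprocessing
import os

def omp_threads(total_cores, requested=0, reserve=1):
    """Mirror the Number_of_Threads convention: 0 (or less) means 'use all
    available cores', here adjusted to leave `reserve` cores free."""
    if requested <= 0:
        return max(1, total_cores - reserve)
    return min(requested, total_cores)

# OpenMP runtimes (including compiled Fortran programs) honor OMP_NUM_THREADS.
os.environ["OMP_NUM_THREADS"] = str(omp_threads(multiprocessing.cpu_count()))
```

Splitting a 20-core machine into simultaneous jobs of 10, 4, and 5 threads then just means launching each job with its own OMP_NUM_THREADS value.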

  16. Estimating Total Program Cost of a Long-Term, High-Technology, High-Risk Project with Task Durations and Costs That May Increase Over Time

    National Research Council Canada - National Science Library

    Brown, Gerald G; Grose, Roger T; Koyak, Robert A

    2006-01-01

    .... Each task suffers some risk of delay and changed cost. Ignoring budget constraints, we use Monte Carlo simulation of the duration of each task in the project to infer the probability distribution of the project completion time...
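
The Monte Carlo approach this record outlines, sampling each task's duration and propagating precedence constraints to obtain a distribution of project completion time, might be sketched as follows (triangular duration estimates and all names are assumptions, not the authors' model):

```python
import random

def completion_time_samples(tasks, preds, n=1000, seed=1):
    """Monte Carlo samples of project completion time.
    tasks: {name: (low, high, mode)} triangular duration estimates (hypothetical).
    preds: {name: [predecessor names]}; task names must be listed in
    topological order (every predecessor appears before its successors)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        finish = {}
        for name, dist in tasks.items():
            # A task starts once all of its predecessors have finished.
            start = max((finish[p] for p in preds.get(name, [])), default=0.0)
            finish[name] = start + rng.triangular(*dist)
        samples.append(max(finish.values()))
    return samples
```

Sorting the samples then yields the empirical distribution of completion time, from which quantiles (e.g., an 80%-confidence finish date) can be read off directly.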

  17. Correlates of Total Sedentary Time and Screen Time in 9-11 Year-Old Children around the World: The International Study of Childhood Obesity, Lifestyle and the Environment.

    Directory of Open Access Journals (Sweden)

    Allana G LeBlanc

    Full Text Available Previously, studies examining correlates of sedentary behavior have been limited by small sample size, restricted geographic area, and little socio-cultural variability. Further, few studies have examined correlates of total sedentary time (SED) and screen time (ST) in the same population. This study aimed to investigate correlates of SED and ST in children around the world. The sample included 5,844 children (45.6% boys, mean age = 10.4 years) from study sites in Australia, Brazil, Canada, China, Colombia, Finland, India, Kenya, Portugal, South Africa, the United Kingdom, and the United States. Child- and parent-reported behavioral, household, and neighborhood characteristics and directly measured anthropometric and accelerometer data were obtained. Twenty-one potential correlates of SED and ST were examined using multilevel models, adjusting for sex, age, and highest parental education, with school and study site as random effects. Variables that were moderately associated with SED and/or ST in univariate analyses (p<0.10) were included in the final models. Variables that remained significant in the final models (p<0.05) were considered correlates of SED and/or ST. Children averaged 8.6 hours of daily SED, and 54.2% of children failed to meet ST guidelines. In all study sites, boys reported higher ST, were less likely to meet ST guidelines, and had higher BMI z-scores than girls. In 9 of 12 sites, girls engaged in significantly more SED than boys. Common correlates of higher SED and ST included poor weight status, not meeting physical activity guidelines, and having a TV or a computer in the bedroom. In this global sample many common correlates of SED and ST were identified, some of which are easily modifiable (e.g., removing the TV from the bedroom), and others that may require more intense behavioral interventions (e.g., increasing physical activity). Future work should incorporate these findings into the development of culturally meaningful public health messages.

  18. Endoplasmic reticulum stress mediating downregulated StAR and 3-beta-HSD and low plasma testosterone caused by hypoxia is attenuated by CPU86017-RS and nifedipine

    Directory of Open Access Journals (Sweden)

    Liu Gui-Lai

    2012-01-01

    Full Text Available Abstract Background Hypoxia exposure initiates low serum testosterone levels that could be attributed to downregulated androgen-biosynthesizing genes such as StAR (steroidogenic acute regulatory protein) and 3-beta-HSD (3-beta-hydroxysteroid dehydrogenase) in the testis. It was hypothesized that these testicular abnormalities under hypoxia are associated with oxidative stress and an increase in chaperones of endoplasmic reticulum stress (ER stress), and that the ER stress could be modulated by a reduction in calcium influx. Therefore, we verified whether an application of CPU86017-RS (abbreviated as RS), a derivative of berberine, could alleviate the ER stress, the depressed gene expression of StAR and 3-beta-HSD, and the low plasma testosterone in hypoxic rats; these effects were compared with those of nifedipine. Methods Adult male Sprague-Dawley rats were randomly divided into control, hypoxia for 28 days, and hypoxia treated (mg/kg, p.o.) during the last 14 days with nifedipine (Nif, 10) or one of three doses of RS (20, 40, 80), plus normal rats treated with RS isomer (80). Serum testosterone (T) and luteinizing hormone (LH) were measured. The testicular expressions of biomarkers including StAR, 3-beta-HSD, immunoglobulin heavy chain binding protein (Bip), double-strand RNA-activated protein kinase-like ER kinase (PERK) and the pro-apoptotic transcription factor C/EBP homologous protein (CHOP) were measured. Results In hypoxic rats, serum testosterone levels decreased and mRNA and protein expressions of the testosterone biosynthesis related genes StAR and 3-beta-HSD were downregulated. These changes were linked to an increase in oxidants, upregulated ER stress chaperones (Bip, PERK and CHOP) and a distorted histological structure of the seminiferous tubules in the testis. These abnormalities were attenuated significantly by CPU86017-RS and nifedipine. Conclusion Downregulated StAR and 3-beta-HSD significantly contribute to low testosterone in hypoxic rats and is associated with ER stress

  19. Influence of number of deliveries and total breast-feeding time on bone mineral density in premenopausal and young postmenopausal women.

    Science.gov (United States)

    Tsvetov, Gloria; Levy, Sigal; Benbassat, Carlos; Shraga-Slutzky, Ilana; Hirsch, Dania

    2014-03-01

    Pregnancy and lactation have been associated with decline in bone mineral density (BMD). It is not clear if there is a full recovery of BMD to baseline. This study sought to determine if pregnancy or breast-feeding or both have a cumulative effect on BMD in premenopausal and early postmenopausal women. We performed a single-center cohort analysis. Five hundred women aged 35-55 years underwent routine BMD screening from February to July 2011 at a tertiary medical center. Patients were questioned about number of total full-term deliveries and duration of breast-feeding and completed a background questionnaire on menarche and menopause, smoking, dairy product consumption, and weekly physical exercise. Weight and height were measured. Dual-energy X-ray absorptiometry was used to measure spinal, dual femoral neck, and total hip BMD. Associations between background characteristics and BMD values were analyzed. Sixty percent of the women were premenopausal. Mean number of deliveries was 2.5 and mean duration of breast-feeding was 9.12 months. On univariate analysis, BMD values were negatively correlated with patient age (p=0.006) and number of births (p=0.013), and positively correlated with body mass index. … osteoporosis later in life. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Multi-GPU based acceleration of a list-mode DRAMA toward real-time OpenPET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kinouchi, Shoko [Chiba Univ. (Japan); National Institute of Radiological Sciences, Chiba (Japan); Yamaya, Taiga; Yoshida, Eiji; Tashima, Hideaki [National Institute of Radiological Sciences, Chiba (Japan); Kudo, Hiroyuki [Tsukuba Univ., Ibaraki (Japan); Suga, Mikio [Chiba Univ. (Japan)

    2011-07-01

    OpenPET, which has a physical gap between two detector rings, is our new PET geometry. In order to realize future radiation therapy guided by OpenPET, real-time imaging is required. Therefore, we developed a list-mode image reconstruction method using general-purpose graphics processing units (GPUs). For GPU implementation, the efficiency of the acceleration depends on the implementation method, which must avoid conditional statements. Therefore, in our previous study, we developed a new system model suited to GPU implementation. In this paper, we implemented our image reconstruction method using 4 GPUs to obtain further acceleration. We applied the developed reconstruction method to a small OpenPET prototype. The total iteration time using 4 GPUs was 3.4 times faster than using a single GPU. Compared to a single CPU, we achieved a reconstruction-time speed-up of 142 times using 4 GPUs. (orig.)

  1. The effect of education and supervised exercise vs. education alone on the time to total hip replacement in patients with severe hip osteoarthritis. A randomized clinical trial protocol

    DEFF Research Database (Denmark)

    Jensen, Carsten; Roos, Ewa M.; Kjærsgaard-Andersen, Per

    2013-01-01

    Background: The age- and gender-specific incidence of total hip replacement surgery has increased over the last two decades in all age groups. Recent studies indicate that non-surgical interventions are effective in reducing pain and disability, even at later stages of the disease when joint...... will receive 3 months of supervised exercise consisting of 12 sessions of individualized, goal-based neuromuscular training, and 12 sessions of intensive resistance training plus patient education (3 sessions). The control group will receive only patient education (3 sessions). The primary end...... measures are the five subscales of the Hip disability and Osteoarthritis Outcome Score, physical activity (UCLA activity score), and patient’s global perceived effect. Other measures include pain after exercise, joint-specific adverse events, exercise adherence, general health status (EQ-5D-5L), mechanical...

  2. Effect of fermentation time of mixture of solid and liquid wastes from tapioca industry to percentage reduction of TSS (Total Suspended Solids)

    Science.gov (United States)

    Pandia, S.; Tanata, S.; Rachel, M.; Octiva, C.; Sialagan, N.

    2018-02-01

    The waste from the tapioca industry is an organic waste that contains many important compounds such as carbohydrate, protein, and glucose. This research was aimed at determining the effect of fermentation time on the percentage reduction of TSS for solid waste combined with waste-water from the tapioca industry. The study was started by mixing the solid and liquid wastes from the tapioca industry at ratios of 70:30, 60:40, 50:50, 40:60, and 30:70 (w/w) with a starter from solid cattle waste in a batch anaerobic digester. The percentage reduction of TSS reached 72.2289% at a 70:30 ratio by weight of solid to liquid waste after 30 days of fermentation time.
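
    The percentage reduction reported above is the standard relative-change calculation; a minimal sketch follows, with hypothetical influent/effluent TSS concentrations, since the abstract reports only the final percentage:

```python
def tss_reduction_percent(tss_initial_mg_l: float, tss_final_mg_l: float) -> float:
    """Percentage reduction of Total Suspended Solids over the fermentation period."""
    return (tss_initial_mg_l - tss_final_mg_l) / tss_initial_mg_l * 100.0

# Hypothetical concentrations chosen to reproduce the reported 72.2289% reduction:
print(round(tss_reduction_percent(10000.0, 2777.11), 4))  # 72.2289
```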

  3. Building an application for computing the resource requests such as disk, CPU, and tape and studying the time evolution of computing model

    CERN Document Server

    Noormandipour, Mohammad Reza

    2017-01-01

    The goal of this project was building an application to calculate the computing resources needed by the LHCb experiment for data processing and analysis, and to predict their evolution in future years. The source code was developed in the Python programming language and the application built and developed in CERN GitLab. This application will facilitate the calculation of resources required by LHCb in both qualitative and quantitative aspects. The granularity of computations is improved to a weekly basis, in contrast with the yearly basis used so far. The LHCb computing model will benefit from the new possibilities and options added, as the new predictions and calculations are aimed at giving more realistic and accurate estimates.

  4. Laparoscopic total pancreatectomy

    Science.gov (United States)

    Wang, Xin; Li, Yongbin; Cai, Yunqiang; Liu, Xubao; Peng, Bing

    2017-01-01

    Abstract Rationale: Laparoscopic total pancreatectomy is a complicated surgical procedure that has rarely been reported. This study was conducted to investigate the safety and feasibility of laparoscopic total pancreatectomy. Patients and Methods: Three patients underwent laparoscopic total pancreatectomy between May 2014 and August 2015. We reviewed their general demographic data, perioperative details, and short-term outcomes. General morbidity was assessed using the Clavien–Dindo classification, and delayed gastric emptying (DGE) was evaluated by the International Study Group of Pancreatic Surgery (ISGPS) definition. Diagnosis and Outcomes: The indications for laparoscopic total pancreatectomy were intraductal papillary mucinous neoplasm (IPMN) (n = 2) and pancreatic neuroendocrine tumor (PNET) (n = 1). All patients underwent laparoscopic pylorus- and spleen-preserving total pancreatectomy; the mean operative time was 490 minutes (range 450–540 minutes) and the mean estimated blood loss was 266 mL (range 100–400 mL); 2 patients suffered postoperative complications. All the patients recovered uneventfully with conservative treatment and were discharged with a mean hospital stay of 18 days (range 8–24 days). The short-term (108 to 600 days) follow-up demonstrated that all 3 patients had normal and consistent glycated hemoglobin (HbA1c) levels with acceptable quality of life. Lessons: Laparoscopic total pancreatectomy is feasible and safe in selected patients, and the pylorus- and spleen-preserving technique should be considered. Further prospective randomized studies are needed to obtain a comprehensive understanding of the role of the laparoscopic technique in total pancreatectomy. PMID:28099344

  5. Technique to increase performance of C-program for control systems. Compiler technique for low-cost CPU; Seigyoyo C gengo program no kosokuka gijutsu. Tei cost CPU no tame no gengo compiler gijutsu

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, Y [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

    The software of automotive control systems has become increasingly large and complex. High-level languages (primarily C) and their compilers have become more important as a way to reduce coding time. Most compilers represent real numbers in the floating-point format specified by IEEE standard 754. For cost reasons, most microprocessors in the automotive industry have no hardware support for IEEE 754 arithmetic, resulting in slow execution and large code size. Alternative number formats are proposed to increase execution speed and reduce code size. Experimental results for the alternative formats show the improvement in execution speed and code size. 4 refs., 3 figs., 2 tabs.
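
    One common alternative of the kind the paper alludes to is fixed-point representation, which maps real numbers onto plain integers so that arithmetic needs no floating-point hardware. A minimal signed Q16.16 sketch (illustrative only; the paper does not specify its actual formats):

```python
# Signed Q16.16 fixed point: 16 integer bits, 16 fractional bits.
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fixed(x: float) -> int:
    return int(round(x * SCALE))

def from_fixed(q: int) -> float:
    return q / SCALE

def fixed_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back down to 16.
    return (a * b) >> FRAC_BITS

a, b = to_fixed(1.5), to_fixed(2.25)
print(from_fixed(a + b))            # 3.75  (addition is plain integer addition)
print(from_fixed(fixed_mul(a, b)))  # 3.375
```

    On a low-cost CPU without an FPU, addition and subtraction in this format compile to single integer instructions, which is the kind of execution-speed and code-size win the paper measures.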

  6. A sequential logic circuit for coincidences randomly distributed in 'time' and 'duration', with selection and total sampling

    International Nuclear Information System (INIS)

    Carnet, Bernard; Delhumeau, Michel

    1971-06-01

    The principles of binary analysis applied to the investigation of sequential circuits were used to design a two-way coincidence circuit whose inputs may be random or periodic variables of constant or variable duration. The output signal strictly reproduces the characteristics of the input signal triggering the coincidence. A coincidence between input signals does not produce any output signal if one of the signals has already triggered the output signal. The characteristics of the output signal in relation to those of the input signal are: minimum time jitter, excellent duration reproducibility and maximum efficiency. Some rules are given for achieving these results. The symmetry, transitivity and non-transitivity characteristics of the edges on the primitive graph are analyzed and lead to some rules for positioning the states on a secondary graph. It is from this graph that the equations of the circuits can be calculated. The development of the circuit and its dynamic testing are discussed. For this testing, the functioning of the circuit is simulated by feeding randomly generated signals into the input.

  7. Impact of mobile intensive care unit use on total ischemic time and clinical outcomes in ST-elevation myocardial infarction patients - real-world data from the Acute Coronary Syndrome Israeli Survey.

    Science.gov (United States)

    Koifman, Edward; Beigel, Roy; Iakobishvili, Zaza; Shlomo, Nir; Biton, Yitschak; Sabbag, Avi; Asher, Elad; Atar, Shaul; Gottlieb, Shmuel; Alcalai, Ronny; Zahger, Doron; Segev, Amit; Goldenberg, Ilan; Strugo, Rafael; Matetzky, Shlomi

    2017-01-01

    Ischemic time has prognostic importance in ST-elevation myocardial infarction patients. Mobile intensive care unit use can reduce components of total ischemic time by appropriate triage of ST-elevation myocardial infarction patients. Data from the Acute Coronary Survey in Israel registry 2000-2010 were analyzed to evaluate factors associated with mobile intensive care unit use and its impact on total ischemic time and patient outcomes. The study comprised 5474 ST-elevation myocardial infarction patients enrolled in the Acute Coronary Survey in Israel registry, of whom 46% (n=2538) arrived via mobile intensive care units. There was a significant increase in rates of mobile intensive care unit utilization, from 36% in 2000 to over 50% in 2010. Independent predictors of mobile intensive care unit use included Killip class >1 (odds ratio=1.32). Patients arriving via mobile intensive care units benefitted from increased rates of primary reperfusion therapy (odds ratio=1.58) and from a shorter median total ischemic time compared with non-mobile intensive care unit patients (175 (interquartile range 120-262) vs 195 (interquartile range 130-333) min, respectively). Mobile intensive care unit use was the most important predictor of achieving target door-to-balloon times and was associated with lower one-year adjusted mortality (odds ratio=0.79, 95% confidence interval 0.66-0.94, p=0.01). Among patients with ST-elevation myocardial infarction, the utilization of mobile intensive care units is associated with increased rates of primary reperfusion, a reduction in the time interval to reperfusion, and a reduction in one-year adjusted mortality.

  8. Control of total voltage in the large distributed RF system of LEP

    CERN Document Server

    Ciapala, Edmond

    1995-01-01

    The LEP RF system is made up of a large number of independent RF units situated around the ring near the interaction points. These have different available RF voltages depending on their type, and they may be inactive or unable to provide full voltage for certain periods. The original RF voltage control system was based on local RF unit voltage function generators pre-loaded with individual tables for energy ramping. This was replaced this year by a more flexible global RF voltage control system. A central controller in the main control room has direct access to the units over the LEP TDM system via multiplexers and local serial links. It continuously checks the state of all the units and adjusts their voltages to maintain the desired total voltage under all conditions. This voltage is distributed among the individual units to reduce, as far as possible, the adverse effects of RF voltage asymmetry around the machine. The central controller is a VME system with a 68040 CPU and a real-time multitasking operating system...

  9. Trends in television and computer/videogame use and total screen time in high school students from Caruaru city, Pernambuco, Brazil: A repeated panel study between 2007 and 2012

    Directory of Open Access Journals (Sweden)

    Luis José Lagos Aros

    2018-01-01

    Full Text Available Abstract Aim: To analyze the pattern and trends of use of screen-based devices and associated factors from two surveys conducted on public high school students in Caruaru-PE. Methods: Two representative school-based cross-sectional surveys conducted in 2007 (n=600) and 2012 (n=715) on high school students (15-20 years old). The time of exposure to television (TV) and computer/videogames (PC/VG) was obtained through a validated questionnaire, and ≥3 hours/day was considered excessive exposure. The independent variables were socioeconomic status, school-related factors, and physical activity. Crude and adjusted binary logistic regression were employed to examine the factors associated with screen time. Statistical significance was set at p<0.05. Results: There was a significant reduction in TV time on weekdays and in total weekly TV time, but no change in the prevalence of excessive exposure. The proportion of exposure to PC/VG of ≥3 hours/day increased 182.5% on weekdays and 69.5% on weekends (p<0.05). In 2007, being physically active was the only protective factor against excessive exposure to total screen time. In 2012, girls presented less chance of excessive exposure to all screen-based devices and total screen time. Other protective factors were studying at night and being physically active (PC/VG time), while residing in an urban area [OR 5.03 (2.77-7.41)] and having higher family income [OR 1.55 (1.04-2.30)] were risk factors. Conclusion: Significant and important changes in the time trends and pattern of PC/VG use were observed over the 5-year interval. This rapid increase could be associated with increased family income and improved access to these devices, driven by technological developments.

  10. Total volume versus bouts

    DEFF Research Database (Denmark)

    Chinapaw, Mai; Klakk, Heidi; Møller, Niels Christian

    2018-01-01

    BACKGROUND/OBJECTIVES: Examine the prospective relationship of total volume versus bouts of sedentary behaviour (SB) and moderate-to-vigorous physical activity (MVPA) with cardiometabolic risk in children. In addition, the moderating effects of weight status and MVPA were explored. SUBJECTS....../METHODS: Longitudinal study including 454 primary school children (mean age 10.3 years). Total volume and bouts (i.e. ≥10 min consecutive minutes) of MVPA and SB were assessed by accelerometry in Nov 2009/Jan 2010 (T1) and Aug/Oct 2010 (T2). Triglycerides, total cholesterol/HDL cholesterol ratio (TC:HDLC ratio......, with or without mutual adjustments between MVPA and SB. The moderating effects of weight status and MVPA (for SB only) were examined by adding interaction terms. RESULTS: Children engaged daily in about 60 min of total MVPA and 0-15 min/week in MVPA bouts. Mean total sedentary time was around 7 h/day with over 3...

  11. Application of queueing models to multiprogrammed computer systems operating in a time-critical environment

    Science.gov (United States)

    Eckhardt, D. E., Jr.

    1979-01-01

    A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
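
    A rough way to see the effect described is to treat the background work as an M/M/1 queue whose effective service rate is reduced by the fraction of the CPU the periodic time-critical task consumes. This is a simplifying illustration, not the paper's Laplace-transform analysis:

```python
def mm1_mean_response_time(lam: float, mu: float) -> float:
    """Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)."""
    if lam >= mu:
        raise ValueError("queue is unstable (lambda >= mu)")
    return 1.0 / (mu - lam)

lam, mu = 2.0, 10.0   # background arrivals/s and service completions/s
mu_eff = 6.0          # mu * (1 - u) with u = 0.4 of the CPU taken by interrupts

print(mm1_mean_response_time(lam, mu))      # 0.125 s without the time-critical load
print(mm1_mean_response_time(lam, mu_eff))  # 0.25 s with 40% of the CPU stolen
```

    The degradation is nonlinear: stealing 40% of the capacity here doubles the mean response time of background jobs, which is why queueing models rather than simple utilization accounting are needed.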

  12. Qualità totale e mobilità totale Total Quality and Total Mobility

    Directory of Open Access Journals (Sweden)

    Giuseppe Trieste

    2010-05-01

    Full Text Available FIABA ONLUS (Italian Fund for Elimination of Architectural Barriers was founded in 2000 with the aim of promoting a culture of equal opportunities and, above all, it has as its main goal to involve public and private institutions to create a really accessible and usable environment for everyone. Total accessibility, Total usability and Total mobility are key indicators to define quality of life within cities. A supportive environment that is free of architectural, cultural and psychological barriers allows everyone to live with ease and universality. In fact, people who access to goods and services in the urban context can use to their advantage time and space, so they can do their activities and can maintain relationships that are deemed significant for their social life. The main aim of urban accessibility is to raise the comfort of space for citizens, eliminating all barriers that discriminate people, and prevent from an equality of opportunity. “FIABA FUND - City of ... for the removal of architectural barriers” is an idea of FIABA that has already affected many regions of Italy as Lazio, Lombardy, Campania, Abruzzi and Calabria. It is a National project which provides for opening a bank account in the cities of referring, in which for the first time, all together, individuals and private and public institutions can make a donation to fund initiatives for the removal of architectural barriers within its own territory for a real and effective total accessibility. Last February the fund was launched in Rome with the aim of achieving a Capital without barriers and a Town European model of accessibility and usability. Urban mobility is a prerequisite to access to goods and services, and to organize activities related to daily life. FIABA promotes the concept of sustainable mobility for all, supported by the European Commission’s White Paper. We need a cultural change in management and organization of public means, which might focus on

  13. Decreasing the number of small eating occasions (<15% of total energy intake) regardless of the time of day may be important to improve diet quality but not adiposity: a cross-sectional study in British children and adolescents.

    Science.gov (United States)

    Murakami, Kentaro; Livingstone, M Barbara E

    2016-01-28

    Evidence of associations between meal frequency (MF) and snack frequency (SF) and diet and obesity in young populations is limited. This cross-sectional study examined MF and SF in relation to dietary intake and adiposity measures in British children aged 4-10 years (n 818) and adolescents aged 11-18 years (n 818). Based on data from a 7-d weighed dietary record, all eating occasions were divided into meals or snacks on the basis of contribution to energy intake (≥15% or <15%, respectively) or time of day. Higher SF was associated with higher intakes of confectionery and total sugar, lower intakes of cereals, fish, meat, protein, PUFA, starch and dietary fibre, and a lower diet quality (assessed by the Mediterranean diet score), except for SF based on energy contribution in adolescents. MF based on time, but not based on energy contribution, was associated with higher intakes of confectionery and total sugar, lower intakes of fish, protein, PUFA and starch, and, only in children, a lower diet quality. All measures of MF and SF showed no association with adiposity measures. In conclusion, this cross-sectional study in British children and adolescents suggests that decreasing the number of small eating occasions (<15% of total energy intake) regardless of the time of day may be important to improve diet quality but not adiposity.

  14. Determining the better solvent and time for extracting soil by soxhlet in TPH (Total Petroleum Hydrocarbon) gravimetric method; A determinacao de qual o melhor solvente e o melhor tempo de extracao de sedimento em aparato Soxhlet na metodologia do TPH (Total Petroleum Hydrocarbon) gravimetrico

    Energy Technology Data Exchange (ETDEWEB)

    Koike, Renato S.; Lima, Guilherme; Baisch, Paulo R. [Fundacao Universidade Federal do Rio Grande (FURG), RS (Brazil)

    2004-07-01

    There are several methods for TPH (Total Petroleum Hydrocarbons) analysis of petroleum hydrocarbon contaminants in sediment. The gravimetric TPH method has been widely used in many studies and in oil-spill monitoring. The present work examined three different solvents (DCM, DCM/N-HEX and N-HEX) at three different extraction times, with the purpose of optimizing contaminant extraction using USEPA reference methods 9071 and 3540. Total Organic Carbon (TOC) analyses were then performed to monitor the reproducibility of the extractions. The sediment used in this experiment was collected on Cavalos Island, located in the city of Rio Grande, RS, Brazil. The sediment was 'washed' and then contaminated with petroleum. The extractions were performed in a Soxhlet apparatus at three different times (4, 8 and 12 hours), and TOC analyses were performed before and after extraction. The results demonstrated that eight hours with the DCM/N-HEX solvent is the most suitable for gravimetric TPH analysis of sediments with high concentrations of petroleum hydrocarbons. TOC analysis proved inappropriate for monitoring extraction reproducibility. (author)

  15. Adaptive real-time methodology for optimizing energy-efficient computing

    Science.gov (United States)

    Hsu, Chung-Hsing [Los Alamos, NM; Feng, Wu-Chun [Blacksburg, VA

    2011-06-28

    Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
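
    The idea of scaling frequency based on workload sensitivity can be sketched with a hypothetical linear runtime model T(f) = T_cpu * (f_max / f) + T_mem, in which only the CPU-bound portion slows with frequency. This is an assumption for illustration, not the patented method:

```python
def pick_frequency(freqs_ghz, t_cpu, t_mem, max_slowdown=1.05):
    """Return the lowest frequency whose predicted runtime stays within
    max_slowdown of the runtime at full frequency (all inputs hypothetical)."""
    f_max = max(freqs_ghz)
    t_base = t_cpu + t_mem                 # runtime at f_max under the model
    for f in sorted(freqs_ghz):            # try the slowest settings first
        t_pred = t_cpu * (f_max / f) + t_mem
        if t_pred <= max_slowdown * t_base:
            return f                       # lowest acceptable frequency
    return f_max

# Memory-bound workload: 1 s of CPU work, 9 s stalled on memory.
print(pick_frequency([1.0, 1.5, 2.0], t_cpu=1.0, t_mem=9.0))  # 1.5
```

    The memory-bound example shows the performance-sensitivity point: because most of the runtime is frequency-insensitive, the CPU can be clocked down substantially at almost no performance cost.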

  16. Volumetry based biomarker speed of growth: Quantifying the change of total tumor volume in whole-body magnetic resonance imaging over time improves risk stratification of smoldering multiple myeloma patients.

    Science.gov (United States)

    Wennmann, Markus; Kintzelé, Laurent; Piraud, Marie; Menze, Bjoern H; Hielscher, Thomas; Hofmanninger, Johannes; Wagner, Barbara; Kauczor, Hans-Ulrich; Merz, Maximilian; Hillengass, Jens; Langs, Georg; Weber, Marc-André

    2018-05-18

    The purpose of this study was to improve risk stratification of smoldering multiple myeloma patients, introducing new 3D-volumetry based imaging biomarkers derived from whole-body MRI. Two-hundred twenty whole-body MRIs from 63 patients with smoldering multiple myeloma were retrospectively analyzed and all focal lesions >5 mm were manually segmented for volume quantification. The imaging biomarkers total tumor volume, speed of growth (development of the total tumor volume over time), number of focal lesions, development of the number of focal lesions over time and the recent imaging biomarker '>1 focal lesion' of the International Myeloma Working Group were compared, taking 2-year progression rate, sensitivity and false positive rate into account. Speed of growth, using a cutoff of 114 mm³/month, was able to isolate a high-risk group with a 2-year progression rate of 82.5%. Additionally, it showed by far the highest sensitivity in this study and in comparison to other biomarkers in the literature, detecting 63.2% of patients who progress within 2 years. Furthermore, its false positive rate (8.7%) was much lower compared to the recent imaging biomarker '>1 focal lesion' of the International Myeloma Working Group. Therefore, speed of growth is the preferable imaging biomarker for risk stratification of smoldering multiple myeloma patients.
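
    The speed-of-growth biomarker itself is a simple rate: the change in segmented total tumor volume between two MRIs divided by the elapsed time, thresholded at the reported cutoff of 114 mm³/month. A sketch with invented example values:

```python
CUTOFF_MM3_PER_MONTH = 114.0  # cutoff reported in the abstract

def speed_of_growth(vol_t1_mm3: float, vol_t2_mm3: float, months_between: float) -> float:
    """Change in total tumor volume per month between two MRI time points."""
    return (vol_t2_mm3 - vol_t1_mm3) / months_between

def is_high_risk(growth_mm3_per_month: float) -> bool:
    return growth_mm3_per_month > CUTOFF_MM3_PER_MONTH

# Hypothetical patient: total tumor volume grows from 1200 to 2700 mm^3 in 6 months.
g = speed_of_growth(1200.0, 2700.0, 6.0)
print(g)                # 250.0 mm^3/month
print(is_high_risk(g))  # True
```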

  17. Assessing the acidity and total sugar content of four different commercially available beverages commonly consumed by children and its time-dependent effect on plaque and salivary pH

    Directory of Open Access Journals (Sweden)

    Abhishek Jha

    2015-01-01

    Full Text Available Introduction: Sugared beverages such as cola and packaged juice are known for their cariogenicity; their intake leads to an immediate drop in plaque and salivary pH, which can be an etiologic factor for dental caries. Objective: The objective was to assess the endogenous acidity and total sugar content of four commercially available beverages commonly consumed by children in India and their effect on salivary and plaque pH. Materials and Methods: A crossover controlled trial was conducted. 60 randomly selected school children from a school in south Bangalore, who met the inclusion criteria, were asked to refrain from oral hygiene practices for the 24 h before sample collection. Children were divided into four groups, and each group was given a test drink. Plaque and salivary samples were collected at 2, 5, 10, 20, and 30 min and were sent for pH estimation. A washout time of 7 days was given for each cross-over, 3 such cross-overs were done during the study, and the drinks were interchanged. Results: Sweet lassi was found to have the maximum total sugar content, and Coca-Cola had the lowest pH (5.3). Milk showed the least sugar content and the highest pH (6.7). The study showed a significant drop in pH after consumption of all the test drinks (P = 0.05). The carbonated beverage, Coca-Cola, showed the maximum drop in pH, followed by Pulpy Orange, in both plaque and saliva. Coca-Cola dropped plaque pH below the critical level, to 5.44 (0.134). Conclusion: Sweet lassi showed the maximum inherent total sugar content, while the lowest inherent pH and the maximum fall in plaque and salivary pH were found with Coca-Cola.

  18. Total process surveillance: (TOPS)

    International Nuclear Information System (INIS)

    Millar, J.H.P.

    1992-01-01

    A Total Process Surveillance system is under development which can provide, in real time, additional process information from a limited number of raw measurement signals. This is achieved by using a robust model-based observer to generate estimates of the process's internal states. The observer utilises the analytical redundancy among a diverse range of transducers and can thus accommodate off-normal conditions which lead to transducer loss or damage. The modular hierarchical structure of the system enables the maximum amount of information to be assimilated from the available instrument signals, no matter how diverse. This structure also constitutes a data reduction path, thus reducing operator cognitive overload from a large number of varying, and possibly contradictory, raw plant signals. (orig.)
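
    The core idea of a model-based observer can be illustrated with the simplest possible case: a scalar Luenberger observer that blends a plant model with measurement feedback so the internal state can be estimated. All numbers are illustrative and unrelated to the actual TOPS design:

```python
def simulate(a=-0.5, L=2.0, dt=0.01, steps=1000, x0=1.0, xhat0=0.0):
    """Euler simulation of plant x' = a*x with measurement y = x, and an
    observer xhat' = a*xhat + L*(y - xhat) correcting toward the measurement."""
    x, xhat = x0, xhat0
    for _ in range(steps):
        y = x                                      # measurement of the true state
        x += a * x * dt                            # true plant dynamics
        xhat += (a * xhat + L * (y - xhat)) * dt   # model prediction + correction
    return x, xhat

x, xhat = simulate()
print(abs(x - xhat) < 1e-3)  # estimate has converged to the true state: True
```

    The correction gain L makes the estimation error decay even though the observer starts from the wrong initial state; with several redundant measurements, the same structure lets the estimate survive the loss of any one transducer, which is the analytical-redundancy argument in the abstract.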

  19. Total ankle joint replacement.

    Science.gov (United States)

    2016-02-01

    Ankle arthritis results in a stiff and painful ankle and can be a major cause of disability. For people with end-stage ankle arthritis, arthrodesis (ankle fusion) is effective at reducing pain in the shorter term, but results in a fixed joint, and over time the loss of mobility places stress on other joints in the foot that may lead to arthritis, pain and dysfunction. Another option is to perform a total ankle joint replacement, with the aim of giving the patient a mobile and pain-free ankle. In this article we review the efficacy of this procedure, including how it compares to ankle arthrodesis, and consider the indications and complications. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  20. The Research and Test of Fast Radio Burst Real-time Search Algorithm Based on GPU Acceleration

    Science.gov (United States)

    Wang, J.; Chen, M. Z.; Pei, X.; Wang, Z. Q.

    2017-03-01

    In order to satisfy the research needs of the Nanshan 25 m radio telescope of Xinjiang Astronomical Observatory (XAO) and study the key technology of the planned QiTai radio Telescope (QTT), the receiver group of XAO studied a GPU (Graphics Processing Unit) based real-time FRB searching algorithm, developed from the original CPU (Central Processing Unit) based FRB searching algorithm, and built the FRB real-time searching system. The comparison of the GPU system and the CPU system shows that, while maintaining search accuracy, the GPU-accelerated algorithm is 35-45 times faster than the CPU algorithm.

  1. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    International Nuclear Information System (INIS)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-01-01

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations each core calculates one photon, so a large number of photons were calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU performed about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
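The reason one photon per core works is that Monte Carlo histories are statistically independent. The toy sketch below is not MC-GPU; it is a one-dimensional slab with made-up attenuation values, showing a batch of independent photon histories reproducing the analytic Beer-Lambert transmission:

```python
import numpy as np

# Toy Monte Carlo photon-transmission sketch (illustrative, not MC-GPU):
# each history is independent, which is why such codes can map one photon
# per GPU core and run huge batches of histories simultaneously.

def transmitted_fraction(n_photons, mu_cm, thickness_cm, rng):
    """Fraction of photons crossing a homogeneous slab without interacting."""
    # free path lengths are exponentially distributed with mean 1/mu
    paths = rng.exponential(1.0 / mu_cm, n_photons)
    return np.count_nonzero(paths > thickness_cm) / n_photons

rng = np.random.default_rng(42)
mu, t = 0.2, 5.0                     # attenuation coefficient (1/cm), thickness (cm)
est = transmitted_fraction(10**6, mu, t, rng)
exact = np.exp(-mu * t)              # Beer-Lambert analytic answer
err = abs(est - exact)               # statistical error shrinks like 1/sqrt(N)
```

The 1/sqrt(N) convergence is also why the abstract's image quality saturates only around 10^8 histories: each extra decade of photons buys roughly a 3x noise reduction.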

  2. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

    Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)

    2016-03-11

    Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 cores and 2304 cores. In the GPU simulations each core calculates one photon, so a large number of photons were calculated simultaneously. Results show that the simulations on the GPU were significantly accelerated compared to the CPU. The simulations on the 2304-core GPU performed about 64-114 times faster than on the CPU, while the simulations on the 384-core GPU performed about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality was obtained with the number of histories starting from 10^8 and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  3. Efficacy of treatment of insomnia in migraineurs with eszopiclone (Lunesta®) and its effect on total sleep time, headache frequency, and daytime functioning: A randomized, double-blind, placebo-controlled, parallel-group, pilot study.

    Science.gov (United States)

    Spierings, Egilius L H; McAllister, Peter J; Bilchik, Tanya R

    2015-04-01

    A review on headache and insomnia revealed that insomnia is a risk factor for increased headache frequency and headache intensity in migraineurs. The authors designed a randomized, double-blind, placebo-controlled, parallel-group, pilot study enrolling migraineurs who also had insomnia, to test this observation. In the study, the authors treated 79 subjects with IHS-II migraine with and/or without aura and with DSM-IV primary insomnia for 6 weeks with 3 mg eszopiclone (Lunesta(®)) or placebo at bedtime. The treatment was preceded by a 2-week baseline period and followed by a 2-week run-out period. Of the 79 subjects treated, 75 were evaluable: 35 in the eszopiclone group and 40 in the placebo group. At baseline, the groups were comparable except for sleep latency. Of the three remaining sleep variables (total sleep time, nighttime awakenings, and sleep quality), the number of nighttime awakenings during the 6-week treatment period was significantly lower in the eszopiclone group than in the placebo group (P = 0.03). Of the three daytime variables (alertness, fatigue, and functioning), this was also the case for fatigue (P = 0.005). The headache variables (frequency, duration, and intensity) did not differ from placebo during the 6-week treatment period. The study did not meet its primary endpoint; that is, the difference in total sleep time during the 6-week treatment period between eszopiclone and placebo was less than 40 minutes. Therefore, it failed to answer the question as to whether insomnia is, indeed, a risk factor for increased headache frequency and headache intensity in migraineurs.

  4. Reduction of digestion time in the determination of total nitrogen in soils

    Directory of Open Access Journals (Sweden)

    Flávio Verlengia

    1968-01-01

    Using the Kjeldahl method for the determination of total nitrogen in soils, the effect of various catalysts on digestion time and on possible nitrogen losses was studied. The experiment compared the catalysts CuSO4.5H2O, HgO, and Se in six treatments, with digestion times ranging from 10 to 960 minutes (16 hours). Results indicated that a pronounced reduction in digestion time was obtained by using selenium as the catalyst. The best results, however, were obtained with a mixture of selenium and mercury oxide, particularly for soils that are very difficult to digest (an organic soil and a "terra roxa" soil). In all treatments CuSO4.5H2O was the least efficient. The use of selenium did not cause loss of nitrogen during digestion.

  5. Timing comparison of two-dimensional discrete-ordinates codes for criticality calculations

    International Nuclear Information System (INIS)

    Miller, W.F. Jr.; Alcouffe, R.E.; Bosler, G.E.; Brinkley, F.W. Jr.; O'dell, R.D.

    1979-01-01

    The authors compare two-dimensional discrete-ordinates neutron transport computer codes to solve reactor criticality problems. The fundamental interest is in determining which code requires the minimum Central Processing Unit (CPU) time for a given numerical model of a reasonably realistic fast reactor core and peripherals. The computer codes considered are the most advanced available and, in three cases, are not officially released. The conclusion, based on the study of four fast reactor core models, is that for this class of problems the diffusion synthetic accelerated version of TWOTRAN, labeled TWOTRAN-DA, is superior to the other codes in terms of CPU requirements

  6. Total parenteral nutrition - infants

    Science.gov (United States)

    Total parenteral nutrition (TPN) is a method of feeding that bypasses ...

  7. Total parenteral nutrition

    Science.gov (United States)

    Total parenteral nutrition (TPN) is a method of feeding that bypasses ...

  8. Technique of total thyroidectomy

    International Nuclear Information System (INIS)

    Rao, R.S.

    1999-01-01

    It is essential to define the various surgical procedures that are carried out for carcinoma of the thyroid gland. They are subtotal lobectomy, total thyroidectomy and near-total thyroidectomy.

  9. Total iron binding capacity

    Science.gov (United States)

    Total iron binding capacity (TIBC) is a blood test to ...

  10. Total well dominated trees

    DEFF Research Database (Denmark)

    Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.

    If all minimal total dominating sets of a graph G have the same cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees.

  11. Oxidation characteristics of porous-nickel prepared by powder metallurgy and cast-nickel at 1273 K in air for total oxidation time of 100 h

    Directory of Open Access Journals (Sweden)

    Lamiaa Z. Mohamed

    2017-11-01

    The oxidation behavior of two types of inhomogeneous nickel was investigated in air at 1273 K for a total oxidation time of 100 h. The two types were porous sintered-nickel and microstructurally inhomogeneous cast-nickel. The porous-nickel samples were fabricated by compacting Ni powder followed by sintering in vacuum at 1473 K for 2 h. The oxidation kinetics of the samples was determined gravimetrically. The topography and the cross-section microstructure of each oxidized sample were observed using optical and scanning electron microscopy. X-ray diffractometry and X-ray energy dispersive analysis were used to determine the nature of the formed oxide phases. The kinetic results revealed that the porous-nickel samples showed a greater tendency toward irreproducibility. The average oxidation rate for the porous- and cast-nickel samples was initially rapid, and then decreased gradually to become linear. The linear rate constants were 5.5 × 10−8 g/cm2·s and 3.4 × 10−8 g/cm2·s for the porous- and cast-nickel samples, respectively. Initially a single porous, non-adherent NiO layer was observed on the porous- and cast-nickel samples. After a longer oxidation time, a non-adherent duplex NiO scale was formed, whose two layers differed in color. NiO particles were observed in most of the pores of the porous-nickel samples. Finally, the linear oxidation kinetics and the formation of porous, non-adherent duplex oxide scales on the inhomogeneous nickel substrates demonstrated that the addition of new layers of NiO occurred at the scale/metal interface, due to the thermodynamically possible reaction between Ni and the molecular oxygen migrating inwardly.
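For readers unfamiliar with how a linear rate constant such as 5.5 × 10−8 g/cm2·s is obtained gravimetrically, the sketch below fits the slope of specific mass gain versus time. The data points are synthetic, generated around an assumed k value; they are not the paper's measurements.

```python
import numpy as np

# Sketch of extracting a linear oxidation-rate constant from gravimetric
# data (synthetic numbers, not the paper's): in the linear regime the
# specific mass gain obeys dm/A = k_l * t, so k_l is the slope of
# mass-gain-per-area versus time.

t_s = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0]) * 3600.0  # hours -> s
k_true = 5.5e-8                                                # g/cm^2/s, assumed
rng = np.random.default_rng(1)
dm_a = k_true * t_s + rng.normal(0.0, 1e-4, t_s.size)          # noisy g/cm^2 data

# least-squares straight line through the data; the slope is k_l
k_fit, intercept = np.polyfit(t_s, dm_a, 1)
```

A parabolic (protective-scale) regime would instead show (dm/A)^2 linear in t; the linearity found here is what supports the conclusion that the duplex scale is non-protective.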

  12. Oxidation characteristics of porous-nickel prepared by powder metallurgy and cast-nickel at 1273 K in air for total oxidation time of 100 h.

    Science.gov (United States)

    Mohamed, Lamiaa Z; Ghanem, Wafaa A; El Kady, Omayma A; Lotfy, Mohamed M; Ahmed, Hafiz A; Elrefaie, Fawzi A

    2017-11-01

    The oxidation behavior of two types of inhomogeneous nickel was investigated in air at 1273 K for a total oxidation time of 100 h. The two types were porous sintered-nickel and microstructurally inhomogeneous cast-nickel. The porous-nickel samples were fabricated by compacting Ni powder followed by sintering in vacuum at 1473 K for 2 h. The oxidation kinetics of the samples was determined gravimetrically. The topography and the cross-section microstructure of each oxidized sample were observed using optical and scanning electron microscopy. X-ray diffractometry and X-ray energy dispersive analysis were used to determine the nature of the formed oxide phases. The kinetic results revealed that the porous-nickel samples showed a greater tendency toward irreproducibility. The average oxidation rate for the porous- and cast-nickel samples was initially rapid, and then decreased gradually to become linear. The linear rate constants were 5.5 × 10−8 g/cm2·s and 3.4 × 10−8 g/cm2·s for the porous- and cast-nickel samples, respectively. Initially a single porous, non-adherent NiO layer was observed on the porous- and cast-nickel samples. After a longer oxidation time, a non-adherent duplex NiO scale was formed, whose two layers differed in color. NiO particles were observed in most of the pores of the porous-nickel samples. Finally, the linear oxidation kinetics and the formation of porous, non-adherent duplex oxide scales on the inhomogeneous nickel substrates demonstrated that the addition of new layers of NiO occurred at the scale/metal interface, due to the thermodynamically possible reaction between Ni and the molecular oxygen migrating inwardly.

  13. A new approach for global synchronization in hierarchical scheduled real-time systems

    NARCIS (Netherlands)

    Behnam, M.; Nolte, T.; Bril, R.J.

    2009-01-01

    We present our ongoing work to improve an existing synchronization protocol SIRAP for hierarchically scheduled real-time systems. A less pessimistic schedulability analysis is presented which can make the SIRAP protocol more efficient in terms of calculated CPU resource needs. In addition and for

  14. Real-Time Generic Face Tracking in the Wild with CUDA

    NARCIS (Netherlands)

    Cheng, Shiyang; Asthana, Akshay; Asthana, Ashish; Zafeiriou, Stefanos; Shen, Jie; Pantic, Maja

    We present a robust real-time face tracking system based on the Constrained Local Models framework by adopting the novel regression-based Discriminative Response Map Fitting (DRMF) method. By exploiting the algorithm's potential parallelism, we present a hybrid CPU-GPU implementation capable of

  15. Real-time global illumination on mobile device

    Science.gov (United States)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method over 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy which collaboratively combines the CPUs and GPUs available in a mobile SoC, given the limited computing resources of mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
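The virtual-point-light idea can be illustrated in a few lines of NumPy. The sketch below is not the paper's renderer: it scatters hypothetical VPLs over a lit diffuse wall patch and sums their unoccluded contributions at one receiver point, showing why fewer VPLs are cheaper but noisier, which is the trade-off the paper's subsampling manages.

```python
import numpy as np

# Minimal instant-radiosity sketch (illustrative geometry and numbers):
# indirect light is approximated by virtual point lights (VPLs) placed
# where the primary light hits the scene, here a unit wall patch at x = 0.

def make_vpls(rng, n, albedo, flux):
    """Scatter n VPLs over a unit wall patch at x=0 facing +x."""
    ys, zs = rng.random(n), rng.random(n)
    pos = np.stack([np.zeros(n), ys, zs], axis=1)
    power = np.full(n, albedo * flux / n)   # split the reflected flux evenly
    normal = np.array([1.0, 0.0, 0.0])
    return pos, power, normal

def indirect_at(point, pos, power, normal):
    """Sum VPL contributions with cosine / distance^2 falloff (no occlusion)."""
    d = point - pos
    r2 = (d * d).sum(axis=1)
    cos_vpl = np.clip((d @ normal) / np.sqrt(r2), 0.0, None)
    return float((power * cos_vpl / (4.0 * np.pi * r2)).sum())

rng = np.random.default_rng(7)
receiver = np.array([1.0, 0.5, 0.5])        # 1 m in front of the wall
few = indirect_at(receiver, *make_vpls(rng, 8, 0.5, 100.0))     # mobile budget
many = indirect_at(receiver, *make_vpls(rng, 4096, 0.5, 100.0)) # reference
```

Both estimates target the same integral; the 8-VPL result is just a noisier sample of it, which is exactly the budget/quality knob a mobile renderer turns.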

  16. Total Quality Leadership

    Science.gov (United States)

    1991-01-01

    More than 750 NASA, government, contractor, and academic representatives attended the Seventh Annual NASA/Contractors Conference on Quality and Productivity. The panel presentations and keynote speeches revolving around the theme of total quality leadership provided a solid base of understanding of the importance, benefits, and principles of total quality management (TQM). The presentations from the conference are summarized.

  17. Genoptraening efter total knaealloplastik

    DEFF Research Database (Denmark)

    Holm, Bente; Kehlet, Henrik

    2009-01-01

    The short- and long-term benefits of post-discharge physiotherapy regimens after total knee arthroplasty are debatable. A national survey including hospitals in Denmark that perform total knee arthroplasty showed a large variability in indication and regimen for post-knee arthroplasty rehabilitation.

  18. Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization

    Science.gov (United States)

    Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan

    This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP), which avoids a time-consuming linearization process, is applied to select the most profitable data pages. A Virtual Memory System (VMS) is adopted to remap those data pages, which would cause severe Cache conflicts within a time slot, to SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without CPU intervention. Last but not least, this paper quantitatively discusses the fluctuation of system energy profit for different MMU page sizes and time slot durations. According to our design-space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. And compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain a 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.
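The page-selection step can be illustrated with a simple stand-in. The paper formulates it as integer nonlinear programming; the sketch below instead uses a greedy profit-density heuristic with invented page names and energy numbers, only to show the shape of the decision (cache-conflict energy saved minus DMA swap cost, under the SPM capacity).

```python
# Illustrative greedy stand-in for the paper's INP page selection
# (hypothetical pages and energy figures): for one time slot, pick the
# data pages whose net energy profit per byte is highest until the
# scratch-pad memory is full.

def select_pages(pages, spm_capacity):
    """pages: list of (name, size_bytes, conflict_energy, swap_cost)."""
    scored = []
    for name, size, conflict_energy, swap_cost in pages:
        profit = conflict_energy - swap_cost      # net energy saved by remapping
        if profit > 0:                            # never swap a page at a loss
            scored.append((profit / size, profit, size, name))
    chosen, used, saved = [], 0, 0
    for _, profit, size, name in sorted(scored, reverse=True):
        if used + size <= spm_capacity:           # respect SPM capacity
            chosen.append(name)
            used += size
            saved += profit
    return chosen, saved

pages = [("heap_a", 4096, 900, 100),   # conflict-heavy heap page, cheap swap
         ("stack",  4096, 500, 50),
         ("glob_b", 8192, 600, 400),   # big page, mediocre density
         ("cold",   4096, 120, 200)]   # swapping costs more than it saves
chosen, saved = select_pages(pages, spm_capacity=8192)
```

An INP solver explores such trade-offs exactly (including interactions a greedy pass ignores), but the objective it optimizes has this same profit-minus-swap-cost structure.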

  19. Estonian total ozone climatology

    Directory of Open Access Journals (Sweden)

    K. Eerme

    Full Text Available The climatological characteristics of total ozone over Estonia based on the Total Ozone Mapping Spectrometer (TOMS data are discussed. The mean annual cycle during 1979–2000 for the site at 58.3° N and 26.5° E is compiled. The available ground-level data interpolated before TOMS, have been used for trend detection. During the last two decades, the quasi-biennial oscillation (QBO corrected systematic decrease of total ozone from February–April was 3 ± 2.6% per decade. Before 1980, a spring decrease was not detectable. No decreasing trend was found in either the late autumn ozone minimum or in the summer total ozone. The QBO related signal in the spring total ozone has an amplitude of ± 20 DU and phase lag of 20 months. Between 1987–1992, the lagged covariance between the Singapore wind and the studied total ozone was weak. The spring (April–May and summer (June–August total ozone have the best correlation (coefficient 0.7 in the yearly cycle. The correlation between the May and August total ozone is higher than the one between the other summer months. Seasonal power spectra of the total ozone variance show preferred periods with an over 95% significance level. Since 1986, during the winter/spring, the contribution period of 32 days prevails instead of the earlier dominating 26 days. The spectral densities of the periods from 4 days to 2 weeks exhibit high interannual variability.

    Key words. Atmospheric composition and structure (middle atmosphere - composition and chemistry; volcanic effects) - Meteorology and atmospheric dynamics (climatology)

  20. Total photon absorption

    International Nuclear Information System (INIS)

    Carlos, P.

    1985-06-01

    The present discussion is limited to a presentation of the most recent total photonuclear absorption experiments performed with real photons at intermediate energy, more precisely in the region of nucleon resonances. The main sources of real photons are briefly reviewed, as are the experimental procedures used for total photonuclear absorption cross-section measurements. The main results obtained below 140 MeV photon energy as well as above 2 GeV are recalled. The experimental study of total photonuclear absorption in the nucleon resonance region (140 MeV < E < 2 GeV) is still at its beginning, and some results are presented.

  1. [Total artificial heart].

    Science.gov (United States)

    Antretter, H; Dumfarth, J; Höfer, D

    2015-09-01

    To date the CardioWest™ total artificial heart is the only clinically available implantable biventricular mechanical replacement for irreversible cardiac failure. This article presents the indications, contraindications, implantation procedure, and postoperative treatment. In addition to an overview of the applications of the total artificial heart, this article gives a brief presentation of the two patients treated in our department with the CardioWest™. The clinical course, postoperative rehabilitation, device-related complications, and control mechanisms are presented. The total artificial heart is a reliable implant for treating critically ill patients with irreversible cardiogenic shock. Bridging to transplantation is feasible with excellent results.

  2. Saving time and energy with oversubscription and semi-direct Møller-Plesset second order perturbation methods.

    Science.gov (United States)

    Fought, Ellie L; Sundriyal, Vaibhav; Sosonkina, Masha; Windus, Theresa L

    2017-04-30

    In this work, the effect of oversubscription (calling 2n, 3n, or 4n processes for n physical cores) is evaluated on semi-direct MP2 energy and gradient calculations and RI-MP2 energy calculations with the cc-pVTZ basis using NWChem. Results indicate that on both Intel and AMD platforms, oversubscription reduces total time to solution for semi-direct MP2 energy calculations by 25-45% on average, and on the Intel platform reduces the total energy consumed by the CPU and DRAM by 10-15% on average. Semi-direct gradient time to solution is shortened on average by 8-15% and energy consumption is decreased by 5-10%. Linear regression analysis shows a strong correlation between time to solution and total energy consumed. Oversubscribing during RI-MP2 calculations results in performance degradations of 30-50% at the 4n level. © 2017 Wiley Periodicals, Inc.
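The intuition for why oversubscription can shorten time to solution is that stalled processes leave cores idle. The toy below is only an analogy, not the paper's NWChem experiment: tasks that block (modelled here with sleep, standing in for memory and I/O waits) finish sooner when twice as many workers share the same pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of the oversubscription effect (an analogy, not the
# paper's measurement): when tasks stall, extra workers per "core" keep
# the hardware busy. We run 8 stalling tasks on 4 workers, then on 8.

def stalling_task(_):
    time.sleep(0.2)               # stand-in for a memory/I-O stall

def run(n_workers, n_tasks=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(stalling_task, range(n_tasks)))
    return time.perf_counter() - start

t_matched = run(4)                # one task per "core": two rounds, ~0.4 s
t_oversub = run(8)                # 2x oversubscribed: one round, ~0.2 s
```

The same reasoning explains the paper's negative RI-MP2 result: when tasks are compute-bound rather than stall-bound, extra processes only add contention, so oversubscription degrades performance instead.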

  3. Total 2004 results

    International Nuclear Information System (INIS)

    2005-02-01

    This document presents the 2004 results of Total Group: consolidated account, special items, number of shares, market environment, adjustment for amortization of Sanofi-Aventis merger-related intangibles, 4. quarter 2004 results (operating and net incomes, cash flow), upstream (results, production, reserves, recent highlights), downstream (results, refinery throughput, recent highlights), chemicals (results, recent highlights), Total's full year 2004 results (operating and net income, cash flow), 2005 sensitivities, Total SA parent company accounts and proposed dividend, adoption of IFRS accounting, summary and outlook, main operating information by segment for the 4. quarter and full year 2004: upstream (combined liquids and gas production by region, liquids production by region, gas production by region), downstream (refined product sales by region, chemicals), Total financial statements: consolidated statement of income, consolidated balance sheet (assets, liabilities and shareholder's equity), consolidated statements of cash flows, business segments information. (J.S.)

  4. Total synthesis of ciguatoxin.

    Science.gov (United States)

    Hamajima, Akinari; Isobe, Minoru

    2009-01-01

    Something fishy: Ciguatoxin (see structure) is one of the principal toxins involved in ciguatera poisoning and the target of a total synthesis involving the coupling of three segments. The key transformations in this synthesis feature acetylene-dicobalthexacarbonyl complexation.

  5. Total 2004 results

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-02-01

    This document presents the 2004 results of Total Group: consolidated account, special items, number of shares, market environment, adjustment for amortization of Sanofi-Aventis merger-related intangibles, 4. quarter 2004 results (operating and net incomes, cash flow), upstream (results, production, reserves, recent highlights), downstream (results, refinery throughput, recent highlights), chemicals (results, recent highlights), Total's full year 2004 results (operating and net income, cash flow), 2005 sensitivities, Total SA parent company accounts and proposed dividend, adoption of IFRS accounting, summary and outlook, main operating information by segment for the 4. quarter and full year 2004: upstream (combined liquids and gas production by region, liquids production by region, gas production by region), downstream (refined product sales by region, chemicals), Total financial statements: consolidated statement of income, consolidated balance sheet (assets, liabilities and shareholder's equity), consolidated statements of cash flows, business segments information. (J.S.)

  6. Genoptraening efter total knaealloplastik

    DEFF Research Database (Denmark)

    Holm, Bente; Kehlet, Henrik

    2009-01-01

    The short- and long-term benefits of post-discharge physiotherapy regimens after total knee arthroplasty are debatable. A national survey including hospitals in Denmark that perform total knee arthroplasty showed a large variability in indication and regimen for post-knee arthroplasty rehabilitation. Since hospital stay duration has decreased considerably, the need for post-discharge physiotherapy may also have changed. Thus, the indication for and types of rehabilitation programmes need to be studied within the context of fast-track knee arthroplasty.

  7. Genoptraening efter total knaealloplastik

    DEFF Research Database (Denmark)

    Holm, Bente; Kehlet, Henrik

    2009-01-01

    The short- and long-term benefits of post-discharge physiotherapy regimens after total knee arthroplasty are debatable. A national survey including hospitals in Denmark that perform total knee arthroplasty showed a large variability in indication and regimen for post-knee arthroplasty rehabilitation. Since hospital stay duration has decreased considerably, the need for post-discharge physiotherapy may also have changed. Thus, the indication for and types of rehabilitation programmes need to be studied within the context of fast-track knee arthroplasty. Publication date: 2009-Feb-23

  8. Simulating Photon Mapping for Real-time Applications

    DEFF Research Database (Denmark)

    Larsen, Bent Dalgaard; Christensen, Niels Jørgen

    2004-01-01

    This paper introduces a novel method for simulating photon mapping for real-time applications. First we introduce a new method for selectively redistributing photons. Then we describe a method for selectively updating the indirect illumination. The indirect illumination is calculated using a new GPU accelerated final gathering method and the illumination is then stored in light maps. Caustic photons are traced on the CPU and then drawn using points in the framebuffer, and finally filtered using the GPU. Both diffuse and non-diffuse surfaces can be handled by calculating the direct illumination on the GPU and the photon tracing on the CPU. We achieve real-time frame rates for dynamic scenes.

  9. Supravaginal eller total hysterektomi?

    DEFF Research Database (Denmark)

    Edvardsen, L; Madsen, E M

    1994-01-01

    There has been a decline in the rate of hysterectomies in Denmark in general over the last thirteen years, together with a rise in the number of supravaginal operations over the last two years. The literature concerning the relative merits of the supravaginal and the total abdominal operation is reviewed. Some studies indicate a reduced frequency of orgasm after the total hysterectomy compared with the supravaginal operation. When there are technical problems peroperatively with an increased urologic risk, the supravaginal operation is recommended.

  10. Total lymphoid irradiation

    International Nuclear Information System (INIS)

    Sutherland, D.E.; Ferguson, R.M.; Simmons, R.L.; Kim, T.H.; Slavin, S.; Najarian, J.S.

    1983-01-01

    Total lymphoid irradiation by itself can produce sufficient immunosuppression to prolong the survival of a variety of organ allografts in experimental animals. The degree of prolongation is dose-dependent and is limited by the toxicity that occurs with higher doses. Total lymphoid irradiation is more effective before transplantation than after, but when used after transplantation can be combined with pharmacologic immunosuppression to achieve a positive effect. In some animal models, total lymphoid irradiation induces an environment in which fully allogeneic bone marrow will engraft and induce permanent chimerism in the recipients who are then tolerant to organ allografts from the donor strain. If total lymphoid irradiation is ever to have clinical applicability on a large scale, it would seem that it would have to be under circumstances in which tolerance can be induced. However, in some animal models graft-versus-host disease occurs following bone marrow transplantation, and methods to obviate its occurrence probably will be needed if this approach is to be applied clinically. In recent years, patient and graft survival rates in renal allograft recipients treated with conventional immunosuppression have improved considerably, and thus the impetus to utilize total lymphoid irradiation for its immunosuppressive effect alone is less compelling. The future of total lymphoid irradiation probably lies in devising protocols in which maintenance immunosuppression can be eliminated, or nearly eliminated, altogether. Such protocols are effective in rodents. Whether they can be applied to clinical transplantation remains to be seen

  11. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is length: the number of terms in a decision rule, which is optimal when minimized. Another is coverage, which represents the width of a rule’s applicability and generality; as such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee, for every tuple of values of the function, that totally optimal rules exist for each row of the table (as in the case of total Boolean functions, where the cardinalities are equal to 2), or, for each row, we can find a tuple of values of the function for which no totally optimal rule exists for that row.
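
    The notions of length, coverage, and total optimality above can be made concrete with a small brute-force sketch (hypothetical helper names; the paper's actual method avoids exhaustive search):

```python
from itertools import combinations

def rules_for_row(table, decisions, r):
    """All valid decision rules for row r, as (length, coverage) pairs.

    A rule fixes a subset of attributes to row r's values; it is valid if
    every row matching those values carries the same decision as row r.
    Length is the number of fixed attributes; coverage counts matching rows."""
    n_attrs = len(table[0])
    found = []
    for k in range(n_attrs + 1):
        for attrs in combinations(range(n_attrs), k):
            matching = [i for i, row in enumerate(table)
                        if all(row[a] == table[r][a] for a in attrs)]
            if all(decisions[i] == decisions[r] for i in matching):
                found.append((k, len(matching)))
    return found

def has_totally_optimal_rule(table, decisions, r):
    """True if a single rule attains both minimum length and maximum coverage."""
    rules = rules_for_row(table, decisions, r)
    min_len = min(length for length, _ in rules)
    max_cov = max(cov for _, cov in rules)
    return any(length == min_len and cov == max_cov for length, cov in rules)
```

    For a total Boolean function such as XOR, every row admits a totally optimal rule, consistent with the cardinality-2 guarantee stated in the abstract.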

  12. Totally optimal decision rules

    KAUST Repository

    Amin, Talha M.; Moshkov, Mikhail

    2017-01-01

    Optimality of decision rules (patterns) can be measured in many ways. One of these is length: the number of terms in a decision rule, which is optimal when minimized. Another is coverage, which represents the width of a rule’s applicability and generality; as such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee, for every tuple of values of the function, that totally optimal rules exist for each row of the table (as in the case of total Boolean functions, where the cardinalities are equal to 2), or, for each row, we can find a tuple of values of the function for which no totally optimal rule exists for that row.

  13. Real time control of the SSC string magnets

    International Nuclear Information System (INIS)

    Calvo, O.; Flora, R.; MacPherson, M.

    1987-01-01

    The system described in this paper, called SECAR, was designed to control the excitation of a test string of magnets for the proposed Superconducting Super Collider (SSC) and will be used to upgrade the present Tevatron Excitation, Control and Regulation (TECAR) hardware and software. It resides in a VME crate and is controlled by a 68020/68881-based CPU running the application software under a real-time operating system named VRTX

  14. Total versus subtotal hysterectomy

    DEFF Research Database (Denmark)

    Gimbel, Helga; Zobbe, Vibeke; Andersen, Anna Birthe

    2005-01-01

    The aim of this study was to compare total and subtotal abdominal hysterectomy for benign indications, with regard to urinary incontinence, postoperative complications, quality of life (SF-36), constipation, prolapse, satisfaction with sexual life, and pelvic pain at 1-year postoperative. Eighty...... women chose total and 105 women chose subtotal abdominal hysterectomy. No significant differences were found between the 2 operation methods in any of the outcome measures at 12 months. Fourteen women (15%) from the subtotal abdominal hysterectomy group experienced vaginal bleeding and three women had...

  15. Incapacidad laboral total

    Directory of Open Access Journals (Sweden)

    Orlando Díaz Tabares

    1997-04-01

    Full Text Available A longitudinal, descriptive and retrospective study was conducted in order to know the behavior of permanent total work disability in the municipality of San Cristóbal during the decade 1982-1991. A survey method was applied, collecting data from the official occupational medical assessment form and from interviews with the assessed workers. The results were presented in contingency tables relating the variables for each year studied, and the chi-square statistical test was applied. The number of individuals certified with total work disability was 693; the year 1988 predominated with 114 reported cases, females very slightly outnumbered males, the 45-54 age group predominated with 360 cases, and arthrosis was the leading condition assessed by orthopedics, with statistically significant results. The predominance of systemic arterial hypertension among the conditions assessed by internal medicine as causes of work disability was not statistically significant. The variation in the number of individuals certified by the commission in each of the years studied was highly significant, and the percentage of them performing work demanding moderate to intense physical effort at the time of the survey reached 64.9%.

  16. CSF total protein

    Science.gov (United States)

    CSF total protein is a test to determine the amount of protein in your spinal fluid, also called cerebrospinal fluid (CSF). ... The normal protein range varies from lab to lab, but is typically about 15 to 60 milligrams per deciliter (mg/dL) ...

  17. Total body irradiation

    International Nuclear Information System (INIS)

    Novack, D.H.; Kiley, J.P.

    1987-01-01

    The multitude of papers and conferences in recent years on the use of very large megavoltage radiation fields indicates an increased interest in total body, hemibody, and total nodal radiotherapy for various clinical situations. These include high dose total body irradiation (TBI) to destroy the bone marrow and leukemic cells and provide immunosuppression prior to a bone marrow transplant, high dose total lymphoid irradiation (TLI) prior to bone marrow transplantation in severe aplastic anemia, low dose TBI in the treatment of lymphocytic leukemias or lymphomas, and hemibody irradiation (HBI) in the treatment of advanced multiple myeloma. Although accurate provision of a specific dose and the desired degree of dose homogeneity are two of the physicist's major considerations for all radiotherapy techniques, these tasks are even more demanding for large field radiotherapy. Because most large field radiotherapy is done at an extended distance for complex patient geometries, basic dosimetry data measured at the standard distance (isocenter) must be verified or supplemented. This paper discusses some of the special dosimetric problems of large field radiotherapy, with specific examples given of the dosimetry of the TBI program for bone marrow transplant at the authors' hospital

  18. Total design of participation

    DEFF Research Database (Denmark)

    Munch, Anders V.

    2016-01-01

    The idea of design as an art made not only for the people, but also by the people is an old dream going back at least to William Morris. It is, however, reappearing vigorously in many kinds of design activism and grows out of the visions of a Total Design of society. The ideas of participation b...

  19. Total Quality Management Simplified.

    Science.gov (United States)

    Arias, Pam

    1995-01-01

    Maintains that Total Quality Management (TQM) is one method that helps to monitor and improve the quality of child care. Lists four steps for a child-care center to design and implement its own TQM program. Suggests that quality assurance in child-care settings is an ongoing process, and that TQM programs help in providing consistent, high-quality…

  20. Total Quality Management Seminar.

    Science.gov (United States)

    Massachusetts Career Development Inst., Springfield.

    This booklet is one of six texts from a workplace literacy curriculum designed to assist learners in facing the increased demands of the workplace. The booklet contains seven sections that cover the following topics: (1) meaning of total quality management (TQM); (2) the customer; (3) the organization's culture; (4) comparison of management…

  1. Total photon absorption

    International Nuclear Information System (INIS)

    Carlos, P.

    1985-01-01

    Experimental methods using real photon beams for measurements of the total photonuclear absorption cross section σ_tot(E_γ) are recalled. The most recent σ_tot(E_γ) results for complex nuclei and in the nucleon resonance region are presented

  2. Total 2004 annual report

    International Nuclear Information System (INIS)

    2004-01-01

    This annual report of the Group Total brings information and economic data on the following topics, for the year 2004: the corporate governance, the corporate social responsibility, the shareholder notebook, the management report, the activities, the upstream (exploration and production) and downstream (refining and marketing) operating, chemicals and other matters. (A.L.B.)

  3. Total Water Management - Report

    Science.gov (United States)

    There is a growing need for urban water managers to take a more holistic view of their water resource systems as population growth, urbanization, and current operations put different stresses on the environment and urban infrastructure. Total Water Management (TWM) is an approac...

  4. Total 2003 Results

    International Nuclear Information System (INIS)

    2003-01-01

    This document presents the 2003 results of Total Group: consolidated account, special items, number of shares, market environment, 4. quarter 2003 results, full year 2003 results, upstream (key figures, proved reserves), downstream key figures, chemicals key figures, parent company accounts and proposed dividends, 2004 sensitivities, summary and outlook, operating information by segment for the 4. quarter and full year 2003: upstream (combined liquids and gas production by region, liquids production by region, gas production by region), downstream (refinery throughput by region, refined product sales by region, chemicals), impact of allocating contribution of Cepsa to net operating income by business segment: equity in income (loss) and affiliates and other items, Total financial statements: consolidated statement of income, consolidated balance sheet (assets, liabilities and shareholder's equity), consolidated statements of cash flows, business segments information. (J.S.)

  5. TOTAL PERFORMANCE SCORECARD

    Directory of Open Access Journals (Sweden)

    Anca ȘERBAN

    2013-06-01

    Full Text Available The purpose of this paper is to present the evolution of the Balanced Scorecard from a measurement instrument to a strategic performance management tool and to highlight the advantages of implementing the Total Performance Scorecard, especially for Human Resource Management. The study was carried out as a bibliographic study drawing on various secondary sources. Implementations of the classical Balanced Scorecard have repeatedly failed over the years. The crucial level appears to be determined by the learning and growth perspective, which has evolved from a human perspective focused on staff satisfaction into an innovation perspective focused on future developments. Integrating the Total Performance Scorecard into an overall framework assures the company’s success by keeping track of individual goals, the company’s objectives and its strategic directions. In this way, individual identity can be linked to the corporate brand, individual aspirations to business goals and individual learning objectives to needed organizational capabilities.

  6. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Accessible high performance computing solutions for near real-time image processing for time critical applications

    Science.gov (United States)

    Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek

    2009-09-01

    High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
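
    Separating wall-clock time from total CPU time is the core of such platform comparisons. A minimal Python sketch (a toy moving-window variance stands in for the GLCM-based PANTEX measure; all names are hypothetical):

```python
import time

def moving_window_variance(img, w):
    """Toy stand-in for a windowed texture measure (illustrative only)."""
    h, wd = len(img), len(img[0])
    out = [[0.0] * (wd - w + 1) for _ in range(h - w + 1)]
    for i in range(h - w + 1):
        for j in range(wd - w + 1):
            vals = [img[i + di][j + dj] for di in range(w) for dj in range(w)]
            m = sum(vals) / len(vals)
            out[i][j] = sum((v - m) ** 2 for v in vals) / len(vals)
    return out

def timed(fn, *args):
    """Run fn, returning its result plus elapsed wall-clock and CPU time."""
    t0_wall, t0_cpu = time.perf_counter(), time.process_time()
    result = fn(*args)
    return result, time.perf_counter() - t0_wall, time.process_time() - t0_cpu
```

    On a multi-core or GPU platform, total CPU time can exceed wall-clock time (many cores busy at once), which is why both are worth reporting when comparing architectures.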

  8. Total space in resolution

    Czech Academy of Sciences Publication Activity Database

    Bonacina, I.; Galesi, N.; Thapen, Neil

    2016-01-01

    Roč. 45, č. 5 (2016), s. 1894-1909 ISSN 0097-5397 R&D Projects: GA ČR GBP202/12/G061 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords: total space * resolution * random CNFs * proof complexity Subject RIV: BA - General Mathematics Impact factor: 1.433, year: 2016 http://epubs.siam.org/doi/10.1137/15M1023269

  9. MFTF TOTAL benchmark

    International Nuclear Information System (INIS)

    Choy, J.H.

    1979-06-01

    A benchmark of the TOTAL data base management system as applied to the Mirror Fusion Test Facility (MFTF) data base was implemented and run in February and March of 1979. The benchmark was run on an Interdata 8/32 and involved the following tasks: (1) data base design, (2) data base generation, (3) data base load, and (4) develop and implement programs to simulate MFTF usage of the data base

  10. Total - annual report 2005

    International Nuclear Information System (INIS)

    2006-01-01

    This annual report presents the activities and results of TOTAL S.A., the French oil and gas company. It covers statistics, the managers, key information on financial data and risk factors, information on the Company, unresolved staff comments, employees, major shareholders, consolidated statements, markets, security, financial risks, defaults, dividend arrearages and delinquencies, controls and procedures, code of ethics and financial statements. (A.L.B.)

  11. Total Absorption Spectroscopy

    International Nuclear Information System (INIS)

    Rubio, B.; Gelletly, W.

    2007-01-01

    The problem of determining the distribution of beta decay strength (B(GT)) as a function of excitation energy in the daughter nucleus is discussed. Total Absorption Spectroscopy is shown to provide a way of determining the B(GT) precisely. A brief history of such measurements and a discussion of the advantages and disadvantages of this technique, is followed by examples of two recent studies using the technique. (authors)

  12. Sobredentadura total superior implantosoportada

    Directory of Open Access Journals (Sweden)

    Luis Orlando Rodríguez García

    2010-06-01

    Full Text Available We present the case of a fully edentulous (upper jaw) patient rehabilitated in 2009 at the implantology service of the "Pedro Ortiz" Clinic, Habana del Este municipality, Havana, Cuba, by means of a prosthesis on osseointegrated implants, a technique that has been incorporated into dental practice in Cuba as an alternative to conventional treatment of fully edentulous patients. The protocol followed comprised a surgical phase, a procedure with or without flap elevation, and early or immediate loading. The patient was a 56-year-old man who came to the multidisciplinary service concerned because three prostheses had been made for him over the previous two years and none met the retention requirements he needed to feel secure and comfortable with them. The final result was the patient's complete satisfaction, with improved esthetic and functional quality.

  13. Design and development of a diversified real time computer for future FBRs

    International Nuclear Information System (INIS)

    Sujith, K.R.; Bhattacharyya, Anindya; Behera, R.P.; Murali, N.

    2014-01-01

    The current safety related computer system of the Prototype Fast Breeder Reactor (PFBR) under construction in Kalpakkam consists of two redundant Versa Module Europa (VME) bus based Real Time Computer systems with a Switch Over Logic Circuit (SOLC). Since both VME systems are identical, the dual redundant system is prone to common cause failure (CCF). The probability of CCF can be reduced by adopting diversity. Design diversity has long been used to protect redundant systems against common-mode failures. The conventional notion of diversity relies on 'independent' generation of 'different' implementations. This paper discusses the design and development of a diversified Real Time Computer which will replace one of the computer systems in the dual redundant architecture. Compact PCI (cPCI) bus systems are widely used in safety critical applications such as avionics, railways and defence, and use diverse electrical signaling and logical specifications; hence cPCI was chosen for development of the diversified system. Towards the initial development, a CPU card based on an ARM-9 processor, a 16 channel Relay Output (RO) card and a 30 channel Analog Input (AI) card were developed. All the cards mentioned support hot-swap and geographic addressing capability. In order to mitigate the component obsolescence problem, the 32 bit PCI target controller and associated glue logic for the slave I/O cards were indigenously developed using VHDL. U-Boot was selected as the boot loader and ARM Linux 2.6 as the preliminary operating system for the CPU card. Board specific initialization code for the CPU card was written in ARM assembly language and serial port initialization was written in C. The boot loader along with the Linux 2.6 kernel and a JFFS2 file system was flashed into the CPU card. Test applications written in C were used to test the various peripherals of the CPU card. Device drivers for the AI and RO cards were developed as Linux kernel modules and an application library was also

  14. Total employment effect of biofuels

    International Nuclear Information System (INIS)

    Stridsberg, S.

    1998-08-01

    The study examined the total employment effect of both direct production of biofuel and energy conversion to heat and electricity, as well as the indirect employment effect arising from investments and other activities in conjunction with the production organization. A secondary effect depending on the increased capital flow is also included in the final result. The scenarios are based on two periods, 1993-2005 and 2005-2020. In the present study, the different fuels and the different applications have been analyzed individually with regard to direct and indirect employment within each separate sector. The greatest employment effect in the production chain is shown for logging residues with 290 full-time jobs/TWh, whereas other biofuels range between 80 and 280 full-time jobs/TWh. In the processing chain, the corresponding range is 200-300 full-time jobs per each additional TWh. Additionally and finally, there are secondary effects that give a total of 650 full-time jobs/TWh. Together with the predicted increase, this suggests that unprocessed fuel will provide an additional 16 000 annual full-time jobs, and that fuel processing will contribute a further 5 000 full-time jobs. The energy production from the fuels will provide an additional 13 000 full-time jobs. The total figure of 34 000 annual full-time jobs must then be reduced by about 4 000 on account of lost jobs, mainly in the oil sector and to some extent in imports of biofuel. In addition, the anticipated increase in capital turnover that occurs within the biofuel sector will increase full-time jobs up to year 2020. Finally, a discussion is given of the accomplishment of the programmes anticipated by the scenario, where it is noted that processing of biofuel to wafers, pellets or powder places major demands on access to raw material of good quality and that agrarian fuels must be given priority if they are to enter the system sufficiently fast. Straw is already a resource but is still not accepted by

  15. Assessing the acidity and total sugar content of four different commercially available beverages commonly consumed by children and its time-dependent effect on plaque and salivary pH

    OpenAIRE

    Abhishek Jha; G Radha; R Rekha; S K Pallavi

    2015-01-01

    Introduction: Sugared beverages such as cola and packaged juice are known for their cariogenicity; their intake leads to an immediate drop in plaque and salivary pH, which can be an etiologic factor for dental caries. Objective: The objective was to assess the endogenous acidity and total sugar content of four commercially available beverages commonly consumed by children in India and their effect on salivary and plaque pH. Materials and Methods: A crossover controlled trial was conducted. 60 randomly ...

  16. Total Synthesis of Hyperforin.

    Science.gov (United States)

    Ting, Chi P; Maimone, Thomas J

    2015-08-26

    A 10-step total synthesis of the polycyclic polyprenylated acylphloroglucinol (PPAP) natural product hyperforin from 2-methylcyclopent-2-en-1-one is reported. This route was enabled by a diketene annulation reaction and an oxidative ring expansion strategy designed to complement the presumed biosynthesis of this complex meroterpene. The described work enables the preparation of a highly substituted bicyclo[3.3.1]nonane-1,3,5-trione motif in only six steps and thus serves as a platform for the construction of easily synthesized, highly diverse PPAPs modifiable at every position.

  17. Total quality is people

    International Nuclear Information System (INIS)

    Vogel, C.E.

    1991-01-01

    Confronted by changing market conditions and increased global competition, in 1983 the Commercial Nuclear Fuel Division (CNFD) of Westinghouse Electric embarked on an ambitious plan to make total quality the centerpiece of its long-term business strategy. Five years later, the division's efforts in making continuous quality improvement a way of life among its more than 2,000 employees gained national recognition when it was named a charter recipient of the Malcolm Baldrige National Quality Award. What CNFD achieved during the 1980s was a cultural transformation, characterized by an empowered work force committed to a common vision. The company's quality program development strategy is described

  18. Total quality accounting

    Directory of Open Access Journals (Sweden)

    Andrijašević Maja

    2008-01-01

    Full Text Available The focus of the competitive "battle" has shifted from price towards non-price instruments, above all towards quality, which has become the key variable for increasing profitability and achieving a better comparative position for a company. Under such conditions, the management of a company that, on the basis of an established and certified system of total quality, strives for a better market position faces the problem of measuring and determining quality costs. Management accounting, above all cost accounting, can help in solving this problem, but the question is how much of its potential is actually being used for that purpose.

  19. Total_Aktion

    DEFF Research Database (Denmark)

    Søndergaard, Morten

    2008-01-01

    digital media is the registration and the possibility of storing and handling digital data without limitations. Experience, registration and preservation are linked together in a new museum reality, in which the collection's special documentary character and focus, unique to the Museet for Samtidskunst, is at the centre...... to mix their personal drinks. TOTAL_AKTION shows Hørbar#3, which is a further development of the first version. METASYN by Carl Emil Carlsen: Metadata is central to Carl Emil Carlsen's project, which regards the museum's collection as a "universe" of works (analogue and digital), descriptions and relations. In...

  20. Total Logistic Plant Solutions

    Directory of Open Access Journals (Sweden)

    Dusan Dorcak

    2016-02-01

    Full Text Available Total Logistic Plant Solutions (TLPS), a plant logistics system based on the philosophy of advanced process control, enables complex coordination of business processes and flows and the management and scheduling of production across the appropriate production plans and planning periods. The main aim of TLPS is to create a comprehensive, multi-level enterprise logistics information system, with a certain degree of intelligence, which incorporates the latest science and research results in the field of production technology and logistics. The logistic model of the company is understood as a system of mutually transforming flows of materials, energy, information and finance, realized through chains of activities and operations

  1. Total Factbook 2003

    International Nuclear Information System (INIS)

    2003-01-01

    This report presents the activities and results of the Group Total-Fina-Elf for the year 2003. It brings information and economic data on the following topics: the corporate and business; the upstream activities with the reserves, the costs, standardized measure and changes of discounted future net cash flow, oil and gas acreage, drilling, liquefied natural gas, pipelines; downstream activities with refining and marketing maps, refinery, petroleum products, sales, retail gasoline outlets; chemicals with sales and operating income by sector, major applications, base chemicals and polymers, intermediates and performance polymers. (A.L.B.)

  2. Total 2004 fact book

    International Nuclear Information System (INIS)

    2004-01-01

    This report presents the activities and results of the Group Total-Fina-Elf for the year 2004. It brings information and economic data on the following topics: the corporate and business; the upstream activities with the reserves, the costs, standardized measure and changes of discounted future net cash flow, oil and gas acreage, drilling, liquefied natural gas, pipelines; downstream activities with refining and marketing maps, refinery, petroleum products, sales, retail gasoline outlets; chemicals with sales and operating income by sector, major applications, base chemicals and polymers, intermediates and performance polymers. (A.L.B.)

  3. TOTAL annual report 2003

    International Nuclear Information System (INIS)

    2004-01-01

    This 2003 annual report of the Group Total provides the economic results and information of the company on the following topics: key data, the corporate governance (Directors charter, board of directors, audit committee, nomination and remuneration committee, internal control procedures, compensation of directors and executive officers), the corporate social responsibility (environmental stewardship, the future of energy management, the safety enhancement, the human resources, ethics and local development), the investor relations, the management report, the upstream exploration and production, the downstream refining, marketing, trading and shipping, the chemicals and financial and legal information. (A.L.B.)

  4. Total knee arthroplasty

    DEFF Research Database (Denmark)

    Schrøder, Henrik M.; Petersen, Michael M.

    2016-01-01

    Total knee arthroplasty (TKA) is a successful treatment of the osteoarthritic knee, which has increased dramatically over the last 30 years. The indication is a painful osteoarthritic knee with relevant radiographic findings and failure of conservative measures like painkillers and exercise...... surgeon seems to positively influence the rate of surgical complications and implant survival. The painful TKA knee should be thoroughly evaluated, but not revised except if a relevant indication can be established. The most frequent indications for revision are: aseptic loosening, instability, infection...

  5. Aggressive time step selection for the time asymptotic velocity diffusion problem

    International Nuclear Information System (INIS)

    Hewett, D.W.; Krapchev, V.B.; Hizanidis, K.; Bers, A.

    1984-12-01

    An aggressive time step selector for an ADI algorithm is presented and applied to the linearized 2-D Fokker-Planck equation including an externally imposed quasilinear diffusion term. This method provides a reduction in CPU requirements by factors of two or three compared to standard ADI. More importantly, the robustness of the procedure greatly reduces the work load of the user. The procedure selects a nearly optimal Δt with a minimum of intervention by the user, thus relieving the need to supervise the algorithm. In effect, the algorithm does its own supervision by discarding time steps made with Δt too large
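
    The discard-and-retry idea can be illustrated in a much simpler setting: an explicit 1-D diffusion step with step-doubling error control (an illustrative heuristic with hypothetical names, not the paper's ADI selector):

```python
def diffuse_step(u, dt, dx, D):
    """One explicit Euler step of du/dt = D * d2u/dx2 on a periodic grid."""
    n = len(u)
    return [u[i] + D * dt / dx**2 * (u[(i + 1) % n] - 2 * u[i] + u[(i - 1) % n])
            for i in range(n)]

def adaptive_diffuse(u, t_end, dx, D, tol=1e-4, dt=1e-3):
    """Advance u to t_end, growing dt aggressively while a step-doubling
    error estimate stays within tol, and discarding steps that exceed it."""
    t = 0.0
    while t < t_end:
        dt = min(dt, 0.4 * dx * dx / D, t_end - t)  # explicit stability + endpoint
        big = diffuse_step(u, dt, dx, D)
        two_half = diffuse_step(diffuse_step(u, dt / 2, dx, D), dt / 2, dx, D)
        err = max(abs(a - b) for a, b in zip(big, two_half))
        if err <= tol:
            u, t = two_half, t + dt   # accept the more accurate result
            dt *= 1.5                 # grow the step aggressively
        else:
            dt *= 0.5                 # discard the step and retry smaller
    return u
```

    The controller, not the user, supervises Δt: steps whose estimated error is too large are simply thrown away and retried, while acceptable steps earn a larger Δt for the next attempt.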

  6. Dissolved inorganic carbon, total alkalinity, nitrate, phosphate, temperature and other variables collected from time series observations at Heron Island Reef Flat from 2010-06-01 to 2010-12-13 (NODC Accession 0127256)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This archival package contains carbonate chemistry and environmental parameters data that were collected from a 200-day time series monitoring on the Heron Island...

  7. TOTAL user manual

    Science.gov (United States)

    Johnson, Sally C.; Boerschlein, David P.

    1994-01-01

    Semi-Markov models can be used to analyze the reliability of virtually any fault-tolerant system. However, the process of delineating all of the states and transitions in the model of a complex system can be devastatingly tedious and error-prone. Even with tools such as the Abstract Semi-Markov Specification Interface to the SURE Tool (ASSIST), the user must describe a system by specifying the rules governing the behavior of the system in order to generate the model. With the Table Oriented Translator to the ASSIST Language (TOTAL), the user can specify the components of a typical system and their attributes in the form of a table. The conditions that lead to system failure are also listed in a tabular form. The user can also abstractly specify dependencies with causes and effects. The level of information required is appropriate for system designers with little or no background in the details of reliability calculations. A menu-driven interface guides the user through the system description process, and the program updates the tables as new information is entered. The TOTAL program automatically generates an ASSIST input description to match the system description.

  8. Total and EDF invest

    International Nuclear Information System (INIS)

    Signoret, St.

    2008-01-01

    To prepare the future of its industrial sector, the Total company plans to invest 14 billion Euros in 2008 to increase its production capacities and to strengthen other activities such as liquefied natural gas and renewable energies. EDF plans to inject 35 billion Euros over three years to multiply new power plant projects (wind turbines, coal in Germany, gas in Great Britain and nuclear power in Flamanville). EDF wants to exploit its expertise as a leader to run more than ten E.P.R. (European pressurized water reactor) units in the world before 2020; projects are under examination with China, Great Britain, South Africa and the United States. (N.C.)

  9. Total quality at source

    International Nuclear Information System (INIS)

    Chiandone, A.C.

    1990-01-01

    The Total Quality at Source philosophy is based on optimizing the effectiveness of people in achieving ZERO-DEFECT results. In this paper I present a philosophy of what I have come to perceive it takes to get people to perform to the very best of their abilities and thereby achieve the best results they can. In the examples I shall describe I have played an instrumental role, since it has become my belief that any job can always be done better provided that the people doing it can themselves become convinced that they can do better. Clearly there are many ideas on how to do this. The philosophy that I am presenting in this paper is based on my own experience, where I have both participated in it and observed it being applied; its effectiveness may be judged by the results. (author)

  10. Benchmarking hardware architecture candidates for the NFIRAOS real-time controller

    Science.gov (United States)

    Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre

    2014-07-01

    As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
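
    The per-cycle computation in a zonal AO real-time controller of this kind is typically dominated by a large matrix-vector multiply (reconstructor matrix times slope vector). A minimal timing harness along those lines — the sizes, the use of NumPy, and the harness itself are assumptions for illustration, not details from the study — might look like:

    ```python
    import time
    import numpy as np

    def time_reconstruction(n_slopes=6000, n_actuators=5000, cycles=20):
        """Time a reconstruction-style matrix-vector multiply, returning
        the command vector plus the average and worst per-cycle latency
        in microseconds (the metrics an RTC requirement is written against)."""
        rng = np.random.default_rng(0)
        R = rng.standard_normal((n_actuators, n_slopes)).astype(np.float32)
        s = rng.standard_normal(n_slopes).astype(np.float32)
        samples = []
        for _ in range(cycles):
            t0 = time.perf_counter()
            a = R @ s                   # commands = reconstructor @ slopes
            samples.append((time.perf_counter() - t0) * 1e6)
        return a, sum(samples) / len(samples), max(samples)
    ```

    The worst-case sample matters as much as the average here: the abstract's complaint about the Xeon Phi is precisely that its execution times were unpredictable, not merely slow.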

  11. The Effect of 5S-Continuous Quality Improvement-Total Quality Management Approach on Staff Motivation, Patients' Waiting Time and Patient Satisfaction with Services at Hospitals in Uganda.

    Science.gov (United States)

    Take, Naoki; Byakika, Sarah; Tasei, Hiroshi; Yoshikawa, Toru

    2015-03-31

    This study aimed at analyzing the effect of 5S practice on staff motivation, patients' waiting time and patient satisfaction with health services at hospitals in Uganda. Double-difference estimates were measured for 13 Regional Referral Hospitals and eight General Hospitals implementing 5S practice separately. The study for Regional Referral Hospitals revealed that 5S practice had an effect on staff motivation, in terms of commitment to work in the current hospital, and on waiting time in the dispensary in 10 hospitals implementing 5S, but no significant difference was identified in patient satisfaction. The study for General Hospitals indicated an effect of 5S practice on patient satisfaction as well as waiting time, but staff motivation in two hospitals did not improve. 5S practice enables hospitals to improve the quality of services in terms of staff motivation, waiting time and patient satisfaction, and it takes at least four years in Uganda. The fourth year since the commencement of 5S can be a threshold to move forward to the next step, Continuous Quality Improvement.

  12. Instruction timing for the CDC 7600 computer

    International Nuclear Information System (INIS)

    Lipps, H.

    1975-01-01

    This report provides timing information for all instructions of the Control Data 7600 computer, except for instructions of type 01X, to enable the optimization of 7600 programs. The timing rules serve as background information for timing charts which are produced by a program (TIME76) of the CERN Program Library. The rules that co-ordinate the different sections of the CPU are stated in as much detail as is necessary to time the flow of instructions for a given sequence of code. Instruction fetch, instruction issue, and access to small core memory are treated at length, since details are not available from the computer manuals. Annotated timing charts are given for 24 examples, chosen to display the full range of timing considerations. (Author)

  13. Total lymphoid irradiation

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    An outline review notes recent work on total lymphoid irradiation (TLI) as a means of preparing patients for grafts and particularly for bone-marrow transplantation. T.L.I. has proved immunosuppressive in rats, mice, dogs, monkeys and baboons; when given before bone-marrow transplantation, engraftment took place without, or with delayed rejection or graft-versus-host disease. Work with mice has indicated that the thymus needs to be included within the irradiation field, since screening of the thymus reduced skin-graft survival from 50 to 18 days, though irradiation of the thymus alone has proved ineffective. A more lasting tolerance has been observed when T.L.I. is followed by an injection of donor bone marrow. 50% of mice treated in this way accepted allogenic skin grafts for more than 100 days, the animals proving to be stable chimeras with 50% of their peripheral blood lymphocytes being of donor origin. Experiments of a similar nature with dogs and baboons were not so successful. (U.K.)

  14. The total artificial heart.

    Science.gov (United States)

    Cook, Jason A; Shah, Keyur B; Quader, Mohammed A; Cooke, Richard H; Kasirajan, Vigneshwar; Rao, Kris K; Smallfield, Melissa C; Tchoukina, Inna; Tang, Daniel G

    2015-12-01

    The total artificial heart (TAH) is a form of mechanical circulatory support in which the patient's native ventricles and valves are explanted and replaced by a pneumatically powered artificial heart. Currently, the TAH is approved for use in end-stage biventricular heart failure as a bridge to heart transplantation. However, with an increasing global burden of cardiovascular disease and congestive heart failure, the number of patients with end-stage heart failure awaiting heart transplantation now far exceeds the number of available hearts. As a result, the use of mechanical circulatory support, including the TAH and left ventricular assist device (LVAD), is growing exponentially. The LVAD is already widely used as destination therapy, and destination therapy for the TAH is under investigation. While most patients requiring mechanical circulatory support are effectively treated with LVADs, there is a subset of patients with concurrent right ventricular failure or major structural barriers to LVAD placement in whom TAH may be more appropriate. The history, indications, surgical implantation, post device management, outcomes, complications, and future direction of the TAH are discussed in this review.

  15. Electronic remote blood issue: a combination of remote blood issue with a system for end-to-end electronic control of transfusion to provide a "total solution" for a safe and timely hospital blood transfusion service.

    Science.gov (United States)

    Staves, Julie; Davies, Amanda; Kay, Jonathan; Pearson, Oliver; Johnson, Tony; Murphy, Michael F

    2008-03-01

    The rapid provision of red cell (RBC) units to patients needing blood urgently is an issue of major importance in transfusion medicine. The development of electronic issue (sometimes termed "electronic crossmatch") has facilitated rapid provision of RBC units by avoidance of the serologic crossmatch in eligible patients. A further development is the issue of blood under electronic control at blood refrigerators remote from the blood bank. This study evaluated a system for electronic remote blood issue (ERBI) developed as an enhancement of a system for end-to-end electronic control of hospital transfusion. Practice was evaluated before and after its introduction in cardiac surgery. Before the implementation of ERBI, the median time to deliver urgently required RBC units to the patient was 24 minutes. After its implementation, RBC units were obtained from the nearby blood refrigerator in a median time of 59 seconds (range, 30 sec to 2 min). The study also found that unused requests were reduced significantly from 42 to 20 percent, the number of RBC units issued was reduced by 52 percent, the proportion of issued units that were transfused increased from 40 to 62 percent, and there was a significant reduction in the workload of both blood bank and clinical staff. This study evaluated the combination of remote blood issue with an end-to-end electronically controlled hospital transfusion process, ERBI. ERBI reduced the time to make blood available for surgical patients and improved the efficiency of hospital transfusion.

  16. A novel ultra-performance liquid chromatography hyphenated with quadrupole time of flight mass spectrometry method for rapid estimation of total toxic retronecine-type of pyrrolizidine alkaloids in herbs without requiring corresponding standards.

    Science.gov (United States)

    Zhu, Lin; Ruan, Jian-Qing; Li, Na; Fu, Peter P; Ye, Yang; Lin, Ge

    2016-03-01

    Nearly 50% of naturally-occurring pyrrolizidine alkaloids (PAs) are hepatotoxic, and the majority of hepatotoxic PAs are retronecine-type PAs (RET-PAs). However, quantitative measurement of PAs in herbs/foodstuffs is often difficult because most reference PAs are unavailable. In this study, a rapid, selective, and sensitive UHPLC-QTOF-MS method was developed for the estimation of RET-PAs in herbs without requiring corresponding standards. This method is based on our previously established characteristic and diagnostic mass fragmentation patterns and the use of retrorsine for calibration. The use of a single RET-PA (i.e. retrorsine) for constructing the calibration was supported by the high similarity, with no significant differences, of the calibration curves constructed from peak areas of extracted ion chromatograms of the fragment ion at m/z 120.0813 or 138.0919 versus concentrations of five representative RET-PAs. The developed method was successfully applied to measure the total content of toxic RET-PAs of diversified structures in fifteen potential PA-containing herbs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Total Productive Maintenance at Paccar INC

    Directory of Open Access Journals (Sweden)

    Ştefan Farkas

    2010-06-01

    Full Text Available This paper reports the application of the total productive maintenance method at the Paccar Inc. truck plant in Victoria, Australia. The total productive maintenance method and total productive maintenance house are presented. The global equipment effectiveness is computed and exemplified. The production structure and maintenance organisation are presented. Results of the variation of global equipment effectiveness and autonomous maintenance over a two-week period are reported.

  18. Total Productive Maintenance at Paccar INC

    OpenAIRE

    Ştefan Farkas

    2010-01-01

    This paper reports the application of the total productive maintenance method at the Paccar Inc. truck plant in Victoria, Australia. The total productive maintenance method and total productive maintenance house are presented. The global equipment effectiveness is computed and exemplified. The production structure and maintenance organisation are presented. Results of the variation of global equipment effectiveness and autonomous maintenance over a two-week period are reported.

  19. The 1995 total solar eclipse: an overview.

    Science.gov (United States)

    Singh, J.

    A number of experiments were conducted during the total solar eclipse of October 24, 1995. For the first time, efforts were made to photograph the solar corona using IAF jet aircraft, transport planes and hot air balloons.

  20. US-Total Electron Content Product (USTEC)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The US Total Electron Content (US-TEC) product is designed to specify TEC over the Continental US (CONUS) in near real-time. The product uses a Kalman Filter data...

  1. Monopolist requires totally liberalization

    International Nuclear Information System (INIS)

    Janoska, J.

    2003-01-01

    Slovenske elektrarne (SE), a.s., Bratislava currently operates some sources that would be switched off under normal conditions. The reason is the high electricity price on European markets: in recent months it has been possible to sell 1 MWh abroad for 12 thousand Slovak crowns. It has also become advantageous to start up thermal sources with higher variable costs - from 1200 to 1300 Slovak crowns per MWh. SE nevertheless tries to sell most of its electricity on the domestic market, because the return on its dominant nuclear power plants was projected for this market. Utilising capacity profits via the domestic market covers the fixed costs of the power plants; besides, the utility can demand the regulated price of 1272 Slovak crowns per MWh. SE sources have a capacity of 6800 MW, but the maximum daily load, for example in December, is approximately 4000 MW. The surplus is even higher in the summer - the load dropped to 2200 MW at the beginning of September last year. The issue notes that price increases have been observed in Europe. The price of primary electric power on liquid markets will remain at 28 Euro (1176 Slovak crowns) in the following year; at load peaks prices fluctuate from 38 to 40 Euro (up to 1700 Slovak crowns) per MWh. The price increase is caused by a lack of sources - supply does not keep up with the growth in demand. Sources are gradually decommissioned and no new sources are built. Production capacities will also decrease at SE, with the decommissioning of the two 440 MW blocks of the Bohunice NPP V-1. According to SE, the upward price trend will continue until it becomes advantageous to build a new source. The present price trend can be accelerated by the decision on completing Mochovce NPP units 3-4

  2. Where Does the Time Go in Software DSMs?--Experiences with JIAJIA

    Institute of Scientific and Technical Information of China (English)

    SHI Weisong; HU Weiwu; TANG Zhimin

    1999-01-01

    The performance gap between software DSM systems and message passing platforms greatly prevents the prevalence of software DSM systems, though great efforts have been delivered in this area in the past decade. In this paper, we take the challenge to find where we should focus our efforts in future design. The components of total system overhead of software DSM systems are analyzed in detail first. Based on a state-of-the-art software DSM system, JIAJIA, we measure these components on the Dawning parallel system and draw five important conclusions which differ from some traditional viewpoints. (1) The performance of the JIAJIA software DSM system is acceptable. For four of eight applications, the parallel efficiency achieved by JIAJIA is about 80%, while for two others, 70% efficiency can be obtained. (2) 40.94% of interrupt service time is overlapped with waiting time. (3) Encoding and decoding diffs do not cost much time (<1%), so using hardware support to encode/decode diffs and send/receive messages is not worthwhile. (4) Great endeavours should be put into reducing data miss penalty and optimizing synchronization operations, which occupy 11.75% and 13.65% of total execution time respectively. (5) Communication hardware overhead occupies 66.76% of the whole communication time in the experimental environment, and communication software overhead does not take as much time as expected. Moreover, by studying the effect of CPU speed on system overhead, we find that the common speedup formula for distributed memory systems does not work under software DSM systems. Therefore, we design a new speedup formula special to software DSM systems, and point out that when the CPU speed increases the speedup can be increased too even if the network speed is fixed, which is impossible in message passing systems. Finally, we argue that the JIAJIA system has the desired scalability.

  3. A moving image system for cardiovascular nuclear medicine. A dedicated auxiliary device for the total capacity imaging system for multiple plane dynamic colour display

    International Nuclear Information System (INIS)

    Iio, M.; Toyama, H.; Murata, H.; Takaoka, S.

    1981-01-01

    The recent device of the authors, the dedicated multiplane dynamic colour image display system for nuclear medicine, is discussed. This new device is a hardware-based auxiliary moving image system (AMIS) attached to the total capacity image processing system of the authors' department. The major purpose of this study is to develop the dedicated device so that cardiovascular nuclear medicine and other dynamic studies will include the ability to assess the real time delicate processing of colour selection, edge detection, phase analysis, etc. The auxiliary system consists of the interface for image transferring, four IC refresh memories of 64x64 matrix with 10 bit count depth, a digital 20-in colour TV monitor, a control keyboard and a control panel with potentiometers. This system has five major functions for colour display: (1) A microcomputer board can select any one of 40 different colour tables preset in the colour transformation RAM. This key also provides edge detection at a certain level of the count by leaving the optional colour and setting the rest of the levels at 0 (black); (2) The arithmetic processing circuit performs the four fundamental arithmetic operations, permitting arithmetic processing of two images; (3) The colour level control circuit is operated independently by four potentiometers for four refresh image memories, so that the gain and offset of the colour level can be manually and visually controlled to the satisfaction of the operator; (4) The simultaneous CRT display of up to four images with or without cinematic motion is possible; (5) The real time movie interval is also adjustable by hardware, and certain frames can be frozen with overlapping of the dynamic frames. Since this system of AMIS is linked with the whole capacity image processing system of the CPU size of 128kW, etc., clinical applications are not limited to cardiovascular nuclear medicine. (author)

  4. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    Science.gov (United States)

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at the 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases, high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.
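
    The per-image reconstruction being timed here is essentially a 2-D inverse FFT of the acquired k-space matrix. A minimal NumPy sketch of that step (a stand-in for the paper's hand-optimized Windows 95 code, not a reproduction of it):

    ```python
    import time
    import numpy as np

    def reconstruct(kspace):
        """Magnitude image from a fully sampled Cartesian k-space matrix,
        via a single inverse 2-D FFT (the dominant cost in Fourier-encoded
        MRI reconstruction)."""
        return np.abs(np.fft.ifft2(kspace))

    def reconstruct_timed(kspace):
        """Reconstruct one frame and report the elapsed time in milliseconds,
        the figure a real-time display loop has to budget per frame."""
        t0 = time.perf_counter()
        img = reconstruct(kspace)
        return img, (time.perf_counter() - t0) * 1e3
    ```

    On any modern CPU a 128 x 128 frame completes in well under the paper's 48 ms, which is what makes a plain PC viable as the reconstruction and display engine.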

  5. Total spectral distributions from Hawking radiation

    Energy Technology Data Exchange (ETDEWEB)

    Broda, Boguslaw [University of Lodz, Department of Theoretical Physics, Faculty of Physics and Applied Informatics, Lodz (Poland)

    2017-11-15

    Taking into account the time dependence of the Hawking temperature and finite evaporation time of the black hole, the total spectral distributions of the radiant energy and of the number of particles have been explicitly calculated and compared to their temporary (initial) blackbody counterparts (spectral exitances). (orig.)

  6. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
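
    The scheduling policy described above — dispatch the most important/urgent ready task first — can be sketched with a tiny priority ready-queue. This is an illustration of the policy only; the actual kernel targets an AT89C51 in C/assembly, and all names here are invented:

    ```python
    import heapq

    class Scheduler:
        """Tiny priority scheduler: tasks are dispatched by (priority,
        arrival order), so the most important ready task always runs first.
        Lower priority number = more important/urgent."""
        def __init__(self):
            self._ready = []
            self._seq = 0    # tie-breaker preserving FIFO within a priority

        def make_ready(self, priority, task):
            heapq.heappush(self._ready, (priority, self._seq, task))
            self._seq += 1

        def run_next(self):
            """Pop and run the highest-priority ready task; False when idle."""
            if not self._ready:
                return False
            _, _, task = heapq.heappop(self._ready)
            task()
            return True
    ```

    Draining the queue runs tasks strictly in priority order regardless of the order they became ready, which is the "rational distribution by importance and urgency" the abstract refers to.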

  7. Cache timing attacks on recent microarchitectures

    DEFF Research Database (Denmark)

    Andreou, Alexandres; Bogdanov, Andrey; Tischhauser, Elmar Wolfgang

    2017-01-01

    Cache timing attacks have been known for a long time, however since the rise of cloud computing and shared hardware resources, such attacks found new potentially devastating applications. One prominent example is S$A (presented by Irazoqui et al at S&P 2015) which is a cache timing attack against...... AES or similar algorithms in virtualized environments. This paper applies variants of this cache timing attack to Intel's latest generation of microprocessors. It enables a spy-process to recover cryptographic keys, interacting with the victim processes only over TCP. The threat model is a logically...... separated but CPU co-located attacker with root privileges. We report successful and practically verified applications of this attack against a wide range of microarchitectures, from a two-core Nehalem processor (i5-650) to two-core Haswell (i7-4600M) and four-core Skylake processors (i7-6700). The attack...

  8. Total quality management in orthodontic practice.

    Science.gov (United States)

    Atta, A E

    1999-12-01

    Quality is the buzz word for the new Millennium. Patients demand it, and we must serve it. Yet one must identify it. Quality is not imaging or public relations; it is a business process. This short article presents quality as a balance of three critical notions: core clinical competence, perceived values that our patients seek and want, and the cost of quality. Customer satisfaction is a variable that must be identified for each practice. In my practice, patients perceive quality as communication and time, be it treatment or waiting time. Time is a value and a cost that must be managed effectively. Total quality management is a business function; it involves diagnosis, design, implementation, and measurement of the process, the people, and the service. Kaizen is a function that reduces non-value services, eliminates waste, and manages time and cost in the process. Total quality management is a total commitment to continuous improvement.

  9. Time related total lactic acid bacteria population diversity and ...

    African Journals Online (AJOL)

    user

    2011-02-07

    Feb 7, 2011 ... the diversity and dynamics of lactic acid bacteria (LAB) population in fresh ..... combining morphological, biochemical and molecular data are important for ..... acid bacteria from fermented maize (Kenkey) and their interactions.

  10. Exciting times: Towards a totally minimally invasive paediatric urology service

    OpenAIRE

    Lazarus, John

    2011-01-01

    Following on from the first paediatric laparoscopic nephrectomy in 1992, the growth of minimally invasive ablative and reconstructive procedures in paediatric urology has been dramatic. This article reviews the literature related to laparoscopic dismembered pyeloplasty, optimising posterior urethral valve ablation and intravesical laparoscopic ureteric reimplantation.

  11. Developing infrared array controller with software real time operating system

    Science.gov (United States)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for the controller of a large format array to reduce the dead time attributable to readout and data transfer. Real-time processing has previously been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, the dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control the infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with an RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to real-time processing. A digital I/O board with DMA functions is used as an I/O interface. The signal-processing cores are integrated in the OS kernel as a real-time driver module, which is composed of two virtual devices, the clock processor and frame processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at a low cost.

  12. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider the parameters characterizing both time (in the worst- and average-case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to make multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.
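
    The dynamic programming over sub-functions described here can be illustrated on tiny Boolean functions by exhaustive DP over restrictions. The sketch below is a toy, not the authors' tool: it computes the minimum depth, the minimum number of nodes, and whether one tree attains both at once (a totally optimal tree):

    ```python
    from functools import lru_cache

    def totally_optimal(f, n):
        """For Boolean function f (dict: n-tuple of 0/1 -> 0/1), return
        (min depth, min node count, exists a tree attaining both)."""
        INF = float("inf")

        def splits(cells):
            # candidate root variables: those that split the cells non-trivially
            for i in range(n):
                lo = frozenset(a for a in cells if a[i] == 0)
                hi = frozenset(a for a in cells if a[i] == 1)
                if lo and hi:
                    yield lo, hi

        @lru_cache(maxsize=None)
        def min_depth(cells):
            if len({f[a] for a in cells}) == 1:
                return 0                                   # a single leaf
            return 1 + min(max(min_depth(lo), min_depth(hi))
                           for lo, hi in splits(cells))

        @lru_cache(maxsize=None)
        def min_nodes(cells):
            if len({f[a] for a in cells}) == 1:
                return 1
            return 1 + min(min_nodes(lo) + min_nodes(hi)
                           for lo, hi in splits(cells))

        @lru_cache(maxsize=None)
        def nodes_within(cells, depth):
            # fewest nodes among trees of depth <= depth (multi-stage step)
            if len({f[a] for a in cells}) == 1:
                return 1
            if depth == 0:
                return INF
            return 1 + min(nodes_within(lo, depth - 1) +
                           nodes_within(hi, depth - 1)
                           for lo, hi in splits(cells))

        cells = frozenset(f)
        d, s = min_depth(cells), min_nodes(cells)
        return d, s, nodes_within(cells, d) == s
    ```

    The last step mirrors the paper's multi-stage optimization: first minimize depth, then minimize node count among depth-optimal trees, and compare with the unconstrained minimum node count.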

  13. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To receive real benefits of these observations, real-time analysis techniques to make an effective use of these data are necessary. A representative study was made by Tsushima et al. (2009) that proposed a method to provide instant tsunami source prediction based on achieving tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational amount is large to solve non-linear shallow water equations for inundation predictions, it has become executable through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids of which resolution range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. Total number of grid points were 13 million, and the time step was 0.1 seconds. Tsunami sources of 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
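
    The numerical core described above — a leap-frog finite difference update of the shallow water equations on staggered grids with a CFL-limited time step — can be sketched in one dimension for the linear case (a toy illustration; the actual simulation is non-linear, nested, and parallelized over thousands of cores):

    ```python
    import numpy as np

    def shallow_water_1d(eta0, depth, dx, g=9.81, steps=200, courant=0.5):
        """Leap-frog (forward-backward) update of the 1-D linear shallow
        water equations on a staggered grid: eta at cell centres, u at
        interior faces, closed boundaries, CFL-limited time step."""
        eta = eta0.copy()
        u = np.zeros(eta.size - 1)              # interior face velocities
        dt = courant * dx / np.sqrt(g * depth)  # CFL condition
        for _ in range(steps):
            u -= g * dt / dx * np.diff(eta)           # momentum update
            eta[1:-1] -= depth * dt / dx * np.diff(u) # continuity update
            eta[0] -= depth * dt / dx * u[0]          # closed left boundary
            eta[-1] += depth * dt / dx * u[-1]        # closed right boundary
        return eta
    ```

    With closed boundaries the scheme conserves total water volume exactly (the divergence terms telescope), which is a convenient sanity check on any refactoring of the update loop.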

  14. Comparison of Serum Concentrations of Total Cholesterol and Total ...

    African Journals Online (AJOL)

    Tuberculosis (TB) is one of the most dangerous tropical diseases that complicates HIV infection in Nigeria to date. Over two million Nigerians are known to be infected with TB and many more are at risk of the infection. Serum concentrations of total cholesterol and total lipid of 117 female TB patients attending chest clinic at ...

  15. Changes in total and differential white cell counts, total lymphocyte ...

    African Journals Online (AJOL)

    Background: Published reports on the possible changes in the various immune cell populations, especially the total lymphocyte and CD4 cell counts, during the menstrual cycle in Nigerian female subjects are relatively scarce. Aim: To determine possible changes in the total and differential white blood cell [WBC] counts, ...

  16. Time Stamp Synchronization of PEFP Distributed Control Systems

    International Nuclear Information System (INIS)

    Song, Young Gi; An, Eun Mi; Kwon, Hyeok Jung; Cho, Yong Sub

    2010-01-01

    The Proton Engineering Frontier Project (PEFP) proton linac consists of several types of control systems, such as soft Input Output Controllers (IOCs) and embedded IOCs based on the Experimental Physics and Industrial Control System (EPICS), for each subsection of the PEFP facility. One of the important factors is that the IOCs' clocks are synchronized. Synchronized time and time stamps can be achieved with the Network Time Protocol (NTP) and EPICS time stamp records without timing hardware. The required time accuracy of the IOCs is less than 1 second. The main objective of this study is to configure a master clock and produce Process Variable (PV) time stamps using local CPU time synchronized from the master clock. The distributed control systems are attached to the PEFP control network
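
    NTP derives a client clock's offset from the master using four timestamps captured on a single request/response exchange; this standard arithmetic (RFC 5905) is what keeps each IOC within a sub-second requirement without timing hardware:

    ```python
    def ntp_offset_delay(t1, t2, t3, t4):
        """Clock offset and round-trip delay from one NTP exchange:
        t1 = client transmit, t2 = server receive,
        t3 = server transmit, t4 = client receive (all in seconds).
        Assumes symmetric network paths, as NTP itself does."""
        offset = ((t2 - t1) + (t3 - t4)) / 2.0
        delay = (t4 - t1) - (t3 - t2)
        return offset, delay
    ```

    For example, a server 100 s ahead with 5 s of one-way delay each direction gives `ntp_offset_delay(0.0, 105.0, 106.0, 11.0)` → `(100.0, 10.0)`; ntpd then slews the local clock by the filtered offset, and the EPICS time stamp record simply reads the corrected local CPU time.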

  17. Significantly reducing registration time in IGRT using graphics processing units

    DEFF Research Database (Denmark)

    Noe, Karsten Østergaard; Denis de Senneville, Baudouin; Tanderup, Kari

    2008-01-01

    respiration phases in a free breathing volunteer and 41 anatomical landmark points in each image series. The registration method used is a multi-resolution GPU implementation of the 3D Horn and Schunck algorithm. It is based on the CUDA framework from Nvidia. Results On an Intel Core 2 CPU at 2.4GHz each...... registration took 30 minutes. On an Nvidia Geforce 8800GTX GPU in the same machine this registration took 37 seconds, making the GPU version 48.7 times faster. The nine image series of different respiration phases were registered to the same reference image (full inhale). Accuracy was evaluated on landmark...
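The quoted speedup is just the ratio of the two wall times; 30 min / 37 s ≈ 48.6, consistent with the reported 48.7× (which presumably comes from unrounded measurements):

```python
def speedup(cpu_seconds, gpu_seconds):
    """Ratio of CPU wall time to GPU wall time."""
    return cpu_seconds / gpu_seconds

s = speedup(30 * 60, 37)   # 30 minutes on the CPU vs 37 seconds on the GPU
```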

  18. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max

    2016-11-25

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
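The payoff of multilevel LTS can be made concrete with two small helpers (hypothetical names, a sketch rather than the authors' scheme): the CFL bound ties the stable explicit step to the element size, and assigning each element to a level that takes p**k sub-steps per global step avoids shrinking the global step by the full element-size contrast.

```python
import math

def cfl_dt(h, c, cfl=0.5):
    """Largest stable explicit time step for an element of size h and wave
    speed c, with a CFL safety factor (illustrative value 0.5)."""
    return cfl * h / c

def lts_level(h, h_max, p=2):
    """Multilevel-LTS level: an element of size ~h_max / p**k is placed on
    level k and takes p**k sub-steps per global step."""
    return max(0, math.ceil(math.log(h_max / h, p)))

# a 100x element-size contrast needs only ceil(log2(100)) = 7 levels locally,
# i.e. 2**7 = 128 sub-steps on the smallest elements, instead of forcing the
# global time step down by a factor of 100 everywhere in the mesh
k = lts_level(1.0, 100.0)
```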

  19. Newmark local time stepping on high-performance computing architectures

    Energy Technology Data Exchange (ETDEWEB)

    Rietmann, Max, E-mail: max.rietmann@erdw.ethz.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Grote, Marcus, E-mail: marcus.grote@unibas.ch [Department of Mathematics and Computer Science, University of Basel (Switzerland); Peter, Daniel, E-mail: daniel.peter@kaust.edu.sa [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland); Institute of Geophysics, ETH Zurich (Switzerland); Schenk, Olaf, E-mail: olaf.schenk@usi.ch [Institute for Computational Science, Università della Svizzera italiana, Lugano (Switzerland)

    2017-04-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  20. Newmark local time stepping on high-performance computing architectures

    KAUST Repository

    Rietmann, Max; Grote, Marcus; Peter, Daniel; Schenk, Olaf

    2016-01-01

    In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100×). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.

  1. Total hip arthroplasty in Denmark

    DEFF Research Database (Denmark)

    Pedersen, Alma Becic; Johnsen, Søren Paaske; Overgaard, Søren

    2005-01-01

    The annual number of total hip arthroplasties (THA) has increased in Denmark over the past 15 years. There is, however, limited detailed data available on the incidence of THAs.

  2. Congruences of totally geodesic surfaces

    International Nuclear Information System (INIS)

    Plebanski, J.F.; Rozga, K.

    1989-01-01

    A general theory of congruences of totally geodesic surfaces is presented. In particular their classification, based on the properties of induced affine connections, is provided. In the four-dimensional case canonical forms of the metric tensor admitting congruences of two-dimensional totally geodesic surfaces of rank one are given. Finally, congruences of two-dimensional extremal surfaces are studied. (author)

  3. Integration of MDSplus in real-time systems

    International Nuclear Information System (INIS)

    Luchetta, A.; Manduchi, G.; Taliercio, C.

    2006-01-01

    RFX-mod makes extensive use of real-time systems for feedback control and uses MDSplus to interface them to the main Data Acquisition system. For this purpose, the core of MDSplus has been ported to VxWorks, the operating system used for real-time control in RFX. Using this approach, it is possible to integrate real-time systems, but MDSplus is used only for non-real-time tasks, i.e. those tasks which are executed before and after the pulse and whose performance does not affect the system time constraints. More extensive use of MDSplus in real-time systems is foreseen, and a real-time layer for MDSplus is under development, which will provide access to memory-mapped pulse files, shared by the tasks running on the same CPU. Real-time communication will also be integrated in the MDSplus core to provide support for distributed memory-mapped pulse files
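The memory-mapped-pulse-file idea — several tasks on one CPU sharing data through a mapping instead of copying it through I/O calls — can be sketched with Python's standard `mmap` module. This is not the MDSplus API; the file name and layout are hypothetical.

```python
import mmap
import os
import struct

path = "pulse_demo.dat"            # hypothetical pulse-file name
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)        # pre-size the file so it can be mapped

# "writer" task: store one float64 sample directly through the mapping
with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)
    buf[0:8] = struct.pack("<d", 3.14)
    buf.flush()
    buf.close()

# "reader" task on the same machine sees the sample through its own mapping,
# without any explicit read() of the file contents
with open(path, "rb") as f:
    view = mmap.mmap(f.fileno(), 4096, access=mmap.ACCESS_READ)
    (sample,) = struct.unpack("<d", view[0:8])
    view.close()
os.remove(path)
```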

  4. Management of hypocalcemia following total thyroidectomy

    International Nuclear Information System (INIS)

    Pahuja, D.N.; Patwardhan, U.N.; Samuel, A.M.

    1999-01-01

    A retrospective analysis of the calcemic status of 500 randomly selected patients who underwent total thyroidectomy (TTx) for differentiated thyroid carcinoma (DTC) was performed. These patients were followed up for a minimum of 2-3 years and a maximum of 15-20 years, and calcemic status was ascertained at varying times following their surgery and radioiodine (131I) therapy

  5. Total Quality Management in Libraries: A Sourcebook.

    Science.gov (United States)

    O'Neil, Rosanna M., Comp.

    Total Quality Management (TQM) brings together the best aspects of organizational excellence by driving out fear, offering customer-driven products and services, doing it right the first time by eliminating error, and maintaining inventory control without waste. Libraries are service organizations which are constantly trying to improve service.…

  6. [Contents of total flavonoids in Rhizoma Arisaematis].

    Science.gov (United States)

    Du, S S; Lin, H Y; Zhou, Y X; Wei, L X

    2001-06-01

    The contents of total flavonoids of Rhizoma Arisaematis collected at different times, from different regions, of different varieties, and with different processing were compared. The contents were determined by ultraviolet spectrophotometry and were found in the following order: 1. the end of July, the beginning of July, August, September; 2. Beijing, Shanxi, Sichuan, Anhui; 3. Arisaema erubenscens, A. heterophyllum, A. amurense; 4. unprocessed product, processed product.

  7. LAMBDA p total cross-section measurement

    CERN Multimedia

    CERN PhotoLab

    1970-01-01

    A view of the apparatus used for the LAMBDA p total cross-section measurement at the time of its installation. The hyperons decaying into a proton and a pion in the conical tank in front were detected in the magnet spectrometer in the upper half of the picture. A novel detection technique using exclusively multiwire proportional chambers was employed.

  8. Totality eclipses of the Sun

    CERN Document Server

    Littmann, Mark; Willcox, Ken

    2008-01-01

    A total eclipse of the Sun is the most awesome sight in the heavens. Totality: Eclipses of the Sun takes you to eclipses of the past, present, and future, and lets you see - and feel - why people travel to the ends of the Earth to observe them. Totality: Eclipses of the Sun is the best guide and reference book on solar eclipses ever written. It explains: how to observe them; how to photograph and videotape them; why they occur; their history and mythology; and future eclipses - when and where to see them. Totality also tells the remarkable story of how eclipses shocked scientists, revealed the workings of the Sun, and made Einstein famous. And the book shares the experiences and advice of many veteran eclipse observers. Totality: Eclipses of the Sun is profusely ill...

  9. Total Product Life Cycle (TPLC)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Total Product Life Cycle (TPLC) database integrates premarket and postmarket data about medical devices. It includes information pulled from CDRH databases...

  10. Nutritional management after total laryngectomy

    African Journals Online (AJOL)

    28 September 2010 with a known diagnosis of cancer of the larynx. The patient, who underwent a total laryngectomy on 13 October, had a tracheostomy inserted .... status, leading to improved quality of life and better response to treatment.4.

  11. Transmandibular approach to total maxillectomy

    OpenAIRE

    Tiwari, R. M.

    2001-01-01

    Total maxillectomy through a transfacial approach has been practiced in the treatment of cancer for more than a decade. Its role in T3-T4 tumors extending posteriorly through the bony wall is questionable, since an oncologically radical procedure is often not possible. Recurrences in the infratemporal fossa are common. Despite the addition of radiotherapy, five-year survival has not significantly improved. The transmandibular approach to total maxillectomy overcomes this shortcoming by including ...

  12. Leadership and Total Quality Management

    Science.gov (United States)

    1992-04-15

    leadership and management skills yields increased productivity. This paper will focus on the skills required of senior level leaders (leaders at the...publication until it has been cleared by the appropriate military service or government agency. Leadership and Total Quality Management: An Individual Study... Abstract AUTHOR: Harry D. Gatanas, LTC, USA TITLE: Leadership and Total Quality Management FORMAT: Individual

  13. VERSE - Virtual Equivalent Real-time Simulation

    Science.gov (United States)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities - combined with a limited amount of computational resources - calls for a non real-time simulation environment that mimics the real-time environment. By creating a non real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint together with use of the same API allows users to easily run the same application in both real-time and virtual time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.
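The core idea behind such virtual-time simulators — replace wall-clock waits by jumping the clock straight to the next pending event — can be sketched in a few lines. This is a generic discrete-event scheduler, not the VERSE/RTAI implementation:

```python
import heapq

class VirtualClock:
    """Event-driven scheduler sketch: instead of sleeping in wall-clock time,
    'now' jumps directly to the timestamp of the next pending event."""

    def __init__(self):
        self.now, self._q, self._seq = 0.0, [], 0

    def call_at(self, t, fn):
        # the sequence number breaks timestamp ties in insertion order
        heapq.heappush(self._q, (t, self._seq, fn))
        self._seq += 1

    def run(self):
        while self._q:
            t, _, fn = heapq.heappop(self._q)
            self.now = t   # advance virtual time; no real waiting occurs
            fn()

clk, fired = VirtualClock(), []
clk.call_at(5.0, lambda: fired.append("B"))
clk.call_at(0.5, lambda: fired.append("A"))
clk.run()   # returns immediately; events run in timestamp order
```

Five seconds of simulated time complete in microseconds of real time, which is what lets a non real-time host reproduce the scheduling behavior of a real-time target.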

  14. Cataract incidence after total-body irradiation

    International Nuclear Information System (INIS)

    Zierhut, D.; Lohr, F.; Schraube, P.; Huber, P.; Haas, R.; Hunstein, W.; Wannenmacher, M.

    1997-01-01

    Purpose: Aim of this retrospective study was to evaluate cataract incidence in a homogeneous group of patients after total-body irradiation followed by autologous bone marrow transplantation or peripheral blood stem cell transplantation. Method and Materials: Between 11/1982 and 6/1994 in total 260 patients received in our hospital total-body irradiation for treatment of haematological malignancy. In 1996-96 patients out of these 260 patients were still alive. 85 from these still living patients (52 men, 33 women) answered evaluable on a questionnaire and could be examined ophthalmologically. Median age of these patients was 38,5 years (15 - 59 years) at time of total-body irradiation. Radiotherapy was applied as hyperfractionated total-body irradiation with a median dose of 14,4 Gy in 12 fractions over 4 days. Minimum time between fractions was 4 hours, photons with a energy of 23 MeV were used, and the dose rate was 7 - 18 cGy/min. Results: Median follow-up is now 5,8 years (1,7 - 13 years). Cataract occurred in (28(85)) patients after a median time of 47 months (1 - 104 months). In 6 out of these 28 patients who developed a cataract, surgery of the cataract was performed. Whole-brain irradiation prior to total-body irradiation was more often in the group of patients developing a cataract (14,3%) vs. 10,7% in the group of patients without cataract. Conclusion: Cataract is a common side effect of total-body irradiation. Cataract incidence found in our patients is comparable to results of other centres using a fractionated regimen for total-body irradiation. The hyperfractionated regimen used in our hospital does obviously not result in a even lower cataract incidence. In contrast to acute and late toxicity in other organ/organsystems, hyperfractionation of total-body irradiation does not further reduce toxicity for the eye-lens. Dose rate may have more influence on cataract incidence

  15. Determination of total solutes in synfuel wastewaters

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, J.R.; Bonomo, F.S.

    1984-03-01

    Efforts to investigate both lyophilization and the measurement of colligative properties as an indication of total solute content are described. The objective of the work described is to develop a method for measuring total dissolved material in retort wastewaters which is simple and rugged enough to be performed in a field laboratory in support of pollution control tests. The analysis should also be rapid enough to provide timely and pertinent data to the pollution control plant operator. To be of most value, the technique developed also should be applicable to other synfuel wastewaters, most of which contain similar major components as oil shale retort waters. 4 references, 1 table.

  16. Total body water and total body potassium in anorexia nervosa

    Energy Technology Data Exchange (ETDEWEB)

    Dempsey, D.T.; Crosby, L.O.; Lusk, E.; Oberlander, J.L.; Pertschuk, M.J.; Mullen, J.L.

    1984-08-01

    In the ill hospitalized patient with clinically relevant malnutrition, there is a measurable decrease in the ratio of the total body potassium to total body water (TBK/TBW) and a detectable increase in the ratio of total exchangeable sodium to total exchangeable potassium (Nae/Ke). To evaluate body composition analyses in anorexia nervosa patients with chronic uncomplicated semistarvation, TBK and TBW were measured by whole body K40 counting and deuterium oxide dilution in 10 females with stable anorexia nervosa and 10 age-matched female controls. The ratio of TBK/TBW was significantly (p less than 0.05) higher in anorexia nervosa patients than controls. The close inverse correlation found in published studies between TBK/TBW and Nae/Ke together with our results suggest that in anorexia nervosa, Nae/Ke may be low or normal. A decreased TBK/TBW is not a good indicator of malnutrition in the anorexia nervosa patient. The use of a decreased TBK/TBW ratio or an elevated Nae/Ke ratio as a definition of malnutrition may result in inappropriate nutritional management in the patient with severe nonstressed chronic semistarvation.

  17. Total body water and total body potassium in anorexia nervosa

    International Nuclear Information System (INIS)

    Dempsey, D.T.; Crosby, L.O.; Lusk, E.; Oberlander, J.L.; Pertschuk, M.J.; Mullen, J.L.

    1984-01-01

    In the ill hospitalized patient with clinically relevant malnutrition, there is a measurable decrease in the ratio of the total body potassium to total body water (TBK/TBW) and a detectable increase in the ratio of total exchangeable sodium to total exchangeable potassium (Nae/Ke). To evaluate body composition analyses in anorexia nervosa patients with chronic uncomplicated semistarvation, TBK and TBW were measured by whole body K40 counting and deuterium oxide dilution in 10 females with stable anorexia nervosa and 10 age-matched female controls. The ratio of TBK/TBW was significantly (p less than 0.05) higher in anorexia nervosa patients than controls. The close inverse correlation found in published studies between TBK/TBW and Nae/Ke together with our results suggest that in anorexia nervosa, Nae/Ke may be low or normal. A decreased TBK/TBW is not a good indicator of malnutrition in the anorexia nervosa patient. The use of a decreased TBK/TBW ratio or an elevated Nae/Ke ratio as a definition of malnutrition may result in inappropriate nutritional management in the patient with severe nonstressed chronic semistarvation

  18. Total 2004 annual report; TOTAL 2004 rapport annuel

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This annual report of the Total Group presents information and economic data for the year 2004 on the following topics: corporate governance, corporate social responsibility, the shareholder notebook, the management report, activities, upstream (exploration and production) and downstream (refining and marketing) operations, chemicals and other matters. (A.L.B.)

  19. Stochastic first passage time accelerated with CUDA

    Science.gov (United States)

    Pierro, Vincenzo; Troiano, Luigi; Mejuto, Elena; Filatrella, Giovanni

    2018-05-01

    The time to pass a threshold, estimated by numerical integration of stochastic trajectories, is an interesting physical quantity, for instance in Josephson junctions and atomic force microscopy, where the full trajectory is not accessible. We propose an algorithm suitable for efficient implementation on graphics processing units in the CUDA environment. For well-balanced loads the proposed approach achieves almost perfect scaling with the number of available threads and processors, and allows an acceleration of about 400× with a GTX980 GPU with respect to a standard multicore CPU. This method allows off-the-shelf GPUs to tackle problems that are otherwise prohibitive, such as thermal activation in slowly tilted potentials. In particular, we demonstrate that it is possible to simulate the switching-current distributions of Josephson junctions on the timescale of actual experiments.

  20. Total Quality at ICESI (Calidad total en el ICESI)

    OpenAIRE

    González Zamora, José Hipólito

    2010-01-01

    First of all, I wish to welcome to this meeting the engineer Francisco Gensini, executive director of INCOLDA, who has been a constant source of inspiration for the work on Total Quality Control (TQC) that has been carried out at ICESI. Engineer Gensini has managed to gather around INCOLDA the group of six companies in the region that lead the study and application of the principles of Total Quality Control, namely Rica Rondo S.A., Banco de Oc...

  1. Superior total overdenture on implants (Sobredentadura total superior implantosoportada)

    Directory of Open Access Journals (Sweden)

    Luis Orlando Rodríguez García

    2010-06-01

    Full Text Available We present the case of a patient with a totally edentulous upper jaw, rehabilitated in 2009 at the implantology service of the "Pedro Ortiz" Clinic, Habana del Este municipality, Havana, Cuba, with a prosthesis on osseointegrated implants, a technique that has been incorporated into stomatological practice in Cuba as an alternative to conventional treatment for totally edentulous patients. A protocol was followed that comprised a surgical phase, with or without flap creation, and early or immediate loading. The patient, a 56-year-old man, came to the multidisciplinary consultation worried because three prostheses had been made for him over the previous two years and none met the retention requirements he needed to feel safe and comfortable with them. The final result was the patient's total satisfaction, with improved aesthetic and functional quality.

  2. Total phenolics and total flavonoids in selected Indian medicinal plants.

    Science.gov (United States)

    Sulaiman, C T; Balachandran, Indira

    2012-05-01

    Plant phenolics and flavonoids have powerful biological activity, which underlines the need for their determination. The phenolic and flavonoid contents of 20 medicinal plants were determined in the present investigation. The phenolic content was determined using the Folin-Ciocalteu assay. Total flavonoids were measured spectrophotometrically using the aluminium chloride colorimetric assay. The results showed that the family Mimosaceae is the richest source of phenolics (Acacia nilotica: 80.63 mg gallic acid equivalents, Acacia catechu: 78.12 mg gallic acid equivalents, Albizia lebbeck: 66.23 mg gallic acid equivalents). The highest total flavonoid content was found in Senna tora, which belongs to the family Caesalpiniaceae. The present study also reports the ratio of flavonoids to phenolics in each sample.

  3. Total Synthesis of Adunctin B.

    Science.gov (United States)

    Dethe, Dattatraya H; Dherange, Balu D

    2018-03-16

    Total synthesis of (±)-adunctin B, a natural product isolated from Piper aduncum (Piperaceae), has been achieved using two different strategies, in seven and three steps. The efficient approach features highly atom economical and diastereoselective Friedel-Crafts acylation, alkylation reaction and palladium catalyzed Wacker type oxidative cyclization.

  4. Edge colouring by total labellings

    DEFF Research Database (Denmark)

    Brandt, Stephan; Rautenbach, D.; Stiebitz, M.

    2010-01-01

    We introduce the concept of an edge-colouring total k-labelling. This is a labelling of the vertices and the edges of a graph G with labels 1, 2, ..., k such that the weights of the edges define a proper edge colouring of G. Here the weight of an edge is the sum of its label and the labels of its...

  5. What is Total Quality Management?

    Science.gov (United States)

    Bryan, William A.

    1996-01-01

    Provides a general overview of Total Quality Management (TQM) and explains why there is pressure for change in higher education institutions. Defines TQM and the various themes, tools, and beliefs that make it different from other management approaches. Presents 14 principles and how they might be applied to student affairs. (RJM)

  6. A totally diverting loop colostomy.

    Science.gov (United States)

    Merrett, N. D.; Gartell, P. C.

    1993-01-01

    A technique is described where the distal limb of a loop colostomy is tied with nylon or polydioxanone. This ensures total faecal diversion and dispenses with the supporting rod, enabling early application of stoma appliances. The technique does not interfere with the traditional transverse closure of a loop colostomy. PMID:8379632

  7. A generalization of total graphs

    Indian Academy of Sciences (India)

    M Afkhami

    2018-04-12

    Apr 12, 2018 ... product of any lower triangular matrix with the transpose of any element of U belongs to U. The ... total graph of R, which is denoted by T( (R)), is a simple graph with all elements of R as vertices, and ...... [9] Badawi A, On dot-product graph of a commutative ring, Communications in Algebra 43 (2015). 43–50.

  8. Total synthesis of nepetoidin B

    Science.gov (United States)

    The total synthesis of nepetoidin B (the 2-(3,4-dihydroxyphenyl)ethenyl ester of 3-(3,4-dihydroxyphenyl)-2-propenoic acid) has been achieved in two steps from commercially available 1,5-bis(3,4-dimethoxyphenyl)-1,4-pentadien-3-one. Tetramethylated nepetoidin B was prepared directly by Baeyer-Villig...

  9. The "Total Immersion" Meeting Environment.

    Science.gov (United States)

    Finkel, Coleman

    1980-01-01

    The designing of intelligently planned meeting facilities can aid management communication and learning. The author examines the psychology of meeting attendance; architectural considerations (lighting, windows, color, etc.); design elements and learning modes (furniture, walls, audiovisuals, materials); and the idea of "total immersion meeting…

  10. First total synthesis of Boehmenan

    Indian Academy of Sciences (India)

    The first total synthesis of the dilignan boehmenan has been achieved. A biomimetic oxidative coupling of ferulic acid methyl ester in the presence of silver oxide is the crucial step in the synthesis sequence, generating the dihydrobenzofuran skeleton. The hydroxyl group was protected with DHP and reduced with LiAlH4 to ...

  11. GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method

    Science.gov (United States)

    Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae

    2016-01-01

    We demonstrated GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800, and more than 149 times faster than a single core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection. PMID:28018724
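The analog mean-delay principle can be illustrated with a toy waveform: for an exponential decay, the intensity-weighted mean arrival time exceeds the excitation instant by exactly the lifetime, so a single weighted average recovers tau without curve fitting. A synthetic sketch (not the authors' pipeline; ideal noiseless decay, delta-like excitation assumed):

```python
import numpy as np

def mean_delay(signal, t):
    """Intensity-weighted mean arrival time of a waveform (the 'analog mean delay')."""
    return np.sum(t * signal) / np.sum(signal)

# synthetic fluorescence decay: lifetime tau, excitation at t0
t = np.linspace(0.0, 50e-9, 5001)              # 50 ns window, 10 ps sampling
tau, t0 = 3e-9, 5e-9
sig = np.where(t >= t0, np.exp(-(t - t0) / tau), 0.0)

# the mean of an exponential starting at t0 is t0 + tau, so subtracting the
# excitation instant yields the lifetime directly
est_tau = mean_delay(sig, t) - t0
```

In practice the excitation instant is obtained from the instrument response function's own mean delay, and the subtraction of the two mean delays is what makes the method cheap enough for real-time GPU evaluation.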

  12. MODIFIED TECHNIQUE OF TOTAL LARYNGECTOMY

    Directory of Open Access Journals (Sweden)

    Predrag Spirić

    2010-12-01

    Full Text Available The surgical technique of total laryngectomy is well presented in many surgical textbooks. Essentially, it has remained the same since Gluck and Soerensen described all its details in 1922. Generally, it involves a U-shaped skin incision, with laryngeal structures released and the larynx removed from above downwards. Pharyngeal reconstruction is then performed with various kinds of sutures in two or more layers and is finished with skin suture and suction drainage. One of the worst complications following this surgery is pharyngocutaneous fistula (PF). The modifications proposed in this article use a vertical skin incision with removal of the larynx from below upwards. In pharyngeal reconstruction we used a running locked suture in the submucosal plane with a "tobacco sac" (purse-string) at the end on the tongue base instead of the traditional T-shaped suture. Suction drains were not used. The aim of the study was to present the modified surgical technique of total laryngectomy and its impact on the duration of hospital stay and on pharyngocutaneous fistula formation. In this randomized study we analyzed 49 patients operated on with the modified surgical technique compared to 49 patients operated on with the traditional surgical technique of total laryngectomy. Using the modified technique we reduced the PF rate from the previous 20.41% to an acceptable 8.16% (p = 0.0334), and the average hospital stay was shortened from 14.96 to 10.63 days (t = -2.9850; p = 0.0358). The modified technique of total laryngectomy is a safe, short, and efficient surgical intervention which decreases the number of pharyngocutaneous fistulas and shortens the hospital stay.

  13. 10 Management Controller for Time and Space Partitioning Architectures

    Science.gov (United States)

    Lachaize, Jerome; Deredempt, Marie-Helene; Galizzi, Julien

    2015-09-01

    Integrated Modular Avionics (IMA) has been industrialized in the aeronautical domain to enable the independent qualification of different application software items from different suppliers on the same generic computer, this computer being a single terminal in a deterministic network. This concept makes it possible to distribute the different applications efficiently and transparently across the network, accurately sizing the hardware equipment to embed on the aircraft through the configuration of the virtual computers and the virtual network. The concept has been studied for the space domain and requirements have been issued [D04],[D05]. Experiments in the space domain have been carried out at the computer level through ESA and CNES initiatives [D02][D03]. One possible IMA implementation may use Time and Space Partitioning (TSP) technology. Studies on Time and Space Partitioning [D02] for controlling access to resources such as the CPU and memories, and studies on hardware/software interface standardization [D01], showed that for space-domain technologies where I/O components (or IPs) do not cover advanced features such as buffering, descriptors, or virtualization, the CPU performance overhead in the execution platform is mainly due to shared-interface management and to the high frequency of I/O accesses, the latter leading to a large number of context switches. This paper will present a solution to reduce this execution overhead with an open, modular and configurable controller.
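Time partitioning of the CPU in such systems is typically a static cyclic schedule: each partition owns the processor for a fixed window inside a repeating major frame. A minimal sketch with a hypothetical three-partition schedule (ARINC 653-style; the names, frame length, and budgets are invented for illustration):

```python
# Hypothetical static cyclic schedule: (name, offset_ms, budget_ms) windows
# inside a repeating 100 ms major frame.
MAJOR_FRAME_MS = 100
PARTITIONS = [("APP_A", 0, 40), ("IO_MGR", 40, 20), ("APP_B", 60, 40)]

def active_partition(t_ms):
    """Return the partition owning the CPU at time t_ms (milliseconds)."""
    t = t_ms % MAJOR_FRAME_MS       # position within the current major frame
    for name, offset, budget in PARTITIONS:
        if offset <= t < offset + budget:
            return name
    return None                     # unscheduled slack, if the windows don't cover the frame
```

Because the schedule is fixed offline, each partition's worst-case CPU share is known at qualification time, which is what enables the independent qualification the record describes.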

  14. Total Quality Management. A Selected Bibliography

    Science.gov (United States)

    1994-03-01

    and Publow, Mark. "Understanding and Managing Authority Relationships: Guidelines for Supervisors and Subordinates." QUALITY PROGRESS, Vol. 25... NEXT OPERATION AS CUSTOMER (NOAC): HOW TO IMPROVE QUALITY, COST AND CYCLE TIME IN SERVICE OPERATIONS. New York: American Management Association, 1991... Keith. HORIZONTAL MANAGEMENT: BEYOND TOTAL CUSTOMER SATISFACTION. New York: Lexington Books, 1991. 211pp. (HD66 D45 1991) Donnelly, James H., Jr. CLOSE

  15. NASA total quality management 1990 accomplishments report

    Science.gov (United States)

    1991-01-01

    NASA's efforts in Total Quality Management are based on continuous improvement and serve as a foundation for NASA's present and future endeavors. Given here are numerous examples of quality strategies that have proven effective and efficient in a time when cost reduction is critical. These accomplishments benefit our Agency and help to achieve our primary goal of keeping America at the forefront of the aerospace industry.

  16. Subtotal versus total abdominal hysterectomy

    DEFF Research Database (Denmark)

    Andersen, Lea Laird; Ottesen, Bent; Alling Møller, Lars Mikael

    2015-01-01

    OBJECTIVE: The objective of the study was to compare long-term results of subtotal vs total abdominal hysterectomy for benign uterine diseases 14 years after hysterectomy, with urinary incontinence as the primary outcome measure. STUDY DESIGN: This was a long-term follow-up of a multicenter…, randomized clinical trial without blinding. Eleven gynecological departments in Denmark contributed participants to the trial. Women referred for benign uterine diseases who did not have contraindications to subtotal abdominal hysterectomy were randomized to subtotal (n = 161) vs total (n = 158) abdominal… from discharge summaries from all public hospitals in Denmark. The results were analyzed as intention to treat and per protocol. Possible bias caused by missing data was handled by multiple imputation. The primary outcome was urinary incontinence; the secondary outcomes were pelvic organ prolapse…

  17. Institutional total energy case studies

    Energy Technology Data Exchange (ETDEWEB)

    Wulfinghoff, D.

    1979-07-01

    Profiles of three total energy systems in institutional settings are provided in this report. The plants are those of Franciscan Hospital, a 384-bed facility in Rock Island, Illinois; Franklin Foundation Hospital, a 100-bed hospital in Franklin, Louisiana; and the North American Air Defense Command Cheyenne Mountain Complex, a military installation near Colorado Springs, Colorado. The case studies include descriptions of plant components and configurations, operation and maintenance procedures, reliability, relationships to public utilities, staffing, economic efficiency, and factors contributing to success.

  18. Total synthesis of (-)- and (+)-tedanalactam

    Digital Repository Service at National Institute of Oceanography (India)

    Majik, M.S.; Parameswaran, P.S.; Tilve, S.G.

    : The Journal of Organic Chemistry, vol. 74(16); 6378-6381. Total Synthesis of (-)- and (+)-Tedanalactam. Mahesh S. Majik, Peruninakulath S. Parameswaran, and Santosh G. Tilve. Department of Chemistry, Goa University, Taleigao Plateau, Goa 403..., displaying a wide range of biological activities. Piperidones are key synthetic intermediates for the synthesis of the piperidine ring due to the presence of the keto function, which allows the introduction of other groups. Piperidones are also known...

  19. Intrathoracic Hernia after Total Gastrectomy

    Directory of Open Access Journals (Sweden)

    Yoshihiko Tashiro

    2016-05-01

    Full Text Available Intrathoracic hernias after total gastrectomy are rare. We report the case of a 78-year-old man who underwent total gastrectomy with antecolic Roux-Y reconstruction for residual gastric cancer. He had alcoholic liver cirrhosis and had received radical laparoscopic proximal gastrectomy for gastric cancer 3 years earlier. Early gastric cancer in the remnant stomach was found by routine upper gastrointestinal endoscopy. We initially performed endoscopic submucosal dissection, but the vertical margin was positive on pathological examination. We performed total gastrectomy with antecolic Roux-Y reconstruction by laparotomy. Owing to adhesions at the esophageal hiatus, the left chest was connected with the abdominal cavity. The pleural defect was not repaired. Two days after the operation, the patient was suspected of having an intrathoracic hernia on chest X-rays. Computed tomography showed that the transverse colon and Roux limb were incarcerated in the left thoracic cavity. He was diagnosed with intrathoracic hernia, and emergency reduction and repair were performed. Operative findings showed that the Roux limb and transverse colon were incarcerated in the thoracic cavity. After reduction, the orifice of the hernia was closed by suturing the crus of the diaphragm to the ligament of the jejunum and omentum. After the second operation, he experienced anastomotic leakage and left pyothorax. The anastomotic leakage improved with conservative therapy and he was discharged 76 days after the second operation.

  20. NIF total neutron yield diagnostic

    International Nuclear Information System (INIS)

    Cooper, Gary W.; Ruiz, Carlos L.

    2001-01-01

    We have designed a total neutron yield diagnostic for the National Ignition Facility (NIF) which is based on the activation of In and Cu samples. The particular approach that we have chosen is one in which we calibrate the entire counting system and which we call the ''F factor'' method. In this method, In and/or Cu samples are exposed to known sources of DD and DT neutrons. The activated samples are then counted with an appropriate system: a high-purity Ge detector for In and a NaI coincidence system for Cu. We can then calculate a calibration factor, which relates measured activity to total neutron yield. The advantage of this approach is that specific knowledge of such quantities as cross sections and detector efficiencies is not needed. Unless the actual scattering environment of the NIF can be mocked up in the calibration experiment, the F factor will have to be modified using the results of a numerical simulation of the NIF scattering environment. In this article, the calibration factor methodology is discussed and experimental results for the calibration factors are presented. Total NIF neutron yields of 10^9--10^19 can be measured with this method, assuming a 50 cm stand-off distance can be employed for the lower yields.
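
    The F-factor method described above reduces to a single proportionality: a calibration shot with a known neutron source fixes F = (known yield) / (measured activity), after which any later activity measurement in the same geometry gives yield = F × activity. A minimal illustrative sketch (all numeric values are hypothetical assumptions, not measured NIF calibration data):

    ```python
    # Hypothetical sketch of the "F factor" calibration method.
    # Numbers are illustrative only, not real calibration results.

    def f_factor(known_yield, measured_activity):
        """Calibration factor relating measured sample activity to total neutron yield."""
        return known_yield / measured_activity

    def total_yield(f, measured_activity):
        """Infer total neutron yield from a new activity measurement, given F."""
        return f * measured_activity

    # Calibration shot: assume a known DT source of 1e12 neutrons produces
    # an activity of 2.5e4 counts/s in the In sample.
    f = f_factor(1.0e12, 2.5e4)   # F = 4e7 neutrons per (count/s)

    # A later experiment measures 5.0e5 counts/s in the same geometry.
    print(total_yield(f, 5.0e5))  # 2e13 neutrons
    ```

    The appeal noted in the abstract is visible here: cross sections and detector efficiencies never appear explicitly, since they are folded into F by the calibration measurement.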