WorldWideScience

Sample records for performance-aware cpu power

  1. Thermally-aware composite run-time CPU power models

    OpenAIRE

    Walker, Matthew J.; Diestelhorst, Stephan; Hansson, Andreas; Balsamo, Domenico; Merrett, Geoff V.; Al-Hashimi, Bashir M.

    2016-01-01

    Accurate and stable CPU power models are fundamental in modern system-on-chips (SoCs) for two main reasons: 1) they enable significant online energy savings by providing a run-time manager with reliable power consumption data for controlling CPU energy-saving techniques; 2) they can be used as accurate and trusted reference models for system design and exploration. We begin by showing the limitations in typical performance monitoring counter (PMC) based power modelling approaches and illust...
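
    The abstract above describes PMC-based power models. As a minimal, hedged sketch (not the paper's actual composite model, which also incorporates temperature and multiple counters), a power model can be fit as a linear function of one counter-derived rate; all sample numbers below are hypothetical.

```python
# Minimal sketch of a PMC-based CPU power model (hypothetical data).
# Real models, as in the paper, combine several counters plus temperature;
# here we fit P = w0 + w1 * IPC with closed-form simple linear regression.

def fit_linear(xs, ys):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

def predict(w0, w1, x):
    return w0 + w1 * x

# Hypothetical training samples: (instructions-per-cycle, measured watts)
ipc = [0.5, 1.0, 1.5, 2.0]
watts = [1.0, 1.5, 2.0, 2.5]        # lies exactly on P = 0.5 + 1.0 * IPC
w0, w1 = fit_linear(ipc, watts)
```

    At run time such a model lets a governor estimate power from counters that are cheap to read, instead of requiring a power sensor.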

  2. The Effect of NUMA Tunings on CPU Performance

    Science.gov (United States)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-12-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software.
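
    The effect described above can be captured with a toy average-latency model: remote accesses pay an extra hop through the owning node's memory controller, so raising the fraction of node-local accesses (as numactl pinning or numad aims to do) lowers average latency. All latency numbers below are illustrative assumptions, not measurements from the paper.

```python
# Toy model of NUMA average memory latency (all numbers hypothetical).
# local_ns / remote_ns are illustrative latencies; local_fraction is the
# share of accesses that hit memory attached to the running CPU's node.

def avg_latency_ns(local_fraction, local_ns=80.0, remote_ns=130.0):
    assert 0.0 <= local_fraction <= 1.0
    return local_fraction * local_ns + (1.0 - local_fraction) * remote_ns

# A NUMA-aware scheduler (or numactl/numad placement) raises local_fraction:
untuned = avg_latency_ns(0.5)    # half the accesses land on a remote node
tuned = avg_latency_ns(0.95)     # well-placed process
```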

  3. The Effect of NUMA Tunings on CPU Performance

    International Nuclear Information System (INIS)

    Hollowell, Christopher; Caramarcu, Costin; Strecker-Kellogg, William; Wong, Antonio; Zaytsev, Alexandr

    2015-01-01

    Non-Uniform Memory Access (NUMA) is a memory architecture for symmetric multiprocessing (SMP) systems where each processor is directly connected to separate memory. Indirect access to other CPU's (remote) RAM is still possible, but such requests are slower as they must also pass through that memory's controlling CPU. In concert with a NUMA-aware operating system, the NUMA hardware architecture can help eliminate the memory performance reductions generally seen in SMP systems when multiple processors simultaneously attempt to access memory. The x86 CPU architecture has supported NUMA for a number of years. Modern operating systems such as Linux support NUMA-aware scheduling, where the OS attempts to schedule a process to the CPU directly attached to the majority of its RAM. In Linux, it is possible to further manually tune the NUMA subsystem using the numactl utility. With the release of Red Hat Enterprise Linux (RHEL) 6.3, the numad daemon became available in this distribution. This daemon monitors a system's NUMA topology and utilization, and automatically makes adjustments to optimize locality. As the number of cores in x86 servers continues to grow, efficient NUMA mappings of processes to CPUs/memory will become increasingly important. This paper gives a brief overview of NUMA, and discusses the effects of manual tunings and numad on the performance of the HEPSPEC06 benchmark, and ATLAS software. (paper)

  4. The CMSSW benchmarking suite: Using HEP code to measure CPU performance

    International Nuclear Information System (INIS)

    Benelli, G

    2010-01-01

    The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, to allow computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance and to report the different benchmarking scores (e.g. by processing step) and results at the desired level of detail. In this talk we describe briefly the CMSSW software performance suite, and in detail the CMSSW benchmarking suite client/server design, the performance data analysis and the available CMSSW benchmark scores. The experience in the use of HEP code for benchmarking will be discussed and CMSSW benchmark results presented.

  5. Online performance evaluation of RAID 5 using CPU utilization

    Science.gov (United States)

    Jin, Hai; Yang, Hua; Zhang, Jiangling

    1998-09-01

    Redundant arrays of independent disks (RAID) technology is an efficient way to solve the bottleneck between CPU processing ability and the I/O subsystem. From the system point of view, the most important metric of online performance is CPU utilization. This paper first employs a statistical averaging method to calculate the CPU utilization of a system connected to a RAID level 5 subsystem. The simulation results show that using multiple disks as an array to access data in parallel is an efficient way to enhance the online performance of a disk storage system. Using high-end disk drives to compose the disk array is the key to enhancing the online performance of the system.
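
    The CPU cost that such a model accounts for comes largely from RAID 5's parity maintenance: parity is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XOR-ing the survivors. A one-stripe sketch (layout simplified; not the paper's model):

```python
# RAID-5 parity mechanics in one stripe: parity = XOR of data blocks,
# and any one lost block is recoverable as the XOR of the survivors.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]      # three data blocks in a stripe
parity = xor_blocks(data)

# Simulate losing data[1] and rebuilding it from parity + survivors:
rebuilt = xor_blocks([data[0], data[2], parity])
```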

  6. Enhancing Leakage Power in CPU Cache Using Inverted Architecture

    OpenAIRE

    Bilal A. Shehada; Ahmed M. Serdah; Aiman Abu Samra

    2013-01-01

    Power consumption is an increasingly pressing problem in modern processor design. Since on-chip caches usually consume a significant amount of power, power and energy consumption have become among the most important design constraints, and the cache is one of the most attractive targets for power reduction. This paper presents an approach to reduce the dynamic power consumption of the CPU cache using an inverted cache architecture. Our approach tries to reduce dynamic write power dissipation...

  7. ITCA: Inter-Task Conflict-Aware CPU accounting for CMP

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla Almeida, Francisco Javier; Gioiosa, Roberto; Valero Cortés, Mateo

    2010-01-01

    Chip-MultiProcessors (CMP) introduce complexities when accounting CPU utilization to processes because the progress done by a process during an interval of time highly depends on the activity of the other processes it is coscheduled with. We propose a new hardware CPU accounting mechanism to improve the accuracy when measuring the CPU utilization in CMPs and compare it with previous accounting mechanisms. Our results show that currently known mechanisms lead to a 16% average error when it com...

  8. A Bit String Content Aware Chunking Strategy for Reduced CPU Energy on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Bin Zhou

    2015-01-01

    In order to achieve energy savings and reduce the total cost of ownership, green storage has become a first priority for data centers. Detecting and deleting redundant data are the key factors in reducing the energy consumption of the CPU, while a high-performance, stable chunking strategy provides the groundwork for detecting redundant data. Existing chunking algorithms greatly reduce system performance when confronted with big data, and they waste a lot of energy. Factors affecting chunking performance are analyzed and discussed in the paper, and a new fingerprint signature calculation is implemented. Furthermore, a Bit String Content Aware Chunking Strategy (BCCS) is put forward. This strategy reduces the cost of signature computation in the chunking process to improve system performance and cut down the energy consumption of the cloud storage data center. On the basis of the relevant test scenarios and test data of this paper, the advantages of the chunking strategy are verified.
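
    The general idea behind content-aware chunking can be sketched as follows: a cheap rolling-style hash is computed over the byte stream, and a chunk boundary is declared wherever the hash matches a mask. This is only an illustration of the technique's family; the paper's actual BCCS fingerprint computation differs, and the window/mask constants here are arbitrary.

```python
# Hedged sketch of content-defined chunking: boundaries depend on content,
# not fixed offsets, so duplicate regions chunk identically even if shifted.
# WINDOW enforces a minimum chunk length; MASK sets the average chunk size.

WINDOW, MASK = 16, 0xFF

def chunk(data):
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF       # cheap rolling-style hash
        if i - start + 1 >= WINDOW and (h & MASK) == MASK:
            chunks.append(data[start:i + 1])  # boundary: close the chunk
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])           # trailing partial chunk
    return chunks

blob = bytes(range(256)) * 8
parts = chunk(blob)
```

    Deduplication then hashes each chunk and stores only chunks whose fingerprints have not been seen before.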

  9. Promise of a low power mobile CPU based embedded system in artificial leg control.

    Science.gov (United States)

    Hernandez, Robert; Zhang, Fan; Zhang, Xiaorong; Huang, He; Yang, Qing

    2012-01-01

    This paper presents the design and implementation of a low power embedded system using mobile processor technology (Intel Atom™ Z530 Processor) specifically tailored for a neural-machine interface (NMI) for artificial limbs. This embedded system effectively performs our previously developed NMI algorithm based on neuromuscular-mechanical fusion and phase-dependent pattern classification. The analysis shows that the NMI embedded system can meet real-time constraints with high accuracy in recognizing the user's locomotion mode. Our implementation utilizes the mobile processor efficiently, allowing a power consumption of 2.2 watts and low CPU utilization (less than 4.3%) while executing the complex NMI algorithm. Our experiments have shown that the highly optimized C implementation on the embedded system has clear advantages over existing PC implementations in MATLAB. The study results suggest that a mobile-CPU-based embedded system is promising for implementing advanced control for powered lower limb prostheses.

  10. Analysis of Characteristics of Power Consumption for Context-Aware Mobile Applications

    Directory of Open Access Journals (Sweden)

    Meeyeon Lee

    2014-11-01

    In recent years, a large portion of smartphone applications (apps) has targeted context-aware services. These aim to perceive a user's real-time context, such as location, actions, or even emotion, and to provide various customized services based on the inferred context. However, context-awareness in mobile environments faces some challenging issues due to the limitations of the devices themselves. Limited power is regarded as the most critical problem for context-awareness on smartphones. Many studies have tried to develop low-power methods, but most have focused on the power consumption of hardware modules of smartphones such as the CPU and LCD. Only a few research papers have recently started to present software-based approaches to improving power consumption; that is, previous works did not consider the energy consumed by the context-awareness of apps themselves. Therefore, in this paper, we focus on the power consumption of context-aware apps. We analyze the characteristics of context-aware apps from the perspective of power consumption, and then define two main factors which significantly influence it: the sort of context that context-aware apps require for their services and the way a user uses them. The experimental results show that it is reasonable and feasible to develop low-power methods based on our analysis, which can serve as a foundation for energy-efficient context-aware services in mobile environments.

  11. STEM image simulation with hybrid CPU/GPU programming

    International Nuclear Information System (INIS)

    Yao, Y.; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-01-01

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  12. STEM image simulation with hybrid CPU/GPU programming

    Energy Technology Data Exchange (ETDEWEB)

    Yao, Y., E-mail: yaoyuan@iphy.ac.cn; Ge, B.H.; Shen, X.; Wang, Y.G.; Yu, R.C.

    2016-07-15

    STEM image simulation is achieved via hybrid CPU/GPU programming under parallel algorithm architecture to speed up calculation on a personal computer (PC). To utilize the calculation power of a PC fully, the simulation is performed using the GPU core and multi-CPU cores at the same time to significantly improve efficiency. GaSb and an artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. - Highlights: • STEM image simulation is achieved by hybrid CPU/GPU programming under parallel algorithm architecture to speed up the calculation in the personal computer (PC). • In order to fully utilize the calculation power of the PC, the simulation is performed by GPU core and multi-CPU cores at the same time so efficiency is improved significantly. • GaSb and artificial GaSb/InAs interface with atom diffusion have been used to verify the computation. The results reveal some unintuitive phenomena about the contrast variation with the atom numbers.

  13. The relationship among CPU utilization, temperature, and thermal power for waste heat utilization

    International Nuclear Information System (INIS)

    Haywood, Anna M.; Sherbeck, Jon; Phelan, Patrick; Varsamopoulos, Georgios; Gupta, Sandeep K.S.

    2015-01-01

    Highlights: • This work graphs a triad relationship among CPU utilization, temperature and power. • Using a custom-built cold plate, we were able to capture CPU-generated high quality heat. • The work undertakes a radical approach using mineral oil to directly cool CPUs. • We found that it is possible to use CPU waste energy to power an absorption chiller. - Abstract: This work addresses significant datacenter issues of growth in numbers of computer servers and subsequent electricity expenditure by proposing, analyzing and testing a unique idea of recycling the highest quality waste heat generated by datacenter servers. The aim was to provide a renewable and sustainable energy source for use in cooling the datacenter. The work incorporates novel approaches in waste heat usage, graphing CPU temperature, power and utilization simultaneously, and a mineral oil experimental design and implementation. The work presented investigates and illustrates the quantity and quality of heat that can be captured from a variably tasked liquid-cooled microprocessor on a datacenter server blade. It undertakes a radical approach using mineral oil. The trials examine the feasibility of using the thermal energy from a CPU to drive a cooling process. Results indicate that 123 servers encapsulated in mineral oil can power a 10-ton chiller with a design point of 50.2 kWth. Compared with water-cooling experiments, the mineral oil experiment mitigated the temperature drop between the heat source and discharge line by up to 81%. In addition, due to this reduction in temperature drop, the heat quality in the oil discharge line was up to 12.3 °C higher on average than for water-cooled experiments. Furthermore, mineral oil cooling holds the potential to eliminate the 50% cooling expenditure which initially motivated this project.
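
    A quick sanity check of the figures quoted above: 123 oil-cooled servers driving a chiller with a 50.2 kW (thermal) design point implies each server contributes roughly 408 W of recoverable heat.

```python
# Back-of-the-envelope check of the abstract's numbers: thermal design
# point of the absorption chiller divided by the number of servers.

chiller_design_kw_th = 50.2     # 10-ton chiller design point, kW thermal
servers = 123

per_server_w = chiller_design_kw_th * 1000.0 / servers   # watts per server
```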

  14. ITCA: Inter-Task Conflict-Aware CPU accounting for CMPs

    OpenAIRE

    Luque, Carlos; Moreto Planas, Miquel; Cazorla, Francisco; Gioiosa, Roberto; Buyuktosunoglu, Alper; Valero Cortés, Mateo

    2009-01-01

    Chip-MultiProcessor (CMP) architectures are becoming more and more popular as an alternative to the traditional processors that only extract instruction-level parallelism from an application. CMPs introduce complexities when accounting CPU utilization. This is due to the fact that the progress done by an application during an interval of time highly depends on the activity of the other applications it is co-scheduled with. In this paper, we identify how an inaccurate measurement of the CPU ut...

  15. Thermoelectric mini cooler coupled with micro thermosiphon for CPU cooling system

    International Nuclear Information System (INIS)

    Liu, Di; Zhao, Fu-Yun; Yang, Hong-Xing; Tang, Guang-Fa

    2015-01-01

    In the present study, a thermoelectric mini cooler coupled with a micro thermosiphon cooling system has been proposed for CPU cooling. A mathematical model of heat transfer, based on a one-dimensional treatment of thermal and electric power, is first established for the thermoelectric module. Analytical results demonstrate the relationship between the maximal COP (Coefficient of Performance) and Qc with the figure of merit. Full-scale experiments have been conducted to investigate the effect of thermoelectric operating voltage, power input of the heat source, and thermoelectric module number on the performance of the cooling system. Experimental results indicate that the cooling production increases with the thermoelectric operating voltage. The surface temperature of the CPU heat source increases linearly with power input, and its maximum value reached 70 °C when the prototype CPU power input was equivalent to 84 W. Insulation between the air and the heat source surface can prevent condensation due to low surface temperature. In addition, the thermal performance of this cooling system could be enhanced when the total dimension of the thermoelectric module matched well with the dimension of the CPU. This research could benefit the design of thermal dissipation for electronic chips and CPU units. - Highlights: • A cooling system coupled with thermoelectric module and loop thermosiphon is developed. • Thermoelectric module coupled with loop thermosiphon can achieve high heat-transfer efficiency. • A mathematical model of thermoelectric cooling is built. • An analysis of modeling results for design and experimental data is presented. • The influence of power input and operating voltage on the cooling system is investigated.

  16. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Since frequent communication between applications takes place in high-speed networks, deep packet inspection (DPI) plays an important role in network application awareness. Signature-based network intrusion detection systems (NIDS) contain a DPI technique that examines incoming packet payloads by employing a pattern matching algorithm, which dominates the overall inspection performance. Existing studies have focused on implementing efficient pattern matching algorithms through parallel programming on software platforms because of the advantages of lower cost and higher scalability, involving either the central processing unit (CPU) or the graphics processing unit (GPU). Our studies have focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design of our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and a comparison with the previous work are presented, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.
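
    The length-bounded dispatch idea can be sketched as follows: short payloads are handled on the CPU (where transfer overhead would dominate), while long payloads are batched for the GPU. The 256-byte bound and the plain substring matcher below are illustrative assumptions, not the LHPMA algorithm itself.

```python
# Sketch of length-bounded CPU/GPU dispatch for pattern matching.
# Payloads at or under LENGTH_BOUND stay on the CPU; longer ones are
# queued for GPU batch processing (the bound is an arbitrary example).

LENGTH_BOUND = 256

def dispatch(payloads):
    cpu_queue = [p for p in payloads if len(p) <= LENGTH_BOUND]
    gpu_queue = [p for p in payloads if len(p) > LENGTH_BOUND]
    return cpu_queue, gpu_queue

def match_any(payload, patterns):
    """Plain substring matching, standing in for the real NIDS matcher."""
    return any(pat in payload for pat in patterns)

packets = [b"GET /index", b"x" * 1000 + b"attack", b"ping"]
cpu_q, gpu_q = dispatch(packets)
hits = [p for p in packets if match_any(p, [b"attack", b"exploit"])]
```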

  17. Power Aware Simulation Framework for Wireless Sensor Networks and Nodes

    Directory of Open Access Journals (Sweden)

    Daniel Weber

    2008-07-01

    Full Text Available The constrained resources of sensor nodes limit analytical techniques and cost-time factors limit test beds to study wireless sensor networks (WSNs. Consequently, simulation becomes an essential tool to evaluate such systems.We present the power aware wireless sensors (PAWiS simulation framework that supports design and simulation of wireless sensor networks and nodes. The framework emphasizes power consumption capturing and hence the identification of inefficiencies in various hardware and software modules of the systems. These modules include all layers of the communication system, the targeted class of application itself, the power supply and energy management, the central processing unit (CPU, and the sensor-actuator interface. The modular design makes it possible to simulate heterogeneous systems. PAWiS is an OMNeT++ based discrete event simulator written in C++. It captures the node internals (modules as well as the node surroundings (network, environment and provides specific features critical to WSNs like capturing power consumption at various levels of granularity, support for mobility, and environmental dynamics as well as the simulation of timing effects. A module library with standardized interfaces and a power analysis tool have been developed to support the design and analysis of simulation models. The performance of the PAWiS simulator is comparable with other simulation environments.

  18. CPU time reduction strategies for the Lambda modes calculation of a nuclear power reactor

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, V.; Garayoa, J.; Hernandez, V. [Universidad Politecnica de Valencia (Spain). Dept. de Sistemas Informaticos y Computacion; Navarro, J.; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Dept. de Ingenieria Quimica y Nuclear; Ginestar, D. [Universidad Politecnica de Valencia (Spain). Dept. de Matematica Aplicada

    1997-12-01

    In this paper, we present two strategies to reduce the CPU time spent in the lambda modes calculation for a realistic nuclear power reactor. The discretization of the multigroup neutron diffusion equation has been made using a nodal collocation method, solving the associated eigenvalue problem with two different techniques: the Subspace Iteration Method and Arnoldi's Method. CPU time reduction is based on a coarse grain parallelization approach together with a multistep algorithm to initialize the solution adequately. (author). 9 refs., 6 tabs.
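
    The lambda modes calculation is at heart a dominant-eigenvalue problem. As a hedged, much simpler stand-in for the Subspace Iteration and Arnoldi solvers named above, plain power iteration on a small matrix illustrates the kind of iteration whose CPU time the paper's strategies reduce:

```python
# Power iteration: repeatedly apply the matrix and renormalize; the
# iterate converges to the dominant eigenvector, and the normalization
# factor to the dominant eigenvalue (toy 2x2 example, pure Python).

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, iters=100):
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = mat_vec(A, v)
        lam = max(abs(x) for x in w)     # infinity-norm eigenvalue estimate
        v = [x / lam for x in w]
    return lam, v

A = [[4.0, 1.0],
     [2.0, 3.0]]                         # eigenvalues are 5 and 2
lam, v = power_iteration(A)
```

    A good initial guess (the paper's multistep initialization) cuts the number of such iterations; subspace and Arnoldi methods accelerate convergence further by working with several vectors at once.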

  19. High performance technique for database applicationsusing a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-28

    Many database applications, such as sequence comparing, sequence searching, and sequence matching, process large database sequences. We introduce a novel and efficient technique to improve the performance of database applications by using a hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm. The experimental results show that our hybrid GPU/CPU technique improves the average performance by a factor of 2.2, and improves the peak performance by a factor of 2.8 when compared to earlier implementations. Copyright © 2011 by ASME.
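
    For reference, the workload being accelerated is local sequence alignment. A minimal CPU-only Smith-Waterman (score matrix only, no traceback) is sketched below; the scoring scheme (match=2, mismatch=-1, gap=-1) is an assumption for illustration, not the paper's configuration.

```python
# Minimal Smith-Waterman local alignment score (dynamic programming).
# H[i][j] holds the best local alignment score ending at a[i-1], b[j-1];
# the max over the whole matrix is the local alignment score.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

score = smith_waterman("XXABCXX", "YYABCYY")   # shared local region "ABC"
```

    The quadratic score matrix is what makes long sequences GPU-friendly, while very short sequences leave the GPU underutilized, which is the imbalance the hybrid technique addresses.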

  20. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    International Nuclear Information System (INIS)

    Yoon, Jong Seon; Choi, Hyoung Gwon; Jeon, Byoung Jin

    2017-01-01

    The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated when the total memory required for computing was larger than the cache memory for large problems. In contrast, the GPU performed better as the mesh size increased because of its latency hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that with a single CPU. Furthermore, on the GPU, the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver.
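
    The "coloring" that makes Gauss-Seidel parallelizable can be illustrated in one dimension: for the steady heat equation with fixed ends, odd-indexed ("red") and even-indexed ("black") unknowns are mutually independent, so each half-sweep could run in parallel. A serial sketch (grid size and sweep count are arbitrary):

```python
# Red-black (colored) Gauss-Seidel for steady 1D heat conduction u'' = 0
# with fixed end values: each interior point relaxes to the average of
# its neighbors; red and black points are updated in alternating sweeps.

def red_black_gauss_seidel(u, sweeps=200):
    n = len(u)
    for _ in range(sweeps):
        for color in (1, 0):              # odd ("red") then even ("black")
            for i in range(1, n - 1):
                if i % 2 == color:
                    u[i] = 0.5 * (u[i - 1] + u[i + 1])
    return u

u = [0.0, 0.0, 0.0, 0.0, 1.0]             # ends held at 0 and 1
u = red_black_gauss_seidel(u)              # converges to a linear profile
```

    On a GPU, all points of one color update simultaneously, which is why the colored ordering outperforms both a serial sweep and (per the abstract) the Jacobi iteration.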

  1. An Investigation of the Performance of the Colored Gauss-Seidel Solver on CPU and GPU

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jong Seon; Choi, Hyoung Gwon [Seoul Nat’l Univ. of Science and Technology, Seoul (Korea, Republic of); Jeon, Byoung Jin [Yonsei Univ., Seoul (Korea, Republic of)

    2017-02-15

    The performance of the colored Gauss–Seidel solver on CPU and GPU was investigated for two- and three-dimensional heat conduction problems using different mesh sizes. The heat conduction equation was discretized by the finite difference method and the finite element method. The CPU yielded good performance for small problems but deteriorated when the total memory required for computing was larger than the cache memory for large problems. In contrast, the GPU performed better as the mesh size increased because of its latency hiding technique. Further, GPU computation with the colored Gauss–Seidel solver was approximately 7 times faster than that with a single CPU. Furthermore, on the GPU, the colored Gauss–Seidel solver was found to be approximately twice as fast as the Jacobi solver.

  2. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    Science.gov (United States)

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model. In this paper, however, we show that results from dynamic priority power-aware scheduling can be applied to the pinwheel task model. This method is more effective at saving energy than adopting the previous static priority scheduling methods and, since the system remains static, it is more tractable and applicable to small embedded or ubiquitous computing systems. We also introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline-first (EDF) scheduling, which is optimal on a uniprocessor. The dynamic priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10-80% compared to existing algorithms.
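
    The DVS intuition behind such algorithms can be shown with a toy energy model: if dynamic power scales roughly cubically with frequency (P ~ f·V² with V ~ f), then using slack to run a fixed cycle count at a lower frequency saves energy despite the longer runtime. The cubic model and all numbers below are simplifying assumptions, not the paper's analysis.

```python
# Toy DVS energy model: P ~ k * f^3, so for a fixed cycle count the
# energy E = P * (cycles / f) ~ k * cycles * f^2 falls as f is reduced.

def energy(cycles, freq, k=1.0):
    power = k * freq ** 3            # assumed cubic dynamic-power model
    time = cycles / freq             # lower frequency => longer runtime
    return power * time

full_speed = energy(cycles=1e9, freq=1.0)
half_speed = energy(cycles=1e9, freq=0.5)   # feasible if slack allows 2x time
```

    The scheduler's job, then, is to find how much slack each job really has (here is where EDF-based slack reclamation comes in) so the frequency can be lowered without missing a deadline.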

  3. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain (FDTD) method allows analysis of the electromagnetic field distribution as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and processing time. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence and including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for both CPU and GPU, in order to analyze the improvement of using the new NVIDIA Fermi GPU architecture versus a highly tuned multi-core CPU as a function of the simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code and also of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is obtained, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
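
    The core FDTD update loop being optimized is a leapfrog of electric and magnetic field updates over the grid. A minimal 1D sketch in normalized units follows; the grid size, Gaussian source, and step count are arbitrary illustrative choices, and real simulations add absorbing boundaries and material parameters.

```python
# Minimal 1D FDTD (Yee leapfrog, normalized units, Courant factor 0.5):
# H is updated from the spatial difference of E, then E from that of H,
# with a soft Gaussian source injected at the grid center.

import math

def fdtd_1d(nx=200, steps=300):
    ez = [0.0] * nx                  # electric field samples
    hy = [0.0] * nx                  # magnetic field samples
    for t in range(steps):
        for i in range(nx - 1):      # H update from curl of E
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, nx):       # E update from curl of H
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[nx // 2] += math.exp(-((t - 30) ** 2) / 100.0)   # Gaussian pulse
    return ez

field = fdtd_1d()
```

    The two inner loops are embarrassingly parallel across grid points, which is what makes the method a natural fit for both SSE/OpenMP and GPU implementations.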

  4. LHCb: Statistical Comparison of CPU performance for LHCb applications on the Grid

    CERN Multimedia

    Graciani, R

    2009-01-01

    The usage of CPU resources by LHCb on the Grid is dominated by two different applications: Gauss and Brunel. Gauss is the application that performs the Monte Carlo simulation of proton-proton collisions. Brunel is the application responsible for the reconstruction of the signals recorded by the detector, converting them into objects that can be used for later physics analysis of the data (tracks, clusters,…). Both applications are based on the Gaudi and LHCb software frameworks. Gauss uses Pythia and Geant as underlying libraries for the simulation of the collision and the later passage of the generated particles through the LHCb detector, while Brunel makes use of LHCb-specific code to process the data from each sub-detector. Both applications are CPU bound. Large Monte Carlo productions or data reconstructions running on the Grid are an ideal benchmark to compare the performance of the different CPU models for each case. Since the processed events are only statistically comparable, only statistical comparison of the...

  5. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    Science.gov (United States)

    Taft, James R.

    2000-01-01

    The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector based code. The OVERFLOW-MLP code is now in production on the inhouse Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full

  6. A high performance image processing platform based on CPU-GPU heterogeneous cluster with parallel image reconstructions for micro-CT

    International Nuclear Information System (INIS)

    Ding Yu; Qi Yujin; Zhang Xuezhu; Zhao Cuilan

    2011-01-01

    In this paper, we report the development of a high-performance image processing platform based on a CPU-GPU heterogeneous cluster. Currently, it consists of Dell Precision T7500 and HP XW8600 workstations with a parallel programming and runtime environment using the message-passing interface (MPI) and CUDA (Compute Unified Device Architecture). We succeeded in developing parallel image processing techniques for 3D image reconstruction in X-ray micro-CT imaging. The results show that a GPU is about 194 times faster than a single CPU, and the CPU-GPU cluster is about 46 times faster than the CPU cluster. These speeds meet the requirements of rapid 3D image reconstruction and real-time image display. In conclusion, the use of a CPU-GPU heterogeneous cluster is an effective way to build a high-performance image processing platform. (authors)

  7. VMware vSphere performance designing CPU, memory, storage, and networking for performance-intensive workloads

    CERN Document Server

    Liebowitz, Matt; Spies, Rynardt

    2014-01-01

    Covering the latest VMware vSphere software, an essential book aimed at solving vSphere performance problems before they happen. VMware vSphere is the industry's most widely deployed virtualization solution; however, if you improperly deploy vSphere, performance problems occur. Aimed at VMware administrators and engineers and written by a team of VMware experts, this resource provides guidance on common CPU, memory, storage, and network-related problems. Plus, step-by-step instructions walk you through techniques for solving problems and shed light on possible causes behind the problems.

  8. BSLD threshold driven power management policy for HPC centers

    OpenAIRE

    Etinski, Maja; Corbalán González, Julita; Labarta Mancho, Jesús José; Valero Cortés, Mateo

    2010-01-01

    In this paper, we propose a power-aware parallel job scheduler assuming DVFS enabled clusters. A CPU frequency assignment algorithm is integrated into the well established EASY backfilling job scheduling policy. Running a job at lower frequency results in a reduction in power dissipation and accordingly in energy consumption. However, lower frequencies introduce a penalty in performance. Our frequency assignment algorithm has two adjustable parameters in order to enable fine grain energy-perf...
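The frequency-assignment idea, running a job at the lowest frequency whose predicted bounded slowdown (BSLD) stays under a threshold, can be sketched as follows. The linear run-time scaling model and the threshold handling are simplifying assumptions for illustration, not the paper's exact algorithm:

```python
def bsld(wait_time, run_time, bound=10.0):
    # Bounded slowdown of a job; the `bound` floor keeps very short jobs
    # from blowing the ratio up.
    return (wait_time + run_time) / max(run_time, bound)

def pick_frequency(wait_time, base_run_time, f_max, frequencies, threshold):
    # Return the lowest frequency whose predicted BSLD stays under the
    # threshold, assuming run time scales as f_max / f (a common,
    # simplified DVFS model). Falls back to f_max if no frequency fits.
    for f in sorted(frequencies):  # lowest first -> most energy saved
        predicted = f_max / f * base_run_time
        if bsld(wait_time, predicted) <= threshold:
            return f
    return f_max
```

With a loose threshold the scheduler picks a deep down-clock; with a tight one it falls back to full speed, which is the performance/energy knob the paper's two adjustable parameters expose.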

  9. Improving the Performance of CPU Architectures by Reducing the Operating System Overhead (Extended Version)

    Directory of Open Access Journals (Sweden)

    Zagan Ionel

    2016-07-01

    Predictable CPU architectures that run hard real-time tasks must execute them in isolation in order to provide timing-analyzable execution for real-time systems. The major problems for real-time operating systems stem from excessive jitter, introduced mainly through task switching, which can violate deadline requirements and, consequently, the predictability of hard real-time tasks. New requirements also arise for a real-time operating system used in mixed-criticality systems, where the execution of hard real-time applications requires timing predictability. The present article discusses several solutions to improve the performance of CPU architectures and ultimately overcome the drawbacks of operating-system overhead. This paper focuses on the innovative CPU implementation named nMPRA-MT, designed for small real-time applications. This implementation uses replication and remapping techniques for the program counter, general-purpose registers and pipeline registers, enabling multiple threads to share a single pipeline assembly line. In order to increase predictability, the proposed architecture partially removes hazard situations at the expense of larger execution latency per instruction.

  10. Liquid Cooling System for CPU by Electroconjugate Fluid

    Directory of Open Access Journals (Sweden)

    Yasuo Sakurai

    2014-06-01

    The power dissipated by the CPUs of personal computers has increased as their performance has grown. Therefore, liquid cooling systems have been employed in some personal computers in order to improve their cooling performance. Electroconjugate fluid (ECF) is one of the functional fluids. ECF has the remarkable property that a strong jet flow is generated between electrodes when a high voltage is applied to the ECF through the electrodes. By using this strong jet flow, an ECF pump with a simple structure, no sliding parts, no noise, and no vibration can be developed, and with such a pump a new ECF-based liquid cooling system can be realized. In this study, an ECF pump is proposed and fabricated, and its basic characteristics are investigated experimentally. Next, a model of an ECF-based liquid cooling system built around the pump is manufactured, and experiments are carried out to investigate its performance. As a result, with this system the temperature of a 50 W heat source is kept at 60°C or less, which is at or below the temperature at which a CPU is typically operated.

  11. Heterogeneous Gpu&Cpu Cluster For High Performance Computing In Cryptography

    Directory of Open Access Journals (Sweden)

    Michał Marks

    2012-01-01

    This paper addresses issues associated with distributed computing systems and the application of mixed GPU&CPU technology to data encryption and decryption algorithms. We describe a heterogeneous cluster HGCC formed by two types of nodes: an Intel processor with an NVIDIA graphics processing unit and an AMD processor with an AMD graphics processing unit (formerly ATI), and a novel software framework that hides the heterogeneity of our cluster and provides tools for solving complex scientific and engineering problems. Finally, we present the results of numerical experiments. The considered case study is concerned with parallel implementations of selected cryptanalysis algorithms. The main goal of the paper is to show the wide applicability of the GPU&CPU technology to large scale computation and data processing.

  12. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    Directory of Open Access Journals (Sweden)

    Fan Zhang

    2016-04-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging, with the imaging rate outperforming the raw data generation rate.

  13. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    Science.gov (United States)

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers overcome, but several optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and realizes real-time imaging, with the imaging rate outperforming the raw data generation rate.

  14. Energy consumption optimization of the total-FETI solver by changing the CPU frequency

    Science.gov (United States)

    Horak, David; Riha, Lubomir; Sojka, Radim; Kruzik, Jakub; Beseda, Martin; Cermak, Martin; Schuchart, Joseph

    2017-07-01

    The energy consumption of supercomputers is one of the critical problems for the upcoming exascale supercomputing era. Awareness of power and energy consumption is required on both the software and hardware sides. This paper deals with the energy consumption evaluation of Finite Element Tearing and Interconnect (FETI) based solvers of linear systems, an established method for solving real-world engineering problems. We have evaluated the effect of the CPU frequency on the energy consumption of the FETI solver using a linear elasticity 3D cube synthetic benchmark, examining the effect of frequency tuning on the energy consumption of the essential processing kernels of the FETI method. The paper provides results for two types of frequency tuning: (1) static tuning and (2) dynamic tuning. For the static tuning experiments, the frequency is set before execution and kept constant during the runtime. For dynamic tuning, the frequency is changed during the program execution to adapt the system to the actual needs of the application. The paper shows that static tuning brings up to 12% energy savings when compared to the default CPU settings (the highest clock rate). Dynamic tuning improves this further by up to 3%.
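The static-tuning trade-off can be made concrete with a toy model: if run time scales as f_ref/f and power as P_static + c·f³ (both common simplifications, not measurements from this paper), the energy-optimal frequency is generally neither the lowest nor the highest:

```python
def energy(f, t_ref, f_ref, p_static, c):
    # Energy for a run at frequency f under two modeling assumptions:
    # compute-bound run time scales as f_ref / f, and power is a static
    # part plus a dynamic part growing roughly with f**3.
    t = t_ref * f_ref / f
    power = p_static + c * f ** 3
    return power * t

def best_static_frequency(frequencies, t_ref, f_ref, p_static, c):
    # Static tuning: pick the single frequency minimizing total energy.
    return min(frequencies, key=lambda f: energy(f, t_ref, f_ref, p_static, c))
```

With a large static power share, racing at high frequency wins; with a large dynamic share, slowing down wins — which is why static tuning has to be evaluated per kernel, as the paper does.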

  15. Crew Situation Awareness, Diagnoses, and Performance in Simulated Nuclear Power Plant Process Disturbances

    International Nuclear Information System (INIS)

    Sebok, Angelia; Kaarstad, Magnhild

    1998-01-01

    Research was conducted at the OECD Halden Reactor Project to identify issues in crew performance in complex simulated nuclear power plant scenarios. Eight crews of operators participated in five scenarios, administered over a two- or three-day period. Scenarios required either rule-based or knowledge-based problem solving. Several performance parameters were collected, including Situation Awareness (SA), objective performance, rated crew performance, and crew diagnoses. The purpose of this study was to investigate differences in performance measures between knowledge-based and rule-based scenarios. Preliminary data analysis revealed a significant difference in crew SA between the two scenario types: crews in the rule-based scenarios had significantly higher SA than crews in the knowledge-based scenarios. Further investigations were initiated to determine whether crews performed differently, in terms of objective performance, rated crew performance, and diagnoses, between the scenario types. Correlations between the various crew performance measurements were calculated to reveal insights into the nature of SA, performance, and diagnoses. These insights into crew performance can be used to design more effective interfaces and operator performance aids, thus contributing to enhanced crew performance and improved plant safety. (authors)

  16. Exploring performance and power properties of modern multicore chips via simple machine models

    OpenAIRE

    Hager, Georg; Treibig, Jan; Habich, Johannes; Wellein, Gerhard

    2012-01-01

    Modern multicore chips show complex behavior with respect to performance and power. Starting with the Intel Sandy Bridge processor, it has become possible to directly measure the power dissipation of a CPU chip and correlate this data with the performance properties of the running code. Going beyond a simple bottleneck analysis, we employ the recently published Execution-Cache-Memory (ECM) model to describe the single- and multi-core performance of streaming kernels. The model refines the wel...

  17. A combined PLC and CPU approach to multiprocessor control

    International Nuclear Information System (INIS)

    Harris, J.J.; Broesch, J.D.; Coon, R.M.

    1995-10-01

    A sophisticated multiprocessor control system has been developed for use in the E-Power Supply System Integrated Control (EPSSIC) on the DIII-D tokamak. EPSSIC provides control and interlocks for the ohmic heating coil power supply and its associated systems. Of particular interest is the architecture of this system: both a Programmable Logic Controller (PLC) and a Central Processor Unit (CPU) have been combined on a standard VME bus. The PLC and CPU input and output signals are routed through signal conditioning modules, which provide the necessary voltage and ground isolation. Additionally these modules adapt the signal levels to that of the VME I/O boards. One set of I/O signals is shared between the two processors. The resulting multiprocessor system provides a number of advantages: redundant operation for mission critical situations, flexible communications using conventional TCP/IP protocols, the simplicity of ladder logic programming for the majority of the control code, and an easily maintained and expandable non-proprietary system

  18. GeantV: from CPU to accelerators

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Arora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Sehgal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are targeted first, resources such as GPGPUs, Intel® Xeon Phi, Atom or ARM can no longer be ignored by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been engineered mainly for CPUs with vector units, but we have foreseen from the early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, and also to formalize generic computation kernels transparently using library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it insulates the core application and algorithms from the technology layer. This keeps our application maintainable in the long term and adaptable to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs.
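The backend-layer idea, writing one generic kernel and dispatching it to scalar or basket-based processing, can be illustrated with a deliberately simplified sketch (class and function names here are hypothetical; real GeantV backends map baskets onto Vc/CUDA/intrinsics code, not Python loops):

```python
class ScalarBackend:
    # Processes one track at a time (stand-in for the scalar code path).
    def apply(self, kernel, tracks):
        return [kernel(t) for t in tracks]

class BasketBackend:
    # Processes tracks in fixed-size baskets, the way GeantV groups work
    # for vector units or accelerators. The "vectorization" here is just
    # batching, to show the dispatch structure only.
    def __init__(self, basket_size):
        self.basket_size = basket_size

    def apply(self, kernel, tracks):
        out = []
        for i in range(0, len(tracks), self.basket_size):
            basket = tracks[i:i + self.basket_size]
            out.extend(kernel(t) for t in basket)
        return out

def transport_step(backend, energies, energy_loss=0.1):
    # One generic kernel expressed independently of the backend:
    # deposit a fixed energy loss, clamping at zero.
    return backend.apply(lambda e: max(0.0, e - energy_loss), energies)
```

The kernel is written once; which backend executes it is a deployment choice, which is the insulation property the abstract describes.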

  19. GeantV: from CPU to accelerators

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Arora, A; Apostolakis, J; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S; Lima, G; Duhem, L

    2016-01-01

    The GeantV project aims to research and develop the next-generation simulation software describing the passage of particles through matter. While modern CPU architectures are targeted first, resources such as GPGPUs, Intel® Xeon Phi, Atom or ARM can no longer be ignored by HEP CPU-bound applications. The proof-of-concept GeantV prototype has been engineered mainly for CPUs with vector units, but we have foreseen from the early stages a bridge to arbitrary accelerators. A software layer consisting of architecture/technology-specific backends currently supports this concept. This approach allows us to abstract out basic types such as scalar/vector, and also to formalize generic computation kernels transparently using library- or device-specific constructs based on Vc, CUDA, Cilk+ or Intel intrinsics. While the main goal of this approach is portable performance, as a bonus it insulates the core application and algorithms from the technology layer. This keeps our application maintainable in the long term and adaptable to changes on the backend side. The paper presents the first results of basket-based GeantV geometry navigation on the Intel® Xeon Phi KNC architecture. We present the scalability and vectorization study, conducted using Intel performance tools, as well as our preliminary conclusions on the use of accelerators for GeantV transport. We also describe the current work and preliminary results for using the GeantV transport kernel on GPUs. (paper)

  20. Power Aware Distributed Systems

    National Research Council Canada - National Science Library

    Schott, Brian

    2004-01-01

    The goal of PADS was to study power aware management techniques for wireless unattended ground sensor applications to extend their operational lifetime and overall capabilities in this battery-constrained environment...

  1. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU; Reconstruccion del espectro de neutrones usando una red neuronal artificial (RNA) en CPU y GPU

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A. [Universidad de Cordoba, 14002 Cordoba (Spain); Vega C, H. R.; Alonso M, O. E., E-mail: vic.mc68010@gmail.com [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Computing power in personal computers has been steadily increasing; computers now have several processors in the CPU as well as multiple CUDA cores in the graphics processing unit (GPU), and both can be used individually or combined to perform scientific computation without resorting to processor arrays or supercomputing facilities. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and spectrometry. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). The counting rates obtained with each sphere and the neutron spectrum are thus related through the Fredholm equation in its discrete version. Reconstructing the spectrum therefore requires solving an ill-conditioned system of equations with an infinite number of solutions, and to find the appropriate solution the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)
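The unfolding problem the network solves is the discrete Fredholm system C = Rφ. As a minimal stand-in for the paper's neural-network approach, a non-negative Landweber-style iteration shows the structure of the problem (step size and iteration count below are illustrative assumptions):

```python
def unfold(R, counts, iterations=500, step=0.1):
    # Landweber-style iteration for the discrete system C = R * phi.
    # A deliberately simple substitute for the paper's ANN unfolding,
    # just to make the ill-posed problem setup concrete.
    n = len(R[0])
    phi = [0.0] * n
    for _ in range(iterations):
        # residual r = C - R * phi
        r = [c - sum(R[i][j] * phi[j] for j in range(n))
             for i, c in enumerate(counts)]
        # gradient step phi += step * R^T r, clipped to non-negative fluence
        for j in range(n):
            phi[j] = max(0.0, phi[j] + step * sum(R[i][j] * r[i]
                                                  for i in range(len(counts))))
    return phi
```

For ill-conditioned response matrices such an iteration must be regularized (or replaced by a trained network, as the paper proposes); the sketch only exhibits the data flow from counting rates to spectrum.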

  2. Low Power Design with High-Level Power Estimation and Power-Aware Synthesis

    CERN Document Server

    Ahuja, Sumit; Shukla, Sandeep Kumar

    2012-01-01

    Low-power ASIC/FPGA based designs are important due to the need for extended battery life, reduced form factor, and lower packaging and cooling costs for electronic devices. These products require fast turnaround time because of the increasing demand for handheld electronic devices such as cell-phones, PDAs and high performance machines for data centers. To achieve short time to market, design flows must facilitate a much shortened time-to-product requirement. High-level modeling, architectural exploration and direct synthesis of design from high level description enable this design process. This book presents novel research techniques, algorithms,methodologies and experimental results for high level power estimation and power aware high-level synthesis. Readers will learn to apply such techniques to enable design flows resulting in shorter time to market and successful low power ASIC/FPGA design. Integrates power estimation and reduction for high level synthesis, with low-power, high-level design; Shows spec...

  3. High performance technique for database applicationsusing a hybrid GPU/CPU platform

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Hybrid GPU/CPU platform. In particular, our technique solves the problem of the low efficiency resulting from running short-length sequences in a database on a GPU. To verify our technique, we applied it to the widely used Smith-Waterman algorithm

  4. A Hybrid CPU/GPU Pattern-Matching Algorithm for Deep Packet Inspection.

    Directory of Open Access Journals (Sweden)

    Chun-Liang Lee

    The large quantities of data now being transferred via high-speed networks have made deep packet inspection indispensable for security purposes. Scalable and low-cost signature-based network intrusion detection systems have been developed for deep packet inspection on various software platforms. Traditional approaches that only involve central processing units (CPUs) are now considered inadequate in terms of inspection speed. Graphics processing units (GPUs) have superior parallel processing power, but transmission bottlenecks can reduce optimal GPU efficiency. In this paper we describe our proposal for a hybrid CPU/GPU pattern-matching algorithm (HPMA) that divides and distributes the packet-inspecting workload between a CPU and GPU. All packets are initially inspected by the CPU and filtered using a simple pre-filtering algorithm, and packets that might contain malicious content are sent to the GPU for further inspection. Test results indicate that in terms of random payload traffic, the matching speed of our proposed algorithm was 3.4 times and 2.7 times faster than those of the AC-CPU and AC-GPU algorithms, respectively. Further, HPMA achieved higher energy efficiency than the other tested algorithms.
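The division of labor in HPMA can be sketched as follows: a cheap CPU-side pre-filter passes only suspicious packets on to full matching (which in the paper runs on the GPU). The prefix-set pre-filter below is a simplification of the paper's actual filtering tables:

```python
def build_prefilter(signatures, k=2):
    # Set of k-byte prefixes of all signatures: the cheap CPU-side test.
    return {sig[:k] for sig in signatures}

def inspect(packets, signatures, k=2):
    prefixes = build_prefilter(signatures, k)
    # Stage 1 (CPU): keep only packets containing some signature prefix.
    suspicious = [
        p for p in packets
        if any(p[i:i + k] in prefixes for i in range(len(p) - k + 1))
    ]
    # Stage 2 (GPU in the paper): full matching on the filtered subset only.
    return [p for p in suspicious if any(sig in p for sig in signatures)]
```

The pre-filter admits false positives but no false negatives, so the expensive full-matching stage only ever sees a fraction of the traffic — the source of the reported speedup.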

  5. DSM vs. NSM: CPU Performance Tradeoffs in Block-Oriented Query Processing

    NARCIS (Netherlands)

    M. Zukowski (Marcin); N.J. Nes (Niels); P.A. Boncz (Peter)

    2008-01-01

    Comparisons between the merits of row-wise storage (NSM) and columnar storage (DSM) are typically made with respect to the persistent storage layer of database systems. In this paper, however, we focus on the CPU efficiency tradeoffs of tuple representations inside the query

  6. A Bio-inspired Approach for Power and Performance Aware Resource Allocation in Clouds

    Directory of Open Access Journals (Sweden)

    Kumar Rajesh

    2016-01-01

    In order to cope with increasing demand, cloud market players such as Amazon, Microsoft, Google, Gogrid, Flexiant, etc. have set up large data centers. The monotonically increasing size of data centers and the heterogeneity of their resources have made resource allocation a challenging task. A large percentage of the total energy consumption of data centers is wasted because of under-utilization of resources. Thus, there is a need for a resource allocation technique that improves the utilization of resources without affecting the performance of the services being delivered to end users. In this work, a bio-inspired resource allocation approach is proposed with the aim of improving utilization, and hence the energy efficiency, of the cloud infrastructure. The proposed approach makes use of Cuckoo search for power- and performance-aware allocation of resources to the services hired by end users. The proposed approach is implemented in CloudSim. The simulation results show approximately 12% savings in energy consumption.
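The energy objective being optimized, packing services onto as few active hosts as possible so the rest can be powered down, can be illustrated with a greedy first-fit-decreasing placement. Note this greedy rule is only a stand-in: the paper uses Cuckoo search, a global metaheuristic, to explore such placements:

```python
def consolidate(vm_demands, host_capacity):
    # First-fit-decreasing placement: pack VMs onto as few hosts as
    # possible so idle hosts can be switched off. Illustrates the energy
    # objective only, not the paper's Cuckoo-search optimizer.
    hosts = []        # remaining capacity per powered-on host
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for h, free in enumerate(hosts):
            if free >= demand:        # reuse an already powered-on host
                hosts[h] -= demand
                placement[vm] = h
                break
        else:                         # no host fits: power on a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)
```

Fewer powered-on hosts means less idle power, which is where most of the reported energy saving comes from; a metaheuristic can beat the greedy rule by also balancing performance constraints.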

  7. Reconstruction of the neutron spectrum using an artificial neural network in CPU and GPU

    International Nuclear Information System (INIS)

    Hernandez D, V. M.; Moreno M, A.; Ortiz L, M. A.; Vega C, H. R.; Alonso M, O. E.

    2016-10-01

    Computing power in personal computers has been steadily increasing; computers now have several processors in the CPU as well as multiple CUDA cores in the graphics processing unit (GPU), and both can be used individually or combined to perform scientific computation without resorting to processor arrays or supercomputing facilities. The Bonner sphere spectrometer is the most commonly used multi-element system for neutron detection and spectrometry. Each sphere-detector combination gives a particular response that depends on the energy of the neutrons, and the total set of these responses is known as the response matrix Rφ(E). The counting rates obtained with each sphere and the neutron spectrum are thus related through the Fredholm equation in its discrete version. Reconstructing the spectrum therefore requires solving an ill-conditioned system of equations with an infinite number of solutions, and to find the appropriate solution the use of artificial intelligence through neural networks, on both CPU and GPU platforms, has been proposed. (Author)

  8. SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, K; Chen, D. Z; Hu, X. S [University of Notre Dame, Notre Dame, IN (United States); Zhou, B [Altera Corp., San Jose, CA (United States)

    2014-06-01

    Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing patterns in dose deposition, which leads to several memory efficiency issues on GPU such as un-coalesced writing and atomic operations. We propose a new method to alleviate such issues on CPU-GPU heterogeneous systems, which achieves overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition is to accumulate dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for CPU or GPU: (1) each GPU thread writes dose results with location information to a buffer on GPU memory, which achieves fully-coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation for various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Comparing with a straightforward MCCS implementation on the same system (using both CPU and GPU for radiation ray tracing), our method gained 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance for MCCS on the CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches. 
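The three-step scheme can be shown in sequential form: an append-only buffer replaces scattered atomic adds, and a single accumulation pass builds the volume. This Python sketch mirrors the structure only; the actual implementation splits these steps across GPU, DMA, and CPU and pipelines them across streams:

```python
def deposit_buffered(events, volume_shape):
    # Step 1 (GPU-side in the paper): each event appends a
    # (voxel_index, dose) pair to a buffer. Append-only writes need no
    # atomic adds and can be made fully coalesced.
    buffer = [(voxel, dose) for voxel, dose in events]

    # Step 2 would transfer the buffer from GPU to CPU memory (DMA).

    # Step 3 (CPU-side): build the dose volume in one accumulation pass.
    nx, ny, nz = volume_shape
    volume = [0.0] * (nx * ny * nz)
    for voxel, dose in buffer:
        volume[voxel] += dose
    return volume
```

Because the three steps use different hardware resources, independent streams of events can overlap them in a pipeline, which is where the reported 2-5X speedup comes from.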
This research was supported in part by NSF under Grants CCF

  9. Power-aware load balancing of large scale MPI applications

    OpenAIRE

    Etinski, Maja; Corbalán González, Julita; Labarta Mancho, Jesús José; Valero Cortés, Mateo; Veidenbaum, Alex

    2009-01-01

    Power consumption is a very important issue for the HPC community, both at the level of a single application and at the level of a whole workload. The load imbalance of an MPI application can be exploited to save CPU energy without penalizing the execution time. An application is load imbalanced when some nodes are assigned more computation than others. The nodes with less computation can be run at a lower frequency, since otherwise they have to wait for the nodes with more computation, blocked in MPI calls. A te...
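The underlying observation can be sketched numerically: a node whose computation is only a fraction of the critical-path load can run at a proportionally lower frequency and still reach the MPI synchronization point on time. The linear time-frequency model and the rounding to discrete DVFS levels are simplifying assumptions:

```python
def assign_frequencies(loads, f_max, f_levels):
    # Scale each node's frequency to its share of the maximum load:
    # a node with half the work can run at roughly half speed without
    # delaying the synchronization point (linear time model assumed).
    f_levels = sorted(f_levels)
    max_load = max(loads)
    out = []
    for load in loads:
        target = f_max * load / max_load
        # Round up to the nearest available DVFS level (fall back to
        # the highest level if none is large enough).
        out.append(next((f for f in f_levels if f >= target), f_levels[-1]))
    return out
```

Nodes that would otherwise idle in MPI_Wait instead spend the same wall time computing at lower power, so energy drops without lengthening the run.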

  10. Using the CPU and GPU for real-time video enhancement on a mobile computer

    CSIR Research Space (South Africa)

    Bachoo, AK

    2010-09-01

    In this paper, the current advances in mobile CPU and GPU hardware are used to implement video enhancement algorithms in a new way on a mobile computer. Both the CPU and GPU are used effectively to achieve real-time performance for complex image enhancement...

  11. Coupling SIMD and SIMT architectures to boost performance of a phylogeny-aware alignment kernel

    Directory of Open Access Journals (Sweden)

    Alachiotis Nikolaos

    2012-08-01

    Background: Aligning short DNA reads to a reference sequence alignment is a prerequisite for detecting their biological origin and analyzing them in a phylogenetic context. With the PaPaRa tool we introduced a dedicated dynamic programming algorithm for simultaneously aligning short reads to reference alignments and corresponding evolutionary reference trees. The algorithm aligns short reads to the phylogenetic profiles that correspond to the branches of such a reference tree, and needs to perform an immense number of pairwise alignments. Therefore, we explore vector intrinsics and GPUs to accelerate the PaPaRa alignment kernel. Results: We optimized and parallelized PaPaRa on CPUs and GPUs. Via SSE 4.1 SIMD (Single Instruction, Multiple Data) intrinsics for x86 SIMD architectures and multi-threading, we obtained a 9-fold acceleration on a single core as well as linear speedups with respect to the number of cores. The peak CPU performance amounts to 18.1 GCUPS (Giga Cell Updates per Second) using all four physical cores on an Intel i7 2600 CPU running at 3.4 GHz; the average CPU performance over all test runs is 12.33 GCUPS. We also used OpenCL to execute PaPaRa on a GPU SIMT (Single Instruction, Multiple Threads) architecture. An NVIDIA GeForce 560 GPU delivered peak and average performance of 22.1 and 18.4 GCUPS, respectively. Finally, we combined the SIMD and SIMT implementations into a hybrid CPU-GPU system that achieved an accumulated peak performance of 33.8 GCUPS. Conclusions: This accelerated version of PaPaRa (available at http://www.exelixis-lab.org/software.html) provides a significant performance improvement that allows for analyzing larger datasets in less time. We observe that state-of-the-art SIMD and SIMT architectures deliver comparable performance for this dynamic programming kernel when the "competing programmer approach" is deployed. Finally, we show that overall performance can be substantially increased
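The kernel being accelerated is the Smith-Waterman-style cell-update recurrence; GCUPS simply counts how many such cell updates complete per second. A scalar reference version (with illustrative scoring parameters, not PaPaRa's phylogeny-aware scoring) looks like this:

```python
def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    # Scalar Smith-Waterman matrix fill: the cell-update recurrence that
    # SSE/OpenCL versions vectorize. Performance of such kernels is quoted
    # as GCUPS = (len(a) * len(b) cell updates) / runtime / 1e9.
    prev = [0] * (len(b) + 1)   # previous DP row (local alignment: zeros)
    best = 0
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(0, diag, prev[j] + gap, cur[j - 1] + gap))
        best = max(best, max(cur))
        prev = cur
    return best
```

Each inner-loop iteration is one cell update; SIMD lanes (or GPU threads) compute many such cells in parallel, which is exactly what the GCUPS figures in the abstract measure.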

  12. A Work-Demand Analysis Compatible with Preemption-Aware Scheduling for Power-Aware Real-Time Tasks

    Directory of Open Access Journals (Sweden)

    Da-Ren Chen

    2013-01-01

    Due to the importance of slack-time utilization for power-aware scheduling algorithms, we propose a work-demand analysis method called the parareclamation algorithm (PRA) to increase the slack-time utilization of existing real-time DVS algorithms. PRA is an online scheduling method for power-aware real-time tasks under the rate-monotonic (RM) policy. It can be implemented to be fully compatible with preemption-aware or transition-aware scheduling algorithms without increasing their computational complexity. The key technique of the heuristic doubles the analytical interval and turns the deferrable workload into potential slack time. Theoretical proofs show that PRA guarantees the task deadlines in a feasible RM schedule and takes linear time and space. Experimental results indicate that the proposed method, combined seamlessly with preemption-aware methods, reduces energy consumption by 14% on average over the original algorithms.
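As background to the RM feasibility assumption above, the classical Liu-Layland utilization bound gives a sufficient schedulability test for rate-monotonic scheduling (a standard textbook result, not part of PRA itself); a minimal sketch:

```python
def rm_schedulable(tasks):
    """Sufficient Liu-Layland schedulability test for rate-monotonic scheduling.

    tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if total utilization <= n * (2^(1/n) - 1).
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Two tasks using 50% of the CPU pass the ~0.828 two-task bound
print(rm_schedulable([(1, 4), (1, 4)]))  # → True
```

Note this test is only sufficient: a task set that fails the bound may still be feasible under an exact response-time analysis.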

  13. NUMA-Aware Thread Scheduling for Big Data Transfers over Terabits Network Infrastructure

    Directory of Open Access Journals (Sweden)

    Taeuk Kim

    2018-01-01

    The ever-growing trend of big data has led scientists to share and transfer simulation and analytical data across geo-distributed research and computing facilities. However, the existing data transfer frameworks used for data sharing lack the capability to adopt the attributes of the underlying parallel file systems (PFS). LADS (Layout-Aware Data Scheduling) is an end-to-end data transfer tool optimized for terabit networks that uses layout-aware data scheduling via the PFS; however, it does not consider the NUMA (Non-Uniform Memory Access) architecture. In this paper, we propose NUMA-aware thread and resource scheduling for optimized data transfer in terabit networks. First, we propose distributed RMA buffers to reduce memory-controller contention in CPU sockets, and then we schedule threads based on the CPU socket and the NUMA nodes inside each socket to reduce memory access latency. We design and implement the proposed resource and thread scheduling in the existing LADS framework. Experimental results showed improvements of 21.7% to 44% with memory-level optimizations in the LADS framework as compared to the baseline without any optimization.
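The socket-local thread placement described above can be approximated from user space on Linux via CPU affinity; a minimal stdlib sketch (not the LADS implementation, which additionally places RMA buffers per socket):

```python
import os

def pin_to_cpus(cpus):
    """Restrict the calling process/thread to the given CPU set (Linux only).

    A NUMA-aware scheduler would choose `cpus` from the socket closest to the
    thread's buffers; here the set is simply passed in by the caller.
    """
    os.sched_setaffinity(0, cpus)    # 0 = the calling process
    return os.sched_getaffinity(0)   # read back what the kernel accepted

if hasattr(os, "sched_setaffinity"):  # the API exists only on Linux
    allowed = pin_to_cpus({0})        # pin to CPU 0
else:
    allowed = None
```

In a real NUMA-aware design the CPU set would be derived from the topology (e.g. as reported by `numactl --hardware`) rather than hard-coded.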

  14. CPU and GPU (CUDA) Template Matching Comparison

    Directory of Open Access Journals (Sweden)

    Evaldas Borcovas

    2014-05-01

    Image processing, computer vision and other complicated optical information processing algorithms require large resources, and it is often desired to execute these algorithms in real time. It is hard to fulfil such requirements with a single CPU. NVidia's CUDA technology enables the programmer to use the GPU resources in the computer. The current research was made with an Intel Pentium Dual-Core T4500 2.3 GHz processor with 4 GB RAM DDR3 (CPU I) and an NVidia GeForce GT320M CUDA-compatible graphics card (GPU I), and with an Intel Core i5-2500K 3.3 GHz processor with 4 GB RAM DDR3 (CPU II) and an NVidia GeForce GTX 560 CUDA-compatible graphics card (GPU II). The OpenCV 2.1 and CUDA-compatible OpenCV 2.4.0 libraries were used for testing. The main tests were made with the standard MatchTemplate function from the OpenCV libraries. The algorithm uses a main image and a template, and the influence of these factors was tested: the main image and template were resized, and the algorithm's computing time and performance in Gtpix/s were measured. According to the results obtained, GPU computing on the hardware mentioned above is up to 24 times faster when processing a large amount of information. When the images are small, the performance of the CPU and GPU is not significantly different. The choice of template size influences the CPU computation time. The difference in computing time between the GPUs can be explained by the number of cores they have.
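Template matching itself is a simple sliding-window search; a pure-Python sketch using the sum of squared differences (one of several similarity measures supported by OpenCV's MatchTemplate) illustrates the O(image size × template size) cost that makes GPU offload attractive:

```python
def match_template(image, templ):
    """Brute-force template matching by sum of squared differences (SSD).

    image, templ: 2D lists of pixel intensities.
    Returns the (row, col) of the best-matching placement. A much-simplified
    CPU analogue of OpenCV's MatchTemplate, not the library implementation.
    """
    ih, iw = len(image), len(image[0])
    th, tw = len(templ), len(templ[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):          # slide the template over the image
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - templ[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 6, 0],
         [0, 0, 0, 0]]
templ = [[9, 8],
         [7, 6]]
print(match_template(image, templ))  # → (1, 1)
```

Every placement is independent of every other, which is exactly why the workload parallelizes so well across GPU cores.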

  15. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    The dynamic deployment of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after service performance has already degraded, and therefore lag. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The model provides a CPU-utilization forecast to the VM dynamic deployment process, which can then complete deployment before service performance degrades. This not only ensures the quality of service but also improves server performance and resource utilization. The new prediction method based on the ARIMA-BP neural network includes four parts: preprocessing the collected data, building the ARIMA-BP predictive model, modifying the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.
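The hybrid idea, a linear time-series model whose nonlinear residuals are corrected by a second model, can be sketched in miniature. Here a least-squares AR(1) fit stands in for the full ARIMA model, and a mean-residual correction stands in for the BP network; both substitutions are deliberate simplifications, not the paper's method:

```python
def ar1_fit(series):
    """Least-squares AR(1) coefficient a, where x[t] ≈ a * x[t-1]."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def hybrid_forecast(series):
    """One-step forecast: linear AR(1) prediction plus a residual correction
    (the correction term is a stand-in for the BP network's residual model)."""
    a = ar1_fit(series)
    residuals = [series[t] - a * series[t - 1] for t in range(1, len(series))]
    correction = sum(residuals) / len(residuals)
    return a * series[-1] + correction

# Hypothetical CPU-utilization samples (percent)
cpu_util = [40.0, 42.0, 41.0, 43.0, 44.0]
print(round(hybrid_forecast(cpu_util), 2))
```

A deployment manager would compare such a forecast against a threshold and trigger VM migration before the predicted overload occurs.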

  16. Optimizing The Performance of Streaming Numerical Kernels On The IBM Blue Gene/P PowerPC 450

    KAUST Repository

    Malas, Tareq

    2011-07-01

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a formidable challenge despite the regularity of memory access. Sophisticated optimization techniques beyond the capabilities of modern compilers are required to fully utilize the Central Processing Unit (CPU). The aim of the work presented here is to improve the performance of streaming numerical kernels on high-performance architectures by developing efficient algorithms that utilize the vectorized floating-point units. The importance of development time demands tools that enable simple yet direct development in assembly for the power-efficient cores featuring in-order execution and multiple-issue units. We implement several stencil kernels for a variety of cached-memory scenarios using our Python instruction simulation and generation tool. Our technique simplifies the development of efficient assembly code for the IBM Blue Gene/P supercomputer's PowerPC 450, enabling high-level design, construction, verification, and simulation on a subset of the CPU's instruction set. Our framework can implement streaming numerical kernels on current and future high-performance architectures. Finally, we present several automatically generated implementations, including a 27-point stencil achieving a 1.7x speedup over the best previously published results.
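The streaming stencil kernels discussed above apply a fixed weighted neighbourhood to every grid point; a much-reduced 1D 3-point analogue of the paper's 27-point 3D stencils shows the access pattern:

```python
def stencil_3point(u, a=0.25, b=0.5, c=0.25):
    """One sweep of a 1D 3-point stencil: v[i] = a*u[i-1] + b*u[i] + c*u[i+1].

    A much-reduced analogue of the 27-point 3D stencils discussed above;
    boundary points are held fixed. The weights here are illustrative.
    """
    v = u[:]
    for i in range(1, len(u) - 1):
        v[i] = a * u[i - 1] + b * u[i] + c * u[i + 1]
    return v

print(stencil_3point([0.0, 0.0, 4.0, 0.0, 0.0]))  # → [0.0, 1.0, 2.0, 1.0, 0.0]
```

The regular, neighbour-only access is what makes these kernels "streaming": each sweep reads every point a small constant number of times, so performance is bounded by memory bandwidth and by how well the inner loop maps onto the vector units.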

  17. Comparison of the CPU and memory performance of StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA)

    International Nuclear Information System (INIS)

    Palombo, G.

    2012-01-01

    High Energy Physics data sets are often characterized by a huge number of events. Therefore, it is extremely important to use statistical packages able to efficiently analyze these unprecedented amounts of data. We compare the performance of the statistical packages StatPatternRecognition (SPR) and Toolkit for MultiVariate Analysis (TMVA). We focus on how the CPU time and memory usage of the learning process scale with data set size. As classifiers, we consider only Random Forests, Boosted Decision Trees and Neural Networks, each with specific settings. For our tests, we employ a data set widely used in the machine learning community, the "Threenorm" data set, as well as data tailored for testing various edge cases. For each data set, we steadily increase its size and check the CPU time and memory needed to build the classifiers implemented in SPR and TMVA. We show that SPR is often significantly faster and consumes significantly less memory. For example, the SPR implementation of Random Forest is an order of magnitude faster and consumes an order of magnitude less memory than TMVA on the Threenorm data.

  18. Energy-Aware Sensor Networks via Sensor Selection and Power Allocation

    KAUST Repository

    Niyazi, Lama B.

    2018-02-12

    Finite energy reserves and the irreplaceable nature of nodes in battery-driven wireless sensor networks (WSNs) motivate energy-aware network operation. This paper considers energy-efficiency in a WSN by investigating the problem of minimizing the power consumption consisting of both radiated and circuit power of sensor nodes, so as to determine an optimal set of active sensors and corresponding transmit powers. To solve such a mixed discrete and continuous problem, the paper proposes various sensor selection and power allocation algorithms of low complexity. Simulation results show an appreciable improvement in their performance over a system in which no selection strategy is applied, with a slight gap from derived lower bounds. The results further yield insights into the relationship between the number of activated sensors and its effect on total power in different regimes of operation, based on which recommendations are made for which strategies to use in the different regimes.
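As an illustration of the discrete half of the mixed problem above, a hypothetical greedy heuristic that activates sensors by utility per watt until a coverage target is met (this is not the paper's algorithm, whose selection criteria and power model differ; all names and values are illustrative):

```python
def greedy_select(sensors, required_utility):
    """Greedily activate sensors by utility-per-watt until a target is met.

    sensors: list of (utility, power_cost) pairs, one per sensor.
    Returns (sorted_selected_indices, total_power), or None if even
    activating every sensor cannot meet the target.
    """
    order = sorted(range(len(sensors)),
                   key=lambda i: sensors[i][0] / sensors[i][1],
                   reverse=True)                     # best utility/watt first
    chosen, utility, power = [], 0.0, 0.0
    for i in order:
        if utility >= required_utility:
            break
        chosen.append(i)
        utility += sensors[i][0]
        power += sensors[i][1]
    return (sorted(chosen), power) if utility >= required_utility else None

print(greedy_select([(5.0, 1.0), (4.0, 2.0), (1.0, 0.1)], 6.0))
```

A full solution would then optimize transmit powers continuously over the chosen active set, matching the mixed discrete/continuous structure described in the abstract.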

  19. Design improvement of FPGA and CPU based digital circuit cards to solve timing issues

    International Nuclear Information System (INIS)

    Lee, Dongil; Lee, Jaeki; Lee, Kwang-Hyun

    2016-01-01

    The digital circuit cards installed at NPPs (Nuclear Power Plants) are mostly composed of a CPU (Central Processing Unit) and a PLD (Programmable Logic Device; these include FPGAs (Field Programmable Gate Arrays) and CPLDs (Complex Programmable Logic Devices)). This structure is typical of how digital circuit cards are built and maintained, and as a structure it poses no major problems. Signal delay, however, causes many problems when various ICs (Integrated Circuits) and several circuit cards are connected to the BUS of the backplane. This paper suggests a structure to improve BUS signal timing in a circuit card consisting of a CPU and an FPGA. Nowadays, as circuit cards have become complex and large volumes of data are communicated at high speed through the BUS, data integrity is the most important issue. The conventional design considers neither the delay nor the synchronicity of signals, which causes many problems in data processing. To solve these problems, it is important to isolate the BUS controller from the CPU and maintain a constant signal delay by using a PLD.

  20. Design improvement of FPGA and CPU based digital circuit cards to solve timing issues

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dongil; Lee, Jaeki; Lee, Kwang-Hyun [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The digital circuit cards installed at NPPs (Nuclear Power Plants) are mostly composed of a CPU (Central Processing Unit) and a PLD (Programmable Logic Device; these include FPGAs (Field Programmable Gate Arrays) and CPLDs (Complex Programmable Logic Devices)). This structure is typical of how digital circuit cards are built and maintained, and as a structure it poses no major problems. Signal delay, however, causes many problems when various ICs (Integrated Circuits) and several circuit cards are connected to the BUS of the backplane. This paper suggests a structure to improve BUS signal timing in a circuit card consisting of a CPU and an FPGA. Nowadays, as circuit cards have become complex and large volumes of data are communicated at high speed through the BUS, data integrity is the most important issue. The conventional design considers neither the delay nor the synchronicity of signals, which causes many problems in data processing. To solve these problems, it is important to isolate the BUS controller from the CPU and maintain a constant signal delay by using a PLD.

  1. Empirical CPU power modelling and estimation in the gem5 simulator

    OpenAIRE

    Basireddy, Karunakar Reddy; Walker, Matthew; Balsamo, Domenico; Diestelhorst, Stephan; Al-Hashimi, Bashir; Merrett, Geoffrey

    2017-01-01

    Power modelling is important for modern CPUs to inform power management approaches and allow design space exploration. Power simulators, combined with a full-system architectural simulator such as gem5, enable power-performance trade-offs to be investigated early in the design of a system with different configurations (e.g. number of cores, cache size, etc.). However, the accuracy of existing power simulators, such as McPAT, is known to be low due to abstraction and specification errors, a...
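Empirical CPU power models of this kind are typically least-squares regressions from performance-counter rates to measured power; a minimal single-counter sketch on synthetic calibration data (real models regress over many counters simultaneously):

```python
def fit_linear_power_model(event_rates, measured_power):
    """Ordinary least-squares fit of P ≈ alpha + beta * event_rate.

    event_rates: per-sample PMC readings (e.g. instructions per cycle).
    measured_power: corresponding measured power in watts.
    Returns (alpha, beta): idle offset and per-event energy slope.
    """
    n = len(event_rates)
    mx = sum(event_rates) / n
    my = sum(measured_power) / n
    beta = (sum((x - mx) * (y - my)
                for x, y in zip(event_rates, measured_power))
            / sum((x - mx) ** 2 for x in event_rates))
    alpha = my - beta * mx
    return alpha, beta

# Synthetic calibration data where power is exactly 0.5 + 1.0 * rate
alpha, beta = fit_linear_power_model([0.5, 1.0, 1.5, 2.0],
                                     [1.0, 1.5, 2.0, 2.5])
```

Once fitted on hardware measurements, such a model can estimate run-time power from counters alone, which is the basis of the empirical approach the abstract describes.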

  2. Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Luca Chiaraviglio

    2016-06-01

    We present a model to evaluate server lifetime in cloud data centers (DCs). In particular, when the server power level is decreased, the failure rate tends to fall because fewer components are powered on; however, transitions between different power states increase the failure rate. We therefore capture these two effects in a server lifetime model, subject to an energy-aware management policy, and evaluate the model in a realistic case study. Our results show that the impact on server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power-state change to a server.
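The two opposing effects can be illustrated with a toy failure-rate model (the functional form and every parameter value below are illustrative, not taken from the paper):

```python
def effective_failure_rate(base_rate, power_fraction, transitions_per_day,
                           transition_penalty):
    """Toy server failure-rate model in the spirit of the trade-off above:
    a lower power state scales down the steady-state failure rate, but each
    power-state change adds a fixed wear contribution.
    """
    steady = base_rate * power_fraction           # fewer components powered on
    wear = transition_penalty * transitions_per_day
    return steady + wear

# Illustrative comparison: always-on vs. aggressively power-cycled server
always_on = effective_failure_rate(1e-5, 1.0, 0, 2e-7)
cycled    = effective_failure_rate(1e-5, 0.6, 40, 2e-7)
```

With these illustrative numbers the cycled server ends up with the higher effective failure rate, which is exactly why the abstract argues for lifetime-aware decisions about when a power-state change is worthwhile.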

  3. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel CPU of Core (TM) 2 Quad Q6600 and a GPU of Geforce 8800GT, with software support by OpenMP and CUDA. It was tested in three parallelization device setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  4. Driving the Power of AIX Performance Tuning on IBM Power

    CERN Document Server

    Milberg, Ken

    2009-01-01

    A concise reference for IT professionals, this book goes beyond the rules and contains best practices and strategies for a solid tuning methodology. Tips based on years of experience from an AIX tuning master show specific steps for monitoring and tuning CPU, virtual memory, disk I/O, and network components. Also offering techniques for tuning Oracle and Linux structures that run on an IBM Power system, as well as for the new AIX 6.1, this manual discusses what tools are available, how best to use them to collect historical data, and when to analyze trends and results. The only comprehensive,

  5. Understanding, modeling, and improving main-memory database performance

    OpenAIRE

    Manegold, S.

    2002-01-01

    During the last two decades, computer hardware has experienced remarkable developments. Especially CPU (clock-)speed has been following Moore's Law, i.e., doubling every 18 months; and there is no indication that this trend will change in the foreseeable future. Recent research has revealed that database performance, even with main-memory based systems, can hardly benefit from the ever increasing CPU power. The reason for this is that the performance of other hardware components h...

  6. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Throneburg, E. B.; Jones, J. M. [AREVA NP Inc., 7207 IBM Drive, Charlotte, NC 28262 (United States)

    2006-07-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  7. Situation awareness and trust in computer-based procedures in nuclear power plant operations

    International Nuclear Information System (INIS)

    Throneburg, E. B.; Jones, J. M.

    2006-01-01

    Situation awareness and trust are two issues that need to be addressed in the design of computer-based procedures for nuclear power plants. Situation awareness, in relation to computer-based procedures, concerns the operators' knowledge of the plant's state while following the procedures. Trust concerns the amount of faith that the operators put into the automated procedures, which can affect situation awareness. This paper first discusses the advantages and disadvantages of computer-based procedures. It then discusses the known aspects of situation awareness and trust as applied to computer-based procedures in nuclear power plants. An outline of a proposed experiment is then presented that includes methods of measuring situation awareness and trust so that these aspects can be analyzed for further study. (authors)

  8. Relationship between people's awareness of environmental capabilities of saving energy, photovoltaic power generation and nuclear power generation

    International Nuclear Information System (INIS)

    Hashiba, Takashi

    2001-01-01

    In this research, the relationship between people's awareness of the environmental capabilities of energy saving, photovoltaic power generation (PV), and nuclear power generation was investigated using a questionnaire. The results showed that energy saving is practised without reference to its environmental preservation effect, although older people tend to regard energy saving as a contribution to environmental preservation. The attitude toward the use of PV is closely related to awareness of energy and environmental concerns. Acceptance of cost sharing for the introduction of wide-scale PV systems is related to the environmental-protection image of PV and to attitudes toward the social convenience lost as a result of energy-saving activities. The older people become, the more they prioritize environmental protection over social convenience. There is little relationship between awareness of energy and environmental concerns and the environmental capability of nuclear power generation, which discharges no CO 2 during generation. (author)

  9. Conserved-peptide upstream open reading frames (CPuORFs) are associated with regulatory genes in angiosperms

    Directory of Open Access Journals (Sweden)

    Richard A Jorgensen

    2012-08-01

    Upstream open reading frames (uORFs) are common in eukaryotic transcripts, but those that encode conserved peptides (CPuORFs) occur in less than 1% of transcripts. The peptides encoded by three plant CPuORF families are known to control translation of the downstream ORF in response to a small signal molecule (sucrose, polyamines and phosphocholine). In flowering plants, transcription factors are statistically over-represented among genes that possess CPuORFs, and in general it appeared that many CPuORF genes also had other regulatory functions, though the significance of this suggestion was uncertain (Hayden and Jorgensen, 2007). Five years later the literature provides much more information on the functions of many CPuORF genes. Here we reassess the functions of 27 known CPuORF gene families and find that 22 of these families play a variety of different regulatory roles, from transcriptional control to protein turnover, and from small signal molecules to signal transduction kinases. Clearly then, there is indeed a strong association of CPuORFs with regulatory genes. In addition, 16 of these families play key roles in a variety of different biological processes. Most strikingly, the core sucrose response network includes three different CPuORFs, creating the potential for sophisticated balancing of the network in response to three different molecular inputs. We propose that the function of most CPuORFs is to modulate translation of a downstream major ORF (mORF) in response to a signal molecule recognized by the conserved peptide and that because the mORFs of CPuORF genes generally encode regulatory proteins, many of them centrally important in the biology of plants, CPuORFs play key roles in balancing such regulatory networks.

  10. Programming models for energy-aware systems

    Science.gov (United States)

    Zhu, Haitao

    Energy efficiency is an important goal of modern computing, with direct impact on system operational cost, reliability, usability and environmental sustainability. This dissertation describes the design and implementation of two innovative programming languages for constructing energy-aware systems. First, it introduces ET, a strongly typed programming language to promote and facilitate energy-aware programming, with a novel type system design called Energy Types. Energy Types is built upon a key insight into today's energy-efficient systems and applications: despite the popular perception that energy and power can only be described in joules and watts, real-world energy management is often based on discrete phases and modes, which in turn can be reasoned about by type systems very effectively. A phase characterizes a distinct pattern of program workload, and a mode represents an energy state the program is expected to execute in. Energy Types is designed to reason about energy phases and energy modes, bringing programmers into the optimization of energy management. Second, the dissertation develops Eco, an energy-aware programming language centering around sustainability. A sustainable program built from Eco is able to adaptively adjust its own behavior to stay on a given energy budget, avoiding both a deficit that would lead to battery drain or CPU overheating, and a surplus that could have been used to improve the quality of the program output. Sustainability is viewed as a form of supply and demand matching, and a sustainable program consistently maintains the equilibrium between supply and demand. ET is implemented as a prototype compiler for smartphone programming on Android, and Eco is implemented as a minimal extension to Java. 
Programming practices and benchmarking experiments in these two new languages showed that ET can lead to significant energy savings for Android Apps and Eco can efficiently promote battery awareness and temperature awareness in real

  11. A Power Balance Aware Wireless Charger Deployment Method for Complete Coverage in Wireless Rechargeable Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tu-Liang Lin

    2016-08-01

    Traditional sensor nodes are usually battery powered, and limited battery power constrains the overall lifespan of the sensors. Recently, wireless power transmission technology has been applied in wireless sensor networks (WSNs) to transmit power from chargers to the sensor nodes and so solve the limited-battery-power problem. The combination of wireless sensors and wireless chargers forms a new type of network called a wireless rechargeable sensor network (WRSN). In this research, we focus on how to deploy chargers effectively so as to maximize the lifespan of the network. In WSNs, sensor nodes near the sink consume more power than nodes far from the sink because of frequent data forwarding, but this important source of power imbalance has not been considered in previous charger deployment research. Here, a power balance aware deployment (PBAD) method is proposed to address the power imbalance in WRSNs and to design the charger deployment with maximum charging efficiency. The proposed method is aware of the sink node whose presence causes the unbalanced power consumption. The simulation results show that the proposed PBAD algorithm performs better than other deployment methods, with fewer chargers deployed as a result.

  12. Enhanced situation awareness and decision making for an intelligent reconfigurable reactor power controller

    International Nuclear Information System (INIS)

    Kenney, S.J.; Edwards, R.M.

    1996-01-01

    A Learning Automata based intelligent reconfigurable controller has been adapted for use as a reactor power controller to achieve improved reactor temperature performance. The intelligent reconfigurable controller is capable of enforcing either a classical or an optimal reactor power controller based on control performance feedback. Four control performance evaluation measures (dynamically estimated average quadratic temperature error, power, rod reactivity, and rod reactivity rate) were developed to provide feedback to the control decision component of the intelligent reconfigurable controller. Fuzzy Logic and Neural Network controllers have been studied for inclusion in the bank of controllers that forms the intermediate level of an enhanced intelligent reconfigurable reactor power controller (IRRPC). The increased number of alternatives available to the supervisory level of the IRRPC requires enhanced situation awareness. Additional performance measures have been designed, and a method for synthesizing them into a single indication of the overall performance of the currently enforced reactor power controller has been conceptualized. Modification of the reward/penalty scheme implemented in the existing IRRPC to increase the quality of the supervisory-level decision process has been studied. The logogen model of human memory (Morton, 1969) and individual controller design information could be used to allocate reward to the most appropriate controller. Methods for allocating supervisory-level attention were also studied with the goal of maximizing the learning rate

  13. Case Study of Using High Performance Commercial Processors in Space

    Science.gov (United States)

    Ferguson, Roscoe C.; Olivas, Zulema

    2009-01-01

    The purpose of the Space Shuttle Cockpit Avionics Upgrade project (1999–2004) was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design, significant because the project incorporated new technology and approaches for the development of human-rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal of reducing crew workload and improving situational awareness created the need for high-performance Central Processing Units (CPUs). The CPU selected was from the PowerPC family, a reduced instruction set computer (RISC) line known for its high performance. However, the requirement for radiation tolerance forced a re-evaluation of the selected family member of the PowerPC line. Radiation testing revealed that the originally selected processor (PowerPC 7400) was too soft to meet mission objectives, and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant but did not meet the project's performance needs. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation-tolerant version but had some ability to detect failures. However, its cache tags did not provide parity, so the project incorporated a software strategy to detect radiation failures: dual paths for software generating commands to the legacy Space Shuttle avionics, to prevent failures due to the softness of the upgraded avionics.

  14. Enersave API: Android-based power-saving framework for mobile devices

    Directory of Open Access Journals (Sweden)

    A.M. Muharum

    2017-06-01

    Power consumption is a major factor to take into consideration when using mobile devices in the IoT field, and good power management requires a proper understanding of the way power is consumed by end devices. This paper is a continuation of the work in Ref. [1] and proposes an energy-saving API for the Android operating system to help developers turn their applications into energy-aware ones. The main features considered, which are heavily used in building smart applications and greatly impact the battery life of Android devices, are: screen brightness, colour scheme, CPU frequency, 2G/3G network, maps, low-power localisation, Bluetooth, and Wi-Fi. The assessment of the power-saving API was performed on real Android devices and compared against the most powerful power-saving applications currently available on the Android market, DU Battery Saver and Battery Saver 2016. The comparisons demonstrate that the Enersave API has a significant impact on power saving when incorporated in Android applications: while DU Battery Saver and Battery Saver 2016 help save 22.2% and 40.5% of battery power respectively, incorporating the Enersave API in Android applications can help save 84.6% of battery power.

  15. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL).

    Science.gov (United States)

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-08-10

    In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important part of the IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with the description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.
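Composite routing metrics of this kind are often combined as a weighted sum over per-parent contexts, with the node choosing the parent of lowest combined rank; a hypothetical sketch (the weights and metric names here are illustrative, not the RPAL definition):

```python
def composite_rank(parent, weights):
    """Weighted additive composite of per-parent routing metrics, in the
    spirit of SCAOF's combined contexts. Lower rank is better; each metric
    is assumed pre-normalized to [0, 1].
    """
    return sum(weights[name] * parent[name] for name in weights)

# Illustrative contexts: remaining-energy cost, expected transmissions, load
weights = {"energy_cost": 0.4, "etx": 0.4, "load": 0.2}
parents = [
    {"energy_cost": 0.2, "etx": 1.0, "load": 0.5},
    {"energy_cost": 0.8, "etx": 0.6, "load": 0.3},
]
best = min(parents, key=lambda p: composite_rank(p, weights))
```

In RPL terms, the objective function would recompute such ranks as link and node metrics change, letting preferred-parent selection track energy, reliability, robustness and resource contexts simultaneously.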

  16. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL)

    Directory of Open Access Journals (Sweden)

    Yibo Chen

    2015-08-01

    In recent years, IoT (Internet of Things) technologies have seen great advances, particularly the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important part of the IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with a description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and clarifies the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs, by combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations in both simulations and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages of network lifetime extension and high reliability and efficiency in different simulation scenarios and hardware testbeds.

  17. Dispatching strategies for coordinating environmental awareness and risk perception in wind power integrated system

    International Nuclear Information System (INIS)

    Jin, Jingliang; Zhou, Dequn; Zhou, Peng; Qian, Shuqu; Zhang, Mingming

    2016-01-01

    Wind power plays a significant role in the economic and environmental operation of an electric power system. Meanwhile, the variability and uncertainty of wind power generation bring technical and economic challenges for power system operation. In order to harmonize the relationship between environmental protection and risk management in power dispatching, this paper presents a stochastic dynamic economic emission dispatch model that combines the risk perception and environmental awareness of decision-makers, following the principle of chance-constrained programming. In this dispatch model, the description of wind power uncertainty is derived from the probability statistics of wind speed. Constraint-handling techniques are embedded as a heuristic strategy into the non-dominated sorting genetic algorithm-II. In addition, more information is extracted from the Pareto-optimal solution set by cluster analysis and fuzzy set theory. The simulation results demonstrate that increasing the share of wind power output brings higher risk, though it is beneficial for economic cost and environmental protection. Since different risk perceptions and environmental awareness can lead to diverse non-dominated solutions, decision-makers may choose an appropriate dispatching strategy according to their specific risk perception and environmental awareness. - Highlights: • A dispatch model combining environmental awareness and risk perception is proposed. • The uncertain characteristic of available wind power is determined. • Constraint-handling techniques are embedded into the genetic algorithm. • An appropriate decision-making method is designed. • Dispatching strategies can be coordinated by the proposed model and method.

  18. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    OpenAIRE

    Hiienkari, Markus; Teittinen, Jukka; Koskinen, Lauri; Turnquist, Matthew; Mäkipää, Jani; Rantala, Arto; Sopanen, Matti; Kaltiokallio, Mikko

    2015-01-01

    To minimize the energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable ...

  19. Contamination awareness at the Dresden Nuclear Power Station

    International Nuclear Information System (INIS)

    Pagel, D.J.; Rath, W.C.

    1986-01-01

    Dresden Nuclear Power Station, which is located ∼ 60 miles southwest of Chicago near Morris, Illinois, has been generating electricity since 1960. Owned by Commonwealth Edison, Dresden was the nation's first privately financed nuclear station. On its site are three boiling water reactors (BWRs). Due to the contamination potential inherent in a reactor, a contamination trending program was created at the station. Studies had indicated a rise in contamination events during refueling outages. Further increases were due to specific work projects such as hydrolyzing operations. The investigations suggested that contract personnel also increased the number of events. In 1983, a contamination awareness program was created. The 1984 contamination awareness program comprised the following: (1) a statistical review in which trended contamination events were discussed; (2) a demonstration of protective clothing removal by an individual making various mistakes; (3) scenarios developed for use in mock work areas; and (4) upper management involvement. Because of the 1984 program, favorable attention has been focused on Dresden by the US Nuclear Regulatory Commission and the Institute of Nuclear Power Operations

  20. Der ATLAS LVL2-Trigger mit FPGA-Prozessoren [The ATLAS LVL2 trigger with FPGA processors: development, construction and proof of functionality of the hybrid FPGA/CPU-based processor system ATLANTIS]

    CERN Document Server

    Singpiel, Holger

    2000-01-01

    This thesis describes the conception and implementation of the hybrid FPGA/CPU-based processing system ATLANTIS as a trigger processor for the proposed ATLAS experiment at CERN. CompactPCI provides the close coupling of a multi-FPGA system and a standard CPU. The system is scalable in computing power and flexible in use due to its partitioning into dedicated FPGA boards for computation, I/O tasks and private communication. The research activities based on the ATLANTIS system focus on two areas of the second-level trigger (LVL2). First, the major aim is the acceleration of time-critical B-physics trigger algorithms. The execution of the full-scan TRT algorithm on ATLANTIS, which has been used as a demonstrator, results in a speedup of 5.6 compared to a standard CPU. Next, the ATLANTIS system is used as a hardware platform for research work in conjunction with the ATLAS readout systems. For further studies a permanent installation of the ATLANTIS system in the LVL2 application testbed is f...

  1. Task Classification Based Energy-Aware Consolidation in Clouds

    Directory of Open Access Journals (Sweden)

    HeeSeok Choi

    2016-01-01

    We consider a cloud data center, in which the service provider supplies virtual machines (VMs) on hosts or physical machines (PMs) to its subscribers for computation in an on-demand fashion. For the cloud data center, we propose a task consolidation algorithm based on task classification (i.e., computation-intensive and data-intensive) and resource utilization (e.g., CPU and RAM). Furthermore, we design a VM consolidation algorithm to balance task execution time and energy consumption without violating a predefined service level agreement (SLA). Unlike the existing research on VM consolidation or scheduling, which applies no threshold or a single-threshold scheme, we focus on a double-threshold (upper and lower) scheme for VM consolidation. More specifically, when a host operates with resource utilization below the lower threshold, all the VMs on the host will be scheduled to be migrated to other hosts and the host will then be powered down, while when a host operates with resource utilization above the upper threshold, a VM will be migrated to avoid using 100% of resource utilization. Based on experimental performance evaluations with real-world traces, we show that our task classification based energy-aware consolidation algorithm (TCEA) achieves a significant energy reduction without incurring predefined SLA violations.
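
The double-threshold policy described in this abstract can be sketched in a few lines. The following Python fragment is an illustrative reading of the abstract only: the host names, utilization figures and the 0.2/0.8 thresholds are chosen as examples and are not taken from the paper, whose actual thresholds and migration heuristics are not given here.

```python
def consolidation_actions(hosts, lower=0.2, upper=0.8):
    """Decide migrations under a double-threshold policy (illustrative).

    hosts: dict mapping host id -> (utilization in [0, 1], list of VM ids).
    Returns (migrations, hosts_to_power_down), where migrations is a list
    of (vm, source_host) pairs still awaiting a destination.
    """
    migrations = []
    power_down = []
    for host, (util, vms) in hosts.items():
        if util < lower:
            # Underloaded: migrate all VMs away, then power the host down.
            migrations.extend((vm, host) for vm in vms)
            power_down.append(host)
        elif util > upper:
            # Overloaded: migrate one VM to avoid saturating the host.
            if vms:
                migrations.append((vms[0], host))
    return migrations, power_down

hosts = {
    "h1": (0.10, ["vm1", "vm2"]),   # underloaded -> drained and powered down
    "h2": (0.95, ["vm3", "vm4"]),   # overloaded -> sheds one VM
    "h3": (0.50, ["vm5"]),          # within thresholds -> untouched
}
migrations, power_down = consolidation_actions(hosts)
```

The lower threshold drains and powers down underloaded hosts, while the upper threshold sheds just enough load to keep a host away from saturation.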

  2. Case Study of Using High Performance Commercial Processors in a Space Environment

    Science.gov (United States)

    Ferguson, Roscoe C.; Olivas, Zulema

    2009-01-01

    The purpose of the Space Shuttle Cockpit Avionics Upgrade project was to reduce crew workload and improve situational awareness. The upgrade was to augment the Shuttle avionics system with new hardware and software. A major success of this project was the validation of the hardware architecture and software design. This was significant because the project incorporated new technology and approaches for the development of human-rated space software. An early version of this system was tested at the Johnson Space Center for one month by teams of astronauts. The results were positive, but NASA eventually cancelled the project towards the end of the development cycle. The goal of reducing crew workload and improving situational awareness resulted in the need for high-performance Central Processing Units (CPUs). The CPU selected was from the PowerPC family, a reduced instruction set computer (RISC) line known for its high performance. However, the requirement for radiation tolerance resulted in the reevaluation of the selected family member of the PowerPC line. Radiation testing revealed that the originally selected processor (PowerPC 7400) was too soft to meet mission objectives, and an effort was established to perform trade studies and performance testing to determine a feasible candidate. At that time, the PowerPC RAD750s were radiation tolerant, but did not meet the required performance needs of the project. Thus, the final solution was to select the PowerPC 7455. This processor did not have a radiation-tolerant version, but fared better than the 7400 in the ability to detect failures. However, its cache tags did not provide parity, and thus the project incorporated a software strategy to detect radiation-induced failures. The strategy was to incorporate dual paths for software generating commands to the legacy Space Shuttle avionics to prevent failures due to the softness of the upgraded avionics.

  3. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    Science.gov (United States)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double-precision alternating direction implicit (ADI) solver for the three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software onto a heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern, MPI-OpenMP-CUDA, that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap the computation with communication using the advanced features of CUDA and MPI programming. We obtain a speedup of 6.0 for the ADI solver on one Tesla M2050 GPU compared with two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on a heterogeneous platform.

  4. Designing of Vague Logic Based 2-Layered Framework for CPU Scheduler

    Directory of Open Access Journals (Sweden)

    Supriya Raheja

    2016-01-01

    Fuzzy-based CPU scheduling has attracted great interest in operating systems because of its ability to handle the imprecise information associated with tasks. This paper extends the fuzzy-based round robin scheduler to a Vague Logic Based Round Robin (VBRR) scheduler. The VBRR scheduler works on a 2-layered framework. At the first layer, the scheduler has a vague inference system which can handle the impreciseness of a task using vague logic. At the second layer, the VBRR scheduling algorithm schedules the tasks. The VBRR scheduler has a learning capability through which it intelligently adapts an optimum length for the time quantum. An optimum time quantum reduces the overhead on the scheduler by reducing unnecessary context switches, which improves the overall performance of the system. The work is simulated using MATLAB and compared with the conventional round robin scheduler and two other fuzzy-based approaches to CPU scheduling. The simulation analysis and results prove the effectiveness and efficiency of the VBRR scheduler.
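
The effect of an adaptive time quantum on context switches can be illustrated with a simplified round-robin simulator. This sketch does not reproduce the vague-logic inference of VBRR; it merely picks the quantum from the median remaining burst each round, a stand-in heuristic chosen purely for illustration.

```python
from statistics import median

def adaptive_round_robin(bursts):
    """Round-robin scheduling with an adaptive time quantum (illustrative).

    bursts: dict of task id -> CPU burst time.  Each round, the quantum is
    set to the median remaining burst, which tends to reduce context
    switches compared with a fixed small quantum.
    Returns (completion order, number of preemptive context switches).
    """
    remaining = dict(bursts)
    order, switches = [], 0
    while remaining:
        quantum = median(remaining.values())
        for task in list(remaining):
            run = min(quantum, remaining[task])
            remaining[task] -= run
            if remaining[task] <= 0:
                # Task finished within its quantum.
                del remaining[task]
                order.append(task)
            else:
                # Task preempted: one context switch charged.
                switches += 1
    return order, switches
```

For example, `adaptive_round_robin({"t1": 4, "t2": 8, "t3": 2})` completes t1 and t3 in the first round (quantum 4) with a single preemption of t2, whereas a fixed quantum of 1 would incur many more switches.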

  5. Acceleration of stereo-matching on multi-core CPU and GPU

    OpenAIRE

    Tian, Xu; Cockshott, Paul; Oehler, Susanne

    2014-01-01

    This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism-enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot-head in the context of the CloPeMa 1 research project. This research project focuses on the conception of a new clothes-folding robot with real-time and high-resolution requirements for the vision system. The performance analysis shows th...

  6. Study on dynamic team performance evaluation methodology based on team situation awareness model

    International Nuclear Information System (INIS)

    Kim, Suk Chul

    2005-02-01

    The purpose of this thesis is to provide a theoretical framework, and its evaluation methodology, for the dynamic task performance of an operating team at a nuclear power plant under a dynamic and tactical environment such as a radiological accident. The thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation; its quantification methods use a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. The model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knows the accident sequences. Simulated team dynamic task performance, with reference to key plant parameter behavior and the team-specific organizational center of gravity and cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness when recruiting new operating teams for new plants in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  7. Study on dynamic team performance evaluation methodology based on team situation awareness model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Suk Chul

    2005-02-15

    The purpose of this thesis is to provide a theoretical framework, and its evaluation methodology, for the dynamic task performance of an operating team at a nuclear power plant under a dynamic and tactical environment such as a radiological accident. The thesis suggests a team dynamic task performance evaluation model, the so-called team crystallization model, stemming from Endsley's situation awareness model and comprising four elements: state, information, organization, and orientation; its quantification methods use a system dynamics approach and a communication process model based on a receding horizon control approach. The team crystallization model is a holistic approach for evaluating team dynamic task performance in conjunction with team situation awareness, considering physical system dynamics and team behavioral dynamics for a tactical and dynamic task at a nuclear power plant. The model provides a systematic measure to evaluate time-dependent team effectiveness or performance affected by multiple agents such as plant states, communication quality in terms of transferring situation-specific information and strategies for achieving the team task goal at a given time, and organizational factors. To demonstrate the applicability of the proposed model and its quantification method, a case study was carried out using data obtained from a full-scope power plant simulator for 1,000 MWe pressurized water reactors with four on-the-job operating groups and one expert group who knows the accident sequences. Simulated team dynamic task performance, with reference to key plant parameter behavior and the team-specific organizational center of gravity and cue-and-response matrix, showed good agreement with observed values. The team crystallization model will be a useful and effective tool for evaluating team effectiveness when recruiting new operating teams for new plants in a cost-benefit manner. Also, this model can be utilized as a systematic analysis tool for

  8. SAFARI digital processing unit: performance analysis of the SpaceWire links in case of a LEON3-FT based CPU

    Science.gov (United States)

    Giusi, Giovanni; Liu, Scige J.; Di Giorgio, Anna M.; Galli, Emanuele; Pezzuto, Stefano; Farina, Maria; Spinoglio, Luigi

    2014-08-01

    SAFARI (SpicA FAR infrared Instrument) is a far-infrared imaging Fourier Transform Spectrometer for the SPICA mission. The Digital Processing Unit (DPU) of the instrument implements the functions of controlling the overall instrument and performing the science data compression and packing. The DPU design is based on the use of a LEON-family processor. In SAFARI, all instrument components are connected to the central DPU via SpaceWire links. On these links, science data, housekeeping and command flows are in some cases multiplexed; therefore the interface control shall be able to cope with variable throughput needs. The effective data transfer workload can be an issue for overall system performance and becomes a critical parameter for the on-board software design, both at the application layer and at lower, more hardware-related, levels. To analyze the system behavior in the presence of the expected demanding SAFARI science data flow, we carried out a series of performance tests using the standard GR-CPCI-UT699 LEON3-FT Development Board, provided by Aeroflex/Gaisler, connected to the emulator of the SAFARI science data links in a point-to-point topology. Two different communication protocols have been used in the tests: the ECSS-E-ST-50-52C RMAP protocol and an internally defined one, the SAFARI internal data handling protocol. An incremental approach has been adopted to measure the system performance at different levels of communication protocol complexity. In all cases the performance has been evaluated by measuring the CPU workload and the bus latencies. The tests were executed initially in a custom low-level execution environment and finally using the Real-Time Executive for Multiprocessor Systems (RTEMS), which has been selected as the operating system to be used onboard SAFARI.
The preliminary results of the carried out performance analysis confirmed the possibility of using a LEON3 CPU processor in the SAFARI DPU, but pointed out, in agreement

  9. Relationship between people's awareness of environmental capabilities of saving energy, photovoltaic power generation and nuclear power generation

    Energy Technology Data Exchange (ETDEWEB)

    Hashiba, Takashi [Institute of Nuclear Safety System Inc., Mihama, Fukui (Japan)

    2001-09-01

    In this research, the relationship between people's awareness of the environmental capabilities of energy saving, photovoltaic power generation (PV) and nuclear power generation was investigated using a questionnaire. The results showed that energy saving is practiced without reference to its environmental preservation effect, although older people tend to regard energy saving as a contribution to environmental preservation. The attitude toward the usage of PV has a close relationship to awareness of energy and environmental concerns. Acceptance of cost sharing for introducing wide-scale PV systems to society is related to the environmental protection image of PV and the attitude toward the loss of social convenience resulting from energy-saving activities. The older people become, the more priority they put on environmental protection over social convenience. There is little relationship between the environmental capabilities of nuclear power generation, which discharges no CO2 during generation, and awareness of energy and environmental concerns. (author)

  10. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin

    2012-05-21

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a challenge despite the regularity of memory access. Sophisticated optimization techniques are required to fully utilize the CPU. We propose a new method for constructing streaming numerical kernels using a high-level assembly synthesis and optimization framework. We describe an implementation of this method in Python targeting the IBM® Blue Gene®/P supercomputer's PowerPC® 450 core. This paper details the high-level design, construction, simulation, verification, and analysis of these kernels utilizing a subset of the CPU's instruction set. We demonstrate the effectiveness of our approach by implementing several three-dimensional stencil kernels over a variety of cached memory scenarios and analyzing the mechanically scheduled variants, including a 27-point stencil achieving a 1.7× speedup over the best previously published results. © The Author(s) 2012.
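
For reference, a naive 27-point stencil of the kind tuned in this paper can be written in NumPy as follows; the hand-optimized PowerPC 450 kernels replace exactly this sort of loop nest. The array shapes and uniform weights below are illustrative only, not taken from the paper.

```python
import numpy as np

def stencil_27pt(u, weights):
    """Apply a 27-point stencil to the interior of a 3-D grid.

    u: 3-D array; weights: 3x3x3 array of stencil coefficients.
    A naive reference version: each interior point becomes the weighted
    sum of its 3x3x3 neighborhood.  Boundary points are left at zero.
    """
    out = np.zeros_like(u)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                out[1:-1, 1:-1, 1:-1] += (
                    weights[di + 1, dj + 1, dk + 1]
                    * u[1 + di:u.shape[0] - 1 + di,
                        1 + dj:u.shape[1] - 1 + dj,
                        1 + dk:u.shape[2] - 1 + dk]
                )
    return out

# With uniform weights summing to 1, a constant field is preserved
# in the interior.
u = np.ones((4, 4, 4))
w = np.full((3, 3, 3), 1.0 / 27.0)
out = stencil_27pt(u, w)
```

Each output point reads 27 inputs, so the kernel is memory-bandwidth bound; this is why instruction scheduling and cache behavior dominate its performance on in-order cores.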

  11. The “Chimera”: An Off-The-Shelf CPU/GPGPU/FPGA Hybrid Computing Platform

    Directory of Open Access Journals (Sweden)

    Ra Inta

    2012-01-01

    The nature of modern astronomy means that a number of interesting problems exhibit a substantial computational bound, and this situation is gradually worsening. Scientists, increasingly fighting for valuable resources on conventional high-performance computing (HPC) facilities—often with a limited customizable user environment—are increasingly looking to hardware acceleration solutions. We describe here a heterogeneous CPU/GPGPU/FPGA desktop computing system (the “Chimera”), built with commercial off-the-shelf components. We show that this platform may be a viable alternative solution to many common computationally bound problems found in astronomy; however, not without significant challenges. The most significant bottleneck in pipelines involving real data is most likely to be the interconnect (in this case the PCI Express bus residing on the CPU motherboard). Finally, we speculate on the merits of our Chimera system within the entire landscape of parallel computing, through the analysis of representative problems from UC Berkeley’s “Thirteen Dwarves.”

  12. Data-Centric Situational Awareness and Management in Intelligent Power Systems

    Science.gov (United States)

    Dai, Xiaoxiao

    The rapid development of technology and society has made the current power system a much more complicated system than ever. The need for big-data-based situation awareness and management is urgent today. In this dissertation, to respond to this grand challenge, two data-centric power system situation awareness and management approaches are proposed, addressing security problems in the transmission/distribution grids and the augmentation of social benefits at the distribution-customer level, respectively. To address the security problem in the transmission/distribution grids utilizing big data, the first approach provides a fault analysis solution based on characterization and analytics of synchrophasor measurements. Specifically, an optimal synchrophasor measurement device selection algorithm (OSMDSA) and a matching pursuit decomposition (MPD) based spatial-temporal synchrophasor data characterization method were developed to reduce data volume while preserving comprehensive information for the big data analyses. The weighted Granger causality (WGC) method was then investigated to conduct fault-impact causal analysis during system disturbances for fault localization. Numerical results and comparison with other methods demonstrate the effectiveness and robustness of this analytic approach. As social effects are becoming more important considerations in power system management, the goal of situation awareness should be expanded to also include achievements in social benefits. The second approach investigates the concept and application of social energy on the University of Denver campus grid to provide management improvement solutions for optimizing social cost. The social element, human working productivity cost, and the economic element, electricity consumption cost, are both considered in the evaluation of overall social cost.
Moreover, power system simulation, numerical experiments for smart building modeling, distribution level real-time pricing and social

  13. Study on public awareness of utilizing nuclear power in China. Changes in public awareness after the accident of Fukushima Daiichi Nuclear Power Plants

    International Nuclear Information System (INIS)

    Xu, Ting; Wakabayashi, Toshio

    2012-01-01

    The purpose of this study is to clarify public awareness of utilizing nuclear power in China and to determine the effects of the accident at the Fukushima Daiichi nuclear power plants. Online surveys were carried out before and after the accident. The survey before the accident had 4,255 adult respondents, consisting of 1,851 males and 2,404 females; the survey after the accident had 721 respondents, consisting of 406 males and 315 females. The two surveys on attitudes toward nuclear power plants consisted of 37 items, such as the necessity of nuclear power plants, the reliability of safety, and confidence in the government. The results show that respondents in China consider the importance of nuclear energy to outweigh anxiety about accidents. On the other hand, women feel fear about the accident at the Fukushima Daiichi nuclear power plants and about radiation. (author)

  14. Inhibition of CPU0213, a Dual Endothelin Receptor Antagonist, on Apoptosis via Nox4-Dependent ROS in HK-2 Cells

    Directory of Open Access Journals (Sweden)

    Qing Li

    2016-06-01

    Background/Aims: Our previous studies have indicated that a novel endothelin receptor antagonist, CPU0213, effectively normalized renal function in diabetic nephropathy. However, the molecular mechanisms mediating the nephroprotective role of CPU0213 remain unknown. Methods and Results: In the present study, we first examined the effect of CPU0213 on apoptosis in a human renal tubular epithelial cell line (HK-2). It was shown that high glucose significantly increased the protein expression of Bax and decreased Bcl-2 protein in HK-2 cells, which was reversed by CPU0213. The percentage of HK-2 cells that showed Annexin V-FITC binding was markedly suppressed by CPU0213, which confirmed its inhibitory role in apoptosis. Given the link between the endothelin (ET) system and oxidative stress, we determined the role of redox signaling in the effect of CPU0213 on apoptosis. It was demonstrated that the production of superoxide (O2·-) was substantially attenuated by CPU0213 treatment in HK-2 cells. We further found that CPU0213 dramatically inhibited expression of Nox4 protein, whose gene silencing mimicked the effect of CPU0213 on apoptosis under high-glucose stimulation. We finally examined the effect of CPU0213 on ET-1 receptors and found that the high glucose-induced protein expression of endothelin A and B receptors was dramatically inhibited by CPU0213. Conclusion: Taken together, these results suggest that Nox4-dependent O2·- production is critical for the apoptosis of HK-2 cells in high glucose. The endothelin receptor antagonist CPU0213 has an anti-apoptotic role through Nox4-dependent O2·- production, which addresses the nephroprotective role of CPU0213 in diabetic nephropathy.

  15. Overtaking CPU DBMSes with a GPU in whole-query analytic processing with parallelism-friendly execution plan optimization

    NARCIS (Netherlands)

    A. Agbaria (Adnan); D. Minor (David); N. Peterfreund (Natan); E. Rozenberg (Eyal); O. Rosenberg (Ofer); Huawei Research

    2016-01-01

    textabstractExisting work on accelerating analytic DB query processing with (discrete) GPUs fails to fully realize their potential for speedup through parallelism: Published results do not achieve significant speedup over more performant CPU-only DBMSes when processing complete queries. This

  16. Implementation and Optimization of GPU-Based Static State Security Analysis in Power Systems

    Directory of Open Access Journals (Sweden)

    Yong Chen

    2017-01-01

    Static state security analysis (SSSA) is one of the most important computations for checking whether a power system is in a normal and secure operating state. It is a challenge to satisfy real-time requirements with CPU-based concurrent methods due to the intensive computations. A sensitivity analysis-based method using a graphics processing unit (GPU) is proposed for power systems, which can reduce calculation time by 40% compared to execution on a 4-core CPU. The proposed method involves load flow analysis and sensitivity analysis. In load flow analysis, a multifrontal method for sparse LU factorization is explored on the GPU through dynamic frontal task scheduling between CPU and GPU. The varying matrix operations during sensitivity analysis on the GPU are highly optimized in this study. The results of performance evaluations show that the proposed GPU-based SSSA with optimized matrix operations can achieve a significant reduction in computation time.

  17. Impact of Metacognitive Awareness on Performance of Students in Chemistry

    Science.gov (United States)

    Rahman, Fazal ur; Jumani, Nabi Bux; Chaudry, Muhammad Ajmal; Chisti, Saeed ul Hasan; Abbasi, Fahim

    2010-01-01

    The impact of metacognitive awareness on students' performance has been examined in the present study. 900 students of grade X participated in the study. Metacognitive awareness was measured using inventory, while performance of students was measured with the help of researcher made test in the subject of chemistry. Results indicated that…

  18. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2015-01-01

    Full Text Available The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.

  19. Accelerating Smith-Waterman Alignment for Protein Database Search Using Frequency Distance Filtration Scheme Based on CPU-GPU Collaborative System.

    Science.gov (United States)

    Liu, Yu; Hong, Yang; Lin, Chun-Yuan; Hung, Che-Lun

    2015-01-01

    The Smith-Waterman (SW) algorithm has been widely utilized for searching biological sequence databases in bioinformatics. Recently, several works have adopted the graphic card with Graphic Processing Units (GPUs) and their associated CUDA model to enhance the performance of SW computations. However, these works mainly focused on the protein database search by using the intertask parallelization technique, and only using the GPU capability to do the SW computations one by one. Hence, in this paper, we will propose an efficient SW alignment method, called CUDA-SWfr, for the protein database search by using the intratask parallelization technique based on a CPU-GPU collaborative system. Before doing the SW computations on GPU, a procedure is applied on CPU by using the frequency distance filtration scheme (FDFS) to eliminate the unnecessary alignments. The experimental results indicate that CUDA-SWfr runs 9.6 times and 96 times faster than the CPU-based SW method without and with FDFS, respectively.
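
The two records above describe pre-filtering database sequences by a frequency distance before running Smith-Waterman on the GPU. As a rough illustration of the idea (the exact FDFS formula is the paper's; the count-difference metric, the threshold, and the toy sequences below are assumptions), a cheap frequency distance can discard sequences that cannot align well:

```python
from collections import Counter

def frequency_distance(seq_a, seq_b):
    """Sum of absolute differences between residue counts.

    Illustrative stand-in for the paper's frequency distance: it is a
    cheap lower bound on how many edits separate two sequences, so it
    can rule out hopeless candidates without running Smith-Waterman.
    """
    ca, cb = Counter(seq_a), Counter(seq_b)
    residues = set(ca) | set(cb)
    return sum(abs(ca[r] - cb[r]) for r in residues)

def filter_database(query, database, threshold):
    """Keep only sequences close enough to the query to be worth aligning."""
    return [s for s in database if frequency_distance(query, s) <= threshold]

# Toy example: only the first two sequences survive the filter and
# would be handed to the GPU for full SW alignment.
db = ["MKTAYIAK", "MKTAYIAR", "GGGGGGGG"]
survivors = filter_database("MKTAYIAK", db, threshold=2)
```

The filter runs in linear time per sequence on the CPU, which matches the paper's division of labor: cheap elimination on the host, expensive alignment on the device.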

  20. Design of a Message Passing Model for Use in a Heterogeneous CPU-NFP Framework for Network Analytics

    CSIR Research Space (South Africa)

    Pennefather, S

    2017-09-01

    Full Text Available of applications written in the Go programming language to be executed on a Network Flow Processor (NFP) for enhanced performance. This paper explores the need and feasibility of implementing a message passing model for data transmission between the NFP and CPU...

  1. A Robust Ultra-Low Voltage CPU Utilizing Timing-Error Prevention

    Directory of Open Access Journals (Sweden)

    Markus Hiienkari

    2015-04-01

    Full Text Available To minimize energy consumption of a digital circuit, logic can be operated at sub- or near-threshold voltage. Operation in this region is challenging due to device and environment variations, and the resulting performance may not be adequate for all applications. This article presents two variants of a 32-bit RISC CPU targeted for near-threshold voltage. Both CPUs are placed on the same die and manufactured in a 28 nm CMOS process. They employ timing-error prevention with clock stretching to enable operation with minimal safety margins while maximizing performance and energy efficiency at a given operating point. Measurements show minimum energy of 3.15 pJ/cyc at 400 mV, which corresponds to a 39% energy saving compared to operation based on static signoff timing.

  2. Measuring Adolescent Self-Awareness and Accuracy Using a Performance-Based Assessment and Parental Report

    Directory of Open Access Journals (Sweden)

    Sharon Zlotnik

    2018-02-01

    Full Text Available Aim: The aim of this study was to assess awareness of performance and performance accuracy for a task that requires executive functions (EF) among healthy adolescents, and to compare their performance to their parents' ratings. Method: Participants: 109 healthy adolescents (mean age 15.2 ± 1.86 years) completed the Weekly Calendar Planning Activity (WCPA). The discrepancy between self-estimated and actual performance was used to measure the level of awareness. The participants were divided into high and low accuracy groups according to the WCPA accuracy median score. The participants were also divided into high and low awareness groups. A comparison was conducted between groups using WCPA performance and parent ratings on the Behavior Rating Inventory of Executive Function (BRIEF). Results: Higher awareness was associated with better EF performance. Participants with high accuracy scores were more likely to show high awareness of performance as compared to participants with low accuracy scores. The high accuracy group had better parental ratings of EF, higher efficiency, followed more rules, and were more aware of their WCPA performance. Conclusion: Our results highlight the important contribution that self-awareness of performance may have on an individual's function. Assessing the level of awareness and providing metacognitive training techniques for those adolescents who are less aware could support their performance.

  3. Application of Synchrophasor Measurements for Improving Situational Awareness of the Power System

    Science.gov (United States)

    Obushevs, A.; Mutule, A.

    2018-04-01

    The paper focuses on the application of synchrophasor measurements, which offer unprecedented benefits compared to SCADA systems, in order to facilitate the successful transformation of the Nordic-Baltic and European electric power system to operate with large amounts of renewable energy sources and to improve situational awareness of the power system. The article describes new functionalities of visualisation tools to estimate grid inertia level in real time, with monitoring results between the Nordic and Baltic power systems.

  4. Accelerating the SCE-UA Global Optimization Method Based on Multi-Core CPU and Many-Core GPU

    Directory of Open Access Journals (Sweden)

    Guangyuan Kan

    2016-01-01

    Full Text Available The famous SCE-UA global optimization method, which has been widely used in the field of environmental model parameter calibration, is effective and robust. However, the SCE-UA method has a high computational load, which prohibits its application to high-dimensional and complex problems. In recent years, computer hardware, such as multi-core CPUs and many-core GPUs, has improved significantly. This much more powerful new hardware and its software ecosystems provide an opportunity to accelerate the SCE-UA method. In this paper, we propose two parallel SCE-UA methods and implement them on an Intel multi-core CPU and an NVIDIA many-core GPU using OpenMP and CUDA Fortran, respectively. The Griewank benchmark function was adopted to test and compare the performance of the serial and parallel SCE-UA methods. Based on the results of the comparison, some practical advice is given on how to properly use the parallel SCE-UA methods.
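
The record above parallelizes SCE-UA's costly fitness evaluations with OpenMP and CUDA Fortran. A minimal Python sketch of the same idea, assuming only that the members of each complex can be evaluated independently (a thread pool stands in here for OpenMP threads or CUDA blocks; the population values are arbitrary):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def griewank(x):
    """Griewank benchmark function: global minimum 0 at the origin."""
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return 1.0 + s - p

def evaluate_population(population, workers=4):
    """Evaluate all candidate points in parallel.

    In SCE-UA this fitness-evaluation step dominates the runtime and is
    what the paper maps onto multi-core CPU / many-core GPU hardware.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(griewank, population))

pop = [[0.0, 0.0], [1.0, 1.0], [5.0, -3.0]]
fitness = evaluate_population(pop)
best = min(fitness)
```

The evolution step of SCE-UA (shuffling complexes) stays serial; only the embarrassingly parallel evaluations are farmed out, which is why the speedup depends on problem dimension and population size.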

  5. CPU and cache efficient management of memory-resident databases

    NARCIS (Netherlands)

    Pirk, H.; Funke, F.; Grund, M.; Neumann, T.; Leser, U.; Manegold, S.; Kemper, A.; Kersten, M.L.

    2013-01-01

    Memory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current implementations,

  6. CPU and Cache Efficient Management of Memory-Resident Databases

    NARCIS (Netherlands)

    H. Pirk (Holger); F. Funke; M. Grund; T. Neumann (Thomas); U. Leser; S. Manegold (Stefan); A. Kemper (Alfons); M.L. Kersten (Martin)

    2013-01-01

    htmlabstractMemory-Resident Database Management Systems (MRDBMS) have to be optimized for two resources: CPU cycles and memory bandwidth. To optimize for bandwidth in mixed OLTP/OLAP scenarios, the hybrid or Partially Decomposed Storage Model (PDSM) has been proposed. However, in current

  7. Testing Situation Awareness Network for the Electrical Power Infrastructure

    Directory of Open Access Journals (Sweden)

    Rafał Leszczyna

    2016-09-01

    Full Text Available The contemporary electrical power infrastructure is exposed to new types of threats. The cause of such threats is related to the large number of new vulnerabilities and architectural weaknesses introduced by the extensive use of Information and Communication Technologies (ICT) in such complex critical systems. The power grid interconnection with the Internet exposes the grid to new types of attacks, such as Advanced Persistent Threats (APT) or Distributed Denial-of-Service (DDoS) attacks. When addressing this situation, the usual cyber security technologies are a prerequisite, but not sufficient. To counter evolved and highly sophisticated threats such as APT or DDoS, state-of-the-art technologies including Security Incident and Event Management (SIEM) systems, extended Intrusion Detection/Prevention Systems (IDS/IPS) and Trusted Platform Modules (TPM) are required. Developing and deploying extensive ICT infrastructure that supports wide situational awareness and allows precise command and control is also necessary. In this paper, the results of testing the Situational Awareness Network (SAN) designed for the energy sector are presented. The purpose of the tests was to validate the selection of SAN components and check their operational capability in a complex test environment. During the tests' execution, appropriate interaction between the components was verified.

  8. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    Energy Technology Data Exchange (ETDEWEB)

    Souris, Kevin, E-mail: kevin.souris@uclouvain.be; Lee, John Aldo [Center for Molecular Imaging and Experimental Radiotherapy, Institut de Recherche Expérimentale et Clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and ICTEAM Institute, Université catholique de Louvain, Louvain-la-Neuve 1348 (Belgium); Sterpin, Edmond [Center for Molecular Imaging and Experimental Radiotherapy, Institut de Recherche Expérimentale et Clinique, Université catholique de Louvain, Avenue Hippocrate 54, 1200 Brussels, Belgium and Department of Oncology, Katholieke Universiteit Leuven, O&N I Herestraat 49, 3000 Leuven (Belgium)

    2016-04-15

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  9. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures

    International Nuclear Information System (INIS)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-01-01

    Purpose: Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. Methods: A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Results: Comparisons with GATE/GEANT4 for various geometries show deviations within 2%–1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. Conclusions: MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  10. Fast multipurpose Monte Carlo simulation for proton therapy using multi- and many-core CPU architectures.

    Science.gov (United States)

    Souris, Kevin; Lee, John Aldo; Sterpin, Edmond

    2016-04-01

    Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo, called MCsquare (many-core Monte Carlo), has been designed and optimized for the last generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suitable to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked with the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/GEANT4 for various geometries show deviations within 2%-1 mm. In spite of the limited memory bandwidth of the coprocessor, simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus be used also for in vivo range verification.

  11. Hybrid GPU-CPU adaptive precision ray-triangle intersection tests for robust high-performance GPU dosimetry computations

    International Nuclear Information System (INIS)

    Perrotte, Lancelot; Bodin, Bruno; Chodorge, Laurent

    2011-01-01

    Before an intervention on a nuclear site, it is essential to study different scenarios to identify the least dangerous one for the operator. It is therefore mandatory to have an efficient dosimetry simulation code that gives accurate results. One classical method in radiation protection is the straight-line attenuation method with build-up factors. For 3D industrial scenes composed of meshes, the computational cost lies in quickly computing all of the intersections between the rays and the triangles of the scene. Efficient GPU algorithms have already been proposed that enable dosimetry calculation for a huge scene (800,000 rays, 800,000 triangles) in a fraction of a second. But these algorithms are not robust: because of the rounding caused by floating-point arithmetic, the numerical results of the ray-triangle intersection tests can differ from the expected mathematical results. In the worst case, this can lead to a computed dose rate dramatically lower than the real dose rate to which the operator is exposed. In this paper, we present a hybrid GPU-CPU algorithm to manage adaptive-precision floating-point arithmetic. This algorithm allows robust ray-triangle intersection tests, with a very small loss of performance (less than 5% overhead) and without any need for scene-dependent tuning. (author)
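
The hybrid scheme described above falls back to higher precision only when floating-point rounding could flip a geometric test. A minimal sketch of that adaptive-precision pattern, using a 2D orientation predicate with exact rational arithmetic as the slow path (the fixed `eps` filter below is a simplification of the paper's approach; robust implementations derive a rigorous error bound instead):

```python
from fractions import Fraction

def orient2d_adaptive(a, b, c, eps=1e-12):
    """Adaptive-precision orientation test, the core idea behind robust
    intersection predicates: trust fast floating point when the result
    is clearly away from zero, and fall back to exact rational
    arithmetic (the slow path) only in the uncertain cases.
    Returns +1, 0, or -1 for the sign of the signed area of (a, b, c)."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    if abs(det) > eps:                    # fast path: float answer reliable
        return (det > 0) - (det < 0)
    # Slow exact path: Fraction represents the inputs with no rounding.
    ax, ay = Fraction(a[0]), Fraction(a[1])
    bx, by = Fraction(b[0]), Fraction(b[1])
    cx, cy = Fraction(c[0]), Fraction(c[1])
    exact = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (exact > 0) - (exact < 0)

# A point exactly on the line triggers the exact path and returns 0.
s = orient2d_adaptive((0.0, 0.0), (1.0, 1.0), (0.5, 0.5))
```

On a GPU-CPU system, the common fast path stays on the device and only the rare ambiguous tests are re-run with extended precision, which is how the paper keeps the overhead under 5%.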

  12. Thermoeconomic cost analysis of CO2 compression and purification unit in oxy-combustion power plants

    International Nuclear Information System (INIS)

    Jin, Bo; Zhao, Haibo; Zheng, Chuguang

    2015-01-01

    Highlights: • Thermoeconomic cost analysis for CO2 compression and purification unit is conducted. • Exergy cost and thermoeconomic cost occur in flash separation and mixing processes. • Unit exergy costs for flash separator and multi-stream heat exchanger are identical. • Multi-stage CO2 compressor contributes to the minimum unit exergy cost. • Thermoeconomic performance for optimized CPU is enhanced. - Abstract: High-purity CO2 products can be obtained from oxy-combustion power plants through a CO2 compression and purification unit (CPU) based on the phase separation method. To identify the cost formation process and potential energy savings for the CPU, detailed thermoeconomic cost analysis based on the structure theory of thermoeconomics is applied to an optimized CPU (with double flash separators). It is found that the largest unit exergy cost occurs in the first separation process, while the multi-stage CO2 compressor contributes the minimum unit exergy cost. In the two flash separation processes, unit exergy costs for the flash separator and multi-stream heat exchanger are identical, but their unit thermoeconomic costs differ once the monetary cost of each device is considered. The cost inefficiency of the CPU derives mainly from large exergy costs and thermoeconomic costs in the flash separation and mixing processes. Compared with an unoptimized CPU, the thermoeconomic performance of the optimized CPU is enhanced, and a maximum reduction of 5.18% in thermoeconomic cost is attained. To achieve cost-effective operation, measures should be taken to improve the flash separation and mixing processes.

  13. A CFD Heterogeneous Parallel Solver Based on Collaborating CPU and GPU

    Science.gov (United States)

    Lai, Jianqi; Tian, Zhengyu; Li, Hua; Pan, Sha

    2018-03-01

    Since the Graphic Processing Unit (GPU) has a strong ability for floating-point computation and high memory bandwidth for data parallelism, it has been widely used in areas of general-purpose computing such as molecular dynamics (MD), computational fluid dynamics (CFD) and so on. The emergence of the compute unified device architecture (CUDA), which reduces the complexity of programming, brings great opportunities to CFD. There are three different modes for the parallel solution of the NS equations: a parallel solver based on the CPU, a parallel solver based on the GPU, and a heterogeneous parallel solver based on collaborating CPU and GPU. GPUs are relatively rich in compute capacity but poor in memory capacity, and CPUs are the opposite. To make full use of both, a CFD heterogeneous parallel solver based on collaborating CPU and GPU has been established. Three cases are presented to analyse the solver's computational accuracy and heterogeneous parallel efficiency. The numerical results agree well with experimental results, which demonstrates that the heterogeneous parallel solver has high computational precision. The speedup on a single GPU is more than 40 for laminar flow; it decreases for turbulent flow, but can still reach more than 20. Moreover, the speedup increases as the grid size becomes larger.

  14. Power-Energy Simulation for Multi-Core Processors in Bench-marking

    Directory of Open Access Journals (Sweden)

    Mona A. Abou-Of

    2017-01-01

    Full Text Available At the microarchitectural level, a multi-core processor, as a complex System on Chip, has sophisticated on-chip components including cores, shared caches, interconnects and system controllers such as memory and Ethernet controllers. At the technological level, architects should consider the device-type forecasts in the International Technology Roadmap for Semiconductors (ITRS). Energy simulation enables architects to study two important metrics simultaneously. Timing is a key element of CPU performance that imposes constraints on the CPU target clock frequency. Power and the resulting heat impose more severe design constraints, such as core clustering, while the semiconductor industry provides ever more transistors in the die area in pace with Moore's law. Energy simulators provide a solution to this serious challenge. Energy is modelled either by combining a performance benchmarking tool with a power simulator or by an integrated framework of both a performance simulator and a power profiling system. This article presents and assesses trade-offs between different architectures using four-core battery-powered mobile systems, by running a custom-made and a standard benchmark tool. The experimental results confirm the Energy/Frequency convexity rule over a range of frequency settings on different numbers of enabled cores. The reported results show that increasing the number of cores has a great effect on increasing the power consumption. However, minimum energy dissipation occurs at a lower frequency, which reduces the power consumption. Despite that, increasing the number of cores also increases the effective cores value, reflecting better processor performance.
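
The Energy/Frequency convexity rule reported above can be reproduced with a toy model: dynamic energy per task grows with frequency (through voltage scaling), while static (leakage) energy shrinks as the task finishes sooner, so total energy has an interior minimum. All constants below are illustrative assumptions, not values from the article:

```python
def energy_per_task(freq_ghz, cycles=1e9, leakage_w=0.5, c_eff=1.0):
    """Toy per-task energy model illustrating the convexity rule.

    Dynamic power ~ C * f * V^2 with voltage assumed to scale linearly
    with frequency; static leakage power is constant. Runtime is
    cycles / f, so low frequencies pay a leakage penalty and high
    frequencies a dynamic-power penalty.
    """
    volts = 0.6 + 0.2 * freq_ghz                # assumed V/f relation
    power_w = c_eff * freq_ghz * volts ** 2 + leakage_w
    runtime_s = cycles / (freq_ghz * 1e9)
    return power_w * runtime_s                  # energy in joules

freqs = [0.2 * k for k in range(1, 16)]         # 0.2 .. 3.0 GHz sweep
energies = [energy_per_task(f) for f in freqs]
f_opt = freqs[energies.index(min(energies))]    # interior minimum
```

With these constants the minimum-energy point falls in the middle of the sweep rather than at either end, which is exactly the convex Energy(f) curve the benchmarks in the article exhibit.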

  15. Turbo Charge CPU Utilization in Fork/Join Using the ManagedBlocker

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Fork/Join is a framework for parallelizing calculations using recursive decomposition, also called divide and conquer. These algorithms occasionally end up duplicating work, especially at the beginning of the run. We can reduce wasted CPU cycles by implementing a reserved caching scheme. Before a task starts its calculation, it tries to reserve an entry in the shared map. If it is successful, it immediately begins. If not, it blocks until the other thread has finished its calculation. Unfortunately this might result in a significant number of blocked threads, decreasing CPU utilization. In this talk we will demonstrate this issue and offer a solution in the form of the ManagedBlocker. Combined with the Fork/Join, it can keep parallelism at the desired level.
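
The reserved caching scheme described above can be sketched outside the JVM as well (`ManagedBlocker` itself is Java-specific; this Python illustration only shows the reserve-then-compute-or-wait logic): the first caller reserves the key and computes, and latecomers wait on the same entry instead of duplicating the work.

```python
import threading
from concurrent.futures import Future

class ReservedCache:
    """Sketch of a reserved caching scheme.

    The first thread to ask for a key reserves an entry (a Future) under
    the lock and computes the value; any other thread that finds an
    existing entry blocks on it until the result is ready. In Fork/Join
    that blocking is what ManagedBlocker reports to the pool so it can
    spawn compensation threads and keep parallelism at the target level.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def get(self, key, compute):
        with self._lock:
            entry = self._entries.get(key)
            if entry is None:              # reservation succeeded
                entry = self._entries[key] = Future()
                owner = True
            else:                          # someone else reserved it
                owner = False
        if owner:
            entry.set_result(compute(key))
        return entry.result()              # blocks until the value exists

calls = []
cache = ReservedCache()

def slow_square(n):
    calls.append(n)                        # track how often we compute
    return n * n

results = [cache.get(4, slow_square) for _ in range(3)]
```

Note that `compute` runs exactly once per key no matter how many callers race for it, which is the wasted-work reduction the talk describes; what Python's thread pools lack is the compensation-thread mechanism that keeps CPU utilization high while callers block.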

  16. GENIE: a software package for gene-gene interaction analysis in genetic association studies using multiple GPU or CPU cores

    Directory of Open Access Journals (Sweden)

    Wang Kai

    2011-05-01

    Full Text Available Abstract Background: Gene-gene interaction analysis in genetic association studies is computationally intensive when a large number of SNPs are involved. Most of the latest Central Processing Units (CPUs) have multiple cores, whereas Graphics Processing Units (GPUs) have hundreds of cores and have recently been used to implement faster scientific software. However, there are currently no genetic analysis software packages that allow users to fully utilize the computing power of these multi-core devices for genetic interaction analysis of binary traits. Findings: Here we present a novel software package, GENIE, which utilizes the power of multiple GPU or CPU processor cores to parallelize the interaction analysis. GENIE reads an entire genetic association study dataset into memory and partitions the dataset into fragments with non-overlapping sets of SNPs. For each fragment, GENIE analyzes: (1) the interaction of SNPs within it in parallel, and (2) the interaction between the SNPs of the current fragment and other fragments in parallel. We tested GENIE on a large-scale candidate gene study on high-density lipoprotein cholesterol. Using an NVIDIA Tesla C1060 graphics card, the GPU mode of GENIE achieves a speedup of 27 times over its single-core CPU mode. Conclusions: GENIE is open-source, economical, user-friendly, and scalable. Since the computing power and memory capacity of graphics cards are increasing rapidly while their cost is going down, we anticipate that GENIE will achieve greater speedups with faster GPU cards. Documentation, source code, and precompiled binaries can be downloaded from http://www.cceb.upenn.edu/~mli/software/GENIE/.
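
The work decomposition described above, non-overlapping SNP fragments with within-fragment and between-fragment interaction batches, can be sketched as follows (the fragment size, SNP names, and pair layout are illustrative assumptions, not GENIE's actual data structures):

```python
from itertools import combinations

def partition_pairs(snps, fragment_size):
    """Split SNPs into non-overlapping fragments, then emit
    (1) within-fragment pairs and (2) cross-fragment pairs.
    Each batch is independent, so it can be dispatched to a separate
    CPU core or GPU without synchronization."""
    fragments = [snps[i:i + fragment_size]
                 for i in range(0, len(snps), fragment_size)]
    batches = []
    for i, frag in enumerate(fragments):
        batches.append(list(combinations(frag, 2)))   # within-fragment
        for other in fragments[i + 1:]:               # between fragments
            batches.append([(a, b) for a in frag for b in other])
    return batches

snps = ["rs1", "rs2", "rs3", "rs4", "rs5"]
batches = partition_pairs(snps, fragment_size=2)
all_pairs = [p for batch in batches for p in batch]
```

Every unordered SNP pair appears in exactly one batch, so the fragments jointly cover the full n·(n−1)/2 interaction space with no duplicated work.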

  17. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    Science.gov (United States)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we proposed a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPUs). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  18. Direct Measurement of Power Dissipated by Monte Carlo Simulations on CPU and FPGA Platforms

    OpenAIRE

    Albicocco, Pietro; Papini, Davide; Nannarelli, Alberto

    2012-01-01

    In this technical report, we describe how power dissipation measurements on different computing platforms (a desktop computer and an FPGA board) are performed using a Hall effect-based current sensor. The chosen application is a Monte Carlo simulation for European option pricing, which is a popular algorithm used in financial computations. The Hall effect probe measurements complement the measurements performed on the core of the FPGA by a built-in Xilinx power monitoring system.

  19. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost prevents running such detection algorithms at full frame resolution. Hence, to detect moving targets in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough to solve the problem.
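
The background-registration-plus-frame-difference pipeline described above can be illustrated on a toy intensity grid (real systems estimate the camera motion from the images themselves; here the one-pixel pan is assumed known, and the frames and threshold are made up for the example):

```python
def shift_frame(frame, dx):
    """Crude background registration: undo a known horizontal camera
    pan of dx pixels by shifting each row, padding vacated pixels with 0."""
    if dx >= 0:
        return [row[dx:] + [0] * dx for row in frame]
    return [[0] * -dx + row[:dx] for row in frame]

def frame_difference(prev, curr, dx, thresh=50):
    """Register the previous frame onto the current one, then mark pixels
    whose intensity change exceeds thresh as candidate moving targets."""
    registered = shift_frame(prev, dx)
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(registered, curr)]

prev = [[10, 10, 10, 10],
        [10, 200, 10, 10]]
# Camera panned one pixel; the bright target also moved one pixel.
curr = [[10, 10, 10, 0],
        [10, 10, 200, 0]]
mask = frame_difference(prev, curr, dx=1)
```

After registration the static background cancels out and only the target's old and new positions survive the threshold, which is the detection mask the GPU stage of the paper's pipeline would produce per frame.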

  20. CPU0213, a novel endothelin type A and type B receptor antagonist, protects against myocardial ischemia/reperfusion injury in rats

    Directory of Open Access Journals (Sweden)

    Z.Y. Wang

    2011-11-01

    Full Text Available The efficacy of endothelin receptor antagonists in protecting against myocardial ischemia/reperfusion (I/R) injury is controversial, and the mechanisms remain unclear. The aim of this study was to investigate the effects of CPU0213, a novel endothelin type A and type B receptor antagonist, on myocardial I/R injury and to explore the mechanisms involved. Male Sprague-Dawley rats weighing 200-250 g were randomized to three groups (6-7 per group): group 1, Sham; group 2, I/R + vehicle, in which rats were subjected to in vivo myocardial I/R injury by ligation of the left anterior descending coronary artery and 0.5% sodium carboxymethyl cellulose (1 mL/kg) was injected intraperitoneally immediately prior to coronary occlusion; group 3, I/R + CPU0213, in which rats were subjected to identical surgical procedures and CPU0213 (30 mg/kg) was injected intraperitoneally immediately prior to coronary occlusion. Infarct size, cardiac function and biochemical changes were measured. CPU0213 pretreatment reduced infarct size as a percentage of the ischemic area by 44.5% (I/R + vehicle: 61.3 ± 3.2% vs I/R + CPU0213: 34.0 ± 5.5%, P < 0.05) and improved ejection fraction by 17.2% (I/R + vehicle: 58.4 ± 2.8% vs I/R + CPU0213: 68.5 ± 2.2%, P < 0.05) compared to vehicle-treated animals. This protection was associated with inhibition of myocardial inflammation and oxidative stress. Moreover, the reduction in Akt (protein kinase B) and endothelial nitric oxide synthase (eNOS) phosphorylation induced by myocardial I/R injury was limited by CPU0213 (P < 0.05). These data suggest that CPU0213, a non-selective antagonist, has protective effects against myocardial I/R injury in rats, which may be related to the Akt/eNOS pathway.

  1. Direct Measurement of Power Dissipated by Monte Carlo Simulations on CPU and FPGA Platforms

    DEFF Research Database (Denmark)

    Albicocco, Pietro; Papini, Davide; Nannarelli, Alberto

    In this technical report, we describe how power dissipation measurements on different computing platforms (a desktop computer and an FPGA board) are performed by using a Hall effect-based current sensor. The chosen application is a Monte Carlo simulation for European option pricing, which is a popu...

  2. Change of public awareness on nuclear power generation in 2010

    International Nuclear Information System (INIS)

    Shimooka, Hiroshi

    2011-01-01

    The eighth attitude survey on nuclear power generation was carried out by two methods (a written questionnaire survey and an online survey) from 22 October to 22 November, 2010. The population of the first survey was 500 people (250 male and 250 female), aged twenty years or older, living within 30 km of Tokyo station. That of the second was 500 people aged twenty years or older living in the Metropolitan area. The questionnaire covered four areas: awareness of the general public and daily life, energy problems, nuclear power generation, and others. The written questionnaire survey showed almost the same results as the previous surveys. One new finding was that some respondents (23%) thought nuclear power generation was useful at present but would not be useful in the future. The outline of the survey, the main results, the analytical results, and a comparison between the written questionnaire survey and the online survey are reported. (S.Y.)

  3. Analysis of Application Power and Schedule Composition in a High Performance Computing Environment

    Energy Technology Data Exchange (ETDEWEB)

    Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gruchalla, Kenny [National Renewable Energy Lab. (NREL), Golden, CO (United States); Phillips, Caleb [National Renewable Energy Lab. (NREL), Golden, CO (United States); Purkayastha, Avi [National Renewable Energy Lab. (NREL), Golden, CO (United States); Wunder, Nick [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-01-05

    As the capacity of high performance computing (HPC) systems continues to grow, small changes in energy management have the potential to produce significant energy savings. In this paper, we employ an extensive informatics system for aggregating and analyzing real-time performance and power use data to evaluate the energy footprints of jobs running in an HPC data center. We look at the effects of algorithmic choices for a given job on the resulting energy footprint, analyze application-specific power consumption, and summarize average power use in the aggregate. All of these views reveal meaningful power variance between classes of applications as well as between the methods chosen for a given job. Using these data, we discuss energy-aware cost-saving strategies based on reordering the HPC job schedule. Using historical job and power data, we present a hypothetical job schedule reordering that: (1) reduces the facility's peak power draw and (2) manages power in conjunction with a large-scale photovoltaic array. Lastly, we leverage this data to understand the practical limits on predicting key power use metrics at the time of submission.
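
    The idea of reordering a job schedule to flatten the facility's peak power draw can be illustrated with a toy heuristic. The paper does not publish its reordering algorithm, so the LPT-style greedy placement below is purely my own simplification: each job is a constant power draw occupying one time slot.

```python
def peak_power(schedule):
    """Peak of the summed power draw across time slots."""
    return max(schedule) if schedule else 0.0

def greedy_reorder(job_powers, n_slots):
    """Toy power-aware scheduling: place each job (a constant power
    draw for one slot) into the slot with the lowest accumulated
    power so far, taking the largest jobs first (LPT-style heuristic)."""
    slots = [0.0] * n_slots
    for p in sorted(job_powers, reverse=True):
        i = min(range(n_slots), key=lambda k: slots[k])
        slots[i] += p
    return slots
```

    For example, packing jobs drawing [90, 80, 70, 20, 10, 10] kW into three slots in arrival order (round-robin) peaks at 110 kW, while the greedy reordering peaks at 100 kW.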

  4. mrsFAST-Ultra: a compact, SNP-aware mapper for high performance sequencing applications.

    Science.gov (United States)

    Hach, Faraz; Sarrafi, Iman; Hormozdiari, Farhad; Alkan, Can; Eichler, Evan E; Sahinalp, S Cenk

    2014-07-01

    High throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for processing and downstream analysis. While tools that report the 'best' mapping location of each read provide a fast way to process HTS data, they are not suitable for many types of downstream analysis, such as structural variation detection, where it is important to report multiple mapping loci for each read. For this purpose we introduce mrsFAST-Ultra, a fast, cache-oblivious, SNP-aware aligner that can handle the multi-mapping of HTS reads very efficiently. mrsFAST-Ultra improves on mrsFAST, our first cache-oblivious read aligner capable of handling multi-mapping reads, through new and compact index structures that reduce not only the overall memory usage but also the number of CPU operations per alignment. In fact, the size of the index generated by mrsFAST-Ultra is 10 times smaller than that of mrsFAST. Just as importantly, mrsFAST-Ultra introduces new features such as being able to (i) obtain the best mapping loci for each read, and (ii) return all reads that have at most n mapping loci (within an error threshold), together with these loci, for any user-specified n. Furthermore, mrsFAST-Ultra is SNP-aware, i.e., it can map reads to the reference genome while discounting the mismatches that occur at common SNP locations provided by dbSNP; this significantly increases the number of reads that can be mapped to the reference genome. Note that all of the above features are implemented within the index structure and are not simple post-processing steps, and thus are performed highly efficiently. Finally, mrsFAST-Ultra utilizes multiple available cores and processors and can be tuned for various memory settings. Our results show that mrsFAST-Ultra is roughly five times faster than its predecessor mrsFAST. In comparison to newly enhanced popular tools such as Bowtie2, it is more sensitive (it can report 10 times or more mappings per read) and much faster (six times or

  5. Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms

    Directory of Open Access Journals (Sweden)

    Valeria Cardellini

    2014-01-01

    Full Text Available We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.
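
    The kernel at the heart of this CPU/GPU comparison, sparse matrix-vector multiplication over compressed-row storage, can be sketched in a few lines. This is a minimal NumPy illustration of the CSR scheme, not PSBLAS code; the function name is mine.

```python
import numpy as np

def spmv_csr(vals, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x with A in CSR format:
    row i stores its nonzeros in vals[row_ptr[i]:row_ptr[i+1]], whose
    column positions are col_idx[row_ptr[i]:row_ptr[i+1]]."""
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(vals[lo:hi], x[col_idx[lo:hi]])
    return y
```

    On a GPU, each row (or group of rows) of this loop is typically mapped to a thread or warp, which is exactly the kind of kernel the PSBLAS GPU plugin wraps behind a design-pattern interface.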

  6. A Programming Framework for Scientific Applications on CPU-GPU Systems

    Energy Technology Data Exchange (ETDEWEB)

    Owens, John

    2013-03-24

    At a high level, my research interests center around designing, programming, and evaluating computer systems that use new approaches to solve interesting problems. The rapid change of technology allows a variety of different architectural approaches to computationally difficult problems, and a constantly shifting set of constraints and trends makes the solutions to these problems both challenging and interesting. One of the most important recent trends in computing has been a move to commodity parallel architectures. This sea change is motivated by the industry’s inability to continue to profitably increase performance on a single processor and instead to move to multiple parallel processors. In the period of review, my most significant work has been leading a research group looking at the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver superior performance on a broad range of problems than their CPU counterparts, but effectively mapping complex applications to a parallel programming model with an emerging programming environment is a significant and important research problem.

  7. Balancing Energy and Performance in Dense Linear System Solvers for Hybrid ARM+GPU platforms

    Directory of Open Access Journals (Sweden)

    Juan P. Silva

    2016-04-01

    Full Text Available The high performance computing community has traditionally focused solely on the reduction of execution time, though in recent years the optimization of energy consumption has become a main issue. A reduction of energy usage without a degradation of performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation for many scientific and engineering problems. Its relevance has motivated an important amount of work, and consequently it is possible to find high performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high performance and energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding an efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal reports important savings in both time and energy consumption when compared with the state-of-the-art solvers of the platform.
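
    The Gauss-Huard scheme processes the augmented system row by row, combining the left-of-diagonal elimination and the above-diagonal update in one sweep so the solution appears in the right-hand-side column. The sketch below is my own minimal dense NumPy rendering of that row-wise Gauss-Jordan ordering; it omits the pivoting a robust implementation (and the paper's solvers) would include, so it assumes nonsingular leading minors.

```python
import numpy as np

def gauss_huard_solve(A, b):
    """Solve Ax = b by row-wise Gauss-Jordan elimination in the spirit
    of the Gauss-Huard scheme (no pivoting, for illustration only)."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        # eliminate entries to the left of the diagonal in row k
        for j in range(k):
            M[k] -= M[k, j] * M[j]
        # normalize the pivot row
        M[k] /= M[k, k]
        # annihilate column k above the diagonal
        for i in range(k):
            M[i] -= M[i, k] * M[k]
    return M[:, n]  # A has become I; the last column is x
```

    The appeal on a CPU-GPU platform is that each step touches only row k plus a rank-1 update of the rows above it, which maps well onto the memory-bound kernels the paper tunes for the Jetson TK1.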

  8. Power-Aware Rationale for Using Coarse-Grained Transponders in IP-Over-WDM Networks

    DEFF Research Database (Denmark)

    Saldaña Cercos, Silvia; Resendo, Leandro C.; Ribeiro, Moises R. N.

    2015-01-01

    .e., using 10 Gbps technology)? (2) What is the long-term cost of coarse-grained designs? We define a power-aware mixed integer linear programming (MILP) formulation based on actual modular architectures where modules are upgraded as the network traffic increases. We introduce, for the first time, important...

  9. System-Awareness for Agent-based Power System Control

    DEFF Research Database (Denmark)

    Heussen, Kai; Saleem, Arshad; Lind, Morten

    2010-01-01

    Operational intelligence in electric power systems is focused in a small number of control rooms that coordinate their actions. A clear division of responsibility and a command hierarchy organize system operation. With multi-agent based control systems, this control paradigm may be shifted to a more decentralized open-access collaboration control paradigm. This shift cannot happen at once, but must also fit with current operation principles. In order to establish a scalable and transparent system control architecture, organizing principles have to be identified that allow for a smooth transition. This paper presents a concept for the representation and organization of control- and resource-allocation, enabling computational reasoning and system awareness. The principles are discussed with respect to a recently proposed Subgrid operation concept.

  10. Performance Analysis of Sensor Systems for Space Situational Awareness

    Science.gov (United States)

    Choi, Eun-Jung; Cho, Sungki; Jo, Jung Hyun; Park, Jang-Hyun; Chung, Taejin; Park, Jaewoo; Jeon, Hocheol; Yun, Ami; Lee, Yonghui

    2017-12-01

    With increased human activity in space, the risk of re-entry and collision between space objects is constantly increasing. Hence, the need for space situational awareness (SSA) programs has been acknowledged by many experienced space agencies. Optical and radar sensors, which enable the surveillance and tracking of space objects, are the most important technical components of SSA systems. In particular, combinations of radar systems and optical sensor networks play an outstanding role in SSA programs. At present, Korea operates the optical wide field patrol network (OWL-Net), its only optical system for tracking space objects. However, due to their dependence on weather conditions and observation time, it is not reasonable to use optical systems alone for SSA initiatives, as they have limited operational availability. Therefore, strategies for developing radar systems should be considered for an efficient SSA system using currently available technology. The purpose of this paper is to analyze the performance of a radar system in detecting and tracking space objects. For the radar system investigated, the minimum sensitivity is defined as detection of a 1 m² radar cross section (RCS) at an altitude of 2,000 km, with operating frequencies in the L, S, C, X or Ku-band. The results of the power budget analysis showed that the maximum detection range of 2,000 km, which covers the low earth orbit (LEO) environment, can be achieved with a transmission power of 900 kW, transmit and receive antenna gains of 40 dB and 43 dB, respectively, a pulse width of 2 ms, and a signal processing gain of 13.3 dB, at a frequency of 1.3 GHz. We defined the key parameters of the radar following a performance analysis of the system. This research can thus provide guidelines for the conceptual design of radar systems for national SSA initiatives.
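
    Power-budget numbers like these can be reproduced with the classical radar range equation. The sketch below plugs in the quoted parameters; the system noise temperature and losses are my assumptions (the abstract does not state them), so the resulting SNR is only indicative.

```python
import math

def radar_snr_db(p_t, g_t_db, g_r_db, f_hz, tau, rcs, r_m,
                 proc_gain_db=0.0, loss_db=0.0, t_sys=290.0):
    """Single-pulse SNR from the radar range equation, assuming a
    matched-filter bandwidth B = 1/tau, plus processing gain/losses."""
    k = 1.380649e-23                 # Boltzmann constant [J/K]
    lam = 3.0e8 / f_hz               # wavelength [m]
    b = 1.0 / tau                    # matched-filter bandwidth [Hz]
    g_t = 10 ** (g_t_db / 10)
    g_r = 10 ** (g_r_db / 10)
    snr = (p_t * g_t * g_r * lam**2 * rcs) / (
        (4 * math.pi) ** 3 * r_m**4 * k * t_sys * b)
    return 10 * math.log10(snr) + proc_gain_db - loss_db

# Parameters quoted in the abstract (T_sys = 290 K and zero loss assumed):
snr = radar_snr_db(p_t=900e3, g_t_db=40, g_r_db=43, f_hz=1.3e9,
                   tau=2e-3, rcs=1.0, r_m=2000e3, proc_gain_db=13.3)
```

    With these assumptions the 1 m² target at 2,000 km comes out comfortably above a typical 12-13 dB detection threshold, consistent with the paper's conclusion that the link budget closes.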

  11. HEP specific benchmarks of virtual machines on multi-core CPU architectures

    International Nuclear Information System (INIS)

    Alef, M; Gable, I

    2010-01-01

    Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility, however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.

  12. Exploring the Relationship between Metacognitive Awareness and Listening Performance with Questionnaire Data

    Science.gov (United States)

    Goh, Christine C. M.; Hu, Guangwei

    2014-01-01

    This study sought to provide a nuanced understanding of the relationship between metacognitive awareness and listening performance by eliciting from 113 English-as-a-second-language (ESL) Chinese learners their metacognitive awareness with regard to knowledge of listening strategies used and perceptions of difficulty and anxiety following a…

  13. Porting AMG2013 to Heterogeneous CPU+GPU Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Samfass, Philipp [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-26

    LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is quite straightforward: while GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for rather sequential operations. Yet, making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor for ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, which is a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook on future work (Section 5).

  14. Enhanced round robin CPU scheduling with burst time based time quantum

    Science.gov (United States)

    Indusree, J. R.; Prabadevi, B.

    2017-11-01

    Process scheduling is a very important functionality of an operating system. The best-known process-scheduling algorithms are the First Come First Serve (FCFS) algorithm, the Round Robin (RR) algorithm, the Priority scheduling algorithm, and the Shortest Job First (SJF) algorithm. Compared to its peers, the Round Robin (RR) algorithm has the advantage that it gives a fair share of the CPU to the processes already in the ready queue. The effectiveness of the RR algorithm greatly depends on the chosen time quantum value. In this paper, we propose an enhanced algorithm called Enhanced Round Robin with Burst-time based Time Quantum (ERRBTQ), which calculates the time quantum from the burst times of the processes already in the ready queue. The experimental results and analysis of the ERRBTQ algorithm clearly indicate improved performance when compared with conventional RR and its variants.
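
    A burst-time-based quantum can be simulated in a few lines. The sketch below is a generic round-robin simulator; when no fixed quantum is given it recomputes the quantum from the mean remaining burst of the ready queue. That mean-based rule is my illustrative stand-in, not necessarily the exact ERRBTQ formula from the paper.

```python
from collections import deque

def round_robin_avg_waiting(bursts, quantum=None):
    """Average waiting time under round-robin for processes that all
    arrive at t = 0. With quantum=None, the quantum is recomputed at
    each dispatch as the mean remaining burst of the queued processes
    (an illustrative burst-time-based quantum)."""
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t, finish = 0, [0] * len(bursts)
    while queue:
        q = quantum or max(1, sum(remaining[i] for i in queue) // len(queue))
        i = queue.popleft()
        run = min(q, remaining[i])   # run for one quantum or to completion
        t += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)          # not finished: back of the ready queue
        else:
            finish[i] = t
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits)
```

    For the textbook workload of bursts [24, 3, 3] with a fixed quantum of 4, the simulator reproduces the familiar average waiting time of 17/3 ≈ 5.67 time units; swapping in a dynamic quantum changes both the number of context switches and the waiting times, which is exactly the trade-off ERRBTQ tunes.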

  15. Power-Aware Routing and Network Design with Bundled Links: Solutions and Analysis

    Directory of Open Access Journals (Sweden)

    Rosario G. Garroppo

    2013-01-01

    Full Text Available The paper deeply analyzes a novel network-wide power management problem, called Power-Aware Routing and Network Design with Bundled Links (PARND-BL), which is able both to take into account the relationship between the power consumption and the traffic throughput of the nodes and to power off the chassis and even the single Physical Interface Cards (PICs) composing each link. The solutions of the PARND-BL model have been analyzed by taking into account different aspects associated with the actual applicability in real network scenarios: (i) the time for obtaining the solution, (ii) the deployed network topology and the resulting topology provided by the solution, (iii) the power behavior of the network elements, (iv) the traffic load, (v) the QoS requirement, and (vi) the number of paths to route each traffic demand. Among the most interesting and novel results, our analysis shows that the strategy of minimizing the number of powered-on network elements through traffic consolidation does not always produce power savings, and the solution of this kind of problem can, in some cases, lead to splitting a single traffic demand across a high number of paths.

  16. Energy-Aware Cognitive Radio Systems

    KAUST Repository

    Bedeer, Ebrahim

    2016-01-15

    The concept of energy-aware communications has spurred the interest of the research community in the most recent years due to various environmental and economical reasons. It becomes indispensable for wireless communication systems to shift their resource allocation problems from optimizing traditional metrics, such as throughput and latency, to an environmental-friendly energy metric. Although cognitive radio systems introduce spectrum efficient usage techniques, they employ new complex technologies for spectrum sensing and sharing that consume extra energy to compensate for overhead and feedback costs. Considering an adequate energy efficiency metric—that takes into account the transmit power consumption, circuitry power, and signaling overhead—is of momentous importance such that optimal resource allocations in cognitive radio systems reduce the energy consumption. A literature survey of recent energy-efficient based resource allocations schemes is presented for cognitive radio systems. The energy efficiency performances of these schemes are analyzed and evaluated under power budget, co-channel and adjacent-channel interferences, channel estimation errors, quality-of-service, and/or fairness constraints. Finally, the opportunities and challenges of energy-aware design for cognitive radio systems are discussed.

  17. Two analytical models for evaluating performance of Gigabit Ethernet Hosts

    International Nuclear Information System (INIS)

    Salah, K.

    2006-01-01

    Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts subjected to Gigabit network traffic. Under heavy network traffic, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock, as the CPU power is mostly consumed by interrupt handling and protocol processing. In this paper we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queuing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics. These metrics include throughput, latency, the stability condition, the CPU utilization of interrupt handling and protocol processing, and the CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems under light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance. The paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation. (author)
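
    The phenomenon both models capture can be shown with a first-order CPU budget: interrupt service runs at the highest priority, so as the packet arrival rate grows, ISR work crowds out protocol processing and then user applications. This back-of-the-envelope sketch is my simplification for intuition, not the paper's Markov or queueing model.

```python
def cpu_budget(lam, t_isr, t_proto):
    """First-order CPU budget for an interrupt-driven host.
    lam:     packet arrival rate [packets/s]
    t_isr:   interrupt service time per packet [s] (highest priority)
    t_proto: protocol processing time per packet [s]
    Returns (fraction of CPU left for user applications,
             arrival rate at which ISRs alone saturate the CPU)."""
    interrupt_load = min(1.0, lam * t_isr)             # preempts everything
    proto_load = min(1.0 - interrupt_load, lam * t_proto)
    user = 1.0 - interrupt_load - proto_load           # what is left over
    livelock_rate = 1.0 / t_isr                        # user share hits 0 here
    return user, livelock_rate
```

    For example, at 50,000 packets/s with a 5 µs ISR and 10 µs of protocol work per packet, interrupts take 25% of the CPU and protocol processing another 50%, leaving only 25% for user code; beyond 200,000 packets/s the ISRs alone consume the whole CPU, which is the livelock regime the abstract describes.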

  18. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  19. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.

  20. Efficient and Privacy-Aware Power Injection over AMI and Smart Grid Slice in Future 5G Networks

    Directory of Open Access Journals (Sweden)

    Yinghui Zhang

    2017-01-01

    Full Text Available Smart grid is critical to the success of the next generation of power grid, which is expected to be characterized by efficiency, cleanliness, security, and privacy. In this paper, aiming to tackle the security and privacy issues of power injection, we propose an efficient and privacy-aware power injection (EPPI) scheme suitable for advanced metering infrastructure and the 5G smart grid network slice. In EPPI, each power storage unit first blinds its power injection bid and then gives the blinded bid, together with a signature, to the local gateway. The gateway removes a partial blind factor from each blinded bid and then sends the aggregated bid and signature to the utility company, using a novel aggregation technique called hash-then-addition. The utility company can get the total amount of collected power at each time slot by removing a blind factor from the aggregated bid. Throughout the EPPI system, neither the gateway nor the utility company can learn individual bids, and hence user privacy is preserved. In particular, EPPI allows the utility company to check the integrity and authenticity of the collected data. Finally, extensive evaluations indicate that EPPI is secure and privacy-aware and that it is efficient in terms of computation and communication cost.
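
    The privacy mechanism rests on additive blinding: individual bids are masked, yet the masks cancel in the aggregate, so only the total is recoverable. The toy sketch below shows just that core idea with zero-sum masks; it is a deliberate simplification of EPPI, which additionally splits the blind factors between the gateway and the utility and attaches signatures for integrity.

```python
import random

def blind_bids(bids):
    """Mask each bid with a random additive share; the shares are
    constructed to sum to zero, so individual blinded bids look
    random while the aggregate still reveals the true total."""
    n = len(bids)
    masks = [random.randrange(-10**6, 10**6) for _ in range(n - 1)]
    masks.append(-sum(masks))                  # forces the masks to cancel
    return [b + m for b, m in zip(bids, masks)]

def aggregate(blinded):
    """Gateway-side aggregation: summing the blinded bids cancels the
    masks and yields the total injected power for the time slot."""
    return sum(blinded)
```

    Running `aggregate(blind_bids([5, 7, 11]))` always returns 23, regardless of the random masks, while no single blinded value betrays its underlying bid.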

  1. Study on the Context-Aware Middleware for Ubiquitous Greenhouses Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jeonghwang Hwang

    2011-04-01

    Full Text Available Wireless Sensor Network (WSN) technology is one of the key technologies for implementing the ubiquitous society, and it could increase the productivity of agricultural and livestock products and secure the transparency of distribution channels if successfully applied to the agricultural sector. Middleware, which can connect WSN hardware, applications, and enterprise systems, is required to construct a ubiquitous agriculture environment combining WSN technology with agricultural-sector applications, but there have been insufficient studies of WSN middleware for the agricultural environment compared to other industries. This paper proposes a context-aware middleware to efficiently process data collected from ubiquitous greenhouses by applying WSN technology, and to implement combined services through the organic connectivity of the data. The proposed middleware abstracts heterogeneous sensor nodes to integrate different forms of data, and provides intelligent context-aware, event service, and filtering functions to maximize the operability and scalability of the middleware. To evaluate its performance, an integrated management system for ubiquitous greenhouses was implemented by applying the proposed middleware to an existing greenhouse, and it was tested by measuring the load level through CPU usage and the response time for users' requests while the system is working.

  2. Performance Analysis of Cyber Security Awareness Delivery Methods

    Science.gov (United States)

    Abawajy, Jemal; Kim, Tai-Hoon

    In order to decrease information security threats caused by human-related vulnerabilities, an increased concentration on information security awareness and training is necessary. There are numerous delivery methods for information security awareness training. The purpose of this study was to determine which delivery method is most successful in providing security awareness training. We conducted security awareness training using various delivery methods, such as text-based, game-based, and a short video presentation, with the aim of determining user-preferred delivery methods. Our study suggests that combined delivery methods are better than any individual security awareness delivery method.

  3. Environmental awareness - a spinoff success of public awareness outreach around Kudankulam Nuclear Power Project

    International Nuclear Information System (INIS)

    Jashi, K.B.; Sathish, A.V.; Vijayakumar, B.; Pandaram, P.; Kalirajan, S.

    2014-01-01

    The significance of the public awareness (PA) programme at the Kudankulam Nuclear Power Project (KKNPP) was well recognised from the inception stage itself, and several PA programmes were organised around Kudankulam through different means of communication. In its chequered progress, the Kudankulam project has seen ups and downs from the initial stage, and in the year 2011 the site witnessed an impasse due to public interest and concerns about nuclear projects. Subsequently, PA programmes were taken up on a war footing and, with persistent efforts, public fears about nuclear energy and safety were allayed among the local public in the villages in and around Kudankulam, and also far and wide in Tamil Nadu and Kerala. This paper discusses the various measures initiated to disseminate the right information and to educate the public on nuclear energy as a clean energy option for environmental safety. In addition, it is a requirement of the country in the face of impending climate change concerns and the warming of the earth's surface.

  4. User Context Aware Base Station Power Flow Model

    OpenAIRE

    Walsh, Barbara; Farrell, Ronan

    2005-01-01

    At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...

  5. Architecture-Aware Optimization of an HEVC decoder on Asymmetric Multicore Processors

    OpenAIRE

    Rodríguez-Sánchez, Rafael; Quintana-Ortí, Enrique S.

    2016-01-01

    Low-power asymmetric multicore processors (AMPs) attract considerable attention due to their appealing performance-power ratio for energy-constrained environments. However, these processors pose a significant programming challenge due to the integration of cores with different performance capabilities, asking for an asymmetry-aware scheduling solution that carefully distributes the workload. The recent HEVC standard, which offers several high-level parallelization strategies, is an important ...

  6. Low power arcjet performance

    Science.gov (United States)

    Curran, Francis M.; Sarmiento, Charles J.

    1990-01-01

    An experimental investigation was performed to evaluate arcjet operation at low power. A standard, 1 kW, constricted arcjet was run using nozzles with three different constrictor diameters. Each nozzle was run over a range of current and mass flow rates to explore stability and performance in the low power regime. A standard pulse-width modulated power processor was modified to accommodate the high operating voltages required under certain conditions. Stable, reliable operation at power levels below 0.5 kW was obtained at efficiencies between 30 and 40 percent. The operating range was found to be somewhat dependent on constrictor geometry at low mass flow rates. Quasi-periodic voltage fluctuations were observed at the low power end of the operating envelope. The nozzle insert geometry was found to have little effect on the performance of the device. The observed performance levels show that specific impulse levels above 350 seconds can be obtained at the 0.5 kW power level.

  7. A Quantitative Team Situation Awareness Measurement Method Considering Technical and Nontechnical Skills of Teams

    Directory of Open Access Journals (Sweden)

    Ho Bin Yim

    2016-02-01

    Full Text Available Human capabilities, such as technical/nontechnical skills, have begun to be recognized as crucial factors for nuclear safety. One of the most common ways to improve human capabilities in general is training. The nuclear industry has constantly developed and used training as a tool to increase plant efficiency and safety. An integrated training framework was suggested as one of those efforts, especially for simulation training sessions of nuclear power plant operation teams. The developed training evaluation methods are based on measuring the levels of situation awareness of teams in terms of the level of shared confidence and consensus, as well as the accuracy of team situation awareness. Verification of the developed methods was conducted by analyzing the training data of real nuclear power plant operation teams. Teams that achieved a higher level of shared confidence showed better performance in solving problem situations when coupled with high consensus index values. The accuracy of nuclear power plant operation teams' situation awareness was approximately the same as, or showed a similar trend to, that of senior reactor operators' situation awareness calculated by a situation awareness accuracy index (SAAI). Teams with higher SAAI values performed better and faster than those with lower SAAI values.

  8. Cooling performance of a notebook PC mounted with heat spreader

    Energy Technology Data Exchange (ETDEWEB)

    Noh, H.K. [Electronics and Telecommunications Research Institute, Taejeon (Korea); Lim, K.B. [Hanbat National University, Taejeon (Korea); Park, M.H. [Korea Power Engineering Company (Korea)

    2001-06-01

    A parametric study to investigate the cooling performance of a notebook PC mounted with a heat spreader has been performed numerically. Two cases, air blowing and air exhaust at the inlet, were tested. The cooling effects of parameters such as inlet velocity, heat-spreader material, and CPU power were simulated for both cases. Cooling performance in the air-blowing case was better than in the air-exhaust case. (author). 9 refs., 7 figs., 5 tabs.

  9. Read margin analysis of crossbar arrays using the cell-variability-aware simulation method

    Science.gov (United States)

    Sun, Wookyung; Choi, Sujin; Shin, Hyungsoon

    2018-02-01

    This paper proposes a new concept of read margin analysis of crossbar arrays using cell-variability-aware simulation. The size of the crossbar array should be considered when predicting the read margin characteristic, because the read margin depends on the number of word lines and bit lines. However, an excessively long CPU time is required to simulate large arrays using a commercial circuit simulator. A variability-aware MATLAB simulator that considers independent variability sources is developed to analyze the characteristics of the read margin according to the array size. The developed MATLAB simulator provides an effective method for reducing the simulation time while maintaining the accuracy of the read margin estimation in the crossbar array. It is also highly efficient in analyzing the characteristics of the crossbar memory array while accounting for statistical variations in cell characteristics.
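
    The variability-aware approach described above can be illustrated with a toy Monte Carlo sketch (this is not the authors' MATLAB simulator): sample per-cell resistances with log-normal variation and estimate the worst-case sensing margin of a one-cell read, with unselected cells lumped into a single sneak-path resistance. The lumped sneak-path model and all parameter values are illustrative assumptions.

    ```python
    import random

    def parallel(a, b):
        return a * b / (a + b)

    def sense_voltage(r_cell, r_sneak, r_sense=1e3, v_read=1.0):
        """Voltage across the sense resistor in a divider against the
        selected cell in parallel with the lumped sneak-path resistance."""
        return v_read * r_sense / (r_sense + parallel(r_cell, r_sneak))

    def read_margin(rows, cols, trials=2000, sigma=0.05,
                    r_lrs=1e4, r_hrs=1e6, seed=42):
        """Worst-case sensing margin between reading a low-resistance (LRS)
        and a high-resistance (HRS) cell under log-normal cell variability.
        Toy sneak-path model: each path is three half-selected HRS cells in
        series, with (rows-1)*(cols-1) such paths in parallel."""
        rng = random.Random(seed)
        n_paths = (rows - 1) * (cols - 1)
        v_lrs_min, v_hrs_max = float("inf"), float("-inf")
        for _ in range(trials):
            r_sneak = 3 * r_hrs * rng.lognormvariate(0.0, sigma) / n_paths
            v_lrs = sense_voltage(r_lrs * rng.lognormvariate(0.0, sigma), r_sneak)
            v_hrs = sense_voltage(r_hrs * rng.lognormvariate(0.0, sigma), r_sneak)
            v_lrs_min = min(v_lrs_min, v_lrs)
            v_hrs_max = max(v_hrs_max, v_hrs)
        return v_lrs_min - v_hrs_max

    # the margin collapses as the array grows, matching the paper's premise
    # that array size must be considered
    for n in (8, 32, 128):
        print(n, round(read_margin(n, n), 4))
    ```

    Even this crude model reproduces the qualitative effect that motivates the paper: as the number of word lines and bit lines grows, the sneak-path resistance drops and the read margin shrinks.
    
    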

  10. Finite difference numerical method for the superlattice Boltzmann transport equation and case comparison of CPU(C) and GPU(CUDA) implementations

    Energy Technology Data Exchange (ETDEWEB)

    Priimak, Dmitri

    2014-12-01

    We present a finite difference numerical algorithm for solving the two-dimensional, spatially homogeneous Boltzmann transport equation, which describes electron transport in a semiconductor superlattice subject to crossed time-dependent electric and constant magnetic fields. The algorithm is implemented both in C, targeted at CPUs, and in CUDA C, targeted at commodity NVIDIA GPUs. We compare the performance and merits of the two implementations and discuss various software optimisation techniques.

  11. Finite difference numerical method for the superlattice Boltzmann transport equation and case comparison of CPU(C) and GPU(CUDA) implementations

    International Nuclear Information System (INIS)

    Priimak, Dmitri

    2014-01-01

    We present a finite difference numerical algorithm for solving the two-dimensional, spatially homogeneous Boltzmann transport equation, which describes electron transport in a semiconductor superlattice subject to crossed time-dependent electric and constant magnetic fields. The algorithm is implemented both in C, targeted at CPUs, and in CUDA C, targeted at commodity NVIDIA GPUs. We compare the performance and merits of the two implementations and discuss various software optimisation techniques.
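
    The paper's two-dimensional superlattice Boltzmann solver is not reproduced here; as a minimal illustration of the explicit finite-difference stencil pattern that ports naturally between a C loop and a CUDA kernel, the sketch below advances the 1D diffusion equation with an FTCS scheme on a periodic grid (the equation and constants are illustrative, not the paper's):

    ```python
    def fd_step(u, alpha=0.2):
        """One explicit finite-difference (FTCS) step of u_t = D*u_xx on a
        periodic grid; alpha = D*dt/dx^2 must be <= 0.5 for stability.
        Each output point depends only on its neighbours, which is why this
        loop maps directly onto a one-thread-per-point CUDA kernel."""
        n = len(u)
        return [u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
                for i in range(n)]

    # diffuse a spike; the periodic stencil conserves the total "mass"
    u = [0.0] * 16
    u[8] = 1.0
    for _ in range(50):
        u = fd_step(u)
    print(round(sum(u), 6), round(max(u), 3))
    ```

    The conserved sum is a cheap sanity check when comparing CPU and GPU implementations of the same stencil.
    
    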

  12. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Kovesdi, Casey Robert [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rice, Brandon Charles [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bower, Gordon Ross [Idaho National Lab. (INL), Idaho Falls, ID (United States); Spielman, Zachary Alexander [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hill, Rachael Ann [Idaho National Lab. (INL), Idaho Falls, ID (United States); LeBlanc, Katya Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-11-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking to continuously measure an operator's eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  13. Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking

    International Nuclear Information System (INIS)

    Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross; Spielman, Zachary Alexander; Hill, Rachael Ann; LeBlanc, Katya Lee

    2015-01-01

    Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking to continuously measure an operator's eye movement, which correlates with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.

  14. Power performance assessment. Final report

    International Nuclear Information System (INIS)

    Frandsen, S.

    1998-12-01

    In the increasingly commercialised wind power marketplace, the lack of precise assessment methods for the output of an investment is becoming a barrier for wider penetration of wind power. Thus, addressing this problem, the overall objectives of the project are to reduce the financial risk in investment in wind power projects by significantly improving the power performance assessment methods. Ultimately, if this objective is successfully met, the project may also result in improved tuning of the individual wind turbines and in optimisation methods for wind farm operation. The immediate, measurable objectives of the project are: To prepare a review of existing contractual aspects of power performance verification procedures of wind farms; to provide information on production sensitivity to specific terrain characteristics and wind turbine parameters by analyses of a larger number of wind farm power performance data available to the proposers; to improve the understanding of the physical parameters connected to power performance in complex environment by comparing real-life wind farm power performance data with 3D computational flow models and 3D-turbulence wind turbine models; to develop the statistical framework including uncertainty analysis for power performance assessment in complex environments; and to propose one or more procedures for power performance evaluation of wind power plants in complex environments to be applied in contractual agreements between purchasers and manufacturers on production warranties. Although the focus in this project is on power performance assessment the possible results will also be of benefit to energy yield forecasting, since the two tasks are strongly related. (au) JOULE III. 66 refs.; In Co-operation Renewable Energy System Ltd. (GB); Centre for Renewable Energy (GR); Aeronautic Research Centre (SE); National Engineering Lab. (GB); Public Power Cooperation (GR)

  15. Credit Risk Evaluation of Large Power Consumers Considering Power Market Transaction

    Science.gov (United States)

    Fulin, Li; Erfeng, Xu; ke, Sun; Dunnan, Liu; Shuyi, Shen

    2018-03-01

    Large power users will participate in the power market in various forms after power system reform. Meanwhile, great importance has always been attached to the construction of the credit system in the power industry. Because of the gap between a customer's awareness of performance and its ability to perform, credit risk of power customers will emerge accordingly. Therefore, it is critical to evaluate the credit risk of large power customers in the new power market situation. This paper constructs an index system for the credit risk of large power customers, and establishes an evaluation model combining interval numbers with the AHP-entropy weight method.
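
    The entropy part of the AHP-entropy weight method mentioned above is a standard objective-weighting scheme: criteria whose scores vary more across the alternatives (lower entropy) receive more weight. A minimal sketch follows; the customer scores are invented for illustration, and the AHP and interval-number parts of the paper's model are omitted:

    ```python
    import math

    def entropy_weights(matrix):
        """Objective criterion weights via the entropy weight method:
        normalize each column to a probability vector, compute its
        information entropy, and weight each criterion by its degree of
        divergence (1 - entropy), renormalized to sum to one."""
        m = len(matrix)
        n = len(matrix[0])
        k = 1.0 / math.log(m)
        divergence = []
        for j in range(n):
            col = [row[j] for row in matrix]
            total = sum(col)
            p = [x / total for x in col]
            e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
            divergence.append(1.0 - e)
        s = sum(divergence)
        return [d / s for d in divergence]

    # hypothetical scores for three large power users on three credit criteria
    scores = [[0.9, 0.5, 0.7],
              [0.8, 0.9, 0.7],
              [0.1, 0.6, 0.7]]
    w = entropy_weights(scores)
    print([round(x, 3) for x in w])
    ```

    Note how the third criterion, identical across all customers, carries essentially zero weight: it provides no information for discriminating credit risk.
    
    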

  16. Understanding the I/O Performance Gap Between Cori KNL and Haswell

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jialin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Koziol, Quincey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Tang, Houjun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tessier, Francois [Argonne National Lab. (ANL), Argonne, IL (United States); Bhimji, Wahid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Thakur, Bhupender [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Lockwood, Glenn [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Prabhat, None [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)

    2017-05-01

    The Cori system at NERSC has two compute partitions with different CPU architectures: a 2,004-node Haswell partition and a 9,688-node KNL partition, which ranked as the 5th most powerful supercomputer on the November 2016 Top 500 list. The compute partitions share a common storage configuration, and understanding the I/O performance gap between them is important, affecting not only NERSC/LBNL users and other national labs, but also the relevant hardware vendors and software developers. In this paper, we analyze single-core and single-node I/O performance comprehensively on the Haswell and KNL partitions, and identify the major bottlenecks, which include CPU frequency and memory copy performance. We also extend our performance tests to multi-node I/O and reveal the I/O cost differences caused by network latency, buffer size, and communication cost. Overall, we have developed a strong understanding of the I/O gap between Haswell and KNL nodes, and the lessons learned from this exploration will guide us in designing optimal I/O solutions in the many-core era.

  17. Phonological Awareness and Vocabulary Performance of Monolingual and Bilingual Preschool Children with Hearing Loss

    Science.gov (United States)

    Lund, Emily; Werfel, Krystal L.; Schuele, C. Melanie

    2015-01-01

    This pilot study compared the phonological awareness skills and vocabulary performance of English monolingual and Spanish-English bilingual children with and without hearing loss. Preschool children with varying degrees of hearing loss (n = 18) and preschool children without hearing loss (n = 19) completed measures of phonological awareness and…

  18. Building Brand Power

    Science.gov (United States)

    Lakshmi, S.; Muthumani, S., Dr.

    2017-05-01

    Brand power is established through brand awareness: making consumers familiar with a company's products and services. Marketing strategies should lead customers to develop a positive attitude toward the brand and to sustain it through repeated purchases. This research takes a triple-perspective approach to investigating brand awareness. Brand awareness and brand equity are studied, and the relationship between them is analyzed. The study also drills down into brand performance and knowledge, seeking the brand's value and utility among the public. Continuous improvement in package design, quality, and buying experience will lead to customer loyalty and preference. Branding should happen through creative ads, eye-catchers, and special campaigns. Brand awareness is the extent to which consumers are familiar with a product or service. The power of a brand resides in the minds of customers. To build a strong brand, one of the great challenges for marketers is to ensure that customers have the right experiences with products, services, and the various marketing programs, so that feelings, beliefs, perspectives, and perceptions become linked to the brand. If a brand is presented with no enthusiasm or spunk, people will forget it; though that may seem harsh, it is the plain truth in today's marketing world. A brand must reach out to the community through special events and campaigns that keep it relevant, and offer customers a unique experience. Here we study brand consciousness, identify the cohesion between brand awareness, knowledge, and performance, and assess the effect of brand awareness on consumer purchases. In this study, statistical tools such as the chi-square test and t-test have been used to analyse the collected data. To increase brand awareness, marketers are constantly required to build brand awareness both

  19. Development of a power-assisted lifting device for construction and periodic inspection

    International Nuclear Information System (INIS)

    Hayatsu, M.; Yamada, M.; Takasu, H.; Tagawa, Y.; Kajiwara, K.

    2001-01-01

    This study focuses on the control system design and control performance of a power-assisted lifting device. The device consists of several electric chain-blocks, each controlled by force sensors and a CPU. The mechanism is as follows: (1) force sensors detect any chain tension changes (caused by human force), (2) the CPU calculates the required output, (3) the electric chain-blocks move the object in the intended direction. A feature of this device is that it does not require any information about the suspension points of the electric chain-blocks. The controller was designed using the H∞ method, which considers disturbances and aims to provide robust stability under construction operating conditions. The design was verified through experiments using a 700 kg steel dummy mass (the control object) suspended by four electric chain-blocks. In the experiments, the H∞ controller was compared to a PI controller, and the effectiveness of the H∞ controller was proven. The control object could be moved, translated, and rotated by human force (of less than 10 kg). Positioning errors were suppressed to less than 0.5 mm, and operation time was reduced by about 50%. This device will improve working efficiency and rationalize lifting operations in nuclear power plants. (author)

  20. Development of a power-assisted lifting device for construction and periodic inspection

    Energy Technology Data Exchange (ETDEWEB)

    Hayatsu, M.; Yamada, M.; Takasu, H. [Hitachi Plant Engineering and Construction, Chiba-ken (Japan); Tagawa, Y. [Tokyo Univ. of Agriculture and Technology (Japan); Kajiwara, K. [National Research Institute for Earth Science and Disaster Prevention, Tokyo (Japan)

    2001-07-01

    This study focuses on the control system design and control performance of a power-assisted lifting device. The device consists of several electric chain-blocks, each controlled by force sensors and a CPU. The mechanism is as follows: (1) force sensors detect any chain tension changes (caused by human force), (2) the CPU calculates the required output, (3) the electric chain-blocks move the object in the intended direction. A feature of this device is that it does not require any information about the suspension points of the electric chain-blocks. The controller was designed using the H∞ method, which considers disturbances and aims to provide robust stability under construction operating conditions. The design was verified through experiments using a 700 kg steel dummy mass (the control object) suspended by four electric chain-blocks. In the experiments, the H∞ controller was compared to a PI controller, and the effectiveness of the H∞ controller was proven. The control object could be moved, translated, and rotated by human force (of less than 10 kg). Positioning errors were suppressed to less than 0.5 mm, and operation time was reduced by about 50%. This device will improve working efficiency and rationalize lifting operations in nuclear power plants. (author)

  1. Effects of a power shortage in the Tokyo metropolitan area on awareness of nuclear power generation and power savings behavior

    International Nuclear Information System (INIS)

    Kitada, Atsuko

    2004-01-01

    The shutdown of a number of nuclear power stations of the Tokyo Electric Power Company in the summer of 2003 caused a power shortage problem in the Tokyo Metropolitan area. To examine the effects of the power shortage, a survey was conducted in September 2003 in the service areas of the Kansai Electric Power Company (Kansai region) and the Tokyo Electric Power Company (Kanto region). This survey was part of a wider opinion survey, begun in 1993, concerning nuclear power generation. The results of the September 2003 survey are as follows. The degree of recognition of the power shortage problem in the Metropolitan area was high, with 40% of respondents in the Kansai region and nearly 70% in the Kanto region understanding that the shortage was caused by the shutdown of several nuclear power stations. The overall awareness of nuclear power generation was little affected in both the Kansai and Kanto regions, though the sense of a shortage of generating capacity had been raised slightly. Once respondents knew about the power shortage problem, they estimated the likelihood of a large-scale service interruption to be low, nearly at an even chance, and they had been only slightly worried about it, essentially viewing the problem optimistically. In the Kanto region, where public relations activities for power savings had been actively pursued, the frequency of exposure to such activities was remarkably higher than in the Kansai region. The relation between exposure to public relations activities for power savings and power savings behavior was analyzed using quantification method II. The results suggest that public relations activities for power savings in the Kanto region had the effect of encouraging power savings behavior.
However, the difference in the rate of putting power savings behavior into practice was small between the Kanto and Kansai regions, indicating that public relation activities for power savings in the Kanto

  2. Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks.

    Science.gov (United States)

    Naveros, Francisco; Garrido, Jesus A; Carrillo, Richard R; Ros, Eduardo; Luque, Niceto R

    2017-01-01

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from the neural to the behavioral level. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under
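
    As a minimal illustration of the time-driven family, the sketch below integrates a leaky integrate-and-fire (LIF) neuron with forward Euler, switching between a coarse and a fine fixed step near threshold in the loose spirit of the bi-fixed-step idea. All constants and the threshold-based switching rule are illustrative assumptions, not the paper's method:

    ```python
    def simulate_lif(i_ext=2.0, t_end=100.0, dt_coarse=1.0, dt_fine=0.1,
                     tau=10.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
        """Time-driven LIF integration of tau*dV/dt = -(V - v_rest) + i_ext.
        Uses a fine step when V approaches threshold (where dynamics matter
        most for spike timing) and a coarse step otherwise; a crude stand-in
        for bi-fixed-step integration. Returns the list of spike times."""
        v, t, spikes = v_rest, 0.0, []
        while t < t_end:
            dt = dt_fine if v > 0.8 * v_th else dt_coarse
            v += dt * (-(v - v_rest) + i_ext) / tau   # forward Euler step
            t += dt
            if v >= v_th:                             # threshold crossing
                spikes.append(t)
                v = v_reset
        return spikes

    spikes = simulate_lif()
    print(len(spikes), [round(s, 1) for s in spikes[:3]])
    ```

    An event-driven variant would instead pre-compute the closed-form membrane trajectory into a look-up table and jump directly between input events.
    
    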

  3. Improvement of CPU time of Linear Discriminant Function based on MNM criterion by IP

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2014-05-01

    Full Text Available Revised IP-OLDF (optimal linear discriminant function by integer programming) is a linear discriminant function that minimizes the number of misclassifications (NM) of training samples by integer programming (IP). However, IP requires a large computation (CPU) time. In this paper, we propose how to reduce CPU time by using linear programming (LP). In the first phase, Revised LP-OLDF is applied to all cases, and the cases are categorized into two groups: those that are classified correctly and those that are not classified by support vectors (SVs). In the second phase, Revised IP-OLDF is applied to the cases misclassified by SVs. This method is called Revised IPLP-OLDF. In this research, we evaluate whether the NM of Revised IPLP-OLDF is a good estimate of the minimum number of misclassifications (MNM) obtained by Revised IP-OLDF. Four kinds of real data—Iris data, Swiss bank note data, student data, and CPD data—are used as training samples. Four kinds of 20,000 re-sampled cases generated from these data are used as evaluation samples. There are a total of 149 models over all combinations of independent variables for these data. The NMs and CPU times of the 149 models are compared between Revised IPLP-OLDF and Revised IP-OLDF. The following results are obtained: (1) Revised IPLP-OLDF significantly improves CPU time. (2) For the training samples, all 149 NMs of Revised IPLP-OLDF are equal to the MNM of Revised IP-OLDF. (3) For the evaluation samples, most NMs of Revised IPLP-OLDF are equal to the NM of Revised IP-OLDF. (4) The generalization abilities of both discriminant functions are concluded to be high, because the differences between the error rates of the training and evaluation samples are almost within 2%. Therefore, Revised IPLP-OLDF is recommended for the analysis of big data instead of Revised IP-OLDF. Next, Revised IPLP-OLDF is compared with LDF and logistic regression by 100-fold cross validation using 100 re-sampling samples. Means of error rates of

  4. Ecological interface design for turbine secondary systems in a nuclear power plant : effects on operator situation awareness

    International Nuclear Information System (INIS)

    Kwok, J.

    2007-01-01

    Investigations into past accidents at nuclear power generating facilities such as that of Three Mile Island have identified human factors as one of the foremost critical aspects in plant safety. Errors resulting from limitations in human information processing are of particular concern for human-machine interfaces (HMI) in plant control rooms. This project examines the application of Ecological Interface Design (EID) in HMI information displays and the effects on operator situation awareness (SA) for turbine secondary systems based on the Swedish Forsmark 3 boiling-water reactor nuclear power plant. A work domain analysis was performed on the turbine secondary systems yielding part-whole decomposition and abstraction hierarchy models. Information display requirements were subsequently extracted from the models. The resulting EID information displays were implemented in a full-scope simulator and evaluated with six licensed operating crews from the Forsmark 3 plant. Three measures were used to examine SA: self-rated bias, Halden Open Probe Elicitation (HOPE), and Situation Awareness Control Room Inventory (SACRI). The data analysis revealed that operators achieved moderate to good SA; operators unfamiliar with EID information displays were able to develop and maintain comparable levels of SA to operators using traditional forms of single sensor-single indicator (SS-SI) information displays. With sufficient training and experience, operator SA is expected to benefit from the knowledge-based visual elements in the EID information displays. This project was researched in conjunction with the Cognitive Engineering Laboratory at the University of Toronto and the Institute for Energy Technology (IFE) in Halden, Norway. (author)

  5. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-09-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error associated with advanced control room equipment and configurations, (ii) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (iii) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms.

  6. FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS

    International Nuclear Information System (INIS)

    Ronald L Boring; David I Gertman; Tuan Q Tran; Brian F Gore

    2008-01-01

    This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (1) the estimation of human error associated with advanced control room equipment and configurations, (2) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (3) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms

  7. Performance Analysis of FEM Algorithms on GPU and Many-Core Architectures

    KAUST Repository

    Khurram, Rooh

    2015-04-27

    The roadmaps of the leading supercomputer manufacturers are based on hybrid systems, which consist of a mix of conventional processors and accelerators. This trend is mainly due to the fact that the power consumption of future CPU-only Exascale systems would be unsustainable, so accelerators such as graphics processing units (GPUs) and many-integrated-core (MIC) processors will likely be an integral part of the TOP500 (http://www.top500.org/) supercomputers beyond 2020. The emerging supercomputer architectures will bring new challenges for code developers. Continuum mechanics codes will be particularly affected, because traditional synchronous implicit solvers will probably not scale on hybrid Exascale machines. In a previous study [1], we reported on the performance of a conjugate gradient based mesh motion algorithm [2] on Sandy Bridge, Xeon Phi, and K20c. In the present study we report on a comparative study of finite element codes, using PETSc and AmgX solvers on CPUs and GPUs, respectively [3, 4]. We believe this study will be a good starting point for FEM code developers who are contemplating a CPU-to-accelerator transition.
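
    For readers unfamiliar with the conjugate gradient kernel referenced in [1, 2], a bare-bones dense CG for a symmetric positive-definite system can be sketched as follows. Pure Python with list-of-lists matrices is used for clarity; the actual comparison in the abstract uses PETSc and AmgX, not this code:

    ```python
    def cg(A, b, tol=1e-10, max_iter=100):
        """Conjugate gradient for A*x = b, with A symmetric positive
        definite (dense, row-major list of lists). Returns x."""
        n = len(b)
        x = [0.0] * n
        r = b[:]                       # residual b - A*x for x = 0
        p = r[:]                       # initial search direction
        rs_old = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
            x = [x[i] + alpha * p[i] for i in range(n)]
            r = [r[i] - alpha * Ap[i] for i in range(n)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol:           # squared residual norm small enough
                break
            p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
            rs_old = rs_new
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    x = cg(A, b)
    print([round(v, 4) for v in x])
    ```

    The inner loop is dominated by the matrix-vector product and a handful of vector reductions, which is precisely why CG is a common benchmark for CPU-versus-accelerator comparisons.
    
    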

  8. Power-aware transceiver design for half-duplex bidirectional chip-to-chip optical interconnects

    International Nuclear Information System (INIS)

    Sangirov Jamshid; Ukaegbu Ikechi Augustine; Lee Tae-Woo; Park Hyo-Hoon; Sangirov Gulomjon

    2013-01-01

    A power-aware transceiver for half-duplex bidirectional chip-to-chip optical interconnects has been designed and fabricated in a 0.13 μm complementary metal–oxide–semiconductor (CMOS) technology. The transceiver can detect the presence and absence of received signals, saving 55% power in Rx-enabled mode and 45% in Tx-enabled mode. The chip occupies an area of 1.034 mm² and achieves a 3-dB bandwidth of 6 GHz and 7 GHz in Tx and Rx modes, respectively. The disabled outputs for the Tx and Rx modes are isolated by 180 dB and 139 dB, respectively, from the enabled outputs. Clear eye diagrams are obtained at 4.25 Gbps for both the Tx and Rx modes. (semiconductor integrated circuits)

  9. Wide-area situation awareness in electric power grid

    Science.gov (United States)

    Greitzer, Frank L.

    2010-04-01

    Two primary elements of US energy policy are demand management and efficiency, and renewable sources. Major objectives are clean energy transmission and integration, reliable energy transmission, and grid cyber security. Development of the Smart Grid seeks to achieve these goals by lowering energy costs for consumers, achieving energy independence and reducing greenhouse gas emissions. The Smart Grid is expected to enable real-time wide-area situation awareness (SA) for operators. Requirements for wide-area SA have been identified among the interoperability standards proposed by the Federal Energy Regulatory Commission and the National Institute of Standards and Technology to ensure smart-grid functionality. Wide-area SA and enhanced decision support and visualization tools are key elements in the transformation to the Smart Grid. This paper discusses human factors research to promote SA in the electric power grid and the Smart Grid. Topics discussed include the role of human factors in meeting US energy policy goals, the impact and challenges for Smart Grid development, and cyber security challenges.

  10. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity about which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product (SpMV), the linear combination of vectors, and the dot product. The main idea is to decompose the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
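
    The three-kernel decomposition described above can be sketched as follows (an illustrative, dependency-free version over CSR data; the paper's implementation dispatches these same kernels to CPU or GPU back ends):

```python
def spmv(indptr, indices, data, x):
    """Sparse matrix-vector product y = A x, with A stored in CSR form."""
    y = []
    for row in range(len(indptr) - 1):
        acc = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            acc += data[k] * x[indices[k]]
        y.append(acc)
    return y

def axpy(a, x, y):
    """Linear combination of vectors: a*x + y."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """Dot product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

# 2x2 example: A = [[2, 0], [1, 3]] in CSR form
indptr, indices, data = [0, 1, 3], [0, 0, 1], [2.0, 1.0, 3.0]
y = spmv(indptr, indices, data, [1.0, 1.0])   # -> [2.0, 4.0]
```

    Any time-integration scheme built from these three operations only needs each kernel ported once per architecture, which is the source of the portability claimed above.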

  11. Utilizing a multiprocessor architecture - The performance of MIDAS

    International Nuclear Information System (INIS)

    Maples, C.; Logan, D.; Meng, J.; Rathbun, W.; Weaver, D.

    1983-01-01

    The MIDAS architecture organizes multiple CPUs into clusters called distributed subsystems. Each subsystem consists of an array of processors controlled by a supervisory CPU. The multiprocessor array is composed of commercial CPUs (with floating point hardware) and specialized processing elements. Interprocessor communication within the array may occur either through switched memory modules or common shared memory. The architecture permits multiple processors to be focused on single problems. A distributed subsystem has been constructed and tested. It currently consists of a supervisory CPU; 16 blocks of independently switchable memory; 9 general-purpose, VAX-class CPUs; and 2 specialized pipelined processors to handle I/O. Results on a variety of problems indicate that the subsystem performs 8 to 15 times faster than a standard computer with an identical CPU. The difference in performance represents the effect of differing CPU and I/O requirements.

  12. CPU SIM: A Computer Simulator for Use in an Introductory Computer Organization-Architecture Class.

    Science.gov (United States)

    Skrein, Dale

    1994-01-01

    CPU SIM, an interactive low-level computer simulation package that runs on the Macintosh computer, is described. The program is designed for instructional use in the first or second year of undergraduate computer science, to teach various features of typical computer organization through hands-on exercises. (MSE)

  13. Demonstration of a Novel Synchrophasor-based Situational Awareness System: Wide Area Power System Visualization, On-line Event Replay and Early Warning of Grid Problems

    Energy Technology Data Exchange (ETDEWEB)

    Rosso, A.

    2012-12-31

    Since the large Northeastern power system blackout on August 14, 2003, U.S. electric utilities have devoted considerable effort to preventing power system cascading outages. Two of the main causes of the August 14, 2003 blackout were inadequate situational awareness and inadequate operator training. In addition to enhancements of the infrastructure of the interconnected power systems, more research and development of advanced power system applications is required to improve wide-area security monitoring, operation and planning in order to prevent large-scale cascading outages of interconnected power systems. It is critically important to improve the wide-area situation awareness of the operators, operational engineers and regional reliability coordinators of large interconnected systems. With the installation of a large number of phasor measurement units (PMUs) and the related communication infrastructure, it becomes possible to improve operators' situation awareness and to quickly identify the sequence of events during a large system disturbance for post-event analysis using real-time or historical synchrophasor data. The purpose of this project was to develop and demonstrate a novel synchrophasor-based comprehensive situational awareness system for control centers of power transmission systems. The developed system, named WASA (Wide Area Situation Awareness), is intended to improve situational awareness at the control centers of power system operators and regional reliability coordinators. It consists of the following main software modules: • Wide-area visualization of real-time frequency, voltage and phase angle measurements, with contour displays for security monitoring. • Online detection and location of a major event (location, time, size, and type, such as a generator or line outage). • Near-real-time event replay (within seconds) after a major event occurs. • Early warning of potential wide-area stability problems. The system has been

  14. Procedure automation: the effect of automated procedure execution on situation awareness and human performance

    International Nuclear Information System (INIS)

    Andresen, Gisle; Svengren, Haakan; Heimdal, Jan O.; Nilsen, Svein; Hulsund, John-Einar; Bisio, Rossella; Debroise, Xavier

    2004-04-01

    As advised by the procedure workshop convened in Halden in 2000, the Halden Project conducted an experiment on the effect of automation in Computerised Procedure Systems (CPS) on situation awareness and human performance. The expected outcome of the study was to provide input for guidance on CPS design and to support the Halden Project's ongoing research on human reliability analysis. The experiment was performed in HAMMLAB using the HAMBO BWR simulator and the COPMA-III CPS. Eight crews of operators from Forsmark 3 and Oskarshamn 3 participated. Three research questions were investigated: 1) Does procedure automation create Out-Of-The-Loop (OOTL) performance problems? 2) Does procedure automation affect situation awareness? 3) Does procedure automation affect crew performance? The independent variable, 'procedure configuration', had four levels: paper procedures, manual CPS, automation with breaks, and full automation. The results showed that the operators experienced OOTL problems under full automation, but that situation awareness and crew performance (response time) were not affected. One possible explanation is that the operators monitored the automated procedure execution conscientiously, which may have prevented the OOTL problems from degrading situation awareness and crew performance. In a debriefing session, the operators clearly expressed their dislike of the full automation condition, but felt that automation with breaks could be suitable for some tasks. The main reason the operators did not like full automation was that they did not feel in control. A qualitative analysis of factors contributing to response time delays revealed that OOTL problems did not seem to cause delays, but that some delays could be explained by operators having problems with the freeze function of the CPS. Other factors, such as teamwork and operator tendencies, were also important. Several design implications were drawn.

  15. Progress in a novel architecture for high performance processing

    Science.gov (United States)

    Zhang, Zhiwei; Liu, Meng; Liu, Zijun; Du, Xueliang; Xie, Shaolin; Ma, Hong; Ding, Guangxin; Ren, Weili; Zhou, Fabiao; Sun, Wenqin; Wang, Huijuan; Wang, Donglin

    2018-04-01

    High performance processing (HPP) is an innovative architecture targeting high performance computing with excellent power efficiency and computing performance. It is suitable for data-intensive applications like supercomputing, machine learning and wireless communication. An example chip with four application-specific integrated circuit (ASIC) cores, the first generation of HPP cores, has been taped out successfully in the Taiwan Semiconductor Manufacturing Company (TSMC) 40 nm low-power process. The innovative architecture shows great energy efficiency over the traditional central processing unit (CPU) and general-purpose computing on graphics processing units (GPGPU). Compared with MaPU, HPP has made great improvements in architecture. A chip with 32 HPP cores is being developed in the TSMC 16 nm FinFET compact (FFC) technology process and is planned for commercial use. The peak performance of this chip can reach 4.3 teraFLOPS (TFLOPS) and its power efficiency is up to 89.5 gigaFLOPS per watt (GFLOPS/W).
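
    As a quick consistency check of the quoted figures, dividing peak performance by power efficiency gives the implied power envelope of the 32-core chip:

```python
# Figures quoted in the abstract; the derived wattage is our own arithmetic.
peak_tflops = 4.3                 # peak performance, TFLOPS
efficiency_gflops_per_w = 89.5    # power efficiency, GFLOPS/W

# Convert TFLOPS to GFLOPS, then divide by efficiency to get watts.
power_w = peak_tflops * 1000 / efficiency_gflops_per_w   # ≈ 48 W
```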

  16. WhalePower tubercle blade power performance test report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2008-07-15

    Toronto-based WhalePower Corporation has developed turbine blades that are modeled after humpback whale flippers. The blades, which incorporate tubercles along the leading edge of the blade, have been fitted to a Wenvor 25 kW turbine installed in North Cape, Prince Edward Island at a test site for the Wind Energy Institute of Canada (WEICan). A test was conducted to characterize the power performance of the prototype wind turbine. This report described the wind turbine configuration with particular reference to turbine information, power rating, blade information, tower information, control systems and grid connections. The test site was also described along with test equipment and measurement procedures. Information regarding power output as a function of wind speed was included along with power curves, power coefficient and annual energy production. The results for the power curve and annual energy production contain a level of uncertainty. While measurements for this test were collected and analyzed in accordance with International Electrotechnical Commission (IEC) standards for performance measurements of electricity producing wind turbines (IEC 61400-12-1), the comparative performance data between the prototype WhalePower wind turbine blade and the Wenvor standard blade was not gathered to IEC data standards. Deviations from IEC-61400-12-1 procedures were listed. 6 tabs., 16 figs., 3 appendices.

  17. Self-attitude awareness training: An aid to effective performance in microgravity and virtual environments

    Science.gov (United States)

    Parker, Donald E.; Harm, D. L.; Florer, Faith L.

    1993-01-01

    This paper describes ongoing development of training procedures to enhance self-attitude awareness in astronaut trainees. The procedures are based on observations regarding self-attitude (perceived self-orientation and self-motion) reported by astronauts. Self-attitude awareness training is implemented on a personal computer system and consists of lesson stacks programmed using Hypertalk with Macromind Director movie imports. Training evaluation will be accomplished by an active search task using the virtual Spacelab environment produced by the Device for Orientation and Motion Environments Preflight Adaptation Trainer (DOME-PAT) as well as by assessment of astronauts' performance and sense of well-being during orbital flight. The general purpose of self-attitude awareness training is to use as efficiently as possible the limited DOME-PAT training time available to astronauts prior to a space mission. We suggest that similar training procedures may enhance the performance of virtual environment operators.

  18. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  19. Development of a Multi-Channel Ultrasonic Testing System for Automated Ultrasonic Pipe Inspection of Nuclear Power Plant

    International Nuclear Information System (INIS)

    Lee, Hee Jong; Cho, Chan Hee; Cho, Hyun Joon

    2009-01-01

    Currently, almost all in-service inspection techniques applied in domestic nuclear power plants rely on field inspection performed with foreign-produced inspection devices. There is a strong need to develop a domestic in-service inspection device, because no domestically developed diagnostic device for nuclear power plant inspection yet exists in Korea. In this research, we developed several core techniques for an automated ultrasonic pipe inspection system for nuclear power plants. A high-performance multi-channel ultrasonic pulser/receiver module, an A/D converter module and a digital main CPU module were developed, and the performance of the modules was verified. The S/N ratio, noise level and signal acquisition performance of the developed modules reached the levels targeted in the design.

  20. Consciência fonológica e desempenho escolar Phonological awareness and school performance

    Directory of Open Access Journals (Sweden)

    Patrícia Aparecida Zuanetti

    2008-01-01

    Full Text Available PURPOSE: to examine the relationship between schoolchildren's phonological awareness and their academic performance, and to determine whether the children's order of preference for academic tasks matches the order of their actual performance. METHODS: 24 children from a second-grade class of a public elementary school were assessed with an oral phonological awareness test and a school performance test. All students with phonological disorders were excluded. RESULTS: an association between preference order and result was found for the arithmetic and writing tasks, but not for reading. Regarding performance on the phonological awareness tasks, the group with average school performance was more skilled in tasks involving phonemic synthesis, rhyme, phonemic segmentation and phonemic manipulation than the students with lower academic performance. The correlation among the variables studied (phonological awareness, school performance, reading, writing and arithmetic) was strong and positive. CONCLUSIONS: the more developed the phonological awareness, the better the student's performance; rhyme, synthesis, segmentation and phonemic manipulation tasks are the most closely related to literacy. There is an association between preference order and result in the arithmetic and writing tasks.

  1. Video coding for decoding power-constrained embedded devices

    Science.gov (United States)

    Lu, Ligang; Sheinin, Vadim

    2004-01-01

    Low power dissipation and fast processing time are crucial requirements for embedded multimedia devices. This paper presents a video coding technique to decrease the power consumption of a standard video decoder. Coupled with a small dedicated video internal memory cache on the decoder, the technique can substantially decrease the amount of data traffic to the decoder's external memory. A decrease in data traffic to the external memory yields multiple benefits: faster real-time processing and power savings. The encoder, given prior knowledge of the decoder's dedicated video internal memory cache management scheme, regulates its choice of motion compensated predictors to reduce the decoder's external memory accesses. This technique can be used in any standard or proprietary encoder scheme to generate a compliant output bit stream decodable by standard CPU-based and dedicated hardware-based decoders, for power savings with the best quality-power trade-off. Our simulation results show that with a relatively small dedicated video internal memory cache, the technique may decrease the traffic between the CPU and external memory by over 50%.
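
    The encoder-side idea can be sketched as a cost function that penalizes predictor candidates falling outside the modeled decoder cache. All names, weights and the candidate structure below are hypothetical illustrations, not taken from the paper or from any standard codec:

```python
def choose_predictor(candidates, cached_blocks, lambda_mem=0.5):
    """Pick the motion-compensated predictor with the best distortion/memory
    trade-off.

    `candidates` maps a candidate reference-block id to its distortion (e.g.
    SAD); blocks absent from the modeled decoder cache incur an extra
    external-memory penalty weighted by `lambda_mem` (invented weight).
    """
    best, best_cost = None, float("inf")
    for block_id, distortion in candidates.items():
        mem_penalty = 0.0 if block_id in cached_blocks else 1.0
        cost = distortion + lambda_mem * mem_penalty
        if cost < best_cost:
            best, best_cost = block_id, cost
    return best

# A cached block with slightly higher distortion wins over an uncached one.
pick = choose_predictor({"a": 10.0, "b": 10.3}, cached_blocks={"b"})
```

    Tuning the penalty weight trades a small loss in prediction quality for fewer external memory accesses at the decoder, which is the quality-power trade-off described above.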

  2. FY1995 study of low power LSI design automation software with parallel processing; 1995 nendo heiretsu shori wo katsuyoshita shodenryoku LSI muke sekkei jidoka software no kenkyu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The need for low-power LSIs has increased rapidly in recent years. For low-power LSI development, not only new circuit technologies but also new design automation tools supporting those technologies are indispensable. The purpose of this project was to develop new design automation software able to design digital LSIs with much lower power than conventional CMOS LSIs. A new design automation software package for very low power LSIs was developed targeting SPL, a dedicated low-power pass-transistor logic circuit technology. The software includes a logic synthesis function for pass-transistor-based macrocells and a macrocell placement function. Several new algorithms were developed for the software, e.g. binary decision diagram (BDD) construction. Some of them were designed and implemented for parallel processing in order to reduce processing time. The logic synthesis function was tested on a set of benchmarks and finally applied to a low-power CPU design. The designed 8-bit CPU was fully compatible with the Zilog Z-80. Its power dissipation was compared with that of a commercial CMOS Z-80: the new CPU reduced power by up to 82%. In addition, the parallel speed-up of the macrocell placement function was measured, and a 34-fold speed-up was achieved. (NEDO)
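
    A minimal sketch of sequential BDD construction by Shannon expansion with a unique table, the data structure underlying this kind of logic synthesis (illustrative only; the project's parallel BDD algorithms are far more elaborate):

```python
def mk(var, low, high, unique):
    """Create or reuse a BDD node; ids 0 and 1 are the terminal nodes."""
    if low == high:                      # redundant test: skip the node
        return low
    key = (var, low, high)
    if key not in unique:
        unique[key] = len(unique) + 2    # ids 0 and 1 are reserved terminals
    return unique[key]                   # structural sharing via the table

def build(f, var, nvars, unique, env):
    """Shannon-expand boolean function f over variables var..nvars-1.

    `f` takes a list of 0/1 variable assignments; `env` holds the values
    chosen so far on the current expansion path.
    """
    if var == nvars:
        return 1 if f(env) else 0
    low = build(f, var + 1, nvars, unique, env + [0])    # var = 0 branch
    high = build(f, var + 1, nvars, unique, env + [1])   # var = 1 branch
    return mk(var, low, high, unique)

# f(x0, x1) = x0 AND x1 builds a two-node BDD above the terminals.
unique = {}
root = build(lambda e: e[0] and e[1], 0, 2, unique, [])
```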

  3. Transitivity performance, relational hierarchy knowledge and awareness: results of an instructional framing manipulation.

    Science.gov (United States)

    Kumaran, Dharshan; Ludwig, Hans

    2013-12-01

    The transitive inference (TI) paradigm has been widely used to examine the role of the hippocampus in generalization. Here we consider a surprising feature of experimental findings in this task: the relatively poor transitivity performance and levels of hierarchy knowledge achieved by adult human subjects. We focused on the influence of the task instructions on participants' subsequent performance--a single-word framing manipulation which either specified the relation between items as transitive (i.e., OLD-FRAME: choose which item is "older") or left it ambiguous (i.e., NO-FRAME: choose which item is "correct"). We show a marked but highly specific effect of manipulating prior knowledge through instruction: transitivity performance and levels of relational hierarchy knowledge were enhanced, but premise performance was unchanged. Further, we show that hierarchy recall accuracy, but not conventional awareness scores, was a significant predictor of inferential performance across the entire group of participants. The current study has four main implications: first, our findings establish the importance of the task instructions, and prior knowledge, in the TI paradigm--suggesting that they influence the size of the overall hypothesis space (e.g., to favor a linear hierarchical structure over other possibilities in the OLD-FRAME). Second, the dissociable effects of the instructional frame on premise and inference performance provide evidence for the operation of distinct underlying mechanisms (i.e., an associative mechanism vs. relational hierarchy knowledge). Third, our findings suggest that a detailed measurement of hierarchy recall accuracy may be a more sensitive index of relational hierarchy knowledge than the conventional awareness score--and should be used in future studies investigating links between awareness and inferential performance. Finally, our study motivates an experimental setting that ensures robust hierarchy learning across participants.

  4. A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling

    Directory of Open Access Journals (Sweden)

    Roger V Hoang

    2013-10-01

    Full Text Available Computational neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across heterogeneous clusters of CPUs and GPUs.
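
    The built-in LIF model mentioned above can be sketched in a few lines: an Euler-integrated textbook leaky integrate-and-fire neuron with illustrative parameter values, not NCS6's actual implementation.

```python
def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0,
             v_reset=-70.0, r_m=10.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    Membrane dynamics: dv/dt = (-(v - v_rest) + r_m * i_in) / tau.
    On crossing the threshold, the membrane resets and a spike is reported.
    Units and parameter values are illustrative (mV, ms, MΩ, nA).
    """
    v = v + dt * (-(v - v_rest) + r_m * i_in) / tau
    if v >= v_thresh:
        return v_reset, True     # spike: reset the membrane potential
    return v, False

# Drive the neuron with a constant suprathreshold current and count spikes.
v, spikes = -65.0, 0
for _ in range(200):
    v, fired = lif_step(v, i_in=2.0)
    spikes += fired
```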

  5. Awareness structure of the people with opinion that nuclear power is effective for preventing global warming

    International Nuclear Information System (INIS)

    Fukae, Chiyokazu

    2006-01-01

    Most people think that nuclear power generation is not effective for preventing global warming. In this research, the reasons why people think so were investigated using a questionnaire survey. The results show that the misunderstanding that the thermal effluent and radioactive substances produced by a nuclear plant promote global warming has influenced this view. Behind this idea, people hold a negative image of nuclear power. This negative image lowers the evaluation of nuclear power's usefulness for preventing global warming regardless of the presence of the misunderstanding. Owing to the fear that a nuclear plant accident would bring environmental destruction, people judge that nuclear power lacks the capability for environmental preservation; young people in particular hold such awareness. It is necessary to learn about energy and environmental issues, including the merits and demerits of nuclear power, objectively in formal education. (author)

  6. EEG neural oscillatory dynamics reveal semantic and response conflict at difference levels of conflict awareness.

    Science.gov (United States)

    Jiang, Jun; Zhang, Qinglin; Van Gaal, Simon

    2015-07-14

    Although previous work has shown that conflict can be detected in the absence of awareness, it is unknown how different sources of conflict (i.e., semantic, response) are processed in the human brain and whether these processes are differently modulated by conflict awareness. To explore this issue, we extracted oscillatory power dynamics from electroencephalographic (EEG) data recorded while human participants performed a modified version of the Stroop task. Crucially, in this task conflict awareness was manipulated by masking a conflict-inducing color word preceding a color patch target. We isolated semantic from response conflict by introducing four color words/patches, of which two were matched to the same response. We observed that both semantic as well as response conflict were associated with mid-frontal theta-band and parietal alpha-band power modulations, irrespective of the level of conflict awareness (high vs. low), although awareness of conflict increased these conflict-related power dynamics. These results show that both semantic and response conflict can be processed in the human brain and suggest that the neural oscillatory mechanisms in EEG reflect mainly "domain general" conflict processing mechanisms, instead of conflict source specific effects.

  7. EEG neural oscillatory dynamics reveal semantic and response conflict at difference levels of conflict awareness

    Science.gov (United States)

    Jiang, Jun; Zhang, Qinglin; Van Gaal, Simon

    2015-01-01

    Although previous work has shown that conflict can be detected in the absence of awareness, it is unknown how different sources of conflict (i.e., semantic, response) are processed in the human brain and whether these processes are differently modulated by conflict awareness. To explore this issue, we extracted oscillatory power dynamics from electroencephalographic (EEG) data recorded while human participants performed a modified version of the Stroop task. Crucially, in this task conflict awareness was manipulated by masking a conflict-inducing color word preceding a color patch target. We isolated semantic from response conflict by introducing four color words/patches, of which two were matched to the same response. We observed that both semantic as well as response conflict were associated with mid-frontal theta-band and parietal alpha-band power modulations, irrespective of the level of conflict awareness (high vs. low), although awareness of conflict increased these conflict-related power dynamics. These results show that both semantic and response conflict can be processed in the human brain and suggest that the neural oscillatory mechanisms in EEG reflect mainly “domain general” conflict processing mechanisms, instead of conflict source specific effects. PMID:26169473

  8. Accumulo/Hadoop, MongoDB, and Elasticsearch Performance for Semi Structured Intrusion Detection (IDS) Data

    Science.gov (United States)

    2016-11-01

    RHEL patches were applied: • 1 Dell PowerEdge R710 server ○ One 2.26-GHz Xeon 4-core central processing unit (CPU) ○ Two 250-GB, 7,200-RPM ... Hadoop, and Elasticsearch "master" server • 4 Dell PowerEdge R420 servers ○ Two 2.2-GHz Xeon E5-2430 6-core CPUs ○ Four 2-TB, 7,200-RPM SATA drives ...

  9. Thermal Power Plant Performance Analysis

    CERN Document Server

    2012-01-01

    The analysis of the reliability and availability of power plants is frequently based on simple indexes that do not account for the criticality of certain failures. This criticality should be evaluated using reliability concepts that consider the effect of a component failure on the performance of the entire plant. System reliability analysis tools provide a root-cause analysis leading to improvement of the plant maintenance plan. Given that power plant performance can be evaluated not only through thermodynamics-related indexes such as heat rate, Thermal Power Plant Performance Analysis focuses on reliability-based tools used to define the performance of complex systems and introduces the basic concepts of reliability, maintainability and risk analysis, aiming at their application as tools for power plant performance improvement, including: · selection of critical equipment and components, · defini...
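
    The reliability-based indexes such a book builds on can be illustrated with the standard steady-state availability formula A = MTBF / (MTBF + MTTR) and its series-system combination. The sketch below uses invented example numbers, not data from the book:

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR), both in hours."""
    return mtbf_h / (mtbf_h + mttr_h)

def series_availability(units):
    """A series system is up only when every unit is up, so availabilities
    multiply. `units` is a list of (MTBF, MTTR) pairs."""
    a = 1.0
    for mtbf, mttr in units:
        a *= availability(mtbf, mttr)
    return a

# Hypothetical example: a boiler (2,000 h MTBF, 50 h MTTR) feeding a
# turbine (5,000 h MTBF, 100 h MTTR); the plant needs both.
a = series_availability([(2000.0, 50.0), (5000.0, 100.0)])
```

    Note how one critical component with a long repair time drags down the whole plant's availability, which is why criticality-aware analysis matters more than a single plant-level index.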

  10. The Effects of Nutrition Awareness and Knowledge on Health Habits and Performance Among Pharmacy Students in Egypt.

    Science.gov (United States)

    El-Ahmady, Sherweit; El-Wakeel, Lamia

    2017-04-01

    A cross-sectional study was conducted on a group of pharmacy students to assess the relation between the nutritional knowledge and awareness of university students and their nutrition habits and health-related performance and indicators. The students completed a questionnaire designed to cover four health-related topics: nutrition literacy, health awareness, nutritional habits and health-related performance. Responses for each topic were collected, and statistical analysis was performed using GraphPad Prism 5 software, including a measure of gender differences and correlation studies. No significant difference between genders was found in the overall responses, although discrepancies in certain questions were observed. Female students showed higher awareness of nutrition concepts and practices, but poor implementation on their part was observed. The study revealed a positive and significant correlation between health-related performance and nutrition literacy (r = 0.32). Healthier eating habits and lifestyle were associated more with nutrition-conscious students (r = 0.73) than with knowledgeable students (r = 0.56). It was concluded that knowledge alone is not enough to stimulate individuals to practice healthy habits; other interventions are required to raise awareness of the issues at hand.

  11. Performance indicators for power reactors

    International Nuclear Information System (INIS)

    Gillies, C.; White, M.

    1995-11-01

    A review of Canadian and worldwide performance indicator definitions and data was performed to identify a set of indicators that could be used for comparison of performance among nuclear power plants. The results of this review are to be used as input to an AECB team developing a consistent set of performance indicators for measuring Canadian power reactor safety performance. To support the identification of performance indicators, a set of criteria was developed to assess the effectiveness of each indicator for meaningful comparison of performance information. The project identified a recommended set of performance indicators that could be used by AECB staff to compare the performance of Canadian nuclear power plants among themselves, and with international performance. The basis for selection of the recommended set and exclusion of others is provided. This report provides definitions and calculation methods for each recommended performance indicator. In addition, a spreadsheet has been developed for comparison and trending for the recommended set of indicators. Example trend graphs are included to demonstrate the use of the spreadsheet. (author). 50 refs., 11 tabs., 3 figs

  12. A study on people's awareness about the restarting and decommissioning of nuclear power plants

    International Nuclear Information System (INIS)

    Goto, Manabu; Sakai, Yukimi

    2015-01-01

    In this study, we conducted two questionnaire surveys targeting a total of 918 respondents living in the cities of Kyoto, Osaka and Kobe, in order to elucidate people's awareness of three things: 1) restart of nuclear power plants; 2) extension of the operation period of aging plants; and 3) decommissioning. The results are as follows: 1) People who think that electrical power companies voluntarily take higher safety measures trust the power companies and do not oppose the restart of the nuclear power plants, as compared to people who think that power companies only meet the requirements set by the nuclear regulatory agency. 2) When people were given information about aging measures and conformance to new regulatory standards, their anxiety toward the operation of aging plants was reduced. 3) People thought that decommissioning work was important for society; however, only a small number thought of it as a job worth doing. (author)

  13. Fuzzy logic based power-efficient real-time multi-core system

    CERN Document Server

    Ahmed, Jameel; Najam, Shaheryar; Najam, Zohaib

    2017-01-01

    This book focuses on identifying the performance challenges involved in computer architectures and optimal configuration settings, and on analysing their impact on the performance of multi-core architectures. Proposing a power- and throughput-aware fuzzy-logic-based reconfiguration for Multi-Processor Systems on Chip (MPSoCs) in both simulation and real-time environments, it is divided into two major parts. The first part deals with the simulation-based power- and throughput-aware fuzzy logic reconfiguration for multi-core architectures, presenting the results of a detailed analysis of the factors impacting the power consumption and performance of MPSoCs. In turn, the second part highlights the real-time implementation of fuzzy-logic-based power-efficient reconfigurable multi-core architectures for Intel and LEON3 processors.

  14. Performances of Occupational Therapy in the museum context: awareness of the diversity

    Directory of Open Access Journals (Sweden)

    Desirée Nobre Salasar

    2016-01-01

    Full Text Available The paper discusses the occupational therapist's actions in a museum. We approach issues that characterize the museum environment, presenting it as a possible workplace for these professionals, and discuss which activities can be performed by an occupational therapist in a museum, their relevance, and the achievement gap when they are performed by an occupational therapist rather than by other professionals. The study's main objective is to present a new occupational therapy work field, highlighting the importance of public awareness activities and how these can influence the museum visitor experience. We report two distinct activities on the theme of visual-impairment awareness, conducted between February and March 2015 at the Batalha Community Museum in Portugal. We describe the activities and analyze the results, seeking to qualitatively assess public participation and its response to the impact that such activities may have on the cultural inclusion of visually impaired people.

  15. Interrelationship between a brand’s environmental embeddedness, brand awareness and company performance

    Directory of Open Access Journals (Sweden)

    Ivana First

    2007-07-01

    Full Text Available The purpose of the research was to define and measure a brand’s environmental embeddedness which, unlike the constructs used in previous environmental research, measures the extent to which brand identity is embedded in environmental values. The purpose was also to assess the correlation of this variable with brand awareness and company performance. This study is based on three overlapping theoretical backgrounds: brand management, corporate social responsibility and organizational culture. Secondary data as well as content analysis and survey-based primary data were used. The results indicate a correlation between the environmental embeddedness of brands and brand awareness, but no correlation of these two variables to company performance. Such results support the idea that companies should indeed invest in being environmentally friendly so as to increase their chances of being recognized, while also indicating that companies that own strong brands cannot afford not to be environmentally conscious, as that could hurt their corporate brand values.

  16. National Latino AIDS Awareness Day

    Centers for Disease Control (CDC) Podcasts

    This podcast highlights National Latino AIDS Awareness Day, to increase awareness of the disproportionate impact of HIV on the Hispanic or Latino population in the United States and dependent territories. The podcast reminds Hispanics or Latinos that they have the power to take control of their health and protect themselves against HIV.

  17. Meta-Cognitive Awareness of Writing Strategy Use among Iranian EFL Learners and Its Impact on Their Writing Performance

    Directory of Open Access Journals (Sweden)

    Muhammad Azizi

    2017-03-01

    Full Text Available It is believed that by improving students’ meta-cognitive awareness of elements of language, learning can be enhanced. Therefore, this study consisted of two main objectives. First, it aimed at examining meta-cognitive awareness of writing strategy use among Iranian EFL learners. Using a Friedman test to check if there was any significant difference among the participants in their use of writing strategies, it was found that the differences among the strategies were not significant. The second objective of the study was to examine the impact of the participants’ meta-cognitive awareness of writing strategy use on their L2 writing performance. This was answered using two statistical techniques, namely Pearson correlation and Multiple Regression. Using Pearson correlation, it was found that there was a significant relationship between writing performance and all writing strategy categories (planning, monitoring, evaluation, and self-awareness). Moreover, using Multiple Regression, it was found that the p-value was significant only for the evaluation strategy category, but not for the rest. That is, it was found that strategy categories such as planning, monitoring, and self-awareness did not predict students’ writing performance. The result of this study responds to the ongoing problems students have in their meta-cognitive awareness of writing strategy use, which can contribute to raising proficiency levels in shorter time frames.

  18. A microprocessor-based power control data acquisition system

    International Nuclear Information System (INIS)

    Greenberg, S.

    1982-10-01

    The project reported here deals with one of the aspects of power plant control and management. In order to perform optimal distribution of power and load switching, one has to solve a specific optimization problem. Solving this problem requires collecting current and power expenditure data from a large number of channels and having them processed. This procedure is defined as data acquisition and constitutes the main topic of this project. A microprocessor-based data acquisition system for power management is investigated and developed. The current and power data of about 100 analog channels are sampled and collected in real time. These data are subsequently processed to calculate the power factor (cos phi) for each channel and the maximum demand. The data are processed by an AMD 9511 Arithmetic Processing Unit and the whole system is controlled by an Intel 8080A CPU. All this information is then transferred to a universal computer through a synchronized communication channel. The optimization computations would be performed by the high-level computer. Different ways of performing the search of data over a large number of channels have been investigated. A particular solution to overcome the gain and offset drift of the A/D converter, using software, has been proposed. The 8080A supervises the collection and routing of data in real time, while the 9511 performs calculations using these data. (Author)

  19. Simulation of small-angle scattering patterns using a CPU-efficient algorithm

    Science.gov (United States)

    Anitas, E. M.

    2017-12-01

    Small-angle scattering (of neutrons, X-rays or light; SAS) is a well-established experimental technique for structural analysis of disordered systems at nano and micro scales. For complex systems, such as super-molecular assemblies or protein molecules, analytic solutions of the SAS intensity are generally not available. Thus, a frequent approach to simulating the corresponding patterns is to use a CPU-efficient version of the Debye formula. For this purpose, in this paper we implement the well-known DALAI algorithm in the Mathematica software. We present calculations for a series of 2D Sierpinski gaskets and of pentaflakes, obtained from chaos game representation.
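
    The Debye formula underlying such simulations can be sketched directly. Below is a naive O(N²) Python version with unit form factors (not the CPU-efficient DALAI variant the record describes), and the small square lattice of scatterers is only an illustrative stand-in for a fractal point set:

```python
import math

def debye_intensity(points, q, f=1.0):
    """Naive Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij)."""
    n = len(points)
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = math.dist(points[i], points[j])
            x = q * r
            # sin(x)/x -> 1 as x -> 0, which also covers the i == j terms
            total += f * f * (1.0 if x == 0.0 else math.sin(x) / x)
    return total

# Scatterers on a tiny 2D lattice (stand-in for a Sierpinski-gasket point set)
pts = [(x, y) for x in range(3) for y in range(3)]
print(debye_intensity(pts, q=0.01))  # ~N^2 = 81 in the forward-scattering limit
```

    The quadratic cost in the number of scatterers is exactly what motivates CPU-efficient reformulations such as DALAI for large fractal point sets.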

  20. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, the performance of advanced computer technologies was compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the

  1. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  2. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    International Nuclear Information System (INIS)

    Xu, Chuanfu; Deng, Xiaogang; Zhang, Lilun; Fang, Jianbin; Wang, Guangxue; Jiang, Yi; Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua

    2014-01-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  3. Architectural and compiler techniques for energy reduction in high-performance microprocessors

    Science.gov (United States)

    Bellas, Nikolaos

    1999-11-01

    The microprocessor industry has started viewing power, along with area and performance, as a decisive design factor in today's microprocessors. The increasing cost of packaging and cooling systems poses stringent requirements on the maximum allowable power dissipation. Most of the research in recent years has focused on the circuit, gate, and register-transfer (RT) levels of the design. In this research, we focus on the software running on a microprocessor and we view the program as a power consumer. Our work concentrates on the role of the compiler in the construction of "power-efficient" code, and especially its interaction with the hardware so that unnecessary processor activity is saved. We propose techniques that use extra hardware features and compiler-driven code transformations that specifically target activity reduction in certain parts of the CPU which are known to be large power and energy consumers. Design for low power/energy at this level of abstraction entails larger energy gains than in the lower stages of the design hierarchy in which the design team has already made the most important design commitments. The role of the compiler in generating code which exploits the processor organization is also fundamental in energy minimization. Hence, we propose a hardware/software co-design paradigm, and we show what code transformations are necessary by the compiler so that "wasted" power in a modern microprocessor can be trimmed. More specifically, we propose a technique that uses an additional mini cache located between the instruction cache (I-Cache) and the CPU core; the mini cache buffers instructions that are nested within loops and are continuously fetched from the I-Cache. This mechanism can create very substantial energy savings, since the I-Cache unit is one of the main power consumers in most of today's high-performance microprocessors. Results are reported for the SPEC95 benchmarks in the R-4400 processor which implements the MIPS2 instruction
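
    As a rough illustration of why such a loop buffer saves I-Cache activity, one can simulate the hit rate of a tiny direct-mapped buffer on an instruction-address trace. The sizes and the trace below are invented for illustration and are not taken from the dissertation:

```python
def mini_cache_hit_rate(trace, lines=16, line_words=4):
    """Fraction of instruction fetches served by a tiny direct-mapped buffer."""
    tags = [None] * lines            # one block tag per buffer line
    hits = 0
    for addr in trace:
        block = addr // line_words   # block number of this instruction word
        idx = block % lines          # direct-mapped placement
        if tags[idx] == block:
            hits += 1
        else:
            tags[idx] = block        # fill on miss (energy is spent in the I-Cache)
    return hits / len(trace)

# A tight 8-instruction loop executed 1000 times fits entirely in the buffer,
# so almost every fetch avoids the larger, power-hungry I-Cache.
loop_trace = list(range(8)) * 1000
print(mini_cache_hit_rate(loop_trace))
```

    On this loop-heavy trace the buffer serves well over 99% of fetches, which is the scenario where the proposed mini cache trims I-Cache energy the most; straight-line code with no reuse would gain nothing.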

  4. An Energy-Aware Runtime Management of Multi-Core Sensory Swarms

    Directory of Open Access Journals (Sweden)

    Sungchan Kim

    2017-08-01

    Full Text Available In sensory swarms, minimizing energy consumption under performance constraint is one of the key objectives. One possible approach to this problem is to monitor application workload that is subject to change at runtime, and to adjust system configuration adaptively to satisfy the performance goal. As today’s sensory swarms are usually implemented using multi-core processors with adjustable clock frequency, we propose to monitor the CPU workload periodically and adjust the task-to-core allocation or clock frequency in an energy-efficient way in response to the workload variations. In doing so, we present an online heuristic that determines the most energy-efficient adjustment that satisfies the performance requirement. The proposed method is based on a simple yet effective energy model that is built upon performance prediction using IPC (instructions per cycle) measured online and a power equation derived empirically. The use of IPC accounts for the memory intensity of a given workload, enabling accurate prediction of execution time. Hence, the model allows us to rapidly and accurately estimate the effect of the two control knobs, clock frequency adjustment and core allocation. The experiments show that the proposed technique delivers considerable energy saving of up to 45% compared to the state-of-the-art multi-core energy management technique.
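
    The core of such a heuristic, predicting execution time from measured IPC and searching the frequency/core settings for the least modeled energy that still meets the deadline, can be sketched as follows. The cubic dynamic-power law, the constant static term, and linear scaling with core count are simplifying assumptions for illustration, not the paper's empirically derived model:

```python
def pick_config(instructions, ipc, freqs_ghz, max_cores, deadline_s, p_static=0.5):
    """Choose (frequency, cores) meeting the deadline with least modeled energy.

    Predicted time:  instructions / (ipc * f_hz * cores), assuming linear scaling.
    Modeled power:   cores * f_ghz**3 + p_static  (illustrative, not measured).
    """
    best = None
    for f in freqs_ghz:
        for cores in range(1, max_cores + 1):
            t = instructions / (ipc * f * 1e9 * cores)
            if t > deadline_s:
                continue  # violates the performance constraint
            energy = (cores * f ** 3 + p_static) * t
            if best is None or energy < best[0]:
                best = (energy, f, cores)
    return best  # (modeled energy, frequency in GHz, cores), or None if infeasible

print(pick_config(2e9, ipc=1.5, freqs_ghz=[0.6, 1.0, 1.4], max_cores=4, deadline_s=1.0))
```

    Here a workload of 2 billion instructions at IPC 1.5 is best served by the lowest frequency spread across all four cores: the static term makes finishing earlier on more cores cheaper than stretching the job on fewer.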

  5. Implementing Molecular Dynamics for Hybrid High Performance Computers - 1. Short Range Forces

    International Nuclear Information System (INIS)

    Brown, W. Michael; Wang, Peng; Plimpton, Steven J.; Tharrington, Arnold N.

    2011-01-01

    The use of accelerators such as general-purpose graphics processing units (GPGPUs) has become popular in scientific computing applications due to their low cost, impressive floating-point capabilities, high memory bandwidth, and low electrical power requirements. Hybrid high performance computers, machines with more than one type of floating-point processor, are now becoming more prevalent due to these advantages. In this work, we discuss several important issues in porting a large molecular dynamics code for use on parallel hybrid machines - (1) choosing a hybrid parallel decomposition that works on central processing units (CPUs) with distributed memory and accelerator cores with shared memory, (2) minimizing the amount of code that must be ported for efficient acceleration, (3) utilizing the available processing power from both many-core CPUs and accelerators, and (4) choosing a programming model for acceleration. We present our solution to each of these issues for short-range force calculation in the molecular dynamics package LAMMPS. We describe algorithms for efficient short-range force calculation on hybrid high performance machines. We describe a new approach for dynamic load balancing of work between CPU and accelerator cores. We describe the Geryon library that allows a single code to compile with both CUDA and OpenCL for use on a variety of accelerators. Finally, we present results on a parallel test cluster containing 32 Fermi GPGPUs and 180 CPU cores.
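
    Dynamic load balancing between CPU and accelerator cores is often driven by timing feedback: measure how long each device took on its share, then shift the split toward the point where both finish together. The sketch below illustrates that feedback loop only; the fixed 3x GPU speed and the gain constant are invented for the example and are not LAMMPS's actual scheme:

```python
def rebalance(split, t_cpu, t_gpu, gain=0.5):
    """Move the GPU work fraction toward the value that equalizes device times.

    split: current fraction of work on the GPU; t_cpu/t_gpu: measured times for
    their respective shares. The ideal split makes both devices finish together.
    """
    rate_gpu = split / t_gpu           # work completed per second on each device
    rate_cpu = (1 - split) / t_cpu
    target = rate_gpu / (rate_gpu + rate_cpu)
    return split + gain * (target - split)

split = 0.5
for _ in range(20):                    # pretend the GPU is 3x faster than the CPU
    t_gpu = split / 3.0
    t_cpu = (1 - split) / 1.0
    split = rebalance(split, t_cpu, t_gpu)
print(round(split, 3))  # → 0.75
```

    With a 3:1 throughput ratio the split settles at 3/4 of the work on the GPU, and because the update uses measured times each step, it tracks drifting ratios (e.g. as particle density changes) without any offline calibration.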

  6. Object-Oriented Economic Power Dispatch of Electrical Power System with minimum pollution using a Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    T. Bouktir

    2005-06-01

    Full Text Available This paper presents a solution of the optimal power flow (OPF) problem of an electrical power system via a genetic algorithm of real type. The objective is to minimize the total fuel cost of generation and the environmental pollution caused by fossil-based thermal generating units, and also to maintain an acceptable system performance in terms of limits on generator real and reactive power outputs, bus voltages, shunt capacitors/reactors, transformer tap-settings and power flow of transmission lines. CPU times can be reduced by decomposing the optimization constraints into active constraints that directly affect the cost function, manipulated directly by the GA, and passive constraints such as generator bus voltages and transformer tap settings, maintained within their soft limits using a conventional constrained load flow. The algorithm was developed in an object-oriented fashion, in the C++ programming language. This option satisfies the requirements of flexibility, extensibility, maintainability and data integrity. The economic power dispatch is applied to the IEEE 30-bus model system (6 generators, 41 lines and 20 loads). The numerical results demonstrate the effectiveness of the stochastic search algorithm, as it can provide accurate dispatch solutions within reasonable time. Further analyses indicate that this method is effective for large-scale power systems.
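
    The flavor of a real-coded GA for economic dispatch can be sketched on a toy two-generator problem, where the demand-balance constraint plays the role of an active constraint and is enforced with a penalty term. The cost coefficients, operators and parameters below are invented for illustration and are far simpler than the paper's full OPF formulation:

```python
import random

def dispatch_cost(p):
    """Toy quadratic fuel costs for two generators ($/h)."""
    coeffs = [(0.004, 5.3, 500), (0.006, 5.5, 400)]   # (quadratic, linear, fixed)
    return sum(a * x * x + b * x + c for (a, b, c), x in zip(coeffs, p))

def ga_dispatch(demand=300.0, pmax=250.0, pop=40, gens=200, seed=1):
    rng = random.Random(seed)
    def fitness(ind):  # fuel cost plus a heavy penalty for demand mismatch
        return dispatch_cost(ind) + 1e4 * abs(sum(ind) - demand)
    population = [[rng.uniform(0, pmax) for _ in range(2)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        parents = population[: pop // 2]           # elitist selection
        children = []
        while len(children) < pop - len(parents):
            m, f = rng.sample(parents, 2)
            w = rng.random()                       # arithmetic crossover
            child = [w * x + (1 - w) * y for x, y in zip(m, f)]
            if rng.random() < 0.3:                 # Gaussian mutation, clipped
                i = rng.randrange(2)
                child[i] = min(pmax, max(0.0, child[i] + rng.gauss(0, 5)))
            children.append(child)
        population = parents + children
    return min(population, key=fitness)

best = ga_dispatch()
print(best, dispatch_cost(best))
```

    Because chromosomes hold the real-valued generator outputs directly, no encoding/decoding step is needed; the penalty weight dominates the fuel cost, so the search first becomes feasible and then minimizes cost along the demand-balance line.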

  7. Understanding the Performance of Low Power Raspberry Pi Cloud for Big Data

    Directory of Open Access Journals (Sweden)

    Wajdi Hajji

    2016-06-01

    Full Text Available Nowadays, Internet-of-Things (IoT) devices generate data at high speed and large volume. Often the data require real-time processing to support high system responsiveness, which can be supported by localised Cloud and/or Fog computing paradigms. However, there are considerably large deployments of IoT, such as sensor networks in remote areas where Internet connectivity is sparse, challenging the localised Cloud and/or Fog computing paradigms. With the advent of the Raspberry Pi, a credit card-sized single board computer, there is a great opportunity to construct a low-cost, low-power portable cloud to support real-time data processing next to IoT deployments. In this paper, we extend our previous work on constructing a Raspberry Pi Cloud to study its feasibility for real-time big data analytics under realistic application-level workload in both native and virtualised environments. We have extensively tested the performance of a single node Raspberry Pi 2 Model B with httperf and a cluster of 12 nodes with Apache Spark and HDFS (Hadoop Distributed File System). Our results have demonstrated that our portable cloud is useful for supporting real-time big data analytics. On the other hand, our results have also unveiled that the overhead for CPU-bound workload in the virtualised environment is surprisingly high, at 67.2%. We have found that, for big data applications, the virtualisation overhead is fractional for small jobs but becomes more significant for large jobs, up to 28.6%.

  8. GPScheDVS: A New Paradigm of the Autonomous CPU Speed Control for Commodity-OS-based General-Purpose Mobile Computers with a DVS-friendly Task Scheduling

    OpenAIRE

    Kim, Sookyoung

    2008-01-01

    This dissertation studies the problem of increasing battery life-time and reducing CPU heat dissipation without degrading system performance in commodity-OS-based general-purpose (GP) mobile computers using the dynamic voltage scaling (DVS) function of modern CPUs. The dissertation especially focuses on the impact of task scheduling on the effectiveness of DVS in achieving this goal. The task scheduling mechanism used in most contemporary general-purpose operating systems (GPOS) prioritizes t...

  9. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computation power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of research has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  10. Consideration of Command and Control Performance during Accident Management Process at the Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Nisrene M. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Sok Chul [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-10-15

    The accident at the Fukushima Daiichi nuclear power plants shifted the nuclear safety paradigm from risk management to on-site management capability during a severe accident. The kernel of on-site management capability during an accident at a nuclear power plant is situation awareness and agility of command and control. However, little consideration has been given to accident management. After the events of September 11, 2001 and the catastrophic Fukushima nuclear disaster, agility of command and control has emerged as a significant element of effective and efficient accident management, with many studies emphasizing accident management strategies, particularly the man-machine interface, which is considered to play a key role in ensuring nuclear power plant safety under severe accident conditions. This paper proposes a conceptual model for evaluating command and control performance during the accident management process at a nuclear power plant. Communication and information processing while responding to an accident are among the key issues in mitigating it. This model will give guidelines for accurate and fast communication response during accident conditions.

  11. Minimizing cache misses in an event-driven network server: A case study of TUX

    DEFF Research Database (Denmark)

    Bhatia, Sapan; Consel, Charles; Lawall, Julia Laetitia

    2006-01-01

    We analyze the performance of CPU-bound network servers and demonstrate experimentally that the degradation in the performance of these servers under high-concurrency workloads is largely due to inefficient use of the hardware caches. We then describe an approach to speeding up event-driven network servers by optimizing their use of the L2 CPU cache in the context of the TUX Web server, known for its robustness to heavy load. Our approach is based on a novel cache-aware memory allocator and a specific scheduling strategy that together ensure that the total working data set of the server stays
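
    The effect such cache-aware scheduling exploits can be demonstrated with a toy LRU simulation: processing events batched by pipeline stage keeps each stage's code lines resident, whereas walking each connection through all stages thrashes them. The line counts and pipeline shape below are invented for illustration, not taken from TUX:

```python
from collections import OrderedDict

def lru_misses(accesses, capacity=4):
    """Count misses for a fully-associative LRU cache of `capacity` lines."""
    cache = OrderedDict()
    miss = 0
    for line in accesses:
        if line in cache:
            cache.move_to_end(line)      # refresh recency on a hit
        else:
            miss += 1
            cache[line] = None
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used line
    return miss

CONNS, STAGES, CODE_LINES = 8, 3, 3
def event(conn, stage):  # one event touches its stage's code plus the connection's data
    return [("code", stage, i) for i in range(CODE_LINES)] + [("data", conn)]

per_conn = [l for c in range(CONNS) for s in range(STAGES) for l in event(c, s)]
per_stage = [l for s in range(STAGES) for c in range(CONNS) for l in event(c, s)]
print(lru_misses(per_conn), lru_misses(per_stage))  # → 80 33
```

    Batching by stage cuts misses from 80 to 33 here because the shared stage code is fetched once per batch instead of once per connection, which is the intuition behind keeping the server's working set cache-resident.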

  12. Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.

    Science.gov (United States)

    Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin

    2014-10-01

    High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and Intel Xeon Phi (MIC). These processors have made a tremendous computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver a very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also experimentally show that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
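
    A minimal version of the earliest-finish-time idea that HEFT and such performance-aware schedulers build on can be sketched for independent tasks on heterogeneous devices. The cost matrix below is invented for illustration; the paper's strategies additionally account for task dependencies and sensitivity to inaccurate cost estimates:

```python
def schedule_eft(task_costs, devices):
    """Greedily place each task on the device where it finishes earliest.

    task_costs[t][d]: runtime of task t on device d (e.g. CPU, GPU, MIC).
    Returns (assignment, makespan) for independent tasks.
    """
    busy = [0.0] * devices              # time at which each device becomes free
    assignment = []
    for costs in task_costs:
        # pick the device minimizing this task's finish time
        d = min(range(devices), key=lambda d: busy[d] + costs[d])
        busy[d] += costs[d]
        assignment.append(d)
    return assignment, max(busy)

# 4 independent tasks on 2 devices; device 1 (say, a GPU) is fast for the
# first two tasks, device 0 (a CPU) for the last two.
costs = [[4.0, 1.0], [4.0, 1.0], [2.0, 3.0], [2.0, 3.0]]
print(schedule_eft(costs, 2))  # → ([1, 1, 0, 0], 4.0)
```

    Each task lands on the device where it runs fastest given current queues, so heterogeneous speedups are exploited automatically; the accuracy of the per-device cost estimates is exactly the "scheduling input data" whose inaccuracy the paper studies.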

  13. EEG neural oscillatory dynamics reveal semantic and response conflict at difference levels of conflict awareness

    OpenAIRE

    Jun Jiang; Qinglin Zhang; Simon Van Gaal

    2015-01-01

    Although previous work has shown that conflict can be detected in the absence of awareness, it is unknown how different sources of conflict (i.e., semantic, response) are processed in the human brain and whether these processes are differently modulated by conflict awareness. To explore this issue, we extracted oscillatory power dynamics from electroencephalographic (EEG) data recorded while human participants performed a modified version of the Stroop task. Crucially, in this task conflict a...

  14. The Importance of Strength and Power on Key Performance Indicators in Elite Youth Soccer.

    Science.gov (United States)

    Wing, Christopher E; Turner, Anthony N; Bishop, Chris J

    2018-01-24

    The purpose of this investigation was to examine the importance of strength and power in relation to key performance indicators (KPIs) within competitive soccer match play. This was achieved using an experimental approach in which fifteen subjects were recruited from a professional soccer club's scholarship squad during the 2013/14 season. Following anthropometric measures, power and strength were assessed across a range of tests which included the squat jump (SJ), countermovement jump (CMJ), 20 metre (m) sprint and arrowhead change-of-direction test. A predicted 1-repetition maximum (RM) was also obtained for strength by performing a 3RM test for both the back squat and bench press, and a total score of athleticism (TSA) was provided by summing z-scores for all fitness tests together, providing one complete score for athleticism. Performance analysis data were collected during 16 matches for the following KPIs: passing, shooting, dribbling, tackling and heading. Alongside this, data concerning player ball involvements (touches) were recorded. Results showed that there was a significant correlation (p soccer performance, particularly when players are required to win duels of a physical nature. There were no other relationships found between the fitness data and the KPIs recorded during match play, which may indicate that other aspects of players' development such as technical skill, cognitive function and sensory awareness are more important for soccer-specific performance.

  15. Energy-aware design of digital systems

    Energy Technology Data Exchange (ETDEWEB)

    Gruian, F.

    2000-02-01

    Power and energy consumption are important issues in many digital applications, for reasons such as packaging cost and battery life-span. With the development of portable computing and communication, an increasing number of research groups are addressing power- and energy-related issues at various stages during the design process. Most of the work done in this area focuses on lower abstraction levels, such as the gate or transistor level. Ideally, a power- and energy-efficient design flow should consider power and energy issues at every stage in the design process; therefore, power- and energy-aware methods applicable early in the design process are required. Following this trend, the thesis presents two high-level design methods addressing power and energy consumption minimization. The first of the two approaches targets power consumption minimization during behavioral synthesis. This is carried out by minimizing the switching activity while taking the correlations between signals into account. The second approach performs energy consumption minimization during system-level design, by choosing the most energy-efficient schedule and configuration of resources. Both methods make use of the constraint programming paradigm to model the problems in an elegant manner. The experimental results presented in this thesis show the impact of addressing power- and energy-related issues early in the design process.

  16. GE PETtrace RF power failures related to poor power quality

    OpenAIRE

    Bender, B. R.; Erdahl, C. E.; Dick, D. W.

    2015-01-01

    Introduction Anyone who has ever overseen the installation of a new cyclotron is aware of the importance of addressing the numerous vendor-supplied site specifications prior to its arrival. If the site is not adequately prepared, the facility may face project cost overruns, poor cyclotron performance and unintended maintenance costs. Once a facility has identified the space, providing sufficient power is the next step. Every cyclotron vendor will provide you with a set of power specificati...

  17. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed depending on the matrix size and the number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
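The Cholesky-based solve at the heart of the method can be illustrated on the CPU with a few lines of NumPy (a toy dense stand-in: the actual implementation targets large sparse FEM matrices and uses CUDA libraries on the GPU):

```python
import numpy as np

# Toy dense stand-in for the symmetric positive-definite FEM matrix;
# the real SLAE is large and sparse, and the GPU path would use CUDA
# libraries rather than NumPy.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

L = np.linalg.cholesky(A)     # A = L @ L.T (factorization, O(n^3))
y = np.linalg.solve(L, b)     # forward substitution: L y = b
x = np.linalg.solve(L.T, y)   # back substitution:    L.T x = y

assert np.allclose(A @ x, b)  # x solves the original system
```

The factorization dominates the cost, which is why the paper measures it separately from the triangular solves when comparing CPU, GPU, and heterogeneous CPU-GPU runs.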

  18. Schoolteachers' awareness about scholastic performance and nutritional status of Egyptian schoolchildren.

    Science.gov (United States)

    Galal, Osman M; Ismail, Ibrahim; Gohar, Azza S; Foster, Zoë

    2005-06-01

    Malnutrition disorders affect more than 30% of schoolchildren in Egypt. This problem appears to be largely attributable to poor dietary quality and micronutrient deficiencies, such as iron and vitamin A. Inadequate nutrition intake has important implications because malnutrition has been shown to negatively affect the cognitive development of primary schoolchildren. This study assesses the awareness of schoolteachers about the impact of malnutrition on the scholastic performance of primary schoolchildren living in Egypt. Two focus group discussions were conducted with Egyptian schoolteachers from the Quena and Kharbia Governorates. The study indicates that schoolteachers consider low body weight and thinness as the primary signs of malnutrition. They do not prioritize malnutrition as a factor for poor scholastic performance. They also suggest that unhealthful eating habits, especially a lack of breakfast, negatively affect children's interaction with schoolteachers and their ability to excel in their studies. Schoolteachers endorse a more reliable and nutritionally valuable school-feeding program as a way to increase the scholastic performance of their students. The teachers advocate developing integrated programs between the Ministry of Education, the Ministry of Health and Population, teachers, children, and parents that provide nutrition education. A lack of awareness among teachers about the relationship of nutrition and cognitive function can lead to the misdiagnosis or delayed management of malnourished and scholastically challenged schoolchildren. This paper suggests that proper school-feeding programs and nutrition education programs, which integrate government ministries, teachers, children and parents, should be developed to improve the physical and cognitive health status of Egyptian schoolchildren.

  19. Joint Optimized CPU and Networking Control Scheme for Improved Energy Efficiency in Video Streaming on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Sung-Woong Jo

    2017-01-01

    Full Text Available Video streaming service is one of the most popular applications for mobile users. However, mobile video streaming services consume a lot of energy, resulting in a reduced battery life. This is a critical problem that results in a degraded user's quality of experience (QoE). Therefore, in this paper, a joint optimization scheme that controls both the central processing unit (CPU) and wireless networking of the video streaming process for improved energy efficiency on mobile devices is proposed. For this purpose, the energy consumption of the network interface and CPU is analyzed, and based on the energy consumption profile a joint optimization problem is formulated to maximize the energy efficiency of the mobile device. The proposed algorithm adaptively adjusts the number of chunks to be downloaded and decoded in each packet. Simulation results show that the proposed algorithm can effectively improve the energy efficiency when compared with existing algorithms.

  20. Awareness of deficits in traumatic brain injury: a multidimensional approach to assessing metacognitive knowledge and online-awareness.

    LENUS (Irish Health Repository)

    O'Keeffe, Fiadhnait

    2007-01-01

    Recent models of impaired awareness in brain injury draw a distinction between metacognitive knowledge of difficulties and online awareness of errors (emergent and anticipatory). We examined performance of 31 Traumatic Brain Injury (TBI) participants and 31 healthy controls using a three-strand approach to assessing awareness. Metacognitive knowledge was assessed with an awareness interview and discrepancy scores on three questionnaires--Patient Competency Rating Scale, Frontal Systems Behavioral Scale and the Cognitive Failures Questionnaire. Online Emergent Awareness was assessed using an online error-monitoring task while participants performed tasks of sustained attention. Online anticipatory awareness was examined using prediction performance on two cognitive tasks. Results indicated that the TBI Low Self-Awareness (SA) group and High SA group did not differ in terms of severity, chronicity or standard neuropsychological tasks but those with Low SA were more likely to exhibit disinhibition, interpersonal problems and more difficulties in total competency. Sustained attention abilities were associated with both types of online awareness (emergent and anticipatory). There was a strong relationship between online emergent and online anticipatory awareness. Metacognitive knowledge did not correlate with the other two measures. This study highlights the necessity in adopting a multidimensional approach to assessing the multifaceted phenomenon of awareness of deficits.

  1. Energy and Power Measurements for Network Coding in the Context of Green Mobile Clouds

    DEFF Research Database (Denmark)

    Paramanathan, Achuthan; Pedersen, Morten Videbæk; Roetter, Daniel Enrique Lucani

    2013-01-01

    This paper presents an in-depth power and energy measurement campaign for inter- and intra-session network coding enabled communication in mobile clouds. The measurements are carried out on different commercial platforms with focus on routers and mobile phones with different CPU capabilities. Our results for inter-session network coding in Open-Mesh routers underline that the energy invested in performing network coding pays off by dramatically reducing the total energy for the transmission of data over wireless links. We also show measurements for intra-session network coding in three different...

  2. Power affects performance when the pressure is on: evidence for low-power threat and high-power lift.

    Science.gov (United States)

    Kang, Sonia K; Galinsky, Adam D; Kray, Laura J; Shirako, Aiwa

    2015-05-01

    The current research examines how power affects performance in pressure-filled contexts. We present low-power-threat and high-power-lift effects, whereby performance in high-stakes situations suffers or is enhanced depending on one's power; that is, the power inherent to a situational role can produce effects similar to stereotype threat and lift. Three negotiations experiments demonstrate that role-based power affects outcomes but only when the negotiation is diagnostic of ability and, therefore, pressure-filled. We link these outcomes conceptually to threat and lift effects by showing that (a) role power affects performance more strongly when the negotiation is diagnostic of ability and (b) underperformance disappears when the low-power negotiator has an opportunity to self-affirm. These results suggest that stereotype threat and lift effects may represent a more general phenomenon: When the stakes are raised high, relative power can act as either a toxic brew (stereotype/low-power threat) or a beneficial elixir (stereotype/high-power lift) for performance. © 2015 by the Society for Personality and Social Psychology, Inc.

  3. SpaceCubeX: A Framework for Evaluating Hybrid Multi-Core CPU FPGA DSP Architectures

    Science.gov (United States)

    Schmidt, Andrew G.; Weisz, Gabriel; French, Matthew; Flatley, Thomas; Villalpando, Carlos Y.

    2017-01-01

    The SpaceCubeX project is motivated by the need for high performance, modular, and scalable on-board processing to help scientists answer critical 21st century questions about global climate change, air quality, ocean health, and ecosystem dynamics, while adding new capabilities such as low-latency data products for extreme event warnings. These goals translate into on-board processing throughput requirements that are on the order of 100-1,000 times more than those of previous Earth Science missions for standard processing, compression, storage, and downlink operations. To study possible future architectures to achieve these performance requirements, the SpaceCubeX project provides an evolvable testbed and framework that enables a focused design space exploration of candidate hybrid CPU/FPGA/DSP processing architectures. The framework includes ArchGen, an architecture generator tool populated with candidate architecture components, performance models, and IP cores, that allows an end user to specify the type, number, and connectivity of a hybrid architecture. The framework requires minimal extensions to integrate new processors, such as the anticipated High Performance Spaceflight Computer (HPSC), reducing time to initiate benchmarking by months. To evaluate the framework, we leverage a wide suite of high performance embedded computing benchmarks and Earth science scenarios to ensure robust architecture characterization. We report on our project's Year 1 efforts and demonstrate the capabilities across four simulation testbed models: a baseline SpaceCube 2.0 system, a dual ARM A9 processor system, a hybrid quad ARM A53 and FPGA system, and a hybrid quad ARM A53 and DSP system.

  4. ENERGY AWARE NETWORK: BAYESIAN BELIEF NETWORKS BASED DECISION MANAGEMENT SYSTEM

    Directory of Open Access Journals (Sweden)

    Santosh Kumar Chaudhari

    2011-06-01

    Full Text Available A Network Management System (NMS) plays a very important role in managing an ever-evolving telecommunication network. Generally an NMS monitors & maintains the health of network elements. The growing size of the network warrants extra functionalities from the NMS. An NMS provides all kinds of information about networks which can be used for other purposes apart from monitoring & maintaining networks, like improving QoS & saving energy in the network. In this paper, we add another dimension to NMS services, namely, making an NMS energy aware. We propose a Decision Management System (DMS) framework which uses a machine learning technique called Bayesian Belief Networks (BBN) to make the NMS energy aware. The DMS is capable of analysing and making control decisions based on network traffic. We factor in the cost of rerouting and power saving per port. Simulations are performed on standard network topologies, namely, ARPANet and IndiaNet. It is found that ~2.5-6.5% power can be saved.
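The flavour of BBN-based decision making can be conveyed with a toy Bayesian update in plain Python (the network structure, probabilities, and threshold below are invented for illustration and are not taken from the paper):

```python
# Toy Bayesian update, illustrative of the kind of inference a BBN-based
# DMS performs before deciding to power down a port. The probabilities and
# the decision threshold are hypothetical, not the paper's values.

P_TRAFFIC_LOW = 0.6                 # prior belief that link traffic is low
P_OBS_LOW = {                       # likelihood of a "low utilization" reading
    "traffic_low": 0.9,             # ... given traffic really is low
    "traffic_high": 0.2,            # ... given traffic is actually high
}

def p_traffic_low_given_obs_low():
    """Posterior P(traffic=low | sensor reads low), by Bayes' rule."""
    num = P_OBS_LOW["traffic_low"] * P_TRAFFIC_LOW
    den = num + P_OBS_LOW["traffic_high"] * (1 - P_TRAFFIC_LOW)
    return num / den

def should_power_down(threshold=0.85):
    # Decision node: sleep the port only if we are confident traffic is low,
    # trading the rerouting cost against the per-port power saving.
    return p_traffic_low_given_obs_low() > threshold
```

A real DMS would chain many such nodes over the NMS's traffic data and weigh the rerouting cost explicitly; this sketch shows only the single-edge update at the core of the idea.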

  5. FPGA Design and Verification Procedure for Nuclear Power Plant MMIS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dongil; Yoo, Kawnwoo; Ryoo, Kwangki [Hanbat National Univ., Daejeon (Korea, Republic of)

    2013-05-15

    In this paper, it is shown that reliability can be ensured by performing verification steps based on an FPGA development methodology, guaranteeing the safety of FPGA applications in the NPP MMIS at each stage. Currently, the PLC (Programmable Logic Controller) which is being developed is composed of an FPGA (Field Programmable Gate Array) and a CPU (Central Processing Unit). As the importance of the FPGA in the NPP (Nuclear Power Plant) MMIS (Man-Machine Interface System) has been increasing, research on the verification of the FPGA has recently intensified.

  6. Self-Awareness in Computer Networks

    Directory of Open Access Journals (Sweden)

    Ariane Keller

    2014-01-01

    Full Text Available The Internet architecture works well for a wide variety of communication scenarios. However, its flexibility is limited because it was initially designed to provide communication links between a few static nodes in a homogeneous network and did not attempt to solve the challenges of today’s dynamic network environments. Although the Internet has evolved to a global system of interconnected computer networks, which links together billions of heterogeneous compute nodes, its static architecture remained more or less the same. Nowadays the diversity in networked devices, communication requirements, and network conditions vary heavily, which makes it difficult for a static set of protocols to provide the required functionality. Therefore, we propose a self-aware network architecture in which protocol stacks can be built dynamically. Those protocol stacks can be optimized continuously during communication according to the current requirements. For this network architecture we propose an FPGA-based execution environment called EmbedNet that allows for a dynamic mapping of network protocols to either hardware or software. We show that our architecture can reduce the communication overhead significantly by adapting the protocol stack and that the dynamic hardware/software mapping of protocols considerably reduces the CPU load introduced by packet processing.

  7. Application of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the OECD/NRC BWR turbine trip benchmark and its performance on multi-processor computers

    International Nuclear Information System (INIS)

    Langenbuch, S.; Schmidt, K.D.; Velkov, K.

    2003-01-01

    The OECD/NRC BWR Turbine Trip (TT) Benchmark is investigated to perform code-to-code comparison of coupled codes, including a comparison to measured data which are available from turbine trip experiments at Peach Bottom 2. This benchmark problem for a BWR over-pressure transient represents a challenging application of coupled codes which integrate 3-dimensional neutron kinetics into thermal-hydraulic system codes for best-estimate simulation of plant transients. Such simulations are usually performed on powerful workstations using a single CPU. Nowadays, however, multiple CPUs are much more readily available: powerful workstations already provide 4 to 8 CPUs, and computer centres give access to multi-processor systems with numbers of CPUs on the order of 16 up to several hundred. Therefore, the performance of the coupled code Athlet-Quabox/Cubbox on multi-processor systems is studied. Different cases of application lead to changing requirements on code efficiency, because the amount of computer time spent in different parts of the code varies. This paper presents main results of the coupled code Athlet-Quabox/Cubbox for the extreme scenarios of the BWR TT Benchmark, together with evaluations of the code performance on multi-processor computers. (authors)

  8. CPU Server

    CERN Multimedia

    The CERN computer centre has hundreds of racks like these. They are over a million times more powerful than our first computer in the 1960s. This tray is a 'dual-core' server. This means it effectively has two CPUs in it (e.g. two of your home computers minimised to fit into a single box). Also note the copper cooling fins, which help dissipate the heat.

  9. Performance Analysis of an Astrophysical Simulation Code on the Intel Xeon Phi Architecture

    OpenAIRE

    Noormofidi, Vahid; Atlas, Susan R.; Duan, Huaiyu

    2015-01-01

    We have developed the astrophysical simulation code XFLAT to study neutrino oscillations in supernovae. XFLAT is designed to utilize multiple levels of parallelism through MPI, OpenMP, and SIMD instructions (vectorization). It can run on both CPU and Xeon Phi co-processors based on the Intel Many Integrated Core Architecture (MIC). We analyze the performance of XFLAT on configurations with CPU only, Xeon Phi only and both CPU and Xeon Phi. We also investigate the impact of I/O and the multi-n...

  10. Predicting High-Power Performance in Professional Cyclists.

    Science.gov (United States)

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.

  11. Empirical LTE Smartphone Power Model with DRX Operation for System Level Simulations

    DEFF Research Database (Denmark)

    Lauridsen, Mads; Noël, Laurent; Mogensen, Preben

    2013-01-01

    An LTE smartphone power model is presented to enable academia and industry to evaluate users' battery life on system level. The model is based on empirical measurements on a smartphone using a second generation LTE chipset, and the model includes functions of receive and transmit data rates and power levels. The first comprehensive Discontinuous Reception (DRX) power consumption measurements are reported together with cell bandwidth, screen and CPU power consumption. The transmit power level and to some extent the receive data rate constitute the overall power consumption, while DRX proves...

  12. Reducing power usage on demand

    Science.gov (United States)

    Corbett, G.; Dewhurst, A.

    2016-10-01

    The Science and Technology Facilities Council (STFC) datacentre provides large-scale High Performance Computing facilities for the scientific community. It currently consumes approximately 1.5MW and this has risen by 25% in the past two years. STFC has been investigating leveraging preemption in the Tier 1 batch farm to save power. HEP experiments are increasingly using jobs that can be killed to take advantage of opportunistic CPU resources or novel cost models such as Amazon's spot pricing. Additionally, schemes from energy providers are available that offer financial incentives to reduce power consumption at peak times. Under normal operating conditions, 3% of the batch farm capacity is wasted due to draining machines. By using preemptable jobs, nodes can be rapidly made available to run multicore jobs without this wasted resource. The use of preemptable jobs has been extended so that at peak times machines can be hibernated quickly to save energy. This paper describes the implementation of the above and demonstrates that STFC could in future take advantage of such energy saving schemes.

  13. "The Heart Truth:" Using the Power of Branding and Social Marketing to Increase Awareness of Heart Disease in Women.

    Science.gov (United States)

    Long, Terry; Taubenheim, Ann; Wayman, Jennifer; Temple, Sarah; Ruoff, Beth

    2008-03-01

    In September 2002, the National Heart, Lung, and Blood Institute launched The Heart Truth, the first federally-sponsored national campaign aimed at increasing awareness among women about their risk of heart disease. A traditional social marketing approach, including an extensive formative research phase, was used to plan, implement, and evaluate the campaign. With the creation of the Red Dress as the national symbol for women and heart disease awareness, the campaign integrated a branding strategy into its social marketing framework. The aim was to develop and promote a women's heart disease brand that would create a strong emotional connection with women. The Red Dress brand has had a powerful appeal to a wide diversity of women and has given momentum to the campaign's three-part implementation strategy of partnership development, media relations, and community action. In addition to generating its own substantial programming, The Heart Truth became a catalyst for a host of other national and local educational initiatives, both large and small. By the campaign's fifth anniversary, surveys showed that women were increasingly aware of heart disease as their leading cause of death and that the rise in awareness was associated with increased action to reduce heart disease risk.

  14. An Adaptive and Integrated Low-Power Framework for Multicore Mobile Computing

    Directory of Open Access Journals (Sweden)

    Jongmoo Choi

    2017-01-01

    Full Text Available Employing multicore in mobile computing such as smartphones and IoT (Internet of Things) devices is a double-edged sword. It provides the ample computing capabilities required in recent intelligent mobile services including voice recognition, image processing, big data analysis, and deep learning. However, it consumes a great deal of power, which creates thermal hot spots and puts pressure on the energy resources of a mobile device. In this paper, we propose a novel framework that integrates two well-known low-power techniques, DPM (Dynamic Power Management) and DVFS (Dynamic Voltage and Frequency Scaling), for energy efficiency in multicore mobile systems. The key feature of the proposed framework is adaptability. By monitoring online resource usage such as CPU utilization and power consumption, the framework can orchestrate diverse DPM and DVFS policies according to workload characteristics. Experiments based on real implementations on three mobile devices have shown that it can reduce power consumption by 22% to 79%, while negligibly affecting the performance of workloads.
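A minimal sketch of how a framework might combine the two techniques (the frequency table, utilization threshold, and policy are invented for illustration; production governors such as Linux cpufreq are far more elaborate):

```python
# Illustrative combination of DPM and DVFS: fall back to a sleep state
# (DPM) when a core is idle, otherwise pick the lowest frequency (DVFS)
# that still leaves headroom. All values here are hypothetical.

FREQS_MHZ = [600, 1000, 1400, 1800]    # available P-states (example values)

def next_power_action(utilization, idle):
    """utilization: fraction of max-frequency capacity the load needs.
    Returns ('sleep', None) for DPM, or ('run', freq_mhz) for DVFS."""
    if idle:
        return ("sleep", None)          # DPM: power-gate the idle core
    # DVFS: lowest frequency at which the load stays under ~80% utilization
    for f in FREQS_MHZ:
        if utilization * FREQS_MHZ[-1] <= 0.8 * f:
            return ("run", f)
    return ("run", FREQS_MHZ[-1])       # saturated: run at full speed
```

An adaptive framework in the paper's sense would additionally re-tune the threshold and policy choice online from measured power, rather than fixing them as this sketch does.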

  15. Energy-Aware Sensor Networks via Sensor Selection and Power Allocation

    KAUST Repository

    Niyazi, Lama B.; Chaaban, Anas; Dahrouj, Hayssam; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2018-01-01

    sensor selection and power allocation algorithms of low complexity. Simulation results show an appreciable improvement in their performance over a system in which no selection strategy is applied, with a slight gap from derived lower bounds. The results

  16. A Dynamic Programming Solution for Energy-Optimal Video Playback on Mobile Devices

    Directory of Open Access Journals (Sweden)

    Minseok Song

    2016-01-01

    Full Text Available Due to the development of mobile technology and the wide availability of smartphones, the Internet of Things (IoT) is starting to handle high volumes of video data to facilitate multimedia-based services, which requires energy-efficient video playback. In video playback, frames have to be decoded and rendered at a high playback rate, increasing the computation cost on the CPU. To save CPU power, dynamic voltage and frequency scaling (DVFS) dynamically adjusts the operating voltage of the processor along with its frequency, and an appropriate selection of frequency can achieve a balance between performance and power. We present a decoding model that allows buffering frames to let the CPU run at low frequency, and then propose an algorithm that determines the CPU frequency needed to decode each frame in a video, with the aim of minimizing power consumption while meeting buffer-size and deadline constraints, using a dynamic programming technique. We finally extend this algorithm to optimize CPU frequencies over a short sequence of frames, producing a practical method of reducing the energy required for video decoding. Experimental results show a system-wide reduction in energy of 27% compared with a processor running at full speed.
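The buffered-decoding idea can be sketched as a small dynamic program (a toy formulation: the energy model E ~ cycles * f^2, the unit display period, and the buffer handling are illustrative assumptions, not the paper's exact algorithm):

```python
# Toy DP: choose a CPU frequency per frame so total energy is minimized
# while every frame meets its display deadline and the decoded-ahead
# buffer never overflows. Units and the energy model are illustrative.

FREQS = [0.5, 1.0, 2.0]   # normalized CPU frequencies
PERIOD = 1.0              # display period per frame (time units)
BUFFER = 2                # max frames decoded ahead of display

def plan_frequencies(cycles):
    """cycles[i]: work of frame i (time units at frequency 1.0).
    Returns (min_energy, freq_per_frame), or (None, None) if infeasible."""
    # DP state: decoder finish time -> (total energy, frequencies chosen)
    states = {0.0: (0.0, [])}
    for i in range(len(cycles)):
        deadline = (i + 1) * PERIOD                     # display deadline
        earliest = max(0.0, (i + 1 - BUFFER) * PERIOD)  # buffer-overflow limit
        nxt = {}
        for t, (e, plan) in states.items():
            for f in FREQS:
                start = max(t, earliest)        # stall while the buffer is full
                finish = start + cycles[i] / f
                if finish > deadline:
                    continue                    # this choice misses the deadline
                energy = e + cycles[i] * f ** 2  # convex cost favors low f
                key = round(finish, 6)
                if key not in nxt or energy < nxt[key][0]:
                    nxt[key] = (energy, plan + [f])
        if not nxt:
            return None, None                   # infeasible at any frequency
        states = nxt
    return min(states.values())
```

Because the energy cost is convex in frequency, the DP naturally prefers the lowest frequency that the buffered deadlines permit, which is exactly the trade-off the decoding model above exposes.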

  17. Achieving excellence in human performance through leadership, education, and training in nuclear power industry

    International Nuclear Information System (INIS)

    Clark, C.R.; Kazennov, A.; Kossilov, A.; Mazour, T.; Yoder, J.

    2004-01-01

    Full text: In order to achieve and maintain high levels of safety and productivity, nuclear power plants are required to be staffed with an adequate number of highly qualified and experienced personnel who are duly aware of the technical and administrative requirements for safety and are motivated to adopt a positive attitude to safety, as an element of safety culture. To establish and maintain a high level of human performance, appropriate education and training programmes should be in place and kept under constant review to ensure their relevance. As the nuclear power industry continues to be challenged by increasing safety requirements, a high level of competition and decreasing budgets, it becomes more important than ever to maintain excellence in human performance and ensure that NPP personnel training provides a value to the organization. Nuclear industry managers and supervisors bear the primary responsibility to assure that people perform their jobs safely and effectively. Training personnel must be responsive to the needs of the organization, working hand-in-hand with line managers and supervisors to ensure that human performance improvement needs are properly analyzed, and that training as well as other appropriate interventions are developed and implemented in the most effective and efficient way possible. The International Atomic Energy Agency together with its Member States has provided for coordinated information exchange and developed guidance on methods and practices to identify and improve the effectiveness NPP personnel training. This has resulted in: plant performance improvements, improved human performance, meeting goals and objectives of the business (quality, safety, productivity), and more effective training programs. This article describes the IAEA activities and achievements in the subject area for systematically understanding and improving human performance in nuclear power industry. The article also describes cooperation programmes

  18. Empirical research on an ecological interface design for improving situation awareness of operators in an advanced control room

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Suh, Sang Moon; Jang, Gwi Sook; Hong, Seung Kweon; Park, Jung Chul

    2012-01-01

    Highlights: ► An EID prototype for monitoring the primary side of a nuclear power plant is proposed. ► The effectiveness of the prototype is validated using a partial-scope dynamic mockup in terms of situation awareness. ► The validation is based on comparison of a mimic display with an EID plus mimics. - Abstract: The purpose of this study is to validate whether an ecological interface design (EID) improves operators' situation awareness in an advanced control room of a nuclear power plant (NPP). EID is defined as an approach to interface design that was introduced specifically for complex socio-technical, real-time, and dynamic systems. EID technology has not yet been adopted by the nuclear power industry due to a lack of empirical studies. Especially regarding situation awareness, many researchers have predicted that EID will support operators in detecting unanticipated events; just a few studies, however, have unveiled a positive effect of EID displays on human performance using a full-scope simulator. In this study, to investigate whether an EID improves operators' situation awareness, we developed an EID prototype for nuclear power operations and a partial-scope dynamic mockup to validate the effectiveness of the EID prototype. Three experienced operators were involved as subjects in our study and they were fully trained in using the EID prototype. We compared two types of situations in terms of situation awareness: one is a mimic-based information display and the other is a mimic plus EID-based information display. The result of our study revealed that a mimic plus EID-based information display is more effective than a mimic-based information display in terms of situation awareness. This study is significant in that EID as an emerging technology is adoptable to a digitalized control room in terms of improving operators' situation awareness.

  19. Bruce A - performance power

    Energy Technology Data Exchange (ETDEWEB)

    Boucher, P. [Bruce Power, Tiverton, ON (Canada)

    2015-07-01

    This paper discusses the strategy for improving performance at Bruce Power. The key to excellence is changing behaviours: reinforcing and enforcing expectations, aligned with the 2015 Operating to the Highest Standards Site Initiative; long-term equipment strategies, supported by the 2015 Equipment Health Site Initiative; and individual and group accountability for online/outage work management, with further gains through the 2015 Maintenance Alignment and Resource Strategy (MARS) Site Initiative. Results showed improved human performance, more reliable and predictable units, and improved outage performance.

  20. Measuring Situation Awareness of Operating Team in Different Main Control Room Environments of Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Seung Woo Lee

    2016-02-01

    Full Text Available Environments in nuclear power plants (NPPs are changing as the design of instrumentation and control systems for NPPs is rapidly moving toward fully digital instrumentation and control, and modern computer techniques are gradually introduced into main control rooms (MCRs. Within the context of these environmental changes, the level of performance of operators in a digital MCR is a major concern. Situation awareness (SA, which is used within human factors research to explain to what extent operators of safety-critical systems know what is transpiring in the system and the environment, is considered a prerequisite factor to guarantee the safe operation of NPPs. However, the safe operation of NPPs can be guaranteed through a team effort. In this regard, the operating team's SA in a conventional and digital MCR should be measured in order to assess whether the new design features implemented in a digital MCR affect this parameter. This paper explains the team SA measurement method used in this study and the results of applying this measurement method to operating teams in different MCR environments. The paper also discusses several empirical lessons learned from the results.

  1. Stability-Aware Geographic Routing in Energy Harvesting Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tran Dinh Hieu

    2016-05-01

    Full Text Available A new generation of wireless sensor networks that harvest energy from environmental sources such as solar, vibration, and thermoelectric sources to power sensor nodes is emerging to solve the problem of energy limitation. Based on the photo-voltaic model, this research proposes a stability-aware geographic routing for reliable data transmissions in energy-harvesting wireless sensor networks (EH-WSNs) to provide a reliable route selection method and potentially achieve an unlimited network lifetime. Specifically, the influence of link quality, represented by the estimated packet reception rate, on network performance is investigated. Simulation results show that the proposed method outperforms an energy-harvesting-aware method in terms of energy consumption, the average number of hops, and the packet delivery ratio.

  2. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    Science.gov (United States)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that, using the DC method, the scheme with the lowest CPU time is in general the SOU. In contrast, compared with the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are between 3.8% and 23.1% faster and between 12.6% and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that, depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8% and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  3. PERFORMANCE EVALUATION OF OR1200 PROCESSOR WITH EVOLUTIONARY PARALLEL HPRC USING GEP

    Directory of Open Access Journals (Sweden)

    R. Maheswari

    2012-04-01

    Full Text Available In this fast computing era, most embedded systems require more computing power to complete complex functions/tasks in less time. One way to achieve this is by boosting the processor performance, which allows the processor core to run faster. This paper presents a novel technique for increasing performance via parallel HPRC (High Performance Reconfigurable Computing) in the CPU/DSP (Digital Signal Processor) unit of the OR1200 (Open RISC (Reduced Instruction Set Computer) 1200) using Gene Expression Programming (GEP), an evolutionary programming model. The OR1200 is a soft-core RISC processor from the Intellectual Property cores that can efficiently run any modern operating system. In the manufacturing process of the OR1200, a parallel HPRC is placed internally in the Integer Execution Pipeline unit of the CPU/DSP core to increase performance. The GEP parallel HPRC is activated/deactivated by triggering the signals (i) HPRC_Gene_Start and (ii) HPRC_Gene_End. A Verilog HDL (Hardware Description Language) functional code for the GEP parallel HPRC is developed and synthesised using XILINX ISE in the first part of the work, and the CoreMark processor benchmark is used to test the performance of the OR1200 soft core in the latter part. The results of the implementation show an overall speed-up of 20.59% with the GEP-based parallel HPRC in the execution unit of the OR1200.

  4. Discrimination-Aware Classifiers for Student Performance Prediction

    Science.gov (United States)

    Luo, Ling; Koprinska, Irena; Liu, Wei

    2015-01-01

    In this paper we consider discrimination-aware classification of educational data. Mining and using rules that distinguish groups of students based on sensitive attributes such as gender and nationality may lead to discrimination. It is desirable to keep the sensitive attributes during the training of a classifier to avoid information loss but…

  5. Attitudes of the general public and electric power company employees toward nuclear power generation

    International Nuclear Information System (INIS)

    Komiyama, Hisashi

    1997-01-01

    We conducted an awareness survey targeted at members of the general public residing in urban areas and in areas scheduled for construction of nuclear power plants, as well as employees of an electric power company, in order to determine the awareness and attitude structures of people residing near scheduled construction sites of nuclear power plants with respect to nuclear power generation, and to examine ways of making improvements from the standpoint of promoting nuclear power plant construction. Analysis of the results revealed no significant differences in the awareness and attitudes of people residing in urban areas and in areas near scheduled construction sites. On the contrary, a general sense of apprehension regarding the construction of nuclear power plants was observed in both groups. In addition, significant differences in awareness and attitudes with respect to various factors were found between members of the general public residing in urban areas and near scheduled construction sites and the employees of the electric power company. (author)

  6. Low-power secure body area network for vital sensors toward IEEE802.15.6.

    Science.gov (United States)

    Kuroda, Masahiro; Qiu, Shuye; Tochikubo, Osamu

    2009-01-01

    Many healthcare/medical services have started using personal area networks, such as Bluetooth and ZigBee, consisting of various types of vital sensors. These efforts focus on generalized functions for sensor networks that assume sufficient battery capacity and low-power CPU/RF (Radio Frequency) modules, but pay less attention to easy-to-use privacy protection. In this paper, we propose a commercially deployable secure body area network (S-BAN) with a reduced computational burden on a real sensor that has limited RAM/ROM sizes and CPU/RF power consumption under a lightweight battery. Our proposed S-BAN provides vital-data ordering among the sensors involved in an S-BAN and also provides low-power networking with zero-administration security through automatic private key generation. We design and implement a power-efficient media access control (MAC) with resource-constrained security in sensors. We then evaluate the power efficiency of an S-BAN consisting of small sensors, such as an accessory-type ECG and a ring-type SpO2 sensor. The evaluation of the power efficiency of the S-BAN using real sensors gives us confidence in deploying the S-BAN and will also help us provide feedback to the IEEE802.15.6 MAC, which will be the standard for BANs.

  7. Automatic Energy Schemes for High Performance Applications

    Energy Technology Data Exchange (ETDEWEB)

    Sundriyal, Vaibhav [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    Although high-performance computing traditionally focuses on the efficient execution of large-scale applications, both energy and power have become critical concerns when approaching exascale. Drastic increases in the power consumption of supercomputers significantly affect their operating costs and failure rates. In modern microprocessor architectures, equipped with dynamic voltage and frequency scaling (DVFS) and CPU clock modulation (throttling), the power consumption may be controlled in software. Additionally, network interconnects, such as InfiniBand, may be exploited to maximize energy savings, while the application performance loss and frequency-switching overheads must be carefully balanced. This work first studies two important collective communication operations, all-to-all and allgather, and proposes energy-saving strategies on a per-call basis. Next, it targets point-to-point communications, grouping them into phases and applying frequency scaling to save energy by exploiting architectural and communication stalls. Finally, it proposes an automatic runtime system which combines both collective and point-to-point communications into phases, and applies throttling in addition to DVFS to maximize energy savings. The experimental results are presented for NAS parallel benchmark problems as well as for realistic parallel electronic structure calculations performed by the widely used quantum chemistry package GAMESS. Close to the maximum energy savings were obtained with a substantially low performance loss on the given platform.
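
    The phase-based DVFS idea above can be sketched as a simple policy function. This is a hedged illustration only: the frequency levels and the linear scaling rule are assumptions for the sketch, not the paper's runtime system. The intuition is that during communication phases the CPU largely waits on the network, so it can be clocked down with little performance loss.

```python
# Hypothetical per-phase frequency selection; the P-state levels and the
# linear scaling policy are illustrative assumptions, not the paper's system.

AVAILABLE_FREQS_MHZ = [1200, 1600, 2000, 2400]  # assumed DVFS levels

def pick_frequency(phase, comm_fraction):
    """Choose a CPU frequency for a program phase.

    phase         -- "compute" or "communication"
    comm_fraction -- fraction of the phase spent stalled on the network (0..1)
    """
    if phase == "compute":
        return AVAILABLE_FREQS_MHZ[-1]  # compute phases run at full speed
    # Communication phases: scale down in proportion to the stall fraction.
    steps = len(AVAILABLE_FREQS_MHZ) - 1
    idx = steps - int(comm_fraction * steps)
    return AVAILABLE_FREQS_MHZ[idx]
```

    A real runtime would additionally amortize the cost of frequency switches, which the paper notes must be balanced against the savings.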

  8. An analysis of the awareness and performance of radiation workers' radiation/radioactivity protection in medical institutions : Focused on Busan regional medical institutions

    International Nuclear Information System (INIS)

    Park, Cheol Koo; Hwang, Chul Hwan; Kim, Dong Hyun

    2017-01-01

    The purpose of this study was to investigate radiation workers' awareness and performance of radiation/radioactivity safety management in medical institutions. Data were collected from 267 radiation workers working in medical institutions using structured questionnaires. The analysis showed that radiation safety management awareness and performance were higher in the 40s and 50s age groups and in the higher-education group. Analysis by radiation safety management knowledge showed that the 'Know very well' group had higher awareness and performance scores. Analysis by degree of safety management effort showed high awareness and performance scores in the group 'receiving various education or studying the safety management contents through books'. Among the sub-factors, the highest positive correlation was between the perspective perceived by practitioners themselves and the perspective perceived by patients and patients' caretakers. Therefore, radiation safety management for workers, patients, and patients' caretakers should be pursued through continuous education delivered via the various routes available to radiation workers at medical institutions

  9. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    Science.gov (United States)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec with an Ultrasound 3D probe. Relative performance and power is addressed between a reference PC (Quad Core CPU) and a TMS320C6678 DSP from Texas Instruments.

  10. National Latino AIDS Awareness Day

    Centers for Disease Control (CDC) Podcasts

    2014-10-08

    This podcast highlights National Latino AIDS Awareness Day, to increase awareness of the disproportionate impact of HIV on the Hispanic or Latino population in the United States and dependent territories. The podcast reminds Hispanics or Latinos that they have the power to take control of their health and protect themselves against HIV.  Created: 10/8/2014 by Office of Health Equity, Office of the Director, Division of HIV/AIDS Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention, Division of HIV/AIDS Prevention.   Date Released: 10/14/2014.

  11. The performances of R GPU implementations of the GMRES method

    Directory of Open Access Journals (Sweden)

    Bogdan Oancea

    2018-03-01

    Full Text Available Although the performance of commodity computers has improved drastically with the introduction of multicore processors and GPU computing, the standard R distribution is still based on a single-threaded model of computation, using only a small fraction of the computational power now available in most desktops and laptops. Modern statistical software packages rely on high-performance implementations of the linear algebra routines that are at the core of several important leading-edge statistical methods. In this paper we present a GPU implementation of the GMRES iterative method for solving linear systems. We compare the performance of this implementation with a pure single-threaded CPU version. We also investigate the performance of our implementation using the different GPU packages now available for R, such as gmatrix, gputools and gpuR, which are based on the CUDA or OpenCL frameworks.
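
    For reference, the GMRES iteration being ported to the GPU can be sketched compactly. The following is a minimal single-threaded sketch in Python/NumPy (not the paper's R or GPU code), with no restarts and no preconditioning: an Arnoldi process builds an orthonormal Krylov basis, and a small least-squares solve minimizes the residual over that basis.

```python
import numpy as np

def gmres(A, b, tol=1e-8, max_iter=50):
    """Minimal restart-free GMRES with a zero initial guess."""
    n = len(b)
    Q = np.zeros((n, max_iter + 1))      # Krylov basis vectors
    H = np.zeros((max_iter + 1, max_iter))  # Hessenberg matrix from Arnoldi
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(max_iter):
        v = A @ Q[:, k]                  # expand the Krylov subspace
        for j in range(k + 1):           # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = v / H[k + 1, k]
        # Minimize ||beta*e1 - H y|| over the current subspace.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol:
            break
    return x
```

    Production implementations replace the full least-squares solve with incrementally updated Givens rotations; the GPU packages the paper benchmarks accelerate the matrix-vector products and orthogonalization, which dominate the cost.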

  12. Power and performance software analysis and optimization

    CERN Document Server

    Kukunas, Jim

    2015-01-01

    Power and Performance: Software Analysis and Optimization is a guide to solving performance problems in modern Linux systems. Power-efficient chips are no help if the software those chips run on is inefficient. Starting with the necessary architectural background as a foundation, the book demonstrates the proper usage of performance analysis tools in order to pinpoint the cause of performance problems, and includes best practices for handling common performance issues those tools identify. Provides expert perspective from a key member of Intel's optimization team on how processors and memory

  13. A low power biomedical signal processor ASIC based on hardware software codesign.

    Science.gov (United States)

    Nie, Z D; Wang, L; Chen, W G; Zhang, T; Zhang, Y T

    2009-01-01

    A low-power biomedical digital signal processor ASIC based on a hardware/software codesign methodology is presented in this paper. The codesign methodology was used to achieve higher system performance and design flexibility. The hardware implementation included a low-power 32-bit RISC CPU (ARM7TDMI), a low-power AHB-compatible bus, and a scalable digital co-processor optimized for low-power Fast Fourier Transform (FFT) calculations. The co-processor can be scaled for 8-point, 16-point and 32-point FFTs, taking approximately 50, 100 and 150 clock cycles, respectively. The complete design was intensively simulated using the ARM DSM model and emulated on the ARM Versatile platform before being committed to silicon. The multi-million-gate ASIC was fabricated using SMIC 0.18 microm mixed-signal CMOS 1P6M technology. The die area measures 5,000 microm x 2,350 microm. The power consumption was approximately 3.6 mW at a 1.8 V power supply and 1 MHz clock rate. The power consumption for FFT calculations was less than 1.5% of that of the conventional embedded software-based solution.
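
    As a software reference for what a small power-of-two FFT co-processor computes, here is a textbook recursive radix-2 Cooley-Tukey sketch in Python. This is not the ASIC's implementation, only the standard transform that an 8/16/32-point hardware FFT would produce.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                  # transform even-indexed samples
    odd = fft(x[1::2])                   # transform odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

    A hardware version typically unrolls this recursion into log2(n) butterfly stages, which is consistent with the roughly linear growth in clock cycles (50/100/150) reported for the 8/16/32-point configurations.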

  14. Resource Isolation Method for Program’S Performance on CMP

    Science.gov (United States)

    Guan, Ti; Liu, Chunxiu; Xu, Zheng; Li, Huicong; Ma, Qiang

    2017-10-01

    Data centers and cloud computing are increasingly popular, bringing benefits to both customers and providers. However, in data centers or clusters there is commonly more than one program running on a server, and programs may interfere with each other. The interference may have little effect, but it can also cause a serious drop in performance. In order to avoid the performance-interference problem, isolating resources for different programs is a better choice. In this paper we propose a low-cost resource isolation method to improve a program's performance. The method uses Cgroups to set dedicated CPU and memory resources for a program, aiming to guarantee the program's performance. Three engines realize this method: the Program Monitor Engine tracks the program's CPU and memory usage and transfers the information to the Resource Assignment Engine; the Resource Assignment Engine calculates the amount of CPU and memory resources that should be assigned to the program; and the Cgroups Control Engine partitions resources with the Linux tool Cgroups and places the program in a control group for execution. The experimental results show that, using the proposed resource isolation method, a program's performance can be improved.
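
    The three engines can be sketched as follows. The paper gives no code, so the function names, headroom policy, and cgroup v1 paths below are illustrative assumptions: the assignment step is shown as a pure function, and the control step shows where the computed limits would be written (which requires root and an existing control group).

```python
# Hypothetical sketch of the Resource Assignment and Cgroups Control engines.
# The headroom factor, path layout (cgroup v1), and names are assumptions.

def assign_resources(cpu_percent, mem_bytes, headroom=1.5, total_cpus=8):
    """Translate monitored usage into a cpuset string and a memory limit.

    cpu_percent -- observed CPU usage (100 == one fully busy core)
    mem_bytes   -- observed resident memory of the program
    """
    # Reserve enough whole CPUs to cover observed usage plus headroom.
    ncpus = min(total_cpus, max(1, round(cpu_percent * headroom / 100)))
    cpuset = "0-%d" % (ncpus - 1) if ncpus > 1 else "0"
    mem_limit = int(mem_bytes * headroom)
    return cpuset, mem_limit

def apply_cgroup(name, cpuset, mem_limit):
    """Cgroups Control Engine (sketch): write the computed limits."""
    for path, value in [
        ("/sys/fs/cgroup/cpuset/%s/cpuset.cpus" % name, cpuset),
        ("/sys/fs/cgroup/memory/%s/memory.limit_in_bytes" % name, str(mem_limit)),
    ]:
        with open(path, "w") as f:  # requires root and an existing group
            f.write(value)
```

    The program itself would then be attached by writing its PID to the group's tasks file; the monitor engine feeds fresh usage numbers back into assign_resources periodically.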

  15. Convergent Power Series of sech(x) and Solutions to Nonlinear Differential Equations

    Directory of Open Access Journals (Sweden)

    U. Al Khawaja

    2018-01-01

    Full Text Available It is known that the power series expansion of certain functions, such as sech(x), diverges beyond a finite radius of convergence. We present here an iterative power series (IPS) method to obtain a power series representation of sech(x) that is convergent for all x. The convergent series is a sum of the Taylor series of sech(x) and a complementary series that cancels the divergence of the Taylor series for x ≥ π/2. The method is general and can be applied to other functions known to have a finite radius of convergence, such as 1/(1+x^2). A straightforward application of this method is to solve nonlinear differential equations analytically, which we also illustrate here. The method also provides a robust and very efficient numerical algorithm for solving nonlinear differential equations numerically. A detailed comparison with the fourth-order Runge-Kutta method and an extensive analysis of the behavior of the error and CPU time are performed.
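
    The divergence the IPS method remedies is easy to check numerically for the abstract's second example: the Taylor series of 1/(1+x^2) is the geometric series sum of (-1)^n x^(2n), which converges only for |x| < 1. The snippet below (an illustration of the problem, not of the IPS construction itself) evaluates partial sums inside and outside that radius.

```python
# Partial sums of the Taylor series of 1/(1+x^2) = sum_{n>=0} (-1)^n x^(2n).
# Inside the radius of convergence (|x| < 1) they approach the function;
# outside it the terms grow geometrically and the sums blow up.

def partial_sum(x, terms):
    return sum((-1) ** n * x ** (2 * n) for n in range(terms))

inside = partial_sum(0.5, 60)    # approaches 1/(1 + 0.25) = 0.8
outside = partial_sum(2.0, 60)   # diverges: terms grow like 4^n
```

    The IPS idea is to add a complementary series that cancels exactly this runaway growth, yielding a representation valid for all x.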

  16. A Multi-Temporal Context-Aware System for Competences Management

    Science.gov (United States)

    Rosa, João H.; Barbosa, Jorge L.; Kich, Marcos; Brito, Lucas

    2015-01-01

    The evolution of computing technology and wireless networks has contributed to the miniaturization of mobile devices and their increase in power, providing services anywhere and anytime. In this scenario, applications have considered the user's contexts to make decisions (Context Awareness). Context-aware applications have enabled new…

  17. Multi-microprocessor control of the main ring magnet power supply of the 12 GeV KEK proton synchrotron

    International Nuclear Information System (INIS)

    Sueno, T.; Mikawa, K.; Toda, M.; Toyama, T.; Sato, H.; Matsumoto, S.

    1992-01-01

    A general description of the computer control system of the KEK 12 GeV PS main ring magnet power supply is given, including its peripheral devices. The system consists of the main HIDIC-V90/25 CPU and of the input and output controllers HISEC-04M. The main CPU, supervised by UNIX, provides the man-machine interfacing and implements the repetitive control algorithm to correct for any magnet current deviation from reference. Two sub-CPUs are linked by a LAN and supported by a real-time multi-task monitor. The output process controller distributes the control patterns to 16-bit DACs, at a 1.67 ms clock period in synchronism with the 3-phase ac line systems. The input controller logs the magnet current and voltage via 16-bit ADCs at the same clock rate. (author)

  18. A Study of BUS Architecture Design for Controller of Nuclear Power Plant Using FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dongil; Yun, Donghwa; Hwang, Sungjae; Kim, Myeongyun; Lee, Dongyun [PONUTech Co. Ltd., Seoul (Korea, Republic of)

    2014-05-15

    CPU (Central Processing Unit) operating speeds and communication rates have improved technically, but overall system performance is degraded by the electrical and structural limitations of the parallel bus. Transmission quantity and speed are limited, and an arbiter is needed to perform arbitration because several boards share the parallel bus. The arbiter is highly complex to implement, which increases the component count per chip. A parallel bus also suffers from problems such as reflection noise, power/ground noise (ground bounce, i.e., SSN: Simultaneous Switching Noise) and crosstalk noise caused by magnetic coupling. In this paper, in order to solve the problems of the parallel bus in a controller of an NPP (Nuclear Power Plant), we propose a bus architecture design using an FPGA (Field Programmable Gate Array) based on LVDS (Low Voltage Differential Signaling)

  19. A Study of BUS Architecture Design for Controller of Nuclear Power Plant Using FPGA

    International Nuclear Information System (INIS)

    Lee, Dongil; Yun, Donghwa; Hwang, Sungjae; Kim, Myeongyun; Lee, Dongyun

    2014-01-01

    CPU (Central Processing Unit) operating speeds and communication rates have improved technically, but overall system performance is degraded by the electrical and structural limitations of the parallel bus. Transmission quantity and speed are limited, and an arbiter is needed to perform arbitration because several boards share the parallel bus. The arbiter is highly complex to implement, which increases the component count per chip. A parallel bus also suffers from problems such as reflection noise, power/ground noise (ground bounce, i.e., SSN: Simultaneous Switching Noise) and crosstalk noise caused by magnetic coupling. In this paper, in order to solve the problems of the parallel bus in a controller of an NPP (Nuclear Power Plant), we propose a bus architecture design using an FPGA (Field Programmable Gate Array) based on LVDS (Low Voltage Differential Signaling)

  20. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  1. [Power in the ICU: awareness and change].

    Science.gov (United States)

    van der Hoeven, J G

    2016-01-01

    - Power is a charged issue but, when executed with integrity, a potent instrument to improve quality of care.- Execution of power has various manifestations, which have an important effect on several aspects of the care process including the doctor-patient relationship, primary responsible physician and the relationship with other team members.- Abuse of power is a source of conflicts and may also result in emotional exhaustion and burnout.- Various measures may stimulate the appropriate use of power.

  2. FY17 CSSE L2 Milestone Report: Analyzing Power Usage Characteristics of Workloads Running on Trinity.

    Energy Technology Data Exchange (ETDEWEB)

    Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-01

    This report summarizes the work performed as part of a FY17 CSSE L2 milestone to investigate the power usage behavior of ASC workloads running on the ATS-1 Trinity platform. Techniques were developed to instrument application code regions of interest using the Power API together with the Kokkos profiling interface and Caliper annotation library. Experiments were performed to understand the power usage behavior of mini-applications and the SNL/ATDM SPARC application running on ATS-1 Trinity Haswell and Knights Landing compute nodes. A taxonomy of power measurement approaches was identified and presented, providing a guide for application developers to follow. Controlled scaling study experiments were performed on up to 2048 nodes of Trinity along with smaller scale experiments on Trinity testbed systems. Additionally, power and energy system monitoring information from Trinity was collected and archived for post analysis of "in-the-wild" workloads. Results were analyzed to assess the sensitivity of the workloads to ATS-1 compute node type (Haswell vs. Knights Landing), CPU frequency control, node-level power capping control, OpenMP configuration, Knights Landing on-package memory configuration, and algorithm/solver configuration. Overall, this milestone lays groundwork for addressing the long-term goal of determining how to best use and operate future ASC platforms to achieve the greatest benefit subject to a constrained power budget.
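
    Region-of-interest power instrumentation of the kind described can be sketched generically. This is a stand-in, not the Power API, Caliper, or Kokkos interfaces: the caller supplies an energy-counter read function (e.g. backed by RAPL or a platform counter), and the context manager records time, energy, and average power per annotated region.

```python
import time
from contextlib import contextmanager

# Generic stand-in for annotating a code region with energy accounting.
# read_energy_joules is supplied by the caller; its backing source
# (RAPL, a platform counter, etc.) is outside this sketch.

@contextmanager
def power_region(name, read_energy_joules, results):
    """Record elapsed time, energy, and average power for a named region."""
    e0, t0 = read_energy_joules(), time.perf_counter()
    try:
        yield
    finally:
        dt = time.perf_counter() - t0
        de = read_energy_joules() - e0
        results[name] = {"seconds": dt, "joules": de,
                         "watts": de / dt if dt > 0 else 0.0}
```

    Wrapping a solver call in `with power_region("solve", reader, results):` then yields per-region figures analogous to the annotated measurements collected in the milestone.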

  3. Load-aware modeling for uplink cellular networks in a multi-channel environment

    KAUST Repository

    AlAmmouri, Ahmad

    2014-09-01

    We exploit tools from stochastic geometry to develop a tractable analytical approach for modeling uplink cellular networks. The developed model is load aware and accounts for per-user power control as well as the limited transmit power constraint of the users' equipment (UEs). The proposed analytical paradigm is based on a simple per-user power control scheme in which each user inverts his path-loss such that the signal is received at his serving base station (BS) with a certain power threshold ρ. Due to the limited transmit power of the UEs, users that cannot invert their path-loss to their serving BSs are allowed to transmit with their maximum transmit power. We show that the proposed power control scheme not only provides balanced cell-center and cell-edge user performance, but also facilitates the analysis when compared to the state-of-the-art approaches in the literature. To this end, we discuss how to manipulate the design variable ρ in response to the network parameters to optimize one or more of the performance metrics, such as the outage probability, the network capacity, and the energy efficiency.
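
    The truncated channel-inversion scheme described above reduces to a one-line rule per UE, sketched below under assumed linear-scale units (the symbol names follow the abstract; the function itself is an illustration, not the paper's analysis): transmit ρ times the path-loss so the BS receives ρ, unless that exceeds the maximum power.

```python
# Truncated path-loss-inversion power control (illustrative sketch).
# path_loss, rho, and p_max are all in linear scale (e.g. watts).

def uplink_tx_power(path_loss, rho, p_max):
    """Return (tx_power, truncated).

    truncated is True for cell-edge UEs that cannot invert their
    path-loss and therefore transmit at their maximum power.
    """
    desired = rho * path_loss        # full path-loss inversion
    if desired <= p_max:
        return desired, False        # BS receives exactly rho
    return p_max, True               # capped: BS receives less than rho
```

    Raising ρ improves the received signal for non-truncated users but pushes more cell-edge users into the capped regime, which is the trade-off the paper optimizes over outage, capacity, and energy efficiency.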

  4. Power Aware Dynamic Provisioning of HPC Networks

    Energy Technology Data Exchange (ETDEWEB)

    Groves, Taylor [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    Future exascale systems are under increased pressure to find power savings. The network, while it consumes a considerable amount of power, is often left out of the picture when discussing total system power. Even when network power is considered, the references are frequently a decade old or older and rely on models that lack validation on modern interconnects. In this work we explore how the dynamic mechanisms of an InfiniBand network save power and at what granularity we can engage these features. We explore this within the context of the host channel adapter (HCA) on the node and for the fabric, i.e. switches, using three different mechanisms (dynamic link width, dynamic frequency, and disabling of links) for QLogic and Mellanox systems. Our results show that while there is some potential for modest power savings, real-world systems need improved responsiveness to adjustments in order to fully leverage these savings.

  5. Scalability under a Power Bound using the GREMLINs Framework

    International Nuclear Information System (INIS)

    Maiterth, Matthias

    2015-01-01

With the move towards exascale, system and software developers will have to deal with issues of extreme parallelism. The system properties affected most by the increase in node and core count are the shared resources on node and across the system. The increase in parallelism leads to reduced memory capacity and bandwidth per core. Since power is a limiting factor for supercomputers, and power is not fully utilized in current systems, overprovisioning compute resources is a viable approach to maximizing power utilization. To maximize system performance under these changing conditions, it is necessary to understand how resource restrictions impact performance and system behavior. For the purpose of understanding anticipated system properties, the GREMLINs Framework was developed. The framework makes it possible to add power restrictions, constrain memory properties, and introduce faults to study resilience, among others. These features give the opportunity to use current petascale technology to study problems system designers and software developers will have to face when moving towards exascale and beyond. This work describes the initial release of the GREMLINs Framework, developed for this work, and shows how it can be used to study the scaling behavior of proxy applications. These proxy applications represent a selection of HPC workloads important to the scientific community. The proxy applications studied are AMG2013, an algebraic multi-grid linear system solver; CoMD, a classical molecular dynamics proxy application; and NEKBONE, an application that uses a high-order spectral element method to solve the Navier-Stokes equations. The main interest of these studies lies in the analysis of their power behavior at scale under a power bound. These findings show how the GREMLINs Framework can help system and software designers to attain better application performance and can also be used as a basis for CPU power balancing tools to use power more

  6. Scalability under a Power Bound using the GREMLINs Framework

    Energy Technology Data Exchange (ETDEWEB)

    Maiterth, Matthias [Ludwig Maximilian Univ., Munich (Germany)

    2015-02-16

With the move towards exascale, system and software developers will have to deal with issues of extreme parallelism. The system properties affected most by the increase in node and core count are the shared resources on node and across the system. The increase in parallelism leads to reduced memory capacity and bandwidth per core. Since power is a limiting factor for supercomputers, and power is not fully utilized in current systems, overprovisioning compute resources is a viable approach to maximizing power utilization. To maximize system performance under these changing conditions, it is necessary to understand how resource restrictions impact performance and system behavior. For the purpose of understanding anticipated system properties, the GREMLINs Framework was developed. The framework makes it possible to add power restrictions, constrain memory properties, and introduce faults to study resilience, among others. These features give the opportunity to use current petascale technology to study problems system designers and software developers will have to face when moving towards exascale and beyond. This work describes the initial release of the GREMLINs Framework, developed for this work, and shows how it can be used to study the scaling behavior of proxy applications. These proxy applications represent a selection of HPC workloads important to the scientific community. The proxy applications studied are AMG2013, an algebraic multi-grid linear system solver; CoMD, a classical molecular dynamics proxy application; and NEKBONE, an application that uses a high-order spectral element method to solve the Navier-Stokes equations. The main interest of these studies lies in the analysis of their power behavior at scale under a power bound. These findings show how the GREMLINs Framework can help system and software designers to attain better application performance and can also be used as a basis for CPU power balancing tools to use power more
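The kind of question such a framework answers can be sketched with a simple model of run time under a per-node power cap. The cube-root frequency/power relation and every constant below are simplifying assumptions, not measured data:

```python
# How does a compute-bound run stretch when each node is capped below TDP?
# Assume CPU frequency falls roughly with the cube root of allowed power and
# compute-bound run time scales inversely with frequency (both assumptions).

def runtime_under_power_bound(t_nominal_s, p_bound_w, p_tdp_w):
    """Estimated compute-bound run time under a per-node power cap."""
    freq_scale = min(1.0, (p_bound_w / p_tdp_w) ** (1.0 / 3.0))
    return t_nominal_s / freq_scale

t_uncapped = runtime_under_power_bound(100.0, p_bound_w=115.0, p_tdp_w=115.0)
t_capped = runtime_under_power_bound(100.0, p_bound_w=60.0, p_tdp_w=115.0)
```

A real study replaces this closed-form guess with measurements, since memory-bound phases stretch far less than compute-bound ones under a cap.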

  7. Attention without awareness: Attentional modulation of perceptual grouping without awareness.

    Science.gov (United States)

    Lo, Shih-Yu

    2018-04-01

Perceptual grouping is the process through which the perceptual system combines local stimuli into a more global perceptual unit. Previous studies have shown attention to be a modulatory factor for perceptual grouping. However, these studies mainly used explicit measurements, and thus whether attention can modulate perceptual grouping without awareness is still relatively unexplored. To clarify the relationship between attention and perceptual grouping, the present study explores how attention interacts with perceptual grouping without awareness. The task was to judge the relative lengths of two centrally presented horizontal bars while a railway-shaped pattern defined by color similarity was presented in the background. Although the observers were unaware of the railway-shaped pattern, their line-length judgment was biased by that pattern, which induced a Ponzo illusion, indicating grouping without awareness. More importantly, an attentional modulatory effect without awareness was manifest, as evidenced by the observers' performance being more often biased when the railway-shaped pattern was formed by an attended color than when it was formed by an unattended one. The attentional modulation effect was also shown to be dynamic, being more pronounced with a short presentation time than with a longer one. The results of the present study not only clarify the relationship between attention and perceptual grouping but also further contribute to our understanding of attention and awareness by corroborating the dissociation between attention and awareness.

  8. Architecture-Aware Configuration and Scheduling of Matrix Multiplication on Asymmetric Multicore Processors

    OpenAIRE

    Catalán, Sandra; Igual, Francisco D.; Mayo, Rafael; Rodríguez-Sánchez, Rafael; Quintana-Ortí, Enrique S.

    2015-01-01

Asymmetric multicore processors (AMPs) have recently emerged as an appealing technology for severely energy-constrained environments, especially in mobile appliances where heterogeneity in applications is mainstream. In addition, given the growing interest in low-power high-performance computing, this type of architecture is also being investigated as a means to improve the throughput-per-Watt of complex scientific applications. In this paper, we design and embed several architecture-aware ...

  9. Virginia power's human performance evaluation system (HPES)

    International Nuclear Information System (INIS)

    Patterson, W.E.

    1991-01-01

This paper reports on the Human Performance Evaluation System (HPES), which was initially developed by the Institute of Nuclear Power Operations (INPO) using the Aviation Safety Reporting System (ASRS) as a guide. After a pilot program involving three utilities ended in 1983, the present-day program was instituted. A methodology was developed, for specific application to nuclear power plant employees, to aid trained coordinators/evaluators in determining the factors that exert a negative influence on human behavior in the nuclear power plant environment. HPES is for anyone and everyone on site, from contractors to plant staff to plant management; no one is excluded from participation. Virginia Power's HPES program goal is to identify and correct the root causes of human performance problems. Evaluations are performed on reported real or perceived conditions that may have an adverse influence on members of the nuclear team. A report is provided to management identifying the root cause and contributing factors along with recommended corrective actions.

  10. “The Heart Truth:” Using the Power of Branding and Social Marketing to Increase Awareness of Heart Disease in Women

    Science.gov (United States)

    Long, Terry; Taubenheim, Ann; Wayman, Jennifer; Temple, Sarah; Ruoff, Beth

    2008-01-01

    In September 2002, the National Heart, Lung, and Blood Institute launched The Heart Truth, the first federally-sponsored national campaign aimed at increasing awareness among women about their risk of heart disease. A traditional social marketing approach, including an extensive formative research phase, was used to plan, implement, and evaluate the campaign. With the creation of the Red Dress as the national symbol for women and heart disease awareness, the campaign integrated a branding strategy into its social marketing framework. The aim was to develop and promote a women's heart disease brand that would create a strong emotional connection with women. The Red Dress brand has had a powerful appeal to a wide diversity of women and has given momentum to the campaign's three-part implementation strategy of partnership development, media relations, and community action. In addition to generating its own substantial programming, The Heart Truth became a catalyst for a host of other national and local educational initiatives, both large and small. By the campaign's fifth anniversary, surveys showed that women were increasingly aware of heart disease as their leading cause of death and that the rise in awareness was associated with increased action to reduce heart disease risk. PMID:19122892

  11. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output; this independence comes from the nature of such algorithms, since images, stereopairs, or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicking, and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of computation to several hours. Modern trends in computer technology show an increase of CPU cores in workstations, speed increases in local networks, and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since processing of huge amounts of large raster images is needed.
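The per-tile independence described above is what makes distributed photogrammetry work: each block can be processed with no shared state. A minimal sketch with a worker pool (threads here to keep the example portable; a real DPW would use processes or cluster nodes to sidestep the GIL, and `process_tile` is a hypothetical stand-in, not a real photogrammetric API):

```python
# Each tile is processed independently, so tiles map freely onto workers.
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile_id):
    # stand-in for a real kernel (tie-point matching, DTM cell, ortho block)
    return tile_id, sum(i * i for i in range(1000))

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(process_tile, range(16)))
```

Scaling this pattern out is what reduces "several days" of computation to hours, subject to the LAN and storage bottlenecks the record notes.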

  12. Performance testing of the finance platform based on LoadRunner

    Directory of Open Access Journals (Sweden)

    CHEN Mengyun

    2016-08-01

In this paper, a performance testing scheme is designed to test the software interface of a financial management platform based on LoadRunner, and the scheme has been used for performance testing of the interactive interface of the financial management platform of Shanghai Zuo Hao Network Technology Co., Ltd. The results show that the performance characteristics of the system interface differ across scenarios: TPS and CPU usage are optimal when the number of concurrent users is 40, while the CPU becomes the bottleneck that needs to be addressed when the number of concurrent users is 50. Finally, the effectiveness of the scheme is demonstrated.
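A closed-loop load test of the kind described can be sketched as follows: N concurrent virtual users repeatedly issue a transaction, and TPS is the completed count over the run time. The transaction body is a stub; nothing here is the platform's actual API:

```python
# Minimal closed-loop load generator: each virtual user loops on a stub
# transaction until the deadline, then TPS is computed from the totals.
import threading
import time

def run_load(users, duration_s=0.25):
    done = [0] * users
    def virtual_user(i):
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            _ = sum(range(200))          # stub transaction body
            done[i] += 1
    workers = [threading.Thread(target=virtual_user, args=(i,)) for i in range(users)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sum(done) / duration_s        # transactions per second

tps = run_load(users=4)
```

Sweeping the user count (as the study does at 40 and 50 users) and watching where TPS flattens while CPU saturates is how the bottleneck is located.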

  13. High bandwidth concurrent processing on commodity platforms

    CERN Document Server

    Boosten, M; Van der Stok, P D V

    1999-01-01

The I/O bandwidth and real-time processing power required for high-energy physics experiments is increasing rapidly over time. The current requirements can only be met by using large-scale concurrent processing. We are investigating the use of a large PC cluster interconnected by Fast and Gigabit Ethernet to meet the performance requirements of the ATLAS second-level trigger. This architecture is attractive because of its performance and competitive pricing. A major problem is obtaining frequent high-bandwidth I/O without sacrificing the CPU's processing power. We present a tight integration of a user-level scheduler and a zero-copy communication layer. This system closely approaches the performance of the underlying hardware in terms of both CPU power and I/O capacity.

  14. Good practices for improved nuclear power plant performance

    International Nuclear Information System (INIS)

    1989-04-01

    This report provides an overview of operational principles, practice and improvements which have contributed to good performance of eight selected world nuclear power stations. The IAEA Power Reactor Information System (PRIS) was used to identify a population of good performers. It is recognized that there are many other good performing nuclear power stations not included in this report. Specific criteria described in the introduction were used in selecting these eight stations. The information contained in this report was obtained by the staff from IAEA, Division of Nuclear Power. This was accomplished by visits to the stations and visits to a number of utility support groups and three independent organizations which provide support to more than one utility. The information in this report is intended as an aid for operating organizations to identify possible improvement initiatives to enhance plant performance. Figs and tabs

  15. Environmental awareness -- An interactive multimedia CD-ROM

    Energy Technology Data Exchange (ETDEWEB)

    Huntelmann, A.; Petruk, M.W.

    1998-07-01

As corporations move to new and innovative ways of structuring high-performance work teams, effective training is being recognized as a key to ensuring success. Time and scheduling constraints tend to limit the effectiveness of traditional approaches to training. This has led Edmonton Power Inc. to explore the use of CD-ROM based multimedia as a means of delivering individualized instruction in an effective and timely manner. This session will demonstrate a multimedia CD-ROM based course on Environmental Awareness designed for workers in the electrical utilities industry. The objective of the course is to make workers aware of their roles and responsibilities with respect to their impact on the environment. This session will also describe the instructional design strategy underlying this approach to training and will present some preliminary findings with respect to the effectiveness of this approach. Individuals who are interested in improving the effectiveness of their environmental training program, as well as individuals who are interested in understanding the strengths of multimedia CD-ROM based training, will find this session useful and informative.

  16. Achieving High Performance With TCP Over 40 GbE on NUMA Architectures for CMS Data Acquisition

    Energy Technology Data Exchange (ETDEWEB)

    Bawej, Tomasz; et al.

    2014-01-01

TCP and the socket abstraction have barely changed over the last two decades, but at the network layer there has been a giant leap from a few megabits to 100 gigabits in bandwidth. At the same time, CPU architectures have evolved into the multicore era and applications are expected to make full use of all available resources. Applications in the data acquisition domain based on the standard socket library running on a Non-Uniform Memory Access (NUMA) architecture are unable to reach full efficiency and scalability without the software being adequately aware of the IRQ (interrupt request), CPU, and memory affinities. During the first long shutdown of the LHC, the CMS DAQ system is going to be upgraded for operation from 2015 onwards, and a new software component has been designed and developed in the CMS online framework for transferring data with sockets. This software attempts to wrap the low-level socket library to ease higher-level programming with an API based on an asynchronous event-driven model similar to the DAT uDAPL API. It is an event-based application with NUMA optimizations that allows for a high throughput of data across a large distributed system. This paper describes the architecture, the technologies involved, and the performance measurements of the software in the context of the CMS distributed event building.
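The CPU-affinity side of the NUMA awareness mentioned above can be sketched with the Linux scheduling API: pin the receiving process to cores on the NUMA node closest to the NIC so socket buffers stay in local memory. The core numbering below is illustrative; a real node-to-core map comes from the OS topology:

```python
# Pin the calling process to "node 0" cores, then restore the original mask.
# os.sched_setaffinity/getaffinity are Linux-specific APIs.
import os

original = os.sched_getaffinity(0)                         # 0 = calling process
node0 = {c for c in original if c < 2} or {min(original)}  # pretend node 0 owns cores 0-1
os.sched_setaffinity(0, node0)                             # run only on node-0 cores
pinned = os.sched_getaffinity(0)
os.sched_setaffinity(0, original)                          # restore the full mask
```

IRQ affinity (steering the NIC's interrupts to the same node) is configured separately via `/proc/irq/*/smp_affinity` and is not shown here.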

  17. Behavior-aware cache hierarchy optimization for low-power multi-core embedded systems

    Science.gov (United States)

    Zhao, Huatao; Luo, Xiao; Zhu, Chen; Watanabe, Takahiro; Zhu, Tianbo

    2017-07-01

In modern embedded systems, the increasing number of cores requires efficient cache hierarchies to ensure data throughput, but such cache hierarchies are restricted by their large size and by interference accesses, which lead to both performance degradation and wasted energy. In this paper, we first propose a behavior-aware cache hierarchy (BACH) which can optimally allocate multi-level cache resources to many cores, greatly improving the efficiency of the cache hierarchy and resulting in low energy consumption. BACH takes full advantage of the explored application behaviors and runtime cache resource demands as the basis for cache allocation, so that the cache hierarchy can be optimally configured to meet the runtime demand. BACH was implemented on the GEM5 simulator. The experimental results show that the energy consumption of a three-level cache hierarchy can be reduced by 5.29% up to 27.94% compared with other key approaches, while the performance of the multi-core system even improves slightly once hardware overhead is counted in.
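Demand-driven cache allocation of this flavor can be sketched with a greedy marginal-utility policy. This is not BACH's actual algorithm, and the utility numbers are invented; it only shows the shape of behavior-aware allocation:

```python
# Hand out shared-cache ways greedily by each core's marginal miss reduction
# until the ways run out (a classic utility-based partitioning sketch).

def allocate_ways(total_ways, marginal_utility):
    """marginal_utility[core][w] = misses avoided by that core's (w+1)-th way."""
    ways = [0] * len(marginal_utility)
    for _ in range(total_ways):
        best = max(range(len(ways)),
                   key=lambda c: (marginal_utility[c][ways[c]]
                                  if ways[c] < len(marginal_utility[c]) else -1.0))
        ways[best] += 1
    return ways

# core 0 reuses data heavily, core 1 mostly streams (assumed demand curves)
alloc = allocate_ways(4, [[10, 8, 2, 1], [3, 2, 1, 1]])
```

The cache-hungry core ends up with most of the ways, while the streaming core is contained so it cannot pollute the shared levels.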

  18. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    Science.gov (United States)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

We compare several modifications to the open-source wave optics package WavePy intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer the possibility of substantial efficiency improvements compared to a fully featured workstation.
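The FFT dominates split-step wave-optics run time, which is why swapping in a faster FFT backend pays off on every platform tested. A minimal Fresnel transfer-function propagation step with NumPy's FFT (grid and wavelength values are arbitrary, and this is a generic sketch, not WavePy's code):

```python
# One angular-spectrum/Fresnel propagation step: FFT, multiply by the
# transfer function, inverse FFT. |H| = 1, so the step conserves energy.
import numpy as np

n, dx, wvl, dz = 64, 1e-3, 1e-6, 10.0   # grid points, spacing, wavelength, step (m)
fx = np.fft.fftfreq(n, dx)              # spatial frequencies
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wvl * dz * (FX**2 + FY**2))   # Fresnel transfer function

field = np.ones((n, n), dtype=complex)  # unit-amplitude plane wave
propagated = np.fft.ifft2(np.fft.fft2(field) * H)
```

A plane wave only picks up phase at zero spatial frequency, so it emerges unchanged here; the two FFTs per step are exactly the calls one would route to MKL, OpenCV, or a GPU library.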

  19. The Impact of Awareness of and Concern About Memory Performance on the Prediction of Progression From Mild Cognitive Impairment to Alzheimer Disease Dementia.

    Science.gov (United States)

    Munro, Catherine E; Donovan, Nancy J; Amariglio, Rebecca E; Papp, Kate V; Marshall, Gad A; Rentz, Dorene M; Pascual-Leone, Alvaro; Sperling, Reisa A; Locascio, Joseph J; Vannini, Patrizia

    2018-05-03

To investigate the relationship of awareness of, and concern about, memory performance to progression from mild cognitive impairment (MCI) to Alzheimer disease (AD) dementia. Participants (n = 33) had a diagnosis of MCI at baseline and a diagnosis of MCI or AD dementia at follow-up. Participants were categorized as "Stable-MCI" if they retained an MCI diagnosis at follow-up (mean follow-up = 18.0 months) or "Progressor-MCI" if they were diagnosed with AD dementia at follow-up (mean follow-up = 21.6 months). Awareness was measured using the residual from regressing a participant's objective memory score onto their subjective complaint score (i.e., the residual score). Concern was assessed using a questionnaire examining the degree of concern when forgetting. Logistic regression was used to determine whether the presence of these syndromes could predict a future diagnosis of AD dementia, and repeated-measures analysis of covariance tests were used to examine longitudinal patterns of these syndromes. Baseline anosognosia was apparent in the Progressor-MCI group, whereas participants in the Stable-MCI group demonstrated relative awareness of their memory performance. Baseline awareness scores successfully predicted whether an individual would progress to AD dementia. Neither group showed change in awareness of performance over time. Neither group showed differences in concern about memory performance at baseline or change in concern about performance over time. These data suggest that anosognosia may appear prior to the onset of AD dementia, while anosodiaphoria likely does not appear until later in the AD continuum. Additionally, neither group showed significant changes in awareness or concern over time, suggesting that change in these variables may happen over longer periods. Copyright © 2018 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
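The awareness measure here is a regression residual: objective memory scores are regressed on subjective complaint scores, and a participant's residual (memory worse than the complaints predict) indexes reduced awareness. A minimal least-squares version with invented scores, assuming simple linear regression was used:

```python
# Ordinary least squares of objective on subjective, returning residuals.

def awareness_residuals(objective, subjective):
    n = len(subjective)
    mx = sum(subjective) / n
    my = sum(objective) / n
    sxx = sum((x - mx) ** 2 for x in subjective)
    sxy = sum((x - mx) * (y - my) for x, y in zip(subjective, objective))
    slope = sxy / sxx
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(subjective, objective)]

res = awareness_residuals(objective=[10, 12, 8, 14, 6],
                          subjective=[5, 6, 4, 7, 3])
```

With these perfectly consistent scores every residual is zero; a participant whose objective score fell below the fitted line would get a negative residual, the pattern the study links to anosognosia.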

  20. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph method

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    2004-01-01

In this study, the Fuzzy Signed-Digraph method, which has been researched and applied to chemical processes, is improved and applied to fault diagnosis of the pressurizer in nuclear power plants. The Fuzzy Signed-Digraph (FSD) method applies fuzzy numbers to the Signed-Digraph (SDG) method. Current SDG methods have several merits: (1) the SDG method can directly use sensor values, not just alarms, for fault diagnosis; (2) the method can diagnose faults independently of the fault pattern; (3) the method can diagnose faults quickly because it uses cause-effect relations instead of complex control equations among the variables. However, they are not suitable for diagnosing multiple faults or for real-time diagnosis, because the unmeasured nodes in those methods must be connected to each other in order to find the single fault under the single-fault assumption. These methods need long CPU times and cannot be applied to multi-fault diagnosis. We propose a method in which the values of the unmeasured nodes are calculated from the relations between the unmeasured nodes and the measured nodes. By using this method, the CPU time for diagnosis can be reduced; this reduction makes real-time diagnosis possible. The method can also be applied to multi-fault diagnosis. It is applied to the diagnosis of the pressurizer of the KORI-2 nuclear power plant in Korea. (author)
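The key improvement (estimating unmeasured nodes from measured neighbours instead of searching over unmeasured-to-unmeasured connections) can be sketched qualitatively. The node names, signs, and one-hop propagation below are invented for illustration and are not the KORI-2 pressurizer model:

```python
# Back-propagate a measured qualitative deviation one hop through signed
# arcs to estimate unmeasured nodes as fault candidates.

# arcs: (source, target, sign); +1 means the two nodes deviate together
arcs = [("heater_power", "pressure", +1),
        ("spray_flow", "pressure", -1)]
measured = {"pressure": +1}   # qualitative deviation: +1 high, 0 normal, -1 low

def infer_unmeasured(arcs, measured):
    """Estimate each unmeasured source node from its measured neighbour."""
    estimates = {}
    for src, dst, sign in arcs:
        if dst in measured and src not in measured:
            estimates[src] = sign * measured[dst]
    return estimates

candidates = infer_unmeasured(arcs, measured)  # fault-candidate deviations
```

High pressure is consistent with high heater power or low spray flow; because each unmeasured node is scored locally, the same pass handles multiple simultaneous faults without a combinatorial search.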

  1. Awareness of orthodontists regarding oral hygiene performance during active orthodontic treatment.

    Science.gov (United States)

    Berlin-Broner, Y; Levin, L; Ashkenazi, M

    2012-09-01

The aim of the present study was to assess orthodontists' awareness of several home and professional preventive measures during active orthodontic treatment, according to patients' reports. A structured questionnaire was distributed to 122 patients undergoing active orthodontic treatment with fixed appliances. Patients were treated by 38 different orthodontists. The questionnaire assessed the instructions patients received from their orthodontist concerning maintenance of their oral hygiene during orthodontic treatment. Most of the patients (94%) reported that their orthodontists informed them at least once about the importance of tooth brushing, and 74.5% received instructions for correct performance of tooth brushing or, alternatively, were referred to a dental hygienist. However, only 24.5% of the patients reported that their orthodontist instructed them to use the correct fluoride concentration in their toothpaste, to use a daily fluoride mouthwash (31.5%), or to brush their teeth once a week with a high-concentration fluoride gel (Elmex gel; 10.2%). Only 13.8% received an application of high-concentration fluoride gel or varnish at the dental office, and 52% of the patients reported that their orthodontist verified that they attend regular check-ups with their dentist. A significant positive correlation was found between explaining to patients the importance of tooth brushing and the following variables: instructing them on how to brush their teeth correctly (p<0.0001), explaining which type of toothbrush is recommended for orthodontic patients (p=0.002), recommending a daily fluoride oral rinse (p=0.036), and referring them to periodic check-ups (p=0.024). Orthodontists should increase their awareness of, and commitment to, instructing their patients on how to maintain good oral hygiene in order to prevent caries and periodontal disease during orthodontic treatment.

  2. Effects of Strength vs. Ballistic-Power Training on Throwing Performance.

    Science.gov (United States)

    Zaras, Nikolaos; Spengos, Konstantinos; Methenitis, Spyridon; Papadopoulos, Constantinos; Karampatsos, Giorgos; Georgiadis, Giorgos; Stasinaki, Aggeliki; Manta, Panagiota; Terzis, Gerasimos

    2013-01-01

The purpose of the present study was to investigate the effects of 6 weeks of strength vs. ballistic-power (Power) training on shot put throwing performance in novice throwers. Seventeen novice male shot put throwers were divided into Strength (N = 9) and Power (n = 8) groups. The following measurements were performed before and after the training period: shot put throws, jumping performance (CMJ), Wingate anaerobic performance, 1RM strength, ballistic throws, and evaluation of architectural and morphological characteristics of the vastus lateralis. Throwing performance increased significantly but similarly after Strength and Power training (7.0-13.5% vs. 6.0-11.5%, respectively). Muscular strength in leg press increased more after Strength than after Power training (43% vs. 21%, respectively), while Power training induced an 8.5% increase in CMJ performance and 9.0-25.8% in ballistic throws. Peak power during the Wingate test increased similarly after Strength and Power training. Muscle thickness increased only after Strength training (10%, p < 0.05). These results suggest that shot put throwing performance can be increased similarly after six weeks of either strength or ballistic-power training in novice throwers, but with dissimilar muscular adaptations. Key points: (1) Ballistic-power training with 30% of 1RM is equally effective in increasing shot put performance as strength training in novice throwers during a short training cycle of six weeks. (2) In novice shot putters with relatively low initial muscle strength/mass, short-term strength training might be more important, since it can increase both muscle strength and shot put performance. (3) The ballistic type of power training resulted in a significant increase of the mass of type IIx muscle fibres and no change in their proportion; thus, this type of training might be used effectively during the last weeks before competition, when the strength training load is usually reduced, in order to increase muscle power and shot put performance in novice shot putters.

  3. EFFECTS OF STRENGTH VS. BALLISTIC-POWER TRAINING ON THROWING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Nikolaos Zaras

    2013-03-01

The purpose of the present study was to investigate the effects of 6 weeks of strength vs. ballistic-power (Power) training on shot put throwing performance in novice throwers. Seventeen novice male shot put throwers were divided into Strength (N = 9) and Power (n = 8) groups. The following measurements were performed before and after the training period: shot put throws, jumping performance (CMJ), Wingate anaerobic performance, 1RM strength, ballistic throws and evaluation of architectural and morphological characteristics of vastus lateralis. Throwing performance increased significantly but similarly after Strength and Power training (7.0-13.5% vs. 6.0-11.5%, respectively). Muscular strength in leg press increased more after Strength than after Power training (43% vs. 21%, respectively), while Power training induced an 8.5% increase in CMJ performance and 9.0-25.8% in ballistic throws. Peak power during the Wingate test increased similarly after Strength and Power training. Muscle thickness increased only after Strength training (10%, p < 0.05). Muscle fibre Cross Sectional Area (fCSA) increased in all fibre types after Strength training by 19-26% (p < 0.05), while only type IIx fibres hypertrophied significantly after Power training. The percentage of type IIx fibres decreased after Strength but not after Power training. These results suggest that shot put throwing performance can be increased similarly after six weeks of either strength or ballistic power training in novice throwers, but with dissimilar muscular adaptations.

  4. Performance of Flow-Aware Networking in LTE backbone

    DEFF Research Database (Denmark)

    Sniady, Aleksander; Soler, José

    2012-01-01

technologies, such as Long Term Evolution (LTE). This paper proposes the use of a modified Flow-Aware Networking (FAN) technique for enhancing Quality of Service (QoS) in the all-IP transport networks underlying the LTE backbone. The results obtained with OPNET Modeler show that FAN, in spite of being relatively...

  5. An analysis of the awareness and performance of radiation workers' radiation/radioactivity protection in medical institutions : Focused on Busan regional medical institutions

    Energy Technology Data Exchange (ETDEWEB)

    Park, Cheol Koo [Dept. of Radiological Science, Graduate School of Catholic University of Pusan, Busan (Korea, Republic of); Hwang, Chul Hwan [Dept. of Radiation Oncology, Pusan National University Hospital, Busan (Korea, Republic of); Kim, Dong Hyun [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Busan (Korea, Republic of)

    2017-03-15

The purpose of this study was to investigate the radiation-safety-management awareness and protective-behavior performance of radiation workers in medical institutions. Data were collected from 267 radiation workers in medical institutions using structured questionnaires. The results showed that radiation safety-management awareness and performance were high in the 40s and 50s age groups and in the higher-education group. Analysis according to radiation safety-management knowledge showed that the 'know very well' group had higher awareness and performance scores. Analysis according to the degree of safety-management effort showed high awareness and performance scores in the group 'receiving various education or studying the safety-management contents through books'. Among the sub-factors, the highest positive correlations were between the practitioner and personal perspectives and between the patient and patient-caretaker perspectives. Therefore, radiation safety management for workers, patients, and caretakers should be carried out through continuous education in radiation safety via the various routes available to radiation workers at medical institutions.

  6. First Evaluation of the CPU, GPGPU and MIC Architectures for Real Time Particle Tracking based on Hough Transform at the LHC

    CERN Document Server

    Halyo, V.; Lujan, P.; Karpusenko, V.; Vladimirov, A.

    2014-04-07

Recent innovations focused around parallel processing, either through systems containing multiple processors or processors containing multiple cores, hold great promise for enhancing the performance of the trigger at the LHC and extending its physics program. The flexibility of the CMS/ATLAS trigger system allows for easy integration of computational accelerators, such as NVIDIA's Tesla Graphics Processing Unit (GPU) or Intel's Xeon Phi, in the High Level Trigger. These accelerators have the potential to provide faster or more energy-efficient event selection, thus opening up possibilities for new complex triggers that were not previously feasible. At the same time, it is crucial to explore the performance limits achievable on the latest generation of multicore CPUs with the use of the best software optimization methods. In this article, a new tracking algorithm based on the Hough transform is evaluated for the first time on a multi-core Intel Xeon E5-2697v2 CPU, an NVIDIA Tesla K20c GPU, and an Intel Xeon Phi...
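The kernel being benchmarked can be illustrated with a minimal (r, θ) Hough transform over a handful of hit points: each hit votes for the parameter bins of every line it could lie on, and collinear hits pile their votes into a common bin. Bin counts and geometry below are arbitrary, and this scalar sketch ignores the vectorization the article studies:

```python
# Minimal line-finding Hough transform: accumulate (theta, r) votes per hit.
import math

def hough_votes(points, n_theta=18, n_r=20, r_max=20.0):
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            r_bin = int((r + r_max) / (2.0 * r_max) * n_r)
            acc[(t, r_bin)] = acc.get((t, r_bin), 0) + 1
    return acc

hits = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0), (4.0, 4.0)]  # collinear track stubs
acc = hough_votes(hits)
peak = max(acc.values())  # all four hits agree on one (theta, r) bin
```

The inner double loop is embarrassingly parallel over hits and angle bins, which is exactly why the algorithm maps well onto GPUs and many-core CPUs.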

  7. A Closed-Loop Model of Operator Visual Attention, Situation Awareness, and Performance Across Automation Mode Transitions.

    Science.gov (United States)

    Johnson, Aaron W; Duda, Kevin R; Sheridan, Thomas B; Oman, Charles M

    2017-03-01

    This article describes a closed-loop, integrated human-vehicle model designed to help understand the underlying cognitive processes that influenced changes in subject visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment. Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator's estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness. We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator's estimates of system states. The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention ≤3.6% for all simulator instruments. The model's predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data. Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates. Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator's visual attention during control mode transitions can produce reallocations in situation awareness of certain states.

  8. Ultra-low power sensor for autonomous non-invasive voltage measurement in IoT solutions for energy efficiency

    Science.gov (United States)

    Villani, Clemente; Balsamo, Domenico; Brunelli, Davide; Benini, Luca

    2015-05-01

    Monitoring current and voltage waveforms is fundamental to assessing the power consumption of a system and improving its energy efficiency. In this paper we present a smart meter for power consumption which does not need any electrical contact with the load or its conductors, and which can measure both current and voltage. Power metering becomes easier and safer, and it is also self-sustainable because an energy harvesting module based on inductive coupling powers the entire device from the output of the current sensor. A low-cost 32-bit wireless CPU architecture is used for data filtering and processing, while a wireless transceiver sends data via the IEEE 802.15.4 standard. We describe in detail the innovative contact-less voltage measurement system, which is based on capacitive coupling and on an algorithm that exploits two pre-processing channels. The system self-calibrates to perform precise measurements regardless of the cable type. Experimental results demonstrate accuracy comparable to that of commercial high-cost instruments, with negligible deviations.

  9. The effect of subjective awareness measures on performance in artificial grammar learning task.

    Science.gov (United States)

    Ivanchei, Ivan I; Moroshkina, Nadezhda V

    2018-01-01

    Systematic research into implicit learning requires well-developed awareness-measurement techniques. Recently, trial-by-trial measures have been widely used. However, they can increase the complexity of a study because they are an additional experimental variable. We tested the effects of these measures on performance in an artificial grammar learning study. Four groups of participants were assigned to different awareness-measure conditions: confidence ratings, post-decision wagering, decision strategy attribution, or none. Decision-strategy-attribution participants demonstrated better grammar learning and longer response times compared to controls. They also exhibited a conservative bias. Grammaticality by itself was a stronger predictor of string endorsement in the decision-strategy-attribution group compared to the other groups. Confidence ratings and post-decision wagering only affected the response times. These results were supported by an additional experiment that used a balanced chunk-strength design. We conclude that a decision-strategy-attribution procedure may force participants to adopt an analytical decision-making strategy and rely mostly on conscious knowledge of the artificial grammar. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. hybridMANTIS: a CPU-GPU Monte Carlo method for modeling indirect x-ray detectors with columnar scintillators

    Science.gov (United States)

    Sharma, Diksha; Badal, Andreu; Badano, Aldo

    2012-04-01

    The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open-source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, like on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large-area detectors. Second, we modify PENELOPE (the open-source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result
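    The load-balancing idea, dynamically assigning independent optical showers to whichever compute resource will finish them soonest, can be illustrated with a simple greedy scheduler. This is a toy stand-in, not hybridMANTIS code; the cost and speed values and all names are hypothetical:

```python
def balance_showers(shower_costs, worker_speeds):
    """Greedy scheduler: take showers largest-first and give each one
    to the worker (e.g. GPU vs. CPU core) that would finish it
    earliest given its current backlog and relative speed.
    Returns the per-worker assignment and the overall makespan."""
    finish = [0.0] * len(worker_speeds)      # projected finish time per worker
    assignment = [[] for _ in worker_speeds]
    for i in sorted(range(len(shower_costs)),
                    key=lambda i: -shower_costs[i]):
        w = min(range(len(finish)),
                key=lambda w: finish[w] + shower_costs[i] / worker_speeds[w])
        finish[w] += shower_costs[i] / worker_speeds[w]
        assignment[w].append(i)
    return assignment, max(finish)

# One GPU 4x faster than one CPU core; shower costs in arbitrary units
assignment, makespan = balance_showers([8, 4, 3, 3, 2], [4.0, 1.0])
```

    The fast worker absorbs most of the work while the slow one takes just enough to keep both busy, which is the same intuition behind hiding the optical transport behind the x-ray/electron transport.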

  11. Performance of on-power fuelling equipment at Rajasthan Atomic Power Station

    International Nuclear Information System (INIS)

    Jayabarathan, S.; Gopalakrishnan, S.

    1977-01-01

    Natural uranium reactors, on account of their intrinsically low reactivity, need frequent refuelling. The Rajasthan Atomic Power Station, based on natural uranium reactors, has therefore been provided with an on-power fuel handling system, which was installed in 1972. Its performance has met the design intent and operational objectives, which are enumerated. However, continuous fuelling for 7 to 10 days has not been possible because frequent maintenance of the refuelling system is needed on account of certain deficiencies, the major one being heavy water leakage. For better performance, installation of a programmable logic controller is suggested. Mention has also been made of the shortage of skilled manpower required for maintenance, which leads to quick depletion of the man-rem allowance of all the available personnel trained for maintenance work. (M.G.B.)

  12. EHV AC undergrounding electrical power performance and planning

    CERN Document Server

    Benato, Roberto

    2014-01-01

    Analytical methods for assessing cable performance in EHV AC electrical power systems are discussed in this comprehensive reference. Descriptions of energization, power quality, cable safety constraints and more guide readers in cable planning and power network operations.

  13. Storage element performance optimization for CMS analysis jobs

    International Nuclear Information System (INIS)

    Behrmann, G; Dahlblom, J; Guldmyr, J; Happonen, K; Lindén, T

    2012-01-01

    Tier-2 computing sites in the Worldwide Large Hadron Collider Computing Grid (WLCG) host CPU resources (Compute Element, CE) and storage resources (Storage Element, SE). The vast amount of data that needs to be processed from the Large Hadron Collider (LHC) experiments requires good and efficient use of the available resources. Achieving good CPU efficiency for end users' analysis jobs requires that the performance of the storage system is able to scale with I/O requests from hundreds or even thousands of simultaneous jobs. In this presentation we report on the work on improving the SE performance at the Helsinki Institute of Physics (HIP) Tier-2 used for the Compact Muon Solenoid (CMS) experiment at the LHC. Statistics from CMS grid jobs are collected and stored in the CMS Dashboard for further analysis, which allows for easy performance monitoring by the sites and by the CMS collaboration. As part of the monitoring framework, CMS used the JobRobot, which sent 100 analysis jobs to each site every four hours. CMS also uses the HammerCloud tool for site monitoring and stress testing, and it has replaced the JobRobot. The performance of the analysis workflow submitted with JobRobot or HammerCloud can be used to track the performance due to site configuration changes, since the analysis workflow is kept the same for all sites and for months at a time. The CPU efficiency of the JobRobot jobs at HIP was increased by approximately 50% to more than 90% by tuning the SE and by improvements in the CMSSW and dCache software. The performance of the CMS analysis jobs improved significantly too. Similar work has been done on other CMS Tier sites, since on average the CPU efficiency for CMSSW jobs has increased during 2011. Better monitoring of the SE allows faster detection of problems, so that the performance level can be kept high. 
The next storage upgrade at HIP consists of SAS disk enclosures which can be stress tested on demand with HammerCloud workflows, to make sure that the I/O-performance

  14. Wide-bandwidth low-voltage PLL for PowerPC™ microprocessors

    Science.gov (United States)

    Alvarez, Jose; Sanchez, Hector; Gerosa, Gianfranco; Countryman, Roger

    1995-04-01

    A 3.3 V Phase-Locked Loop (PLL) clock synthesizer implemented in 0.5 micron CMOS technology is described. The PLL supports internal-to-external clock frequency ratios of 1, 1.5, 2, 3, and 4 as well as numerous static power-down modes for PowerPC™ microprocessors. The CPU clock lock range spans from 6 to 175 MHz. Lock times below 15 μs, PLL power dissipation below 10 mW, and phase error and jitter below ±100 ps have been measured. The total area of the PLL is 0.52 mm².

  15. The readout performance evaluation of PowerPC

    International Nuclear Information System (INIS)

    Chu Yuanping; Zhang Hongyu; Zhao Jingwei; Ye Mei; Tao Ning; Zhu Kejun; Tang Suqiu; Guo Yanan

    2003-01-01

    PowerPC, a powerful low-cost embedded computer, has been one of the most important research objects in recent years in the BESIII data acquisition system project. Research on embedded systems and embedded computers has achieved many important results in the field of High Energy Physics, especially in data acquisition systems. One of the key points in designing an acquisition system using PowerPC is to evaluate the readout ability of PowerPC correctly. This paper introduces some tests of PowerPC readout performance. (authors)

  16. The Effect of Activating Metacognitive Strategies on the Listening Performance and Metacognitive Awareness of EFL Students

    Science.gov (United States)

    Rahimirad, Maryam; Shams, Mohammad Reza

    2014-01-01

    This study investigates the effect of activating metacognitive strategies on the listening performance of English as a foreign language (EFL) university students and explores the impact of such strategies on their metacognitive awareness of the listening task. The participants were N = 50 students of English literature at the state university of…

  17. Smarter Grid through Collective Intelligence: User Awareness for Enhanced Performance

    Directory of Open Access Journals (Sweden)

    Marcel Macarulla

    2015-02-01

    Full Text Available Purpose – This paper examines the scenario of a university campus, and the impact on energy consumption of the awareness of building managers and users (lecturers, students and administrative staff). Design/methodology/approach – This study draws a comparison between direct fruition of the information by skilled (building managers) and unskilled (users) recipients, and the effect of peer pressure and beneficial competition between users in applying good practices. In fact, the use of edutainment, implemented by the automatic publication on the Twitter platform of energy consumption data from different users, can promote general users' awareness of best practices and their effect on energy consumption. In addition, the use of a social network platform allows interaction between users, sharing experiences and increasing the collective intelligence in the energy efficiency field. Findings – Tests revealed that enhanced awareness helped managers to identify strategies that, if implemented in the whole building, could reduce energy consumption by about 6%. The tests on university users' awareness hint that the expected energy savings can reach 9%, in addition to the previous 6%. In fact, the measures were implemented in one of the three common rooms, and at building level the total energy consumption decreased by 3.42%, proving that a great deal of energy can be saved by capillary actions targeting society at large. The emerging collective intelligence of the final users ends up having a stronger effect on energy saving than the actions of more educated professionals. Practical implications – The approach used in this paper moved the burden of evolving the energy saving strategies to new scenarios onto the collective intelligence of the users, by connecting the users – and their experiences in new scenarios – using a social network to provide guidelines to other users involved in the same decision processes

  18. Provenance-aware optimization of workload for distributed data production

    Science.gov (United States)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2017-10-01

    Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. With petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the problem's complexity well or are dedicated to only one specific aspect of the problem (CPU, network or storage). Previously we developed a new job scheduling approach dedicated to distributed data production, an essential part of data processing in HENP (preprocessing in big data terminology). In this contribution, we discuss load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations which demonstrate the advantage over standard scheduling policies for the new use case. Multiple sources, or provenance, are common in the computing models of many applications, where the data may be copied to several destinations. The initial input data set would hence already be partially replicated to multiple locations, and the task of the scheduler is to maximize overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide scope of simulations considering realistic sizes of computational Grids and various input data distributions.

  19. Wind turbine power performance verification in complex terrain and wind farms

    DEFF Research Database (Denmark)

    Friis Pedersen, Troels; Gjerding, S.; Enevoldsen, P.

    2002-01-01

    The IEC/EN 61400-12 Ed 1 standard for wind turbine power performance testing is being revised. The standard will be divided into four documents. The first of these is more or less a revision of the existing document on power performance measurements on individual wind turbines. The second is a power performance verification procedure for individual wind turbines. The third is a power performance measurement procedure for whole wind farms, and the fourth is a power performance measurement procedure for non-grid (small) wind turbines. This report presents work that was made to support the basis for the revision, which has then been investigated in more detail. The work has given rise to a range of conclusions and recommendations regarding: guarantees on power curves in complex terrain; investors' and bankers' experience with verification of power curves; and power performance in relation to regional correction curves for Denmark.

  20. When does power disparity help or hurt group performance?

    Science.gov (United States)

    Tarakci, Murat; Greer, Lindred L; Groenen, Patrick J F

    2016-03-01

    Power differences are ubiquitous in social settings. However, the question of whether groups with higher or lower power disparity achieve better performance has thus far received conflicting answers. To address this issue, we identify 3 underlying assumptions in the literature that may have led to these divergent findings, including a myopic focus on static hierarchies, an assumption that those at the top of hierarchies are competent at group tasks, and an assumption that equality is not possible. We employ a multimethod set of studies to examine these assumptions and to understand when power disparity will help or harm group performance. First, our agent-based simulation analyses show that by unpacking these common implicit assumptions in power research, we can explain earlier disparate findings: power disparity benefits group performance when it is dynamically aligned with the power holder's task competence, and harms group performance when held constant and/or not aligned with task competence. Second, our empirical findings in both a field study of fraud investigation groups and a multiround laboratory study corroborate the simulation results. We thereby contribute to research on power by highlighting a dynamic understanding of power in groups and explaining how current implicit assumptions may lead to opposing findings. (c) 2016 APA, all rights reserved.

  1. Is it me or not me? Modulation of perceptual-motor awareness and visuomotor performance by mindfulness meditation

    Directory of Open Access Journals (Sweden)

    Naranjo José

    2012-07-01

    Full Text Available Abstract Background Attribution of agency involves the ability to distinguish our own actions and their sensory consequences, which are self-generated, from those generated by external agents. There are several pathological cases in which motor awareness is dramatically impaired. On the other hand, awareness-enhancement practices like tai-chi and yoga are shown to improve perceptual-motor awareness. Meditation is known to have positive impacts on perception, attention and consciousness itself, but it is still unclear how meditation changes sensorimotor integration processes and awareness of action. The aim of this study was to investigate how visuomotor performance and self-agency are modulated by mindfulness meditation. This was done by studying meditators' performance during a conflicting reaching task, where the congruency between actions and their consequences is gradually altered. This task was presented to novices in meditation before and after an intensive 8-week mindfulness meditation training (MBSR). The data from this sample were compared to a group of long-term meditators and a group of healthy non-meditators. Results Mindfulness resulted in a significant improvement in motor control during perceptual-motor conflict in both groups. Novices in mindfulness demonstrated a strongly increased sensitivity to detect external perturbation after the MBSR intervention. Both mindfulness groups demonstrated a speed/accuracy trade-off in comparison to their respective controls. This resulted in slower and more accurate movements. Conclusions Our results suggest that mindfulness meditation practice is associated with slower body movements, which in turn may lead to an increase in monitoring of body states and optimized re-adjustment of movement trajectory, and consequently to better motor performance. This extended conscious monitoring of perceptual and motor cues may explain how, while dealing with perceptual-motor conflict, improvement in motor

  2. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    Science.gov (United States)

    Mahatme, Nihaar N.

    The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. These radiation-induced errors at the terrestrial level are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can lead to an inversion of the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions that are part of combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients. 
In older technology generations the probability of errors in flip-flops due to SETs being latched was much lower compared to direct strikes on flip-flops or SRAMs leading to

  3. Power Performance Verification of a Wind Farm Using the Friedman's Test.

    Science.gov (United States)

    Hernandez, Wilmar; López-Presa, José Luis; Maldonado-Correa, Jorge L

    2016-06-03

    In this paper, a method for verification of the power performance of a wind farm is presented. The method is based on Friedman's test, a nonparametric statistical inference technique, and it uses the information collected by the SCADA system from the sensors embedded in the wind turbines in order to carry out the power performance verification of a wind farm. Here, the guaranteed power curve of the wind turbines is used as one more wind turbine of the wind farm under assessment, and a multiple comparison method is used to investigate differences between pairs of wind turbines with respect to their power performance. The proposed method indicates whether the power performance of the specific wind farm under assessment differs significantly from what would be expected, and it also allows wind farm owners to know whether their wind farm has either a perfect power performance or an acceptable power performance. Finally, the power performance verification of an actual wind farm is carried out. The results of the application of the proposed method showed that the power performance of the specific wind farm under assessment was acceptable.
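    Friedman's test itself is straightforward to compute from SCADA data: within each observation block (e.g. a wind-speed bin or time interval), the k "treatments" (the wind turbines, with the guaranteed power curve treated as one more turbine) are ranked, and the statistic measures how far the rank sums deviate from what chance would give. A sketch under those assumptions follows; the function name and example data are illustrative, and in practice one would use a library routine such as scipy.stats.friedmanchisquare:

```python
def friedman_statistic(blocks):
    """Friedman chi-square statistic.

    blocks: n tuples, one per block, each holding the k values to be
    compared (e.g. power outputs of k turbines in the same conditions).
    Values are ranked within each block (ties get averaged ranks); a
    large statistic means the treatments differ systematically.
    """
    n, k = len(blocks), len(blocks[0])
    rank_sums = [0.0] * k
    for obs in blocks:
        order = sorted(range(k), key=lambda j: obs[j])
        ranks = [0.0] * k
        i = 0
        while i < k:  # assign average ranks over runs of tied values
            j = i
            while j + 1 < k and obs[order[j + 1]] == obs[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi^2 = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)

# Toy data: the third "turbine" consistently underperforms
blocks = [(100, 98, 80), (110, 112, 90), (95, 96, 75), (105, 101, 84)]
stat = friedman_statistic(blocks)
```

    Comparing the statistic against the chi-square distribution with k-1 degrees of freedom then gives the significance level, after which pairwise multiple comparisons (as in the paper) locate which turbines differ.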

  4. Identification of Barriers to Stroke Awareness and Risk Factor Management Unique to Hispanics

    Directory of Open Access Journals (Sweden)

    Marina Martinez

    2015-12-01

    Full Text Available Barriers to risk factor control may differ by race/ethnicity. The goal of this study was to identify barriers to stroke awareness and risk factor management unique to Hispanics as compared to non-Hispanic whites (NHWs). We performed a prospective study of stroke patients from an academic Stroke Center in Arizona and surveyed members of the general community. Questionnaires included the Duke Social Support Index (DSSI), the Multidimensional Health Locus of Control (MHLC) Scale, a stroke barriers questionnaire, and a Stroke Awareness Test. Of 145 stroke patients surveyed (72 Hispanic; 73 NHW), Hispanics scored lower on the Stroke Awareness Test compared to NHWs (72.5% vs. 79.1%, p = 0.029). Hispanic stroke patients also reported greater barriers related to medical knowledge, medication adherence, and healthcare access (p < 0.05 for all). Hispanics scored higher on the “powerful others” sub-scale (11.3 vs. 10, p < 0.05) of the MHLC. Of 177 members of the general public surveyed, Hispanics had lower stroke awareness compared to NHWs and tended to have lower awareness than Hispanic stroke patients. These results suggest that Hispanic stroke patients perceive less control over their health, experience more healthcare barriers, and demonstrate lower rates of stroke literacy. Interventions for stroke prevention and education in Hispanics should address these racial/ethnic differences in stroke awareness and barriers to risk factor control.

  5. Fiscal 1999 achievement report. Development of technology for reducing power consumption during standby (Research and development of technologies for application of standby power reduction to domestic and office-automation appliances); 1999 nendo taikiji shohi denryoku sakugen gijutsu kaihatsu seika hokokusho. Kaden oyobi OA kiki no taiki denryoku sakugen jitsuyoka gijutsu kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-05-01

    Efforts are exerted to develop power-efficient modules to be built into electrical products to reduce standby power consumption in domestic and office-automation appliances. In this study, television sets, audio sets, and air conditioners were selected from among domestic appliances and, from among office-automation appliances, notebook-size and desktop personal computers were selected. The standby power consumption is to be reduced to 3 mW for domestic appliances, to 0.2 W for notebook-size personal computers, and to 1/10-1/200 of the level currently being consumed in the case of desktop personal computers. For domestic appliances, a power-efficient module not insulated from the AC power line was developed, to be built into a CPU-aided appliance to be turned on and off by remote control for the reduction of its standby power to 3 mW. For notebook-size personal computers, a power-efficient power source insulated from the AC power line was developed, which consumes only 0.2 W of standby power. It was built into a marketed notebook-size personal computer and tested for performance. For desktop personal computers, a 25 mW power source insulated from the AC power line was fabricated and tested for performance. (NEDO)

  6. QoE-Driven Energy-Aware Multipath Content Delivery Approach for MPTCP-Based Mobile Phones

    Institute of Scientific and Technical Information of China (English)

    Yuanlong Cao; Shengyang Chen; Qinghua Liu; Yi Zuo; Hao Wang; Minghe Huang

    2017-01-01

    Mobile phones equipped with multiple wireless interfaces can increase their goodput performance by making use of concurrent transmissions over multiple paths, enabled by Multipath TCP (MPTCP). However, utilizing MPTCP for data delivery may generally result in higher energy consumption, while the battery power of a mobile phone is limited. Thus, how to optimize the energy usage becomes very crucial and urgent. In this paper, we propose MPTCP-QE, a novel quality-of-experience (QoE)-driven energy-aware multipath content delivery approach for MPTCP-based mobile phones. The main idea of MPTCP-QE is as follows: it first provides an application-rate-aware energy-efficient subflow management strategy to trade off throughput performance against energy consumption for mobile phones; it then uses an available-bandwidth-aware congestion window fast recovery strategy to let a sender avoid unnecessary slow-start and utilize wireless resources quickly; and it further introduces a novel receiver-driven energy-efficient SACK strategy to help a receiver detect SACK loss in a timely manner and trigger loss recovery in a more energy-efficient way. The simulation results show that with MPTCP-QE, energy efficiency is enhanced while the performance level is maintained compared to existing MPTCP solutions.

  7. Peripheral Social Awareness Information in Collaborative Work.

    Science.gov (United States)

    Spring, Michael B.; Vathanophas, Vichita

    2003-01-01

    Discusses being aware of other members of a team in a collaborative environment and reports on a study that examined group performance on a task that was computer mediated with and without awareness information. Examines how an awareness tool impacts the quality of a collaborative work effort and the communications between group members.…

  8. Thermo-economic performance of HTGR Brayton power cycles

    International Nuclear Information System (INIS)

    Linares, J. L.; Herranz, L. E.; Moratilla, B. Y.; Fernandez-Perez, A.

    2008-01-01

    The high temperatures reached in High and Very High Temperature Reactors (VHTRs) result in thermal efficiencies substantially higher than those of current nuclear power plants. A number of studies, mainly driven by achieving optimum thermal performance, have explored several layouts. However, economic assessments of power cycle configurations for innovative systems, although necessarily uncertain at this time, may bring valuable information in relative terms concerning power cycle optimization. This paper investigates the thermal and economic performance of direct Brayton cycles. Based on the available parameters and settings of different designs of HTGR power plants (GTHTR-300 and PBMR) and using the first and second laws of thermodynamics, the effects of compressor inter-cooling and of the compressor-turbine arrangement (i.e., single vs. multiple axes) on thermal efficiency have been estimated. The economic analysis has been based on the El-Sayed methodology and on the indirect derivation of the reactor capital investment. The results of the study suggest that a 1-axis inter-cooled power cycle has a thermal performance similar to that of the 3-axis one (around 50%) and, what is more, it is substantially less taxed. A sensitivity study allowed the potential impact of optimizing several variables on cycle performance to be assessed. Further, the cycle component costs have been estimated and compared. (authors)

  9. Awareness as a foundation for developing effective spatial data infrastructures

    DEFF Research Database (Denmark)

    Clausen, Christian Bech; Rajabifard, Abbas; Enemark, Stig

    2006-01-01

    data. But what makes collaboration effective and successful? For example people often resist sharing data across organizational boundaries due to loss of control, power and independency. In the spatial community, the term awareness is often used when discussing issues concerned with inter-organizational...... addresses the problems spatial organizations currently encounter. As a result, the focus of this paper is on the nature and role of awareness. It explores why and how awareness plays a fundamental role in overcoming organizational constraints and in developing collaboration between organizations. The paper...... discusses the concept of awareness in the area of organizational collaboration in the spatial community, explains the important role awareness plays in the development of spatial data infrastructures, and introduces a methodology to promote awareness. Furthermore, the paper aims to make people...

  10. Study on the BES Ⅲ offline software performance

    International Nuclear Information System (INIS)

    Zhang Xiaomei; Sun Gongxing

    2011-01-01

    Performance monitoring and analysis of the BESⅢ offline software system are very useful for software optimization and for improving CPU and memory usage. This paper presents a feasible performance monitoring service based on GAUDI and reports performance tests and analysis of the BESⅢ simulation and reconstruction carried out with the service. (authors)
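    A GAUDI-style monitoring service of the kind described boils down to timing each algorithm and sampling its memory use per call. A minimal sketch of that idea in Python; the class and algorithm names here are hypothetical, not the BESIII or GAUDI API:

    ```python
    import time
    import tracemalloc

    class PerfMonitor:
        """Collects per-algorithm CPU time and peak memory, in the spirit of a
        GAUDI-style auditor. Names are illustrative only."""

        def __init__(self):
            self.records = {}  # algorithm name -> list of (cpu_seconds, peak_bytes)

        def measure(self, name, func, *args, **kwargs):
            tracemalloc.start()
            t0 = time.process_time()
            result = func(*args, **kwargs)
            cpu = time.process_time() - t0
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            self.records.setdefault(name, []).append((cpu, peak))
            return result

    def simulate_event(n):
        # Stand-in for a reconstruction algorithm: burn some CPU and memory.
        data = [i * i for i in range(n)]
        return sum(data)

    monitor = PerfMonitor()
    for _ in range(3):
        monitor.measure("TrackReco", simulate_event, 10_000)

    for name, samples in monitor.records.items():
        avg_cpu = sum(s[0] for s in samples) / len(samples)
        print(f"{name}: {len(samples)} calls, avg CPU {avg_cpu:.4f}s")
    ```

    Aggregating such samples per algorithm is what makes hotspots in simulation and reconstruction visible without instrumenting each algorithm by hand.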

  11. Solar power plant performance evaluation: simulation and experimental validation

    Science.gov (United States)

    Natsheh, E. M.; Albarbar, A.

    2012-05-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. The perturb and observe (P&O) algorithm is used for maximizing the generated power through a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out on an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. The residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other causes of PV panel performance degradation, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.
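    The perturb and observe algorithm named above is a simple hill-climb on the power-voltage curve: keep perturbing the operating voltage in the direction that last increased power, and reverse when power drops. A self-contained sketch against a toy PV curve (the curve and step size are illustrative, not the paper's model):

    ```python
    def pv_power(v):
        # Toy PV power curve with a single maximum near v = 30 V (illustrative only).
        return max(0.0, -0.5 * (v - 30.0) ** 2 + 450.0)

    def perturb_and_observe(v0, step=0.5, iters=200):
        """Classic P&O hill-climbing: keep perturbing in the direction that
        increased power, reverse otherwise."""
        v = v0
        p_prev = pv_power(v)
        direction = 1.0
        for _ in range(iters):
            v += direction * step
            p = pv_power(v)
            if p < p_prev:
                direction = -direction  # power dropped: reverse the perturbation
            p_prev = p
        return v

    v_mpp = perturb_and_observe(v0=20.0)
    print(f"operating point settled near {v_mpp:.1f} V")
    ```

    Once at the maximum the tracker oscillates within one step of it, which is the well-known trade-off of P&O between tracking speed and steady-state ripple.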

  12. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
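    The staggered conjugate gradient singled out above is, at its core, the standard CG recurrence applied to a large sparse positive-definite operator. A minimal dense-matrix sketch of that recurrence; this is generic CG, not the actual MILC staggered Dirac operator or its QPhiX/QUDA implementations:

    ```python
    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """Solve A x = b for symmetric positive-definite A (lists of lists).
        MILC applies this same iteration to the staggered fermion matrix."""
        n = len(b)
        x = [0.0] * n
        r = b[:]                       # residual r = b - A x, with x = 0
        p = r[:]
        rs_old = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
            x = [x[i] + alpha * p[i] for i in range(n)]
            r = [r[i] - alpha * Ap[i] for i in range(n)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol:
                break
            p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
            rs_old = rs_new
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    x = conjugate_gradient(A, b)
    print(x)  # close to the exact solution [1/11, 7/11]
    ```

    In production codes the matrix-vector product dominates, which is why the ports discussed here concentrate on vectorizing and parallelizing exactly that kernel.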

  13. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  14. Effects of Interruptibility-Aware Robot Behavior

    OpenAIRE

    Banerjee, Siddhartha; Silva, Andrew; Feigh, Karen; Chernova, Sonia

    2018-01-01

    As robots become increasingly prevalent in human environments, there will inevitably be times when a robot needs to interrupt a human to initiate an interaction. Our work introduces the first interruptibility-aware mobile robot system, and evaluates the effects of interruptibility-awareness on human task performance, robot task performance, and on human interpretation of the robot's social aptitude. Our results show that our robot is effective at predicting interruptibility at high accuracy, ...

  15. On the experience of feeling powerful: perceived power moderates the effect of stereotype threat on women's math performance.

    Science.gov (United States)

    Van Loo, Katie J; Rydell, Robert J

    2013-03-01

    This research examined whether feeling powerful can eliminate the deleterious effect of stereotype threat (i.e., concerns about confirming a negative self-relevant stereotype) on women's math performance. In Experiments 1 and 2, priming women with high power buffered them from reduced math performance in response to stereotype threat instructions, whereas women in the low and control power conditions showed poorer math performance in response to threat. Experiment 3 found that working memory capacity is one mechanism through which power moderates the effect of threat on women's math performance. In the low and control power conditions, women showed reduced working memory capacity in response to stereotype threat, accounting for threat's effect on performance. In contrast, women in the high power condition did not show reductions in working memory capacity or math performance in response to threat. This work demonstrates that perceived power moderates stereotype threat-based performance effects and explains why this occurs.

  16. RESURF power semiconductor devices - Performance and operating limits

    NARCIS (Netherlands)

    Ferrara, A.

    2016-01-01

    Power transmission is the transfer of energy from a generating source to a load which uses the energy to perform useful work. Since the end of the 19th century, electrical power transmission has replaced mechanical power transmission in all long distance applications. The alternating current (AC)

  17. RESURF power semiconductor devices: performance and operating limits

    NARCIS (Netherlands)

    Ferrara, A.

    2016-01-01

    Power transmission is the transfer of energy from a generating source to a load which uses the energy to perform useful work. Since the end of the 19th century, electrical power transmission has replaced mechanical power transmission in all long distance applications. The alternating current (AC)

  18. The Role of Awareness for Complex Planning Task Performance: A Microgaming Study

    Science.gov (United States)

    Lukosch, Heide; Groen, Daan; Kurapati, Shalini; Klemke, Roland; Verbraeck, Alexander

    2016-01-01

    This study introduces the concept of microgames to support situated learning in order to foster situational awareness (SA) of planners in seaport container terminals. In today's complex working environments, it is often difficult to develop the required level of understanding of a given situation, described as situational awareness. A container…

  19. Comparing Canadian and American cybersecurity awareness levels: Educational strategies to increase public awareness

    Science.gov (United States)

    Hoggard, Amy

    Cybersecurity awareness is an important issue that affects everyone who uses a computer or a mobile device. Canada and the United States both recognize the value of mitigating cybersecurity risks in terms of national safety, economic stability and protection of their citizens. The research performed compared the levels of cybersecurity awareness in Canadian and American Internet users. Canadian and American users were equally aware of cybersecurity measures, but were not implementing best practices to keep themselves safe. The research suggested users needed to understand why a cybersecurity measure was important before being motivated to implement it. Educational strategies were reviewed in both Canada and the United States and it was determined that although there were significant resources available, they were not being fully utilized by either educators or the public. In order to increase cybersecurity awareness levels, nations should focus on increasing the public's awareness by using various types of messaging, such as cartoons, in media. One possible consideration is a compulsory awareness model before accessing the Internet. Cybersecurity topics should be included in the curriculum for students at all levels of education and a focus on providing training and resources to teachers will help increase the cybersecurity knowledge of children and youth.

  20. An Investigation of Unified Memory Access Performance in CUDA

    Science.gov (United States)

    Landaverde, Raphael; Zhang, Tiansheng; Coskun, Ayse K.; Herbordt, Martin

    2015-01-01

    Managing memory between the CPU and GPU is a major challenge in GPU computing. A programming model, Unified Memory Access (UMA), has been recently introduced by Nvidia to simplify the complexities of memory management while claiming good overall performance. In this paper, we investigate this programming model and evaluate its performance and programming model simplifications based on our experimental results. We find that beyond on-demand data transfers to the CPU, the GPU is also able to request subsets of data it requires on demand. This feature allows UMA to outperform full data transfer methods for certain parallel applications and small data sizes. We also find, however, that for the majority of applications and memory access patterns, the performance overheads associated with UMA are significant, while the simplifications to the programming model restrict flexibility for adding future optimizations. PMID:26594668

  1. Use of visualization technology to improve decision-making performance in nuclear power plants

    International Nuclear Information System (INIS)

    Hanes, Lewis F.; Naser, Joseph

    2005-01-01

    This paper contains a description of modern 2.5-D and 3-D visualization technology that may be applied to improve human situation awareness and decision-making in nuclear power plants. Visualization technology is being applied widely and successfully in several industries. Examples are presented of successful applications in the military, aviation, medical, entertainment, and nuclear industries. Additional opportunities are identified in the nuclear industry that may benefit from improved visualization.

  2. Performance test of uninterruptible power system of PIEF

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jong Chae; Kim, Eun Ka; Chun, Yong Bum; Park, Dea Gyu; Chu, Yong Sun; Bae, Sang Min; Koo, Dae Seo

    1998-02-01

    Because of the special features of a post-irradiation examination (PIE) facility handling very highly radioactive materials like spent nuclear fuels, the electric system of the facility was designed and constructed according to the very strict requirements applied to nuclear power plants. A safety grade of Class 1E was adopted in the power utility system of PIEF to guarantee stable power supply to the facility without any unexpected interruption. In order to cope with an emergency condition such as a power interruption of KEPCO, an emergency power supplying system consisting of a diesel generator (3-phase, 6600/440, 1,000 kW) and an uninterruptible power supply (UPS) system was installed in PIEF. UPS power is connected to the radiation monitoring system and several other main safety devices to assure their normal operation for not less than 30 minutes. According to the recommendations and regulations in nuclear law, monthly and yearly regular inspections of the UPS and emergency power supplying system are performed. In this report, a brief description is given to establish self-inspection technology and procedures for the above-mentioned electric power supplying system at PIEF, including the principle of operation, inspection scheme, troubleshooting, and performance test techniques. (author). 8 refs., 3 tabs., 4 figs.

  3. Does power corrupt or enable? When and why power facilitates self-interested behavior.

    Science.gov (United States)

    DeCelles, Katherine A; DeRue, D Scott; Margolis, Joshua D; Ceranic, Tara L

    2012-05-01

    Does power corrupt a moral identity, or does it enable a moral identity to emerge? Drawing from the power literature, we propose that the psychological experience of power, although often associated with promoting self-interest, is associated with greater self-interest only in the presence of a weak moral identity. Furthermore, we propose that the psychological experience of power is associated with less self-interest in the presence of a strong moral identity. Across a field survey of working adults and in a lab experiment, individuals with a strong moral identity were less likely to act in self-interest, yet individuals with a weak moral identity were more likely to act in self-interest, when subjectively experiencing power. Finally, we predict and demonstrate an explanatory mechanism behind this effect: The psychological experience of power enhances moral awareness among those with a strong moral identity, yet decreases the moral awareness among those with a weak moral identity. In turn, individuals' moral awareness affects how they behave in relation to their self-interest. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  4. Solar power plant performance evaluation: simulation and experimental validation

    International Nuclear Information System (INIS)

    Natsheh, E M; Albarbar, A

    2012-01-01

    In this work the performance of a solar power plant is evaluated based on a developed model comprising a photovoltaic array, battery storage, a controller and converters. The model is implemented using the MATLAB/SIMULINK software package. The perturb and observe (P and O) algorithm is used for maximizing the generated power through a maximum power point tracker (MPPT) implementation. The outcomes of the developed model are validated and supported by a case study carried out on an operational 28.8 kW grid-connected solar power plant located in central Manchester. Measurements were taken over a 21-month period, using hourly average irradiance and cell temperature. It was found that system degradation could be clearly monitored by determining the residual (the difference) between the output power predicted by the model and the actual measured power. The residual exceeded the healthy threshold, 1.7 kW, due to heavy snow in Manchester last winter. More importantly, the developed performance evaluation technique could be adopted to detect other causes of PV panel performance degradation, such as shading and dirt. Repeatability and reliability of the developed system were validated during this period. Good agreement was achieved between the theoretical simulation and the real-time measurements taken from the online grid-connected solar power plant.

  5. Improving Large-scale Storage System Performance via Topology-aware and Balanced Data Placement

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Oral, H Sarp [ORNL; Vazhkudai, Sudharshan S [ORNL

    2014-01-01

    With the advent of big data, the I/O subsystems of large-scale compute clusters are becoming a center of focus, with more applications putting greater demands on end-to-end I/O performance. These subsystems are often complex in design. They comprise multiple hardware and software layers to cope with the increasing capacity, capability and scalability requirements of data-intensive applications. The shared nature of storage resources and the intrinsic interactions across these layers make realizing user-level, end-to-end performance gains a great challenge. We propose a topology-aware resource load balancing strategy to improve per-application I/O performance. We demonstrate the effectiveness of our algorithm on an extreme-scale compute cluster, Titan, at the Oak Ridge Leadership Computing Facility (OLCF). Our experiments with both synthetic benchmarks and a real-world application show that, even under congestion, our proposed algorithm can improve large-scale application I/O performance significantly, resulting in both the reduction of application run times and higher resolution simulation runs.
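    A topology-aware, load-balanced placement of the kind described can be approximated by a greedy score over candidate storage targets: prefer lightly loaded targets, penalize those far away in the interconnect. A minimal sketch; the target names, penalty values, and scoring rule are illustrative, not the OLCF algorithm:

    ```python
    def place_stripes(num_stripes, targets, load, topology_penalty):
        """Pick a storage target for each stripe, favoring lightly loaded targets
        and penalizing those far away in the interconnect topology."""
        placement = []
        for _ in range(num_stripes):
            # Lower score = more attractive target.
            target = min(targets, key=lambda t: load[t] + topology_penalty[t])
            load[target] += 1          # account for the stripe just placed
            placement.append(target)
        return placement

    targets = ["ost0", "ost1", "ost2", "ost3"]
    load = {t: 0 for t in targets}
    penalty = {"ost0": 0.0, "ost1": 0.0, "ost2": 0.5, "ost3": 0.5}  # farther targets
    placement = place_stripes(8, targets, load, penalty)
    print(load)
    ```

    Because the score feeds placed stripes back into the load term, the greedy choice spreads work evenly even when topology penalties initially bias it toward nearby targets.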

  6. Performance evaluation of cogeneration power plants

    International Nuclear Information System (INIS)

    Bacone, M.

    2001-01-01

    The free market has changed the criteria for measuring cogeneration plant performance. In addition to the technical-economic parameters, other parameters connected to the profits of the power plant are considered [it

  7. Public awareness of human papillomavirus.

    Science.gov (United States)

    Cuschieri, K S; Horne, A W; Szarewski, A; Cubie, H A

    2006-01-01

    The main objective of this study was to review the evidence relating to the level of awareness of human papillomavirus (HPV) in the general population and the implications for the potential introduction of HPV vaccination and HPV testing as part of screening. Methods: PubMed search performed on the terms 'HPV education', 'HPV awareness' and 'Genital Warts Awareness'. Results: Public awareness of HPV is generally very low, particularly with respect to its relation to abnormal smears and cervical cancer, although knowledge levels vary to some extent according to sociodemographic characteristics. There is also much confusion around which types cause warts and which types can cause cancer. The sexually transmissible nature of the infection is of major concern and confusion to women. Due to the current lack of awareness of HPV, significant education initiatives will be necessary should HPV vaccination and/or HPV testing be introduced. Organized education of health-care workers and the media, who constitute the two most preferred sources of information, will be crucial.

  8. Fast Parallel Image Registration on CPU and GPU for Diagnostic Classification of Alzheimer's Disease

    Directory of Open Access Journals (Sweden)

    Denis P Shamonin

    2014-01-01

    Full Text Available Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, i.e. for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU, building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL a speedup factor of ~2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operator characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.
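    CPU parallelization of a cost-function derivative, as pursued in this work, amounts to splitting the voxel sum across workers and reducing the partial results. A simplified sketch in which the derivative of a sum-of-squared-differences metric with respect to a single global intensity offset stands in for elastix's actual B-spline parameter derivatives:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def ssd_gradient_chunk(fixed, moving, start, stop):
        # Contribution of one voxel range to the derivative of
        # sum((moving - fixed)^2) with respect to a global intensity offset.
        return sum(2.0 * (moving[i] - fixed[i]) for i in range(start, stop))

    def ssd_gradient_parallel(fixed, moving, workers=4):
        n = len(fixed)
        chunk = (n + workers - 1) // workers
        ranges = [(i, min(i + chunk, n)) for i in range(0, n, chunk)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            parts = pool.map(lambda r: ssd_gradient_chunk(fixed, moving, *r), ranges)
        return sum(parts)

    fixed = [float(i % 7) for i in range(10_000)]
    moving = [f + 0.5 for f in fixed]
    g = ssd_gradient_parallel(fixed, moving)
    print(g)  # equals the serial result: 2 * 0.5 * 10000 = 10000.0
    ```

    The key property is that each chunk is independent, so the parallel reduction returns exactly the serial answer, which is why this kind of acceleration leaves registration results "nearly identical" to the unoptimized code.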

  9. Context-Aware Elevator Scheduling

    OpenAIRE

    Strang, Thomas; Bauer, Christian

    2007-01-01

    Research on context-aware systems is usually user-centric and thus focussed on the context of a specific user to serve his or her needs in an optimized way. In this paper, we want to apply core concepts developed in research on context-awareness in a system-centric way, namely to elevator systems. We show with three different examples that the performance of an elevator system can be significantly improved if the elevator control has access to contextual knowledge. The first example demons...
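    Contextual knowledge in elevator control, as argued above, can be as simple as weighing each car's position and load when answering a hall call. A toy dispatch sketch; the cost weights and elevator fields are illustrative, not the paper's system:

    ```python
    def dispatch(call_floor, elevators):
        """Choose the elevator with the smallest context-weighted cost:
        distance to the call plus a penalty for cars already heavily loaded."""
        def cost(e):
            return abs(e["floor"] - call_floor) + 3 * e["load"]
        return min(elevators, key=cost)["id"]

    elevators = [
        {"id": "A", "floor": 0, "load": 0.9},   # close but nearly full
        {"id": "B", "floor": 4, "load": 0.0},   # farther but empty
    ]
    print(dispatch(call_floor=1, elevators=elevators))  # prints "B"
    ```

    With distance alone, car A would win; adding the load term (one piece of context) sends the empty car instead, which is the kind of improvement contextual knowledge enables.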

  10. Wind turbine power performance verification in complex terrain and wind farms

    Energy Technology Data Exchange (ETDEWEB)

    Friis Pedersen, T.; Gjerding, S.; Ingham, P.; Enevoldsen, P.; Kjaer Hansen, J.; Kanstrup Joergensen, H.

    2002-04-01

    The IEC/EN 61400-12 Ed 1 standard for wind turbine power performance testing is being revised. The standard will be divided into four documents. The first one of these is more or less a revision of the existing document on power performance measurements on individual wind turbines. The second one is a power performance verification procedure for individual wind turbines. The third is a power performance measurement procedure for whole wind farms, and the fourth is a power performance measurement procedure for non-grid (small) wind turbines. This report presents work that was made to support the basis for this standardisation work. The work addressed experience from several national and international research projects and contractual and field experience gained within the wind energy community on this matter. The work was wide-ranging and addressed 'grey' areas of knowledge regarding existing methodologies, which have then been investigated in more detail. The work has given rise to a range of conclusions and recommendations regarding: guarantees on power curves in complex terrain; investors' and bankers' experience with verification of power curves; power performance in relation to regional correction curves for Denmark; anemometry and the influence of inclined flow. (au)

  11. Using international experience to improve performance of nuclear power plants

    International Nuclear Information System (INIS)

    Calori, F.; Csik, B.J.; Strickert, R.J.

    1989-01-01

    Information on performance achievements will assist nuclear power plant operating organizations to develop initiatives for improved or continued high performance of their plants. The paper describes the activities of the IAEA in reviewing and analysing the reasons for good performance by contacting operating organizations identified by its Power Reactor Information System as showing continued good performance. Discussions with operations personnel of utilities have indicated practices which have a major positive impact on good performance and which are generally common to all well-performing organizations contacted. The IAEA also promotes further activities directed primarily to the achievement of standards of excellence in nuclear power operation. These are briefly commented on.

  12. Energy-Aware Broadcasting and Multicasting in Wireless Ad Hoc Networks: A Cross-Layering Approach

    National Research Council Canada - National Science Library

    Wieselthier, Jeffrey E; Nguyen, Gam D; Ephremides, Anthony

    2004-01-01

    ...) problems, especially when energy-aware operation is required. To address the specific problem of energy-aware tree construction in wireless ad hoc networks, we have developed the Broadcast Incremental Power (BIP...

  13. Generating units performances: power system requirements

    Energy Technology Data Exchange (ETDEWEB)

    Fourment, C; Girard, N; Lefebvre, H

    1994-08-01

    The role of generating units within the power system is more than providing power and energy. Their performance is not measured only by their energy efficiency and availability. Namely, there is a strong interaction between the generating units and the power system. The units are essential components of the system: for a given load profile the frequency variation follows directly from the behaviour of the units and their ability to adapt their power output. In the same way, the voltages at the units' terminals are the key points to which the voltage profile at each node of the network is linked through the active and especially the reactive power flows. Therefore, the customer will experience the frequency and voltage variations induced by the units' behaviour. Moreover, in case of adverse conditions, if the units do not operate as well as expected or trip, a portion of the system, or even the whole system, may collapse. The limitation of the performance of a unit has two kinds of consequences. Firstly, it may result in an increased amount of not-supplied energy or loss-of-load probability: for example, if the primary reserve is not sufficient, a generator tripping may lead to an abnormal frequency deviation, and load may have to be shed to restore the balance. Secondly, the limitation of a unit's performance results in an economic over-cost for the system: for instance, if not enough 'cheap' units are capable of load-following, other units with higher operating costs have to be started up. We would like to stress the interest for the operators and design teams of the units on the one hand, and the operators and design teams of the system on the other hand, of dialogue and information exchange, in operation but also at the conception stage, in order to find a satisfactory compromise between the system requirements and the consequences for the generating units. (authors). 11 refs., 4 figs.
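    The primary-reserve mechanism invoked above can be illustrated with a standard droop-control calculation: a frequency deviation changes a unit's output in proportion to its rating and inversely to its droop setting. A sketch with illustrative numbers (the 4% droop and unit sizes are assumptions, not values from the paper):

    ```python
    def droop_response(p_sched, rating, droop, delta_f, f_nom=50.0):
        """Primary frequency response under proportional (droop) control:
        a frequency drop raises output by (delta_f / f_nom) / droop of the
        rating, clipped to the unit's operating limits."""
        delta_p = -(delta_f / f_nom) / droop * rating
        return min(max(p_sched + delta_p, 0.0), rating)

    # A 500 MW unit scheduled at 400 MW with 4% droop, frequency 100 mHz low:
    p = droop_response(p_sched=400.0, rating=500.0, droop=0.04, delta_f=-0.1)
    print(p)  # ~425.0 MW: the unit contributes 25 MW of primary reserve
    ```

    The clipping term is exactly where the abstract's concern lives: once units hit their limits, the remaining frequency deviation must be met by load shedding or by starting more expensive units.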

  14. Silicon-Carbide Power MOSFET Performance in High Efficiency Boost Power Processing Unit for Extreme Environments

    Science.gov (United States)

    Ikpe, Stanley A.; Lauenstein, Jean-Marie; Carr, Gregory A.; Hunter, Don; Ludwig, Lawrence L.; Wood, William; Del Castillo, Linda Y.; Fitzpatrick, Fred; Chen, Yuan

    2016-01-01

    Silicon-Carbide device technology has generated much interest in recent years. With superior thermal performance, power ratings and potential switching frequencies over its Silicon counterpart, Silicon-Carbide offers a greater possibility for high powered switching applications in extreme environment. In particular, Silicon-Carbide Metal-Oxide- Semiconductor Field-Effect Transistors' (MOSFETs) maturing process technology has produced a plethora of commercially available power dense, low on-state resistance devices capable of switching at high frequencies. A novel hard-switched power processing unit (PPU) is implemented utilizing Silicon-Carbide power devices. Accelerated life data is captured and assessed in conjunction with a damage accumulation model of gate oxide and drain-source junction lifetime to evaluate potential system performance at high temperature environments.

  15. Long term energy performance analysis of Egbin thermal power ...

    African Journals Online (AJOL)

    This study is aimed at providing an energy performance analysis of Egbin thermal power plant. The plant operates on a regenerative Rankine cycle with steam as its working fluid. The model equations were formulated based on some performance parameters used in power plant analysis. The considered criteria were plant ...

  16. Power Performance Verification of a Wind Farm Using the Friedman’s Test

    Science.gov (United States)

    Hernandez, Wilmar; López-Presa, José Luis; Maldonado-Correa, Jorge L.

    2016-01-01

    In this paper, a method of verification of the power performance of a wind farm is presented. This method is based on the Friedman’s test, which is a nonparametric statistical inference technique, and it uses the information that is collected by the SCADA system from the sensors embedded in the wind turbines in order to carry out the power performance verification of a wind farm. Here, the guaranteed power curve of the wind turbines is used as one more wind turbine of the wind farm under assessment, and a multiple comparison method is used to investigate differences between pairs of wind turbines with respect to their power performance. The proposed method says whether the power performance of the specific wind farm under assessment differs significantly from what would be expected, and it also allows wind farm owners to know whether their wind farm has either a perfect power performance or an acceptable power performance. Finally, the power performance verification of an actual wind farm is carried out. The results of the application of the proposed method showed that the power performance of the specific wind farm under assessment was acceptable. PMID:27271628
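    The Friedman test at the heart of this method ranks, within each block (here, each time period), the values of the k treatments (the turbines plus the guaranteed power curve) and compares the rank sums. A self-contained sketch of the statistic with made-up power data, not the wind farm's SCADA records:

    ```python
    def ranks(row):
        # Average ranks (1-based), handling ties.
        order = sorted(range(len(row)), key=lambda i: row[i])
        r = [0.0] * len(row)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    def friedman_statistic(blocks):
        """Friedman chi-square for a table with one row (block) per time period
        and one column per wind turbine."""
        n, k = len(blocks), len(blocks[0])
        col_rank_sums = [0.0] * k
        for row in blocks:
            for j, rj in enumerate(ranks(row)):
                col_rank_sums[j] += rj
        return (12.0 * sum(R * R for R in col_rank_sums) / (n * k * (k + 1))
                - 3.0 * n * (k + 1))

    # Three periods, three turbines; turbine ordering identical in every period.
    power = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [0.9, 1.9, 2.9]]
    stat = friedman_statistic(power)
    print(stat)  # 6.0, the maximum possible for n=3 blocks and k=3 treatments
    ```

    A large statistic (compared against the chi-square distribution with k-1 degrees of freedom) indicates that some turbines consistently under- or over-perform the others, which then triggers the multiple-comparison step described in the abstract.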

  17. Power Performance Verification of a Wind Farm Using the Friedman’s Test

    Directory of Open Access Journals (Sweden)

    Wilmar Hernandez

    2016-06-01

    Full Text Available In this paper, a method of verification of the power performance of a wind farm is presented. This method is based on the Friedman’s test, which is a nonparametric statistical inference technique, and it uses the information that is collected by the SCADA system from the sensors embedded in the wind turbines in order to carry out the power performance verification of a wind farm. Here, the guaranteed power curve of the wind turbines is used as one more wind turbine of the wind farm under assessment, and a multiple comparison method is used to investigate differences between pairs of wind turbines with respect to their power performance. The proposed method says whether the power performance of the specific wind farm under assessment differs significantly from what would be expected, and it also allows wind farm owners to know whether their wind farm has either a perfect power performance or an acceptable power performance. Finally, the power performance verification of an actual wind farm is carried out. The results of the application of the proposed method showed that the power performance of the specific wind farm under assessment was acceptable.

  18. Public Awareness of Uterine Power Morcellation Through US Food and Drug Administration Communications: Analysis of Google Trends Search Term Patterns.

    Science.gov (United States)

    Wood, Lauren N; Jamnagerwalla, Juzar; Markowitz, Melissa A; Thum, D Joseph; McCarty, Philip; Medendorp, Andrew R; Raz, Shlomo; Kim, Ja-Hong

    2018-04-26

    Uterine power morcellation, where the uterus is shredded into smaller pieces, is a widely used technique for removal of uterine specimens in patients undergoing minimally invasive abdominal hysterectomy or myomectomy. Complications related to power morcellation of uterine specimens led to US Food and Drug Administration (FDA) communications in 2014 ultimately recommending against the use of power morcellation for women undergoing minimally invasive hysterectomy. Subsequently, practitioners drastically decreased the use of morcellation. We aimed to determine the effect of increased patient awareness on the decrease in use of the morcellator. Google Trends is a public tool that provides data on temporal patterns of search terms, and we correlated this data with the timing of the FDA communication. Weekly relative search volume (RSV) was obtained from Google Trends using the term “morcellation.” Higher RSV corresponds to increases in weekly search volume. Search volumes were divided into 3 groups: the 2 years prior to the FDA communication, a 1-year period following, and thereafter, with the distribution of the weekly RSV over the 3 periods tested using 1-way analysis of variance. Additionally, we analyzed the total number of websites containing the term “morcellation” over this time. The mean RSV prior to the FDA communication was 12.0 (SD 15.8), with the RSV being 60.3 (SD 24.7) in the 1-year after and 19.3 (SD 5.2) thereafter. Google search activity about morcellation of uterine specimens increased significantly after the FDA communications. This trend indicates an increased public awareness regarding morcellation and its complications. More extensive preoperative counseling and alteration of surgical technique and clinician practice may be necessary. ©Lauren N Wood, Juzar Jamnagerwalla, Melissa A Markowitz, D Joseph Thum, Philip McCarty, Andrew R Medendorp, Shlomo Raz, Ja-Hong Kim. Originally published in JMIR Public Health and Surveillance (http
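    The 1-way analysis of variance used above to compare the three RSV periods reduces to an F statistic: variance of the group means relative to the variance within groups. A self-contained sketch with made-up numbers, not the study's weekly RSV data:

    ```python
    def one_way_anova_f(groups):
        """F statistic for a one-way ANOVA across independent groups:
        between-group mean square over within-group mean square."""
        n_total = sum(len(g) for g in groups)
        k = len(groups)
        grand = sum(sum(g) for g in groups) / n_total
        ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
        ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
        df_b, df_w = k - 1, n_total - k
        return (ss_between / df_b) / (ss_within / df_w)

    # Toy stand-ins for the before / 1-year-after / thereafter RSV groups:
    before = [1.0, 2.0, 3.0]
    after_1yr = [2.0, 3.0, 4.0]
    later = [3.0, 4.0, 5.0]
    f = one_way_anova_f([before, after_1yr, later])
    print(f)  # 3.0 for this toy data
    ```

    The resulting F is then compared against the F distribution with (k-1, n-k) degrees of freedom; a sufficiently large value is what lets the authors call the post-communication jump in RSV significant.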

  19. Inside Solid State Drives (SSDs)

    CERN Document Server

    Micheloni, Rino; Eshghi, Kam

    2013-01-01

    Solid State Drives (SSDs) are gaining momentum in enterprise and client applications, replacing Hard Disk Drives (HDDs) by offering higher performance and lower power. In the enterprise, developers of data center server and storage systems have seen CPU performance growing exponentially for the past two decades, while HDD performance has improved linearly for the same period. Additionally, multi-core CPU designs and virtualization have increased randomness of storage I/Os. These trends have shifted performance bottlenecks to enterprise storage systems. Business critical applications such as online transaction processing, financial data processing and database mining are increasingly limited by storage performance. In client applications, small mobile platforms are leaving little room for batteries while demanding long life out of them. Therefore, reducing both idle and active power consumption has become critical. Additionally, client storage systems are in need of significant performance improvement as well ...

  20. Effects of Automation Types on Air Traffic Controller Situation Awareness and Performance

    Science.gov (United States)

    Sethumadhavan, A.

    2009-01-01

    The Joint Planning and Development Office has proposed the introduction of automated systems to help air traffic controllers handle the increasing volume of air traffic in the next two decades (JPDO, 2007). Because fully automated systems leave operators out of the decision-making loop (e.g., Billings, 1991), it is important to determine the right level and type of automation that will keep air traffic controllers in the loop. This study examined the differences in the situation awareness (SA) and collision detection performance of individuals when they worked with information acquisition, information analysis, decision and action selection and action implementation automation to control air traffic (Parasuraman, Sheridan, & Wickens, 2000). When the automation was unreliable, the time taken to detect an upcoming collision was significantly longer for all the automation types compared with the information acquisition automation. This poor performance following automation failure was mediated by SA, with lower SA yielding poor performance. Thus, the costs associated with automation failure are greater when automation is applied to higher order stages of information processing. Results have practical implications for automation design and development of SA training programs.

  1. Performance analysis of a microcontroller based slip power recovery ...

    African Journals Online (AJOL)

    Slip power recovery wound rotor induction motor drives are used in high power, limited speed range applications where control of slip power provides the variable speed drive system. In this paper, the steady state performance analysis of conventional slip power recovery scheme using static line commutated inverter in the ...

  2. Energy-Aware Routing in Multiple Domains Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    Adriana FERNÁNDEZ-FERNÁNDEZ

    2016-12-01

    Full Text Available The growing energy consumption of communication networks has attracted the attention of networking researchers in the last decade. In this context, the new architecture of Software-Defined Networks (SDN) allows a flexible programmability, well suited to the power-consumption optimization problem. In this paper we address the design of a novel distributed routing algorithm that optimizes the power consumption in large-scale SDN with multiple domains. The proposed solution, called DEAR (Distributed Energy-Aware Routing), tackles the problem of minimizing the number of links used to satisfy a given data traffic demand under performance constraints such as control traffic delay and link utilization. To this end, we present a complete formulation of the optimization problem that considers routing requirements for control and data plane communications. Simulation results confirm that the proposed solution achieves significant energy savings.
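    The core idea of minimizing the number of active links can be illustrated with a toy, centralized sketch (Python; DEAR itself is distributed and also enforces delay and link-utilization constraints, which this version omits): links are greedily powered down as long as the network stays connected, so traffic can still be routed.

```python
from collections import deque

def connected(nodes, edges):
    """BFS check that the given edge list still connects all nodes."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen = {start}
    queue = deque([start])
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen == set(nodes)

def prune_links(nodes, edges):
    """Greedily power down links whose removal keeps the network connected."""
    active = list(edges)
    for e in sorted(edges):            # deterministic order for this sketch
        trial = [x for x in active if x != e]
        if connected(nodes, trial):
            active = trial
    return active

nodes = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]   # a ring plus one chord
active = prune_links(nodes, edges)
print(len(edges) - len(active), "links powered down")
```

    On this 4-node example the greedy pass leaves a spanning tree of 3 links, powering down the other 2.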

  3. Experimental analysis of the influence of context awareness on service discovery in PNs

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Nickelsen, Anders; Nielsen, Jimmy Jessen

    2006-01-01

    In this paper we present an experimental prototype for context aware service discovery specifically aimed at Personal Networks. The concept of context aware service discovery, an architecture, and the necessary components for performing context aware service discovery in Personal Networks are presented. The paper also presents a set of preliminary performance results of context aware service discovery, compared to normal service discovery; as expected, context awareness comes at a cost in performance.

  4. Simulation of Oscillations in High Power Klystrons

    CERN Document Server

    Ko, K

    2003-01-01

    Spurious oscillations can prevent a klystron from reaching its design specifications. These are modes with frequencies different from the drive frequency, and have been found to be localized in various regions of the tube. If left unsuppressed, such oscillations can be driven to large amplitudes by the beam. As a result, the main output signal may suffer from amplitude and phase instabilities which lead to pulse shortening or reduction in power generation efficiency, as observed during the testing of the first 150 MW S-band klystron, which was designed and built at SLAC as part of an international collaboration with DESY. We present efficient methods to identify suspicious modes and then test their possibility of oscillation. In contrast to [3], where each beam-loaded quality factor Qbl was calculated by time-consuming PIC simulations, only tracking simulations, with much reduced CPU time and less sensitivity to noise, are now applied. This enables the determination of Qbl for larg...

  5. "The fish becomes aware of the water in which it swims": revealing the power of culture in shaping teaching identity

    Science.gov (United States)

    Rahmawati, Yuli; Taylor, Peter Charles

    2017-08-01

    "The fish becomes aware of the water in which it swims" is a metaphor that represents Yuli's revelatory journey about the hidden power of culture in her personal identity and professional teaching practice. While engaging in a critical auto/ethnographic inquiry into her lived experience as a science teacher in Indonesian and Australian schools, she came to understand the powerful role of culture in shaping her teaching identity. Yuli realised that she is a product of cultural hybridity resulting from interactions of very different cultures—Javanese, Bimanese, Indonesian and Australian. Traditionally, Javanese and Indonesian cultures do not permit direct criticism of others. This influenced strongly the way she had learned to interact with students and caused her to be very sensitive to others. During this inquiry she learned the value of engaging students in open discourse and overt caring, and came to realise that teachers bringing their own cultures to the classroom can be both a source of power and a problem. In this journey, Yuli came to understand the hegemonic power of culture in her teaching identity, and envisioned how to empower herself as a good teacher educator of pre-service science teachers.

  6. Musrfit-Real Time Parameter Fitting Using GPUs

    Science.gov (United States)

    Locans, Uldis; Suter, Andreas

    High transverse field μSR (HTF-μSR) experiments typically lead to rather large data sets, since it is necessary to follow the high frequencies present in the positron decay histograms. The analysis of these data sets can be very time consuming, usually due to the limited computational power of the hardware. To overcome the limited computing resources, a rotating reference frame (RRF) transformation is often used to reduce the data sets that need to be handled. This comes at a price the μSR community is typically not aware of: (i) due to the RRF transformation, the fitting parameter estimates are of poorer precision, i.e., more extended, expensive beamtime is needed; (ii) the RRF introduces systematic errors which hamper the statistical interpretation of χ2 or the maximum log-likelihood. We briefly discuss these issues in a non-exhaustive, practical way. The one and only reason for the RRF transformation is insufficient computing power. Therefore, during this work, GPU (Graphics Processing Unit) based fitting was developed, which allows real-time full data analysis to be performed without the RRF. GPUs have become increasingly popular in scientific computing in recent years. Due to their highly parallel architecture, they provide the opportunity to accelerate many applications at considerably lower cost than upgrading the CPU computational power. With the emergence of frameworks such as CUDA and OpenCL, these devices have become more easily programmable. During this work, GPU support was added to Musrfit, a data analysis framework for μSR experiments. The new fitting algorithm uses CUDA or OpenCL to offload the most time-consuming parts of the calculations to Nvidia or AMD GPUs. Using the current CPU implementation in Musrfit, parameter fitting can take hours for certain data sets, while the GPU version allows real-time data analysis on the same data sets. This work describes the challenges that arise in adding GPU support to Musrfit as well as the results obtained.

  7. Scalable and Power Efficient Data Analytics for Hybrid Exascale Systems

    Energy Technology Data Exchange (ETDEWEB)

    Choudhary, Alok [Northwestern Univ., Evanston, IL (United States); Samatova, Nagiza [North Carolina State Univ., Raleigh, NC (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Liao, Wei-keng [Northwestern Univ., Evanston, IL (United States)

    2015-03-19

    This project developed a generic and optimized set of core data analytics functions. These functions organically consolidate a broad constellation of high-performance analytical pipelines. As the architectures of emerging HPC systems become inherently heterogeneous, there is a need to design algorithms for data analysis kernels accelerated on hybrid multi-node, multi-core HPC architectures comprised of a mix of CPUs, GPUs, and SSDs. Furthermore, the power-aware trend drives advances in our performance-energy tradeoff analysis framework, which enables our data analysis kernel algorithms and software to be parameterized so that users can choose the right power-performance optimizations.

  8. Dynamic Performance of the ITER Reactive Power Compensation System

    International Nuclear Information System (INIS)

    Sheng Zhicai; Fu Peng; Xu Liuwei

    2011-01-01

    The dynamic performance of a reactive power compensation (RPC) system for the international thermonuclear experimental reactor (ITER) power supply is presented. Static var compensators (SVCs) are adopted to mitigate voltage fluctuation and reduce the reactive power to a level acceptable for the French/European 400 kV grid. A voltage feedback and load power feedforward controller for the SVC is proposed, with the feedforward loop intended to guarantee short response time and the feedback loop ensuring good dynamic and steady-state characteristics of the SVC. A mean filter was chosen for measuring the control signals to improve the dynamic response. The dynamic performance of the SVC is verified by simulations using PSCAD/EMTDC codes.

  9. An effortless hybrid method to solve economic load dispatch problem in power systems

    International Nuclear Information System (INIS)

    Pourakbari-Kasmaei, M.; Rashidi-Nejad, M.

    2011-01-01

    Highlights: → We propose a fast method to obtain feasible solutions and avoid futile search. → The method dramatically improves search efficiency and solution quality. → Applied to solve constrained ED problems of power systems with 6 and 15 units. → The superiority of this method in terms of both cost and CPU time is remarkable. - Abstract: This paper proposes a new approach and coding scheme for solving economic dispatch (ED) problems in power systems through an effortless hybrid method (EHM). This novel coding scheme effectively prevents futile searching and the generation of infeasible solutions during the application of stochastic search methods, and consequently dramatically improves search efficiency and solution quality. The dominant constraint of an economic dispatch problem is power balance. Operational constraints, such as generation limits, ramp rate limits, prohibited operating zones (POZ), and network loss, are considered for practical operation. In the EHM procedure, the output of each generator is first obtained with a lambda iteration method without considering POZ; this constraint is later satisfied in a genetic-based algorithm. To demonstrate its efficiency, feasibility and speed, the EHM algorithm was applied to solve constrained ED problems of power systems with 6 and 15 units. The simulation results obtained from the EHM were compared to those reported in previous literature in terms of solution quality and computational efficiency. The results reveal the superiority of this method in terms of both cost and CPU time.
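    The lambda iteration step that the EHM starts from can be sketched as follows (Python; the unit cost coefficients and the demand below are illustrative, not taken from the paper): each unit runs at the output where its marginal cost equals a common lambda, clipped to its limits, and lambda is bisected until total output meets demand.

```python
def dispatch(units, demand, tol=1e-6):
    """Lambda iteration for quadratic costs C(P) = a + b*P + c*P^2.

    units: list of (b, c, pmin, pmax); POZ and ramp limits are omitted here.
    """
    lo, hi = 0.0, 1000.0
    while hi - lo > tol:
        lam = (lo + hi) / 2
        # output at marginal cost lam (b + 2*c*P = lam), clipped to limits
        p = [min(max((lam - b) / (2 * c), pmin), pmax)
             for b, c, pmin, pmax in units]
        if sum(p) < demand:
            lo = lam          # need more output -> raise lambda
        else:
            hi = lam
    return p

# Hypothetical 3-unit system and a 150 MW demand.
units = [(7.0, 0.008, 10, 85), (6.3, 0.009, 10, 80), (6.8, 0.007, 10, 70)]
p = dispatch(units, demand=150)
print([round(x, 1) for x in p], "sum =", round(sum(p), 1))
```

    The EHM then repairs any POZ violations with its genetic-based stage; this sketch covers only the power-balance step.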

  10. N-power

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    A recent study by the Markinor market research firm shows that the more experience you have with nuclear power stations, the less likely you are to want more. A survey of people's attitudes towards, and levels of awareness of, nuclear power was conducted.

  11. Parallelizing ATLAS Reconstruction and Simulation: Issues and Optimization Solutions for Scaling on Multi- and Many-CPU Platforms

    International Nuclear Information System (INIS)

    Leggett, C; Jackson, K; Tatarkhanov, M; Yao, Y; Binet, S; Levinthal, D

    2011-01-01

    Thermal limitations have forced CPU manufacturers to shift from simply increasing clock speeds to improve processor performance, to producing chip designs with multi- and many-core architectures. Further, the cores themselves can run multiple threads with a zero-overhead context switch, allowing low-level resource sharing (Intel Hyperthreading). To maximize bandwidth and minimize memory latency, memory access has become non-uniform (NUMA). As manufacturers add more cores to each chip, a careful understanding of the underlying architecture is required in order to fully utilize the available resources. We present AthenaMP and the ATLAS event loop manager, the driver of the simulation and reconstruction engines, which have been rewritten to make use of multiple cores by means of event-based parallelism and final-stage I/O synchronization. However, initial studies on 8 and 16 core Intel architectures have shown marked non-linearities as parallel process counts increase, with as much as 30% reductions in event throughput in some scenarios. Since the Intel Nehalem architecture (both Gainestown and Westmere) will be the most common choice for the next round of hardware procurements, an understanding of these scaling issues is essential. Using hardware-based event counters and Intel's Performance Tuning Utility, we have studied the performance bottlenecks at the hardware level, and discovered optimization schemes to maximize processor throughput. We have also produced optimization mechanisms, common to all large experiments, that address the extreme nature of today's HEP code, which, due to its size, places huge burdens on the memory infrastructure of today's processors.

  12. Reasoning about the value of cultural awareness in international collaboration

    Directory of Open Access Journals (Sweden)

    Helena Bernáld

    Full Text Available As international collaborations become a part of everyday life, cultural awareness becomes crucial for our ability to work with people from other countries. People see, evaluate, and interpret things differently depending on their cultural background and cultural awareness. This includes aspects such as appreciation of different communication patterns, awareness of different value systems and, not least, becoming aware of our own cultural values, beliefs and perceptions. This paper addresses the value of cultural awareness in general by describing how it was introduced in two computer science courses involving a joint collaboration between students from the US and Sweden. The cultural seminars provided to the students are presented, as well as a discussion of the students' reflections and the teachers' experiences. The cultural awareness seminars gave students a new understanding of cultural differences which greatly improved the international collaboration. Cultural awareness may be especially important for small countries like New Zealand and Sweden, since it could provide an essential edge in collaborations with representatives from more 'powerful' countries.

  13. Cadmium-emitter self-powered thermal neutron detector performance characterization & reactor power tracking capability experiments performed in ZED-2

    Energy Technology Data Exchange (ETDEWEB)

    LaFontaine, M.W., E-mail: physics@execulink.com [LaFontaine Consulting, Kitchener, Ontario (Canada); Zeller, M.B. [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada); Nielsen, K. [Royal Military College of Canada, SLOWPOKE-2 Reactor, Kingston, Ontario (Canada)

    2014-07-01

    Cadmium-emitter self-powered thermal neutron flux detectors (SPDs), are typically used for flux monitoring and control applications in low temperature, test reactors such as the SLOWPOKE-2. A collaborative program between Atomic Energy of Canada, academia (Royal Military College of Canada (RMCC)) and industry (LaFontaine Consulting) was initiated to characterize the incore performance of a typical Cd-emitter SPD; and to obtain a definitive measure of the capability of the detector to track changes in reactor power in real time. Prior to starting the experiment proper, Chalk River Laboratories' ZED-2 was operated at low power (5 watts nominal) to verify the predicted moderator critical height. Test measurements were then performed with the vertical center of the SPD emitter positioned at the vertical mid-plane of the ZED-2 reactor core. Measurements were taken with the SPD located at lattice position L0 (near center), and repeated at lattice position P0 (in D{sub 2}O reflector). An ionization chamber (part of the ZED-2 control instrumentation) monitored reactor power at a position located on the south side of the outside wall of the reactor's calandria. These experiments facilitated measurement of the absolute thermal neutron sensitivity of the subject Cd-emitter SPD, and validated the power tracking capability of said SPD. Procedural details of the experiments, data, calculations and associated graphs, are presented and discussed. (author)

  14. An Integrated Agent Model Addressing Situation Awareness and Functional State in Decision Making

    NARCIS (Netherlands)

    Hoogendoorn, M.; van Lambalgen, R.M.; Treur, J.

    2011-01-01

    In this paper, an integrated agent model is introduced addressing mutually interacting Situation Awareness and Functional State dynamics in decision making. This shows how a human's functional state, more specifically a human's exhaustion and power, can influence a human's situation awareness, and in

  15. Performance model to assist solar thermal power plant siting in northern Chile based on backup fuel consumption

    Energy Technology Data Exchange (ETDEWEB)

    Larrain, Teresita; Escobar, Rodrigo; Vergara, Julio [Departamento de Ingenieria Mecanica y Metalurgica, Pontificia Universidad Catolica de Chile, Vicuna Mackenna 4860, Macul, Santiago (Chile)

    2010-08-15

    In response to environmental awareness, Chile introduced sustainability goals in its electricity law. Power producers must deliver 5% from renewable sources by 2010 and 10% by 2024. The Chilean desert has a large available surface with some of the highest radiation levels and clearest skies in the world. These factors make solar power an option for this task. However, a commercial plant requires a fossil fuel system to back up the sunlight intermittency. The authors developed a thermodynamic model to estimate the backup fraction needed in a 100 MW hybrid (solar-fossil) parabolic trough power plant. This paper presents the model, aiming to predict plant performance and exploring its usefulness in assisting site selection among four locations. Since solar radiation data are only available as monthly averages, we introduced two approaches to feed the model: one data set provided an average month with identical days throughout, and the other considered an artificial month of different daylight profiles on an hourly basis for the same monthly average. We recommend a best plant location based on minimum fossil fuel backup, contributing to optimal siting from the energy perspective. Utilities will refine their policy goals more closely when a precise solar energy data set becomes available. (author)

  16. Design and performance of PEP dc-power systems

    International Nuclear Information System (INIS)

    Jackson, T.

    1981-03-01

    The PEP Magnet Power Supply System represents a significant departure from previous technology, with the goal of improved performance at lower cost. In nineteen of the magnet families around the ring, chopper power supplies are used. The many choppers are powered from two 2 MW dc supplies, and control the average power to the various magnet loads by pulse-width modulation at a 2 kilohertz repetition rate. Each chopper utilizes SCRs for switching, and stores sufficient capacitive energy for turn-off on command. Most of the energy is recirculated, resulting in high efficiency. The two kilohertz chopping rate allows a one kilohertz unity-gain bandwidth in the current-regulator loop, and this wide bandwidth, coupled with low-drift components in the error-detection system, provides a high-performance system. The PEP system has also shown that the chopper system is economical compared to standard multi-pulse controlled-rectifier systems

  17. PBF: A New Privacy-Aware Billing Framework for Online Electric Vehicles with Bidirectional Auditability

    Directory of Open Access Journals (Sweden)

    Rasheed Hussain

    2017-01-01

    Full Text Available Recently an online electric vehicle (OLEV) concept has been introduced, where vehicles are propelled by electrical power transmitted wirelessly from infrastructure installed under the road while they move. The absence of secure and fair billing is one of the main hurdles to wide adoption of this promising technology. This paper introduces a new secure and privacy-aware fair billing framework for OLEVs on the move, charged through the charging plates installed under the road. We first propose two extremely lightweight mutual authentication mechanisms, a direct authentication and a hash chain-based authentication, between vehicles and the charging plates, usable at different vehicular speeds on the road. Second, we propose a secure and privacy-aware wireless power transfer on the move for the vehicles, with a bidirectional auditability guarantee, by leveraging a game-theoretic approach. Each charging plate transfers a fixed amount of energy to the vehicle and bills the vehicle accordingly in a privacy-aware way. Our protocol guarantees a secure, privacy-aware, and fair billing mechanism for OLEVs while they receive electric power from the infrastructure installed under the road. Moreover, our proposed framework can play a vital role in eliminating the security and privacy challenges in the deployment of power transfer technology to OLEVs.
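    Hash chain-based authentication of the kind mentioned above can be sketched in a few lines (Python; the code below is a generic textbook hash chain, not the paper's exact protocol): the vehicle commits to the tip of a hash chain, then authenticates to each successive charging plate by revealing the previous preimage, which a plate verifies with a single hash.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int):
    """Return [seed, h(seed), h(h(seed)), ...]; the last element is the tip."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

# Vehicle side: build the chain and register the tip with the infrastructure.
chain = make_chain(b"vehicle-secret", 100)
commitment = chain[-1]

# Plate side: accept a token iff it hashes to the last accepted value,
# then store the token so the next plate can verify the next preimage.
def plate_verify(token: bytes, last_accepted: bytes) -> bool:
    return h(token) == last_accepted

ok = True
last = commitment
for token in reversed(chain[:-1]):   # vehicle reveals preimages one by one
    ok = ok and plate_verify(token, last)
    last = token
print("all plates authenticated:", ok)
```

    Each verification costs one hash, which is why such schemes suit high vehicular speeds; a forger would have to invert the hash to produce the next preimage.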

  18. Information Technology Service Management with Cloud Computing Approach to Improve Administration System and Online Learning Performance

    Directory of Open Access Journals (Sweden)

    Wilianto Wilianto

    2015-10-01

    Full Text Available This work discusses the development of information technology service management using a cloud computing approach to improve the performance of the administration system and online learning at STMIK IBBI Medan, Indonesia. The network topology is modeled and simulated for system administration and online learning. The same network topology is developed in cloud computing using the Amazon AWS architecture. The model is designed and simulated using Riverbed Academic Edition Modeler to obtain values of the parameters: delay, load, CPU utilization, and throughput. The simulation results are as follows. For network topology 1 without cloud computing, the average delay is 54 ms, load 110 000 bits/s, CPU utilization 1.1%, and throughput 440 bits/s. With cloud computing, the average delay is 45 ms, load 2 800 bits/s, CPU utilization 0.03%, and throughput 540 bits/s. For network topology 2 without cloud computing, the average delay is 39 ms, load 3 500 bits/s, CPU utilization 0.02%, and database server throughput 1 400 bits/s. With cloud computing, the average delay is 26 ms, load 5 400 bits/s, CPU utilization 0.0001% (email server), 0.001% (FTP server), and 0.0002% (HTTP server), and throughput 85 bits/s (email server), 100 bits/s (FTP server), and 95 bits/s (HTTP server). Thus the delay and the CPU utilization decrease, while the throughput increases. Information technology service management with a cloud computing approach has better performance.

  19. Effective electron-density map improvement and structure validation on a Linux multi-CPU web cluster: The TB Structural Genomics Consortium Bias Removal Web Service.

    Science.gov (United States)

    Reddy, Vinod; Swanson, Stanley M; Segelke, Brent; Kantardjieff, Katherine A; Sacchettini, James C; Rupp, Bernhard

    2003-12-01

    Anticipating a continuing increase in the number of structures solved by molecular replacement in high-throughput crystallography and drug-discovery programs, a user-friendly web service for automated molecular replacement, map improvement, bias removal and real-space correlation structure validation has been implemented. The service is based on an efficient bias-removal protocol, Shake&wARP, and implemented using EPMR and the CCP4 suite of programs, combined with various shell scripts and Fortran90 routines. The service returns improved maps, converted data files and real-space correlation and B-factor plots. User data are uploaded through a web interface and the CPU-intensive iteration cycles are executed on a low-cost Linux multi-CPU cluster using the Condor job-queuing package. Examples of map improvement at various resolutions are provided and include model completion and reconstruction of absent parts, sequence correction, and ligand validation in drug-target structures.

  20. Large power supply facilities for fusion research

    International Nuclear Information System (INIS)

    Miyahara, Akira; Yamamoto, Mitsuyoshi.

    1976-01-01

    The authors had the opportunity to manufacture and operate two power supply facilities: a 125 MVA computer-controlled AC generator with a flywheel for the JIPP-T-2 stellarator at the Institute of Plasma Physics, Nagoya University, and a 3 MW trial superconductive homopolar DC generator for the Japan Society for the Promotion of Machine Industry. The 125 MVA flywheel generator can feed both 60 MW (6 kV x 10 kA) DC power for toroidal coils and 20 MW (0.5 kV x 40 kA) DC power for helical coils. Its characteristic features are the possibility of bang-bang control based on Pontryagin's maximum principle, constant current or constant voltage control for load coils, and CPU control for routine operation. The 3 MW (150 V-20000 A) homopolar generator is the largest superconductive one in the world; however, this capacity is not enough for nuclear fusion research. The problems of power supply facilities for large Tokamak devices are discussed.

  1. Radio Context Awareness and Applications

    Directory of Open Access Journals (Sweden)

    Luca Reggiani

    2013-01-01

    Full Text Available The context refers to “any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or physical object.” Radio context awareness is defined as the ability to detect and estimate a system state or parameter, either globally or concerning one of its components, in a radio system for enhancing performance at the physical, network, or application layers. In this paper, we review the fundamentals of context awareness and the recent advances in the main radio techniques that increase context awareness and smartness, posing challenges and renewed opportunities for added-value applications in the context of the next generation of wireless networks.

  2. A Performance Improvement of Power Supply Module for Safety-related Controller

    International Nuclear Information System (INIS)

    Kim, Jong-Kyun; Yun, Dong-Hwa; Hwang, Sung-Jae; Lee, Myeong-Kyun; Yoo, Kwan-Woo

    2015-01-01

    In this paper, a performance improvement for the voltage shortage state that occurs when the power supply module operates in slave mode, achieved by modifying the PFC (Power Factor Correction) circuit, is presented. With the modification of the PFC circuit, the improvement with respect to the slave-mode voltage shortage state is verified. As a result, the POSAFE-Q PLC can ensure stability with a redundant power supply module. The purpose of this paper is to improve the redundancy performance of the power supply module (NSPS-2Q). It is one of the components of POSAFE-Q, a PLC (Programmable Logic Controller) developed for safety-related applications. The power supply module provides stable power so that POSAFE-Q can operate normally. Two power supply modules can be mounted in POSAFE-Q for a redundant (master/slave) function, so that even if a problem occurs in one power supply module, the other will continue to provide power to POSAFE-Q stably

  3. A Performance Improvement of Power Supply Module for Safety-related Controller

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong-Kyun; Yun, Dong-Hwa; Hwang, Sung-Jae; Lee, Myeong-Kyun; Yoo, Kwan-Woo [PONUTech Co., Seoul (Korea, Republic of)

    2015-10-15

    In this paper, a performance improvement for the voltage shortage state that occurs when the power supply module operates in slave mode, achieved by modifying the PFC (Power Factor Correction) circuit, is presented. With the modification of the PFC circuit, the improvement with respect to the slave-mode voltage shortage state is verified. As a result, the POSAFE-Q PLC can ensure stability with a redundant power supply module. The purpose of this paper is to improve the redundancy performance of the power supply module (NSPS-2Q). It is one of the components of POSAFE-Q, a PLC (Programmable Logic Controller) developed for safety-related applications. The power supply module provides stable power so that POSAFE-Q can operate normally. Two power supply modules can be mounted in POSAFE-Q for a redundant (master/slave) function, so that even if a problem occurs in one power supply module, the other will continue to provide power to POSAFE-Q stably.

  4. A hybrid CPU-GPU accelerated framework for fast mapping of high-resolution human brain connectome.

    Directory of Open Access Journals (Sweden)

    Yu Wang

    Full Text Available Recently, a combination of non-invasive neuroimaging techniques and graph theoretical approaches has provided a unique opportunity for understanding the patterns of the structural and functional connectivity of the human brain (referred to as the human brain connectome). Currently, a very large amount of brain imaging data has been collected, and there are very high requirements for the computational capabilities used in high-resolution connectome research. In this paper, we propose a hybrid CPU-GPU framework to accelerate the computation of the human brain connectome. We applied this framework to a publicly available resting-state functional MRI dataset from 197 participants. For each subject, we first computed Pearson's correlation coefficient between all pairs of the time series of gray-matter voxels, and then we constructed unweighted undirected brain networks with 58 k nodes and a sparsity range from 0.02% to 0.17%. Next, graph properties of the functional brain networks were quantified, analyzed and compared with those of 15 corresponding random networks. With our proposed accelerating framework, the above process for each network took 80∼150 minutes, depending on the network sparsity. Further analyses revealed that high-resolution functional brain networks have efficient small-world properties, significant modular structure, a power law degree distribution and highly connected nodes in the medial frontal and parietal cortical regions. These results are largely compatible with previous human brain network studies. Taken together, our proposed framework can substantially enhance the applicability and efficacy of high-resolution (voxel-based) brain network analysis, and has the potential to accelerate the mapping of the human brain connectome in normal and disease states.
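    The network-construction step described above, correlating voxel time series and thresholding to a target sparsity, can be sketched as follows (Python, with a toy number of voxels and random time series standing in for real fMRI data; the study itself used ~58 k voxels and GPU acceleration):

```python
import math
import random

random.seed(1)

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

# Toy "voxel" time series: 30 voxels, 50 time points each.
n_vox, n_t = 30, 50
series = [[random.gauss(0, 1) for _ in range(n_t)] for _ in range(n_vox)]

# Correlate every pair of voxels (the O(n^2) part the GPU accelerates).
corrs = [(abs(pearson(series[i], series[j])), i, j)
         for i in range(n_vox) for j in range(i + 1, n_vox)]

# Keep the strongest links so that kept / possible == target sparsity,
# yielding an unweighted undirected network.
sparsity = 0.10
n_keep = int(sparsity * len(corrs))
edges = sorted(corrs, reverse=True)[:n_keep]
print(f"kept {len(edges)} of {len(corrs)} possible edges")
```

    The pairwise-correlation loop is the quadratic bottleneck: at 58 k voxels it involves ~1.7 billion pairs, which is why the authors offload it to the GPU.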

  5. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease.

    Science.gov (United States)

    Shamonin, Denis P; Bron, Esther E; Lelieveldt, Boudewijn P F; Smits, Marion; Klein, Stefan; Staring, Marius

    2013-01-01

Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU, building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4-5x on an 8-core machine. Using OpenCL, a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15-60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operating characteristic curve of 88% and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license.
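Technique (i) exploits the fact that the cost-function derivative is a sum of per-sample contributions, so chunks can be evaluated concurrently and reduced. A minimal sketch of that decomposition (hypothetical names; not elastix's actual implementation, which uses native threads):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_gradient(samples, grad_one, workers=4):
    """Evaluate per-sample cost-function derivative terms concurrently.

    The registration metric derivative is a sum over image samples, so
    the per-sample terms can be computed in parallel and then reduced.
    grad_one(sample) returns one term of that sum.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(grad_one, samples))
```

The reduction is associative, which is what makes both the CPU-thread and GPU variants of this step straightforward to parallelize.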

  6. [Gender stereotypes arising in a state of gender awareness].

    Science.gov (United States)

    Ito, Y

    2001-12-01

    This study examined the structure of gender stereotypes which might arise in the state of gender awareness that was triggered by social situations where people perceived their gender differences strongly. Out of 1,500 residents in Tokyo aged between 20-60, 342 females and 313 males were randomly chosen and answered the questions about gender consciousness in the state of gender awareness. A factor analysis revealed that "maternity" and "trustworthiness" were the dominant dimensions of gender stereotypes in the state of gender awareness, and that trustworthiness particularly formed the basis of gender stereotypes. Generation differences in gender stereotypes were also revealed between women in their 40 s and 50 s, and between men in their 30 s and 40 s. Generally, power for men and nurture for women were more likely to be perceived in a state of gender awareness.

  7. Low-power grating detection system chip for high-speed low-cost length and angle precision measurement

    Science.gov (United States)

    Hou, Ligang; Luo, Rengui; Wu, Wuchen

    2006-11-01

This paper presents a low-power grating detection chip (EYAS) for length and angle precision measurement. Traditional grating detection methods, such as resistor-chain divider or phase-locked divider circuits, are difficult to design and tune. The need for an additional CPU for control and display makes these methods' implementation more complex and costly. Traditional methods also suffer from low sampling speed due to the complex divider circuit scheme and CPU software compensation. EYAS is an application specific integrated circuit (ASIC). It integrates a micro controller unit (MCU), power management unit (PMU), LCD controller, keyboard interface, grating detection unit and other peripherals. Working at 10 MHz, EYAS provides a 5 MHz internal sampling rate and can handle a 1.25 MHz orthogonal (quadrature) signal from the grating sensor. Through a simple keyboard interface, the sensor parameters, data processing and system working mode can be configured. Two LCD controllers, which form the output interface, can drive either dot-array or segment LCDs. The PMU switches the system between working and standby modes by clock gating to save power. EYAS consumes 0.9 mW in test mode (where system actions are more frequent than in real-world use) and 0.2 mW in real-world use. EYAS achieves the complete grating detection system function, with high-speed quadrature signal handling, in a single chip with very low power consumption.
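The orthogonal signal from a grating sensor is a two-channel quadrature waveform; position is recovered by tracking transitions of the 2-bit AB state through its Gray-code sequence. A sketch of the counting logic (illustrative only; EYAS implements this in hardware):

```python
# Gray-code order of the 2-bit AB quadrature state: 00 -> 01 -> 11 -> 10 -> 00
SEQ = [0b00, 0b01, 0b11, 0b10]

def decode(samples):
    """Accumulate signed position steps from successive 2-bit AB samples."""
    pos = 0
    prev = samples[0]
    for s in samples[1:]:
        if s == prev:
            continue                      # no transition
        d = (SEQ.index(s) - SEQ.index(prev)) % 4
        if d == 1:
            pos += 1                      # one step forward
        elif d == 3:
            pos -= 1                      # one step backward
        # d == 2 would mean a missed sample (illegal jump); ignored here
        prev = s
    return pos
```

Detecting every state transition requires sampling several times faster than the signal frequency, which is why a 5 MHz internal sampling rate is quoted for a 1.25 MHz quadrature input.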

  8. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current......
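The receding-horizon logic behind MPC can be illustrated with a toy scalar example: at each step, search over candidate control sequences, score them with an ℓ1 tracking term plus a linear economic term, and apply only the first move. This is a brute-force sketch under assumed dynamics, not the paper's portfolio controller:

```python
import itertools

def mpc_step(x, ref, horizon, controls, a=1.0, b=1.0, lam=0.1):
    """Return the first move of the best control sequence over the horizon.

    Dynamics: x[k+1] = a*x[k] + b*u[k] (assumed scalar model).
    Cost: sum over the horizon of |x[k] - ref| (l1 tracking) plus
    lam*|u[k]| (a linear 'economic' term), minimized by brute force
    over a small discrete control set.
    """
    best_cost, best_first = float("inf"), None
    for seq in itertools.product(controls, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u
            cost += abs(xk - ref) + lam * abs(u)
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first
```

Applying only the first move and re-solving at the next sample is the receding-horizon principle; a practical controller would replace the brute-force search with a structured ℓ1/linear optimization over the whole portfolio model.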

  9. Transfer of strength and power training to sports performance.

    Science.gov (United States)

    Young, Warren B

    2006-06-01

    The purposes of this review are to identify the factors that contribute to the transference of strength and power training to sports performance and to provide resistance-training guidelines. Using sprinting performance as an example, exercises involving bilateral contractions of the leg muscles resulting in vertical movement, such as squats and jump squats, have minimal transfer to performance. However, plyometric training, including unilateral exercises and horizontal movement of the whole body, elicits significant increases in sprint acceleration performance, thus highlighting the importance of movement pattern and contraction velocity specificity. Relatively large gains in power output in nonspecific movements (intramuscular coordination) can be accompanied by small changes in sprint performance. Research on neural adaptations to resistance training indicates that intermuscular coordination is an important component in achieving transfer to sports skills. Although the specificity of resistance training is important, general strength training is potentially useful for the purposes of increasing body mass, decreasing the risk of soft-tissue injuries, and developing core stability. Hypertrophy and general power exercises can enhance sports performance, but optimal transfer from training also requires a specific exercise program.

  10. TaPT: Temperature-Aware Dynamic Cache Optimization for Embedded Systems

    Directory of Open Access Journals (Sweden)

    Tosiron Adegbija

    2017-12-01

Embedded systems have stringent design constraints, which has focused much prior research on optimizing energy consumption and/or performance. Since embedded systems typically have fewer cooling options, rising temperature, and thus temperature optimization, is an emergent concern. Most embedded systems only dissipate heat by passive convection, due to the absence of dedicated thermal management hardware mechanisms. The embedded system's temperature not only affects the system's reliability, but can also affect its performance, power, and cost. Thus, embedded systems require efficient thermal management techniques. However, thermal management can conflict with other optimization objectives, such as execution time and energy consumption. In this paper, we focus on managing the temperature using a synergy of cache optimization and dynamic frequency scaling, while also optimizing the execution time and energy consumption. This paper provides new insights on the impact of cache parameters on efficient temperature-aware cache tuning heuristics. In addition, we present temperature-aware phase-based tuning, TaPT, which determines Pareto optimal clock frequency and cache configurations for fine-grained execution time, energy, and temperature tradeoffs. TaPT enables autonomous system optimization and also allows designers to specify temperature constraints and optimization priorities. Experiments show that TaPT can effectively reduce execution time, energy, and temperature, while imposing minimal hardware overhead.
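Pareto optimality over execution time, energy, and temperature, as used by TaPT, can be illustrated with a small dominance filter (a generic sketch, not TaPT's actual heuristic; names are hypothetical):

```python
def pareto_front(configs):
    """Return the names of configurations not dominated in any objective.

    configs: list of (name, (exec_time, energy, temperature)) tuples,
    all objectives to be minimized. A configuration is dominated if some
    other one is no worse in every objective and differs in at least one.
    """
    front = []
    for name, objs in configs:
        dominated = any(
            other != objs and all(o2 <= o1 for o1, o2 in zip(objs, other))
            for _, other in configs
        )
        if not dominated:
            front.append(name)
    return front
```

Designer-specified constraints and priorities then amount to filtering and ranking this front, rather than collapsing the three objectives into a single weighted score.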

  11. Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations

    Directory of Open Access Journals (Sweden)

    Marco Lanuzza

    2011-04-01

Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed for the particular case of designing energy aware adder circuits. Five well-known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (conventionally identified as low-energy slow architectures) operating at higher power supply voltages can achieve a significantly better timing yield than more complex, faster adders when used in low-power designs with supply voltages lower than nominal.

  12. Fairness-Aware and Energy Efficiency Resource Allocation in Multiuser OFDM Relaying System

    Directory of Open Access Journals (Sweden)

    Guangjun Liang

    2016-01-01

A fairness-aware resource allocation scheme in a cooperative orthogonal frequency division multiplexing (OFDM) network is proposed based on jointly optimizing the subcarrier pairing, power allocation, and channel-user assignment. Compared with traditional OFDM relaying networks, the source is permitted to retransmit the same data it transmitted in the first time slot, further improving the system capacity performance. The problem of maximizing the energy efficiency (EE) of the system under a total power constraint and a minimal spectral efficiency constraint is formulated as a mixed-integer nonlinear programming (MINLP) problem, which is intractable in general. The optimization model is simplified into a typical fractional programming problem, which is shown to be quasiconcave. Thus we can adopt the Dinkelbach method to solve the proposed MINLP problem and achieve the optimal solution. The simulation results show that the proposed joint resource allocation method can achieve optimal EE performance under the minimum system service rate requirement, with good global convergence.
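The Dinkelbach method referenced above converts a fractional objective max f(x)/g(x) into a sequence of parametric subproblems max f(x) − q·g(x). A minimal sketch (hypothetical names; the inner maximization is solved here by enumeration over a finite candidate set, whereas the paper solves a structured resource allocation subproblem):

```python
def dinkelbach(candidates, f, g, tol=1e-9):
    """Maximize f(x)/g(x) (with g > 0) by Dinkelbach's parametric method.

    Repeatedly solve the subproblem max_x f(x) - q*g(x); when its optimal
    value drops below tol, q equals the optimal ratio. The subproblem is
    solved here by enumeration over a finite candidate set.
    """
    q = 0.0
    while True:
        x = max(candidates, key=lambda c: f(c) - q * g(c))
        if f(x) - q * g(x) < tol:
            return x, q
        q = f(x) / g(x)
```

For EE maximization, f is the achievable rate and g the consumed power; quasiconcavity of the ratio is what guarantees the parametric iteration converges to the global optimum.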

  13. [Working memory, phonological awareness and spelling hypothesis].

    Science.gov (United States)

    Gindri, Gigiane; Keske-Soares, Márcia; Mota, Helena Bolli

    2007-01-01

To verify the relationship between working memory, phonological awareness and spelling hypothesis in pre-school children and first graders. Participants of this study were 90 students from state schools who presented typical linguistic development. Forty students were preschoolers, with an average age of six, and 50 students were first graders, with an average age of seven. Participants were submitted to an evaluation of working memory abilities based on the Working Memory Model (Baddeley, 2000), involving the phonological loop. The phonological loop was evaluated using the Auditory Sequential Test, subtest 5 of the Illinois Test of Psycholinguistic Abilities (ITPA), Brazilian version (Bogossian & Santos, 1977), and the Meaningless Words Memory Test (Kessler, 1997). Phonological awareness abilities were investigated using the Phonological Awareness: Instrument of Sequential Assessment (CONFIAS - Moojen et al., 2003), involving syllabic and phonemic awareness tasks. Writing was characterized according to Ferreiro & Teberosky (1999). Preschoolers were able to repeat sequences of 4.80 digits and 4.30 syllables on average. Regarding phonological awareness, their performance was 19.68 at the syllabic level and 8.58 at the phonemic level. Most of the preschoolers demonstrated a pre-syllabic writing hypothesis. First graders repeated, on average, sequences of 5.06 digits and 4.56 syllables. These children presented phonological awareness scores of 31.12 at the syllabic level and 16.18 at the phonemic level, and demonstrated an alphabetic writing hypothesis. Working memory performance, phonological awareness and spelling level are inter-related, as well as being related to chronological age, development and schooling.

  14. Distributed context-aware systems

    CERN Document Server

    Ferreira, Paulo

    2014-01-01

Context-aware systems aim to deliver a rich user experience by taking into account the current user context (location, time, activity, etc.), possibly captured without his intervention. For example, cell phones are now able to continuously update a user's location while, at the same time, users execute an increasing amount of activities online, where their actions may be easily captured (e.g. login in a web application) without user consent. In the last decade, this topic has seen numerous developments that demonstrate its relevance and usefulness. The trend was accelerated with the widespread availability of powerful mobile devices (e.g. smartphones) that include a myriad of sensors which enable applications to capture the user context. However, there are several challenges that must be addressed; we focus on scalability (large number of context aware messages) and privacy (personal data that may be propagated). This book is organized in five chapters starting with an introduction to the theme raising the mo...

  15. Load-aware modeling for uplink cellular networks in a multi-channel environment

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Alouini, Mohamed-Slim

    2014-01-01

    We exploit tools from stochastic geometry to develop a tractable analytical approach for modeling uplink cellular networks. The developed model is load aware and accounts for per-user power control as well as the limited transmit power constraint

  16. Optimization of Selected Remote Sensing Algorithms for Embedded NVIDIA Kepler GPU Architecture

    Science.gov (United States)

    Riha, Lubomir; Le Moigne, Jacqueline; El-Ghazawi, Tarek

    2015-01-01

This paper evaluates the potential of the embedded Graphics Processing Unit in NVIDIA's Tegra K1 for onboard processing. The performance is compared to a general purpose multi-core CPU and a full-fledged GPU accelerator. This study uses two algorithms: Wavelet Spectral Dimension Reduction of Hyperspectral Imagery and the Automated Cloud-Cover Assessment (ACCA) Algorithm. The Tegra K1 achieved a factor of 51 for the ACCA algorithm and 20 for the dimension reduction algorithm, compared to the performance of a high-end 8-core server Intel Xeon CPU with 13.5 times higher power consumption.

  17. Measurement of Temporal Awareness in Air Traffic Control

    Science.gov (United States)

    Rantanen, E.M.

    2009-01-01

Temporal awareness, or level 3 situation awareness, is critical to successful control of air traffic, yet the construct remains ill-defined and difficult to measure. This research sought evidence for air traffic controllers' awareness of the temporal characteristics of their tasks in data from a high-fidelity system evaluation simulation. Five teams of controllers worked on four scenarios with different traffic loads. Several temporal parameters were defined for each task controllers performed during a simulation run, and their actions on the tasks were timed relative to them. Controllers showed a strong tendency to prioritize tasks according to a first-come, first-served principle. This trend persisted as task load increased. Also evident was awareness of the urgency of tasks, as tasks with an impending closing of a window of opportunity were performed before tasks that had longer time available before the closing of the window.

  18. Extended performance electric propulsion power processor design study. Volume 2: Technical summary

    Science.gov (United States)

    Biess, J. J.; Inouye, L. Y.; Schoenfeld, A. D.

    1977-01-01

Electric propulsion power processor technology has progressed during the past decade to the point that it is considered ready for application. Several power processor design concepts were evaluated and compared. Emphasis was placed on a 30 cm ion thruster power processor with a beam supply power rating of 2.2 kW to 10 kW for the main propulsion power stage. Extensions in power processor performance were defined and were designed in sufficient detail to determine efficiency, component weight, part count, reliability and thermal control. A detailed design was performed on a microprocessor as the thyristor power processor controller. A reliability analysis was performed to evaluate the effect of the control electronics redesign. Preliminary electrical design, mechanical design and thermal analysis were performed on a 6 kW power transformer for the beam supply. Bi-Mod mechanical, structural and thermal control configurations were evaluated for the power processor, and preliminary estimates of mechanical weight were determined.

  19. Accounting for the speed shear in wind turbine power performance measurement

    DEFF Research Database (Denmark)

    Wagner, Rozenn

The power curve of a wind turbine is the primary characteristic of the machine, as it is the basis of the warranty for its power production. The current IEC standard for power performance measurement only requires the measurement of the wind speed at hub height and the air density to characterise the vertical wind shear and the turbulence intensity. The work presented in this thesis consists of the description and the investigation of a simple method to account for the wind speed shear in the power performance measurement. Ignoring this effect was shown to result in a power curve dependent on the shear...... for turbulence intensity suggested by Albers. The second method was found to be more suitable for normalising the power curve for the turbulence intensity. Using the equivalent wind speed accounting for the wind shear in the power performance measurement was shown to result in a more repeatable power curve than......

  20. Message maps for Safety Barrier Awareness

    DEFF Research Database (Denmark)

of the risks in a given situation, time or place, and by enabling them to observe and judge whether the relevant safety barriers are in place and in good order. This can be considered as "Situational Awareness (SA)", which is an essential competence for an employee to perform his/her job safely. This Situational Awareness puts a number of requirements on people, work conditions, management, learning, knowledge, experience, motivation, etc. The Dutch WORM and RAM projects led to the identification of 64 types of risks and the safety barriers and performance factors that are linked to these risks......

  1. The effects of carbon nanotubes on CPU cooling

    Science.gov (United States)

    Challa, Sashi Kiran

Computers today have evolved from big, bulky machines that took up entire rooms into small, simple machines for web browsing, and into compact but complicated multi-core servers and supercomputing architectures. This has been possible due to the evolution of processors. Today's processors have reached 45 nm process technology with millions of transistors. Transistors produce heat when they run, and today more than ever we have a growing need to manage this heat efficiently. Increasing power density makes on-chip temperatures harder to manage, and a move toward more temperature-aware architectures is needed. In this research we address the issue of handling the heat produced by processors in an efficient manner. We investigate whether carbon nanotubes can help dissipate the heat produced by the processor more efficiently. In the process, we have also tried to develop a repeatable experimental setup, as we have not been able to find prior work describing this exact procedure. The use of carbon nanotubes seemed natural, as they have a very high thermal conductivity. However, one uncertain aspect of the experiment is the nanotubes themselves: they are still under study, their properties are not completely understood, and there has been some inconsistency between the theoretical values of their properties and the experimental results obtained so far. The results we obtained were not exactly what we expected, but they were close and pointed in the right direction, indicating that further work should show better and more consistent results.

  2. Mechanical power, thrust power and propelling efficiency: relationships with elite sprint swimming performance.

    Science.gov (United States)

    Gatta, Giorgio; Cortesi, Matteo; Swaine, Ian; Zamparo, Paola

    2018-03-01

The purpose of this study was to explore the relationships between mechanical power, thrust power, propelling efficiency and sprint performance in elite swimmers. Mechanical power was measured in 12 elite sprint male swimmers: (1) in the laboratory, by using a whole-body swimming ergometer (W'_TOT); and (2) in the pool, by measuring full tethered swimming force (F_T) and maximal swimming velocity (V_max): W'_T = F_T · V_max. Propelling efficiency (η_P) was estimated based on the "paddle wheel model" at V_max. V_max was 2.17 ± 0.06 m·s⁻¹, η_P was 0.39 ± 0.02, W'_T was 374 ± 62 W and W'_TOT was 941 ± 92 W. V_max was better related to W'_T (useful power output: R = 0.943, P swimming performance. The ratio W'_T/W'_TOT (0.40 ± 0.04) represents the fraction of total mechanical power that can be utilised in water (e.g., η_P) and was indeed the same as that estimated based on the "paddle wheel model"; this supports the use of this model to estimate η_P in swimming.
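The quantities above are related by simple arithmetic: thrust (useful) power is tethered force times maximal velocity, and its ratio to total ergometer power approximates the propelling efficiency. A small sketch using the reported group means (function names are hypothetical):

```python
def useful_power(f_tethered, v_max):
    """Thrust (useful) power W'_T = F_T * V_max, in watts."""
    return f_tethered * v_max

def eta_fraction(w_thrust, w_total):
    """Fraction of total mechanical power usable in water (approximates eta_P)."""
    return w_thrust / w_total
```

With the reported means W'_T = 374 W and W'_TOT = 941 W, eta_fraction gives roughly 0.40, matching the paddle-wheel estimate of η_P = 0.39 ± 0.02.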

  3. An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems

    Science.gov (United States)

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2013-01-01

The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks based on their serialization (by using their worst-case execution times) and on power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network: a wakeup-radio and wakeup-timer for implantable medical devices. This scheduling system is validated by experimental results of its performance when used with life-time extensions of ICD devices. PMID:23271602

  4. An on-time power-aware scheduling scheme for medical sensor SoC-based WBAN systems.

    Science.gov (United States)

    Hwang, Tae-Ho; Kim, Dong-Sun; Kim, Jung-Guk

    2012-12-27

The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD), which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks based on their serialization (by using their worst-case execution times) and on power consumption optimization. The scheduler was embedded into a system on chip (SoC) developed to support the wireless body area network: a wakeup-radio and wakeup-timer for implantable medical devices. This scheduling system is validated by experimental results of its performance when used with life-time extensions of ICD devices.

  5. Deployment of 464XLAT (RFC6877) alongside IPv6-only CPU resources at WLCG sites

    Science.gov (United States)

    Froy, T. S.; Traynor, D. P.; Walker, C. J.

    2017-10-01

    IPv4 is now officially deprecated by the IETF. A significant amount of effort has already been expended by the HEPiX IPv6 Working Group on testing dual-stacked hosts and IPv6-only CPU resources. Dual-stack adds complexity and administrative overhead to sites that may already be starved of resource. This has resulted in a very slow uptake of IPv6 from WLCG sites. 464XLAT (RFC6877) is intended for IPv6 single-stack environments that require the ability to communicate with IPv4-only endpoints. This paper will present a deployment strategy for 464XLAT, operational experiences of using 464XLAT in production at a WLCG site and important information to consider prior to deploying 464XLAT.

  6. Awareness of fetal echo in Indian scenario

    International Nuclear Information System (INIS)

    Warrier, Dhanya; Saraf, Rahul; Maheshwari, Sunita; Suresh, PV; Shah, Sejal

    2012-01-01

Fetal echocardiography is a well established, sensitive tool to diagnose congenital heart disease (CHD) in utero. One of the determinants of effective utilization of fetal echocardiography is its awareness in the general population. The present hospital-based study was undertaken to assess the awareness of the need for fetal echocardiography amongst Indian parents. One thousand one hundred and thirty eight consecutive parents who visited the pediatric cardiology outpatient department of a tertiary care centre over a period of two months were asked to fill out a questionnaire that included their demographic data, educational status, history of CHD in children, awareness of fetal echocardiography, and the source of information and timing of the fetal echocardiogram if performed. The data was categorized and awareness was noted in different groups. The awareness in the study population was 2.2%. Awareness was found to be similar across the study population irrespective of the demographics and high-risk status of the parents. The awareness of fetal echocardiography, an important tool in reducing the incidence of complex CHD and thereby impacting public health, is alarmingly low in the population studied. Appropriate action to increase awareness of fetal echocardiography needs to be looked into.

  7. Context Aware Middleware Architectures: Survey and Challenges

    Directory of Open Access Journals (Sweden)

    Xin Li

    2015-08-01

Context aware applications, which can adapt their behaviors to changing environments, are attracting more and more attention. To simplify the complexity of developing applications, context aware middleware, which introduces context awareness into the traditional middleware, is highlighted to provide a homogeneous interface involving generic context management solutions. This paper provides a survey of state-of-the-art context aware middleware architectures proposed during the period from 2009 through 2015. First, a preliminary background, such as the principles of context, context awareness, context modelling, and context reasoning, is provided for a comprehensive understanding of context aware middleware. On this basis, an overview of eleven carefully selected middleware architectures is presented and their main features explained. Then, thorough comparisons and analysis of the presented middleware architectures are performed based on technical parameters including architectural style, context abstraction, context reasoning, scalability, fault tolerance, interoperability, service discovery, storage, security & privacy, context awareness level, and cloud-based big data analytics. The analysis shows that there is actually no context aware middleware architecture that complies with all requirements. Finally, challenges are pointed out as open issues for future work.

  8. Team situation awareness in nuclear power plant process control: A literature review, task analysis and future research

    International Nuclear Information System (INIS)

    Ma, R.; Kaber, D. B.; Jones, J. M.; Starkey, R. L.

    2006-01-01

    Operator achievement and maintenance of situation awareness (SA) in nuclear power plant (NPP) process control has emerged as an important concept in defining effective relationships between humans and automation in this complex system. A literature review on factors influencing SA revealed several variables to be important to team SA, including the overall task and team goals, individual tasks, team member roles, and the team members themselves. Team SA can also be adversely affected by a range of factors, including stress, mental over- or under-loading, system design (including human-machine interface design), complexity, human error in perception, and automation. Our research focused on the analysis of 'shared' SA and team SA among an assumed three-person, main-control-room team. Shared SA requirements represent the knowledge that is held in common by NPP operators, and team SA represents the collective, unique knowledge of all operators. The paper describes an approach to goal-directed task analysis (GDTA) applied to NPP main control room operations. In general, the GDTA method reveals critical operator decision and information requirements. It identifies operator SA requirements relevant to performing complex systems control. The GDTA can reveal requirements at various levels of cognitive processing, including perception, comprehension and projection, in NPP process control. Based on the literature review and GDTA approach, a number of potential research issues are proposed with an aim toward understanding and facilitating team SA in NPP process control. (authors)

  9. Measuring situation awareness of operation teams in NPPs using a verbal protocol analysis

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Park, Jinkyun; Kim, Ar ryum; Seong, Poong Hyun

    2012-01-01

Highlights: ► A method for measuring team situation awareness is developed. ► Verbal protocol analysis is adopted in this method. ► This method resolves uncertainties from conventional methods. ► This method can be used in evaluating the human–system interfaces. - Abstract: Situation awareness (SA) continues to receive a considerable amount of attention from the ergonomics community, given that the need for operators to maintain SA is frequently cited as a key to effective and efficient performance. Although complex and dynamic environments such as the main control room (MCR) of a nuclear power plant (NPP) are operated by teams, and team situation awareness (TSA) is also cited as an important factor, research has largely been limited to individual SA. However, understanding TSA can provide a window onto the characteristics of team acquisition as well as the performance of a complex skill. Such knowledge can therefore be valuable in diagnosing team performance successes and failures. Moreover, training and design interventions can target the cognitive underpinnings of team performance, with implications for the design of technological aids to improve team performance. Despite these advantages and the importance of understanding TSA, measures and methods targeting TSA are sparse and fail to address it properly. In this study, an objective TSA measurement method is developed in an effort to understand TSA. First, key considerations for developing such a method are derived. Based on these considerations, the proposed method is developed, focusing mainly on creating logical connections between team communications and TSA. A speech act coding scheme is also implemented to analyze team communications. The TSA measurement method developed in this study provides a measure for each level of TSA. A preliminary study showed that this method is feasible for measuring TSA to a fair extent. Useful insights into TSA are also derived.
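
The speech-act-based measurement described above can be illustrated with a toy tally: communications coded with speech-act labels are mapped to Endsley's three SA levels (perception, comprehension, projection) and aggregated into a per-level profile. The codes and their mapping below are hypothetical illustrations, not the paper's actual coding scheme.

```python
from collections import Counter

# Hypothetical speech-act codes mapped to Endsley's three SA levels.
# This mapping is an illustrative assumption, not the paper's scheme.
SA_LEVEL = {
    "report_indicator": 1,  # perception: stating a raw plant parameter
    "interpret_state": 2,   # comprehension: explaining what it means
    "predict_trend": 3,     # projection: anticipating the future state
}

def sa_level_profile(coded_utterances):
    """Aggregate coded team communications into per-SA-level proportions."""
    counts = Counter(SA_LEVEL[code] for code in coded_utterances)
    total = sum(counts.values())
    return {level: counts.get(level, 0) / total for level in (1, 2, 3)}

transcript = ["report_indicator", "report_indicator",
              "interpret_state", "predict_trend"]
profile = sa_level_profile(transcript)  # {1: 0.5, 2: 0.25, 3: 0.25}
```

A real instrument would, as the abstract notes, also have to link utterances across team members before scoring each TSA level.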

  10. Effect of Low Pressure End Conditions on Steam Power Plant Performance

    Directory of Open Access Journals (Sweden)

    Ali Syed Haider

    2014-07-01

Full Text Available Most of the electricity produced throughout the world today comes from steam power plants, and improving their performance is crucial to minimizing greenhouse gas emissions and fuel consumption. The energy efficiency of a thermal power plant strongly depends on its boiler-condenser operating conditions. The low pressure end conditions of the condenser influence the power output, steam consumption and efficiency of a plant. Hence, the objective of this paper is to study the effect of the low pressure end conditions on steam power plant performance. For the study, each component was modelled thermodynamically. Simulations showed that the performance of the condenser is strongly a function of its pressure, which in turn depends on the flow rate and temperature of the cooling water. Furthermore, when the condenser pressure increases, both net power output and plant efficiency decrease, whereas the steam consumption increases. The results can be used to run a steam power cycle at optimum conditions.
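
The reported trend (higher condenser pressure, lower efficiency) can be sketched with an idealized Carnot bound, using steam-table saturation temperatures to map condenser pressure to the cold-end temperature. This is a simplified stand-in for the paper's full component-by-component model, and the 500 °C boiler temperature is an assumed figure.

```python
# Saturation temperatures (deg C) of water at a few condenser pressures
# (kPa), taken from standard steam tables.
T_SAT = {5: 32.9, 10: 45.8, 20: 60.1}

def ideal_efficiency(p_cond_kpa, t_boiler_c=500.0):
    """Carnot upper bound on cycle efficiency for a given condenser
    pressure: eta = 1 - T_cold / T_hot (absolute temperatures)."""
    t_cond_k = T_SAT[p_cond_kpa] + 273.15
    t_boil_k = t_boiler_c + 273.15
    return 1.0 - t_cond_k / t_boil_k

# Efficiency falls monotonically as condenser pressure rises
effs = [ideal_efficiency(p) for p in (5, 10, 20)]
```

The real cycle efficiency is well below this bound, but the direction of the effect matches the simulation result in the abstract.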

  11. Development and Analysis of a Resource-Aware Power Management System as Applied to Small Spacecraft

    Energy Technology Data Exchange (ETDEWEB)

    Shriver, Patrick [Univ. of Colorado, Boulder, CO (United States)

    2005-01-01

In this thesis, an overall framework and solution method for managing the limited power resources of a small spacecraft is presented. As in mobile computing technology, a primary limiting factor is the available power resources. In spite of the millions of dollars budgeted for research and development over decades, improvements in battery efficiency remain modest. This situation is exacerbated by advances in payload technology that lead to increasingly power-hungry and data-intensive instruments. The challenge for the small spacecraft is to maximize capabilities and performance while meeting difficult design requirements and small project budgets.

  12. High Power Flex-Propellant Arcjet Performance

    Science.gov (United States)

    Litchford, Ron J.

    2011-01-01

A MW-class electrothermal arcjet based on a water-cooled, wall-stabilized, constricted arc discharge configuration was subjected to extensive performance testing using hydrogen and simulated ammonia propellants with the deliberate aim of advancing technology readiness level for potential space propulsion applications. The breadboard design incorporates alternating conductor/insulator wafers to form a discharge barrel enclosure with a 2.5-cm internal bore diameter and an overall length of approximately 1 meter. Swirling propellant flow is introduced into the barrel, and a DC arc discharge mode is established between a backplate tungsten cathode button and a downstream ring-anode/spin-coil assembly. The arc-heated propellant then enters a short mixing plenum and is accelerated through a converging-diverging graphite nozzle. This innovative design configuration differs substantially from conventional arcjet thrusters, in which the throat functions as constrictor and the expansion nozzle serves as the anode, and permits the attainment of an equilibrium sonic throat (EST) condition. During the test program, applied electrical input power was varied between 0.5-1 MW with hydrogen and simulated ammonia flow rates in the range of 4-12 g/s and 15-35 g/s, respectively. The ranges of investigated specific input energy therefore fell between 50-250 MJ/kg for hydrogen and 10-60 MJ/kg for ammonia. In both cases, observed arc efficiencies were between 40-60 percent as determined via a simple heat balance method based on electrical input power and coolant water calorimeter measurements. These experimental results were found to be in excellent agreement with theoretical chemical equilibrium predictions, thereby validating the EST assumption and enabling the utilization of standard TDK nozzle expansion analyses to reliably infer baseline thruster performance characteristics. Inferred specific impulse performance accounting for recombination kinetics during the expansion process

  13. Awareness of Cancer Susceptibility Genetic Testing

    Science.gov (United States)

    Mai, Phuong L.; Vadaparampil, Susan Thomas; Breen, Nancy; McNeel, Timothy S.; Wideroff, Louise; Graubard, Barry I.

    2014-01-01

Background Genetic testing for several cancer susceptibility syndromes is clinically available; however, existing data suggest limited population awareness of such tests. Purpose To examine awareness regarding cancer genetic testing in the U.S. population aged ≥25 years in the 2000, 2005, and 2010 National Health Interview Surveys. Methods The weighted percentages of respondents aware of cancer genetic tests, and percent changes from 2000–2005 and 2005–2010, overall and by demographic, family history, and healthcare factors were calculated. Interactions were used to evaluate the patterns of change in awareness between 2005 and 2010 among subgroups within each factor. To evaluate associations with awareness in 2005 and 2010, percentages were adjusted for covariates using multiple logistic regression. The analysis was performed in 2012. Results Awareness decreased from 44.4% to 41.5% between 2000 and 2005. Awareness increased between 2005 and 2010 in most subgroups, particularly among individuals in the South (p-interaction=0.03) or with a usual place of care (p-interaction=0.01). In 2005 and 2010, awareness was positively associated with personal or family cancer history and high perceived cancer risk, and inversely associated with racial/ethnic minorities, age 25–39 or ≥60 years, male gender, lower education and income levels, public or no health insurance, and no provider contact in 12 months. Conclusions Despite improvement from 2005 to 2010, ≤50% of the U.S. adult population was aware of cancer genetic testing in 2010. Notably, disparities persist for racial/ethnic minorities and individuals with limited health care access or income. PMID:24745633

  14. IMPROVING THE PERFORMANCE OF THE LINEAR SYSTEMS SOLVERS USING CUDA

    Directory of Open Access Journals (Sweden)

    BOGDAN OANCEA

    2012-05-01

Full Text Available Parallel computing can offer an enormous performance advantage for very large applications in almost any field: scientific computing, computer vision, databases, data mining, and economics. GPUs are high performance many-core processors that can achieve very high FLOP rates. Since the first idea of using GPUs for general purpose computing, things have evolved, and there are now several approaches to GPU programming: CUDA from NVIDIA and Stream from AMD. CUDA is now a popular programming model for general purpose computations on GPUs for C/C++ programmers. A great number of applications have been ported to the CUDA programming model, obtaining speedups of orders of magnitude compared to optimized CPU implementations. In this paper we present an implementation of a library for solving linear systems using the CUDA framework. We present the results of performance tests and show that using the GPU one can obtain speedups of approximately 80 times compared with a CPU implementation.
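
The kind of linear-system solver that ports well to CUDA can be illustrated with Jacobi iteration: every component update is independent of the others, so each can be mapped to its own GPU thread. The NumPy sketch below is CPU-only and purely illustrative of the algorithm's structure, not the library described in the record.

```python
import numpy as np

def jacobi_solve(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration for Ax = b. Each component of x_new depends only
    on the previous iterate, which is why this solver class maps
    naturally onto one-GPU-thread-per-component kernels."""
    D = np.diag(A)                    # diagonal entries
    R = A - np.diagflat(D)            # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D       # all components updated independently
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant system (guarantees Jacobi convergence)
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])
x = jacobi_solve(A, b)
```

Direct methods (LU, QR) dominate GPU linear-algebra libraries too, but iterative kernels like this one are the simplest way to see where the parallelism comes from.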

  15. Development of an Objective Measurement Method for Situation Awareness of Operation Teams in NPPs

    International Nuclear Information System (INIS)

    Lee, Seung Woo; Kim, Ar Ryum; Kim, Hyoung Ju; Seong, Poong Hyun; Park, Jin Kyun

    2011-01-01

Situation awareness (SA) continues to receive considerable attention from the ergonomics community, since the need for operators to maintain SA is frequently cited as a key to effective and efficient performance. Even though complex and dynamic environments such as the main control room (MCR) in nuclear power plants (NPPs) are operated by teams, and the SA that teams possess is important, research currently focuses on individual SA rather than team situation awareness (TSA). Since few measurement methods have been developed for TSA, individual SA measurement methods are first reviewed and the critical requirements that new TSA measurements should satisfy are derived. Under the assumption that TSA is an integration of individual SA, a new, objective TSA measurement method is developed. The method is based mainly on logical connections between TSA and team communication and employs verbal protocol analysis. It provides a measure for each level of TSA. A preliminary analysis showed that the method is feasible to a fair extent.

  16. Development of an Objective Measurement Method for Situation Awareness of Operation Teams in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Woo; Kim, Ar Ryum; Kim, Hyoung Ju; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun [KAERI, Daejeon (Korea, Republic of)

    2011-08-15

Situation awareness (SA) continues to receive considerable attention from the ergonomics community, since the need for operators to maintain SA is frequently cited as a key to effective and efficient performance. Even though complex and dynamic environments such as the main control room (MCR) in nuclear power plants (NPPs) are operated by teams, and the SA that teams possess is important, research currently focuses on individual SA rather than team situation awareness (TSA). Since few measurement methods have been developed for TSA, individual SA measurement methods are first reviewed and the critical requirements that new TSA measurements should satisfy are derived. Under the assumption that TSA is an integration of individual SA, a new, objective TSA measurement method is developed. The method is based mainly on logical connections between TSA and team communication and employs verbal protocol analysis. It provides a measure for each level of TSA. A preliminary analysis showed that the method is feasible to a fair extent.

  17. The Impact of Power Switching Devices on the Thermal Performance of a 10 MW Wind Power NPC Converter

    Directory of Open Access Journals (Sweden)

    Ke Ma

    2012-07-01

Full Text Available Power semiconductor switching devices play an important role in the performance of high power wind energy generation systems. The state-of-the-art device choices for wind power applications reported by industry include IGBT modules, IGBT press-packs and IGCT press-packs. Because of significant differences in packaging structure, electrical characteristics, and thermal impedance, these power switching devices may exhibit various thermal cycling behaviors, leading to converter solutions with very different cost, size and reliability performance. This paper therefore investigates the thermal-related characteristics of some important power switching devices. Their impact on the thermal cycling of a 10 MW three-level Neutral-Point-Clamped wind power converter is then evaluated under various operating conditions, with the main focus on the grid-connected inverter. It is concluded that the thermal performance of the 3L-NPC wind power converter can be significantly changed by the power device technology as well as by the parallel configuration of the devices.

  18. Preparatory power posing affects nonverbal presence and job interview performance.

    Science.gov (United States)

    Cuddy, Amy J C; Wilmuth, Caroline A; Yap, Andy J; Carney, Dana R

    2015-07-01

The authors tested whether engaging in expansive (vs. contractive) "power poses" before a stressful job interview--preparatory power posing--would enhance performance during the interview. Participants adopted high-power (i.e., expansive, open) poses or low-power (i.e., contractive, closed) poses, and then prepared and delivered a speech to 2 evaluators as part of a mock job interview. All interview speeches were videotaped and coded for overall performance and hireability and for 2 potential mediators: verbal content (e.g., structure, content) and nonverbal presence (e.g., captivating, enthusiastic). As predicted, those who prepared for the job interview with high- (vs. low-) power poses performed better and were more likely to be chosen for hire; this relation was mediated by nonverbal presence, but not by verbal content. Although previous research has focused on how a nonverbal behavior that is enacted during interactions and observed by perceivers affects how those perceivers evaluate and respond to the actor, this experiment focused on how a nonverbal behavior that is enacted before the interaction and unobserved by perceivers affects the actor's performance, which, in turn, affects how perceivers evaluate and respond to the actor. This experiment reveals a theoretically novel and practically informative result that demonstrates the causal relation between preparatory nonverbal behavior and subsequent performance and outcomes. (c) 2015 APA, all rights reserved.

  19. Using a Voltage Domain Programmable Technique for Low-Power Management Cell-Based Design

    Directory of Open Access Journals (Sweden)

    Ching-Hwa Cheng

    2011-09-01

    Full Text Available The Multi-voltage technique is an effective way to reduce power consumption. In the proposed cell-based voltage domain programmable (VDP technique, the high and low voltages applied to logic gates are programmable. The flexible voltage domain reassignment allows the chip performance and power consumption to be dynamically adjusted. In the proposed technique, the power switches possess the feature of flexible programming after chip manufacturing. This VDP method does not use an external voltage regulator to regulate the supply voltage level from outside of the chip but can be easily integrated within the design. This novel technique is proven by use of a video decoder test chip, which shows 55% and 61% power reductions compared to conventional single-Vdd and low-voltage designs, respectively. This power-aware performance adjusting mechanism shows great power reduction with a good power-performance management mechanism.

  20. Development of staffing evaluation principle for advanced main control room and the effect on situation awareness and mental workload

    International Nuclear Information System (INIS)

    Lin, Chiuhsiang Joe; Hsieh, Tsung-Ling; Lin, Shiau-Feng

    2013-01-01

Highlights: • A staffing evaluation principle was developed for the advanced main control room. • The principle was proposed to improve situation awareness and mental workload. • The principle showed good validity, as examined through an experimental design. - Abstract: Situation awareness and mental workload, both of which influence operator performance in the advanced main control room of a nuclear power plant, can be affected by staffing level. The key goal of staffing is to ensure the proper number of personnel to support plant operations and events. If the staffing level is not adaptive, the operators may have low situation awareness and an excessive mental workload, which lead to human error. Accordingly, this study developed a staffing evaluation principle based on CPM-GOMS modeling for operations in the advanced main control room. A within-subject experiment was designed to examine the validity of the staffing evaluation principle. The results indicated that the situation awareness, mental workload, and operating performance at the staffing level determined by the staffing evaluation principle were significantly better than those at the non-evaluated staffing level; thus, the validity of the staffing evaluation technique is acceptable. The implications of the findings of this study for managerial practice are discussed.

  1. Development of staffing evaluation principle for advanced main control room and the effect on situation awareness and mental workload

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chiuhsiang Joe, E-mail: cjoelin@mail.ntust.edu.tw [Department of Industrial Management, National Taiwan University of Science and Technology, 43, Section 4, Keelung Road, Taipei 10607, Taiwan (China); Hsieh, Tsung-Ling, E-mail: bm1129@gmail.com [Institute of Nuclear Energy Research, 1000, Wenhua Road, Chiaan Village, Lungtan 32546, Taiwan (China); Lin, Shiau-Feng, E-mail: g9602411@cycu.edu.tw [Department of Industrial Engineering, Chung-Yuan Christian University, 200, Chung Pei Road, Chung-Li 32023, Taiwan (China)

    2013-12-15

Highlights: • A staffing evaluation principle was developed for the advanced main control room. • The principle was proposed to improve situation awareness and mental workload. • The principle showed good validity, as examined through an experimental design. - Abstract: Situation awareness and mental workload, both of which influence operator performance in the advanced main control room of a nuclear power plant, can be affected by staffing level. The key goal of staffing is to ensure the proper number of personnel to support plant operations and events. If the staffing level is not adaptive, the operators may have low situation awareness and an excessive mental workload, which lead to human error. Accordingly, this study developed a staffing evaluation principle based on CPM-GOMS modeling for operations in the advanced main control room. A within-subject experiment was designed to examine the validity of the staffing evaluation principle. The results indicated that the situation awareness, mental workload, and operating performance at the staffing level determined by the staffing evaluation principle were significantly better than those at the non-evaluated staffing level; thus, the validity of the staffing evaluation technique is acceptable. The implications of the findings of this study for managerial practice are discussed.

  2. Energy-Aware RFID Anti-Collision Protocol.

    Science.gov (United States)

    Arjona, Laura; Simon, Hugo Landaluce; Ruiz, Asier Perallos

    2018-06-11

The growing interest in mobile devices is transforming wireless identification technologies. Mobile and battery-powered Radio Frequency Identification (RFID) readers, such as hand-held readers and smart phones, are becoming increasingly attractive. These RFID readers require energy-efficient anti-collision protocols to minimize tag collisions and to extend the reader's battery life. Furthermore, there is an increasing interest in RFID sensor networks with a growing number of RFID sensor tags. Thus, RFID application developers must be mindful of tag anti-collision protocols. Energy-efficient protocols involve a low reader energy consumption per tag. This work presents a thorough study of the reader energy consumption per tag and analyzes the main factor that affects this metric: the frame size update strategy. Using the conclusions of this analysis, the anti-collision protocol Energy-Aware Slotted Aloha (EASA) is presented to decrease the energy consumption per tag. The frame size update strategy of EASA is configured to minimize the energy consumption per tag; as a result, EASA presents an energy-aware frame. The performance of the proposed protocol is evaluated and compared with several state-of-the-art Aloha-based anti-collision protocols based on the current RFID standard. Simulation results show that EASA, with an average of 15 mJ consumed per identified tag, achieves a 6% average improvement in the energy consumption per tag relative to the compared strategies.
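
The energy-per-tag metric and the role of the frame size update strategy can be sketched with a small frame-slotted-Aloha simulation. The per-slot energy costs below are assumed values, and the frame update uses Schoute's classical backlog estimate (about 2.39 tags per colliding slot) as a stand-in for EASA's actual strategy, which the record does not detail.

```python
import math
import random

# Assumed per-slot reader energy costs in mJ (illustrative, not measured)
E_EMPTY, E_SINGLE, E_COLLISION = 0.1, 1.0, 0.8

def simulate_fsa(n_tags, frame0=16, seed=1):
    """Frame Slotted Aloha with a backlog-based frame-size update
    (Schoute's estimate: ~2.39 tags per colliding slot).
    Returns reader energy consumed per identified tag (mJ)."""
    rng = random.Random(seed)
    pending, frame, energy = n_tags, frame0, 0.0
    for _ in range(100_000):          # safety bound on the number of frames
        if pending == 0:
            break
        slots = [0] * frame
        for _ in range(pending):      # each tag picks a slot at random
            slots[rng.randrange(frame)] += 1
        singles = sum(1 for s in slots if s == 1)
        collisions = sum(1 for s in slots if s > 1)
        empties = frame - singles - collisions
        energy += (singles * E_SINGLE + collisions * E_COLLISION
                   + empties * E_EMPTY)
        pending -= singles            # singleton slots identify one tag each
        frame = max(1, math.ceil(2.39 * collisions))
    return energy / n_tags

e_per_tag = simulate_fsa(100)
```

Varying the update rule (and the relative slot costs) in this harness is exactly the kind of comparison the paper's analysis performs at full fidelity.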

  3. Thermal diagnostics in power plant to improve performance

    International Nuclear Information System (INIS)

    Meister, H.

    1995-01-01

The improvement of older power plants by replacing poorly performing components is a cost-effective way to increase the capacity of the units. The information needed to identify components that should be replaced can be obtained from heat rate and component tests with high-accuracy instrumentation. The methods and tools discussed here, provided by ABB, were used successfully in several power plants in Europe. These tools are being improved continuously and can be used in almost any type of power plant. For the reasons discussed above, there is high potential for improving many power plants in the next decade. (author)

  4. Nursing Performance and Mobile Phone Use: Are Nurses Aware of Their Performance Decrements?

    Science.gov (United States)

    McBride, Deborah; LeVasseur, Sandra A; Li, Dongmei

    2015-04-23

Prior research has documented the effect of concurrent mobile phone use on medical care. This study examined the extent of hospital registered nurses' awareness of their mobile-phone-associated performance decrements. The objective of this study was to compare self-reported performance with reported observed performance of others with respect to mobile phone use by hospital registered nurses. In March 2014, a previously validated survey was emailed to the 10,978 members of the Academy of Medical Surgical Nurses. The responses were analyzed using a two-proportion z test (alpha=.05, two-tailed) to examine whether self-reported and observed rates of error were significantly different. All possible demographic and employment confounders that could potentially contribute to self-reported and observed performance errors were tested for significance. Of the 950 respondents, 825 (86.8%, 825/950) met the inclusion criteria for analysis. The representativeness of the sample relative to the US nursing workforce was assessed using a two-proportion z test. This indicated that sex and location of primary place of employment (urban/rural) were represented appropriately in the study sample. Respondents in the age groups 55 years old were overrepresented. Whites, American Indians/Alaskan natives, and Native Hawaiian or Pacific Islanders were underrepresented, while Hispanic and multiple/other ethnicities were overrepresented. It was decided to report the unweighted, rather than the weighted, survey data, with the recognition that the results, while valuable, may not be generalizable to the entire US registered nursing workforce.
A significant difference was found between registered nurses' self-reported and observed rates of errors associated with concurrent mobile phone use in the following three categories: (1) work performance (z=-26.6142, P<.001). Respondents also rated whether mobile phone use by nurses at work was a serious distraction: always (13%, 107/825), often (29.6%, 244/825), sometimes (44.6%, 368/825), rarely
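
The pooled two-proportion z test used in this analysis can be sketched as follows; the counts below are hypothetical, not the study's data.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic: compares success rates x1/n1
    and x2/n2 using the pooled proportion for the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical self-reported vs. observed error counts, for illustration
z = two_proportion_z(40, 825, 120, 825)
```

For a two-tailed test at alpha=.05, |z| > 1.96 rejects the null hypothesis of equal proportions.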

  5. An On-Time Power-Aware Scheduling Scheme for Medical Sensor SoC-Based WBAN Systems

    Directory of Open Access Journals (Sweden)

    Jung-Guk Kim

    2012-12-01

    Full Text Available The focus of many leading technologies in the field of medical sensor systems is on low power consumption and robust data transmission. For example, the implantable cardioverter-defibrillator (ICD, which is used to maintain the heart in a healthy state, requires a reliable wireless communication scheme with an extremely low duty-cycle, high bit rate, and energy-efficient media access protocols. Because such devices must be sustained for over 5 years without access to battery replacement, they must be designed to have extremely low power consumption in sleep mode. Here, an on-time, energy-efficient scheduling scheme is proposed that performs power adjustments to minimize the sleep-mode current. The novelty of this scheduler is that it increases the determinacy of power adjustment and the predictability of scheduling by employing non-pre-emptible dual priority scheduling. This predictable scheduling also guarantees the punctuality of important periodic tasks based on their serialization, by using their worst case execution time and the power consumption optimization. The scheduler was embedded into a system on chip (SoC developed to support the wireless body area network—a wakeup-radio and wakeup-timer for implantable medical devices. This scheduling system is validated by the experimental results of its performance when used with life-time extensions of ICD devices.

  6. Survey of women's awareness about radiation

    International Nuclear Information System (INIS)

    Onishi, Keiko; Aomi, Yuki; Asada, Kiyoe; Kamiya, Masami; Mitsuishi, Haruko

    2008-01-01

A project in the voluntary group 'Women's Energy Network' (WEN) conducted two questionnaire surveys on Japanese women's awareness about radiation. The surveys investigated how women (non-experts) perceive radiation and radioactivity, what their image of radiation is, to what extent they are aware of the use of radiation in their daily life, and whether they find nuclear-related information useful. The results of those surveys led WEN to publish a booklet entitled 'Our Life and Radiation' to be used for public communication and to hold public forums in various cities in Japan. The first survey was conducted in 2001 among those living in big cities such as Tokyo and Osaka and those living in areas where nuclear power plants are installed. The response rate was 72.4% (1,028 out of 1,419). The second was conducted in 2005 among those living in Tokyo and other big cities. The response rate was 84.7% (888 out of 983). The two surveys showed that respondents were not very aware of the various applications of radiation in daily use (the awareness rate was low), but they considered such information would be useful when available and were interested in knowing about it. As for the image of radiation, about 80% expressed fear when they see or hear the word 'radiation'. This report provides the results of the questionnaire surveys on women's awareness about radiation conducted by the 'Our Daily Life and Radiation' project in the Women's Energy Network. (author)

  7. Long Term Analysis of Adaptive Low-Power Instrument Platform Power and Battery Performance

    Science.gov (United States)

    Edwards, T.; Bowman, J. R.; Clauer, C. R.

    2017-12-01

    Operation of the Autonomous Adaptive Low-Power Instrument Platform (AAL-PIP) by the Magnetosphere-Ionosphere Science Team (MIST) at Virginia Tech has been ongoing for about 10 years. These instrument platforms are deployed on the East Antarctic Plateau in remote locations that are difficult to access regularly. The systems have been designed to operate unattended for at least 5 years. During the Austral summer, the systems charge batteries using solar panels and power is provided by the batteries during the winter months. If the voltage goes below a critical level, the systems go into hibernation and wait for voltage from the solar panels to initiate a restart sequence to begin operation and battery charging. Our first system was deployed on the East Antarctic Plateau in 2008 and we report here on an analysis of the power and battery performance over multiple years and provide an estimate for how long these systems can operate before major battery maintenance must be performed.
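
The hibernate-and-restart behavior described above can be sketched as a small state machine with hysteresis: the platform hibernates below one voltage and only restarts once solar charging has raised the battery above a higher one, which prevents oscillation around a single threshold. Both threshold values below are assumptions for illustration, not AAL-PIP's actual settings.

```python
V_HIBERNATE = 11.0  # assumed cutoff voltage (V), illustrative only
V_RESTART = 12.5    # assumed restart voltage after solar charging resumes

def next_state(state, battery_v):
    """Hysteresis between 'running' and 'hibernating': the gap between
    the two thresholds keeps the platform from toggling rapidly."""
    if state == "running" and battery_v < V_HIBERNATE:
        return "hibernating"
    if state == "hibernating" and battery_v >= V_RESTART:
        return "running"
    return state

# Winter discharge followed by an Austral-summer solar recharge
trace = [12.8, 11.4, 10.9, 10.8, 11.5, 12.6]
states, s = [], "running"
for v in trace:
    s = next_state(s, v)
    states.append(s)
```

Note that at 11.5 V the platform stays in hibernation even though it is above the cutoff, because it has not yet reached the restart threshold.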

  8. A Practical Framework to Study Low-Power Scheduling Algorithms on Real-Time and Embedded Systems

    Directory of Open Access Journals (Sweden)

Jian (Denny) Lin

    2014-05-01

Full Text Available With the advanced technology used to design VLSI (Very Large Scale Integration) circuits, low power and energy efficiency have played important roles in hardware and software implementation. Real-time scheduling is one of the fields that has attracted extensive attention in the design of low-power, embedded/real-time systems. Dynamic voltage scaling (DVS) and CPU shut-down are the two most popular techniques used to design the algorithms. In this paper, we first review the fundamental advances in the research of energy-efficient, real-time scheduling. Then, a unified framework with a real Intel PXA255 Xscale processor, namely real-energy, is designed, which can be used to measure the real performance of the algorithms. We conduct a case study to evaluate several classical algorithms using the framework. The energy efficiency and the quantitative differences in their performance, as well as the practical issues found in implementing these algorithms, are discussed. Our experiments show a gap between theoretical and real results. Our framework not only gives researchers a tool to evaluate their system designs, but also helps them to bridge this gap in their future work.
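
The core DVS idea such frameworks evaluate can be sketched as follows: for a periodic task set under EDF, the lowest normalized speed that preserves schedulability equals the total utilization, and dynamic power falls roughly cubically with speed if voltage scales linearly with frequency. The task set and power model below are illustrative textbook approximations, not measurements from the PXA255 framework.

```python
def minimum_speed(tasks):
    """Lowest normalized CPU speed keeping a periodic task set
    EDF-schedulable: speed = sum(C_i / T_i) (total utilization),
    which must not exceed 1."""
    u = sum(wcet / period for wcet, period in tasks)
    if u > 1.0:
        raise ValueError("task set infeasible even at full speed")
    return u

def dynamic_power_ratio(speed):
    """Dynamic CMOS power ~ f * V^2; with V scaled linearly with f,
    power falls as roughly speed**3 (a common textbook approximation)."""
    return speed ** 3

# Hypothetical (WCET, period) pairs, in milliseconds
tasks = [(1.0, 8.0), (2.0, 10.0), (1.0, 20.0)]
s = minimum_speed(tasks)   # 0.125 + 0.2 + 0.05 = 0.375
p = dynamic_power_ratio(s) # fraction of full-speed dynamic power
```

This theoretical cubic saving is exactly the kind of claim the paper's real-hardware measurements show to be optimistic, since static leakage and voltage-floor effects do not scale the same way.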

  9. The role of phonological awareness in reading comprehension

    Directory of Open Access Journals (Sweden)

    Maria Silvia Cárnio

Full Text Available Purpose: to characterize the performance of 4th-grade Elementary School students with and without signs of reading and writing disorders in phonological awareness and reading comprehension, and to verify possible correlations between them. Methods: 60 children enrolled in the 4th grade of Elementary School from two public schools, whose parents signed the Informed Consent Form, participated in the present study. They were selected and organized into groups with and without signs of reading and writing disorders. All students were individually assessed for phonological awareness and reading comprehension of sentences and texts through standardized tests. The data underwent statistical analysis. Results: those with signs of reading and writing disorders showed the lowest performance in the reading comprehension of sentences and texts. A correlation was found between phonological awareness and reading comprehension of sentences and texts in both groups. Conclusion: students with no signs of reading and writing disorders had a higher performance in the skills assessed. The correlation found between phonological awareness and reading comprehension of sentences and texts shows the importance of metaphonological skills not only for proficient reading, but also for reading comprehension.

  10. Power optimized variation aware dual-threshold SRAM cell design technique

    Directory of Open Access Journals (Sweden)

    Aminul Islam

    2011-02-01

    Full Text Available Aminul Islam,1 Mohd Hasan2 (1Department of Electronics and Communication Engineering, Birla Institute of Technology, Mesra, Ranchi, Jharkhand, India; 2Department of Electronics Engineering, Aligarh Muslim University, Aligarh, Uttar Pradesh, India). Abstract: Bulk complementary metal-oxide semiconductor (CMOS) technology is facing enormous challenges at channel lengths below 45 nm, such as gate tunneling, device mismatch, random dopant fluctuations, and mobility degradation. Although multiple-gate transistors and strained silicon devices overcome some of the bulk CMOS problems, it is sensible to look for revolutionary new materials and devices to replace silicon. It is obvious that future technology materials should exhibit higher mobility, better channel electrostatics, scalability, and robustness against process variations. Carbon nanotube-based technology is very promising because it has most of these desired features. There is a need to explore the potential of this emerging technology by designing circuits based on it and comparing their performance with that of existing bulk CMOS technology. In this paper, we propose a low-power, variation-immune, dual-threshold-voltage carbon nanotube field effect transistor (CNFET)-based seven-transistor (7T) static random access memory (SRAM) cell. The proposed CNFET-based 7T SRAM cell offers ~1.2× improvement in standby power, ~1.3× improvement in read delay, and ~1.1× improvement in write delay. It offers a narrower spread in write access time (1.4× at the optimum energy point [OEP] and 1.2× at 1 V). It features 56.3% improvement in static noise margin and 40% improvement in read static noise margin. All the simulation measurements are taken at the proposed OEP, decided by the optimum results obtained after extensive simulation in the HSPICE (high-performance simulation program with integrated circuit emphasis) environment. Keywords: carbon nanotube field effect transistor (CNFET), chirality vector, random dopant

  11. Coal-fired high performance power generating system. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-31

    As a result of the investigations carried out during Phase 1 of the Engineering Development of Coal-Fired High-Performance Power Generation Systems (Combustion 2000), the UTRC-led Combustion 2000 Team is recommending the development of an advanced high performance power generation system (HIPPS) whose high efficiency and minimal pollutant emissions will enable the US to use its abundant coal resources to satisfy current and future demand for electric power. The high efficiency of the power plant, which is the key to minimizing the environmental impact of coal, can only be achieved using a modern gas turbine system. Minimization of emissions can be achieved by combustor design, and advanced air pollution control devices. The commercial plant design described herein is a combined cycle using either a frame-type gas turbine or an intercooled aeroderivative with clean air as the working fluid. The air is heated by a coal-fired high temperature advanced furnace (HITAF). The best performance from the cycle is achieved by using a modern aeroderivative gas turbine, such as the intercooled FT4000. A simplified schematic is shown. In the UTRC HIPPS, the conversion efficiency for the heavy frame gas turbine version will be 47.4% (HHV) compared to the approximately 35% that is achieved in conventional coal-fired plants. This cycle is based on a gas turbine operating at turbine inlet temperatures approaching 2,500 F. Using an aeroderivative type gas turbine, efficiencies of over 49% could be realized in advanced cycle configuration (Humid Air Turbine, or HAT). Performance of these power plants is given in a table.

  12. Adaptations in athletic performance after ballistic power versus strength training.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2010-08-01

    To determine whether the magnitude of improvement in athletic performance and the mechanisms driving these adaptations differ in relatively weak individuals exposed to either ballistic power training or heavy strength training. Relatively weak men (n = 24) who could perform the back squat with proficient technique were randomized into three groups: strength training (n = 8; ST), power training (n = 8; PT), or control (n = 8). Training involved three sessions per week for 10 wk in which subjects performed back squats with 75%-90% of one-repetition maximum (1RM; ST) or maximal-effort jump squats with 0%-30% 1RM (PT). Jump and sprint performances were assessed as well as measures of the force-velocity relationship, jumping mechanics, muscle architecture, and neural drive. Both experimental groups showed significant (P < 0.05) improvements after training, with no significant between-group differences evident in either jump (peak power: ST = 17.7% +/- 9.3%, PT = 17.6% +/- 4.5%) or sprint performance (40-m sprint: ST = 2.2% +/- 1.9%, PT = 3.6% +/- 2.3%). ST also displayed a significant increase in maximal strength that was significantly greater than the PT group (squat 1RM: ST = 31.2% +/- 11.3%, PT = 4.5% +/- 7.1%). The mechanisms driving these improvements included significant (P < 0.05) changes in the force-velocity relationship, jump mechanics, muscle architecture, and neural activation that showed a degree of specificity to the different training stimuli. Improvements in athletic performance were similar in relatively weak individuals exposed to either ballistic power training or heavy strength training for 10 wk. These performance improvements were mediated through neuromuscular adaptations specific to the training stimulus.
The ability of strength training to render similar short-term improvements in athletic performance as ballistic power training, coupled with the potential long-term benefits of improved maximal strength, makes strength training a more effective training modality for relatively weak individuals.

  13. The effect of prosody awareness training on the performance of consecutive interpretation by Farsi-English interpreter trainees : an experimental study

    NARCIS (Netherlands)

    Yenkimaleki, M.; van Heuven, V.J.

    2016-01-01

    This study investigates the effect of prosody awareness training on the performance of Farsi-English interpreter trainees. Two groups of student interpreters were formed. All were native speakers of Farsi who studied English translation and interpreting at the BA level at the State University of

  14. The Application of S7-400H redundant PLC in I and C system for waterworks in nuclear power plant

    International Nuclear Information System (INIS)

    Pang Yuxiang

    2013-01-01

    This paper introduces the Siemens S7-400H redundant PLC, which is employed to implement monitoring and control systems for waterworks in a nuclear power plant. It focuses on the configuration and realization of the redundant system. The safety and reliability of the system are improved by using redundant CPUs, networks, and servers. (authors)

  15. The Development of Power Detection System Using One-Chip Microcontroller

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Sa Hyun; Choi, Nkak II; Lee, Sung Kil; Lim, Yang Su; Cho Geum Bae; Baek, Hyung Lae [Chosun University, Kwangju(Korea)

    2002-04-01

    This paper describes the development of a power detection system with a one-chip microcontroller. The designed system is composed of power detection circuits and analysis software. The system detects three-phase voltage, three-phase current, external temperature, and leakage current, and stores the data in flash memory. An AT89C52 was used as the CPU and an AM29F040 as the memory to store the data. The analysis software was developed to detect the cause of electrical fire incidents. With data-compression technology, the data can be stored for 43.5 days in a normal state and for four hours and fifteen minutes in an emergency state. (author). 7 refs., 22 figs., 2 tabs.

  16. The human performance evaluation system at Virginia Power

    International Nuclear Information System (INIS)

    Smith, R.G. III.

    1989-01-01

    The safe operation of nuclear power plants requires high standards of performance, extensive training, and responsive management. Despite a utility's best efforts, inappropriate human actions do occur; it is believed, however, that such actions can be minimized and managed. The Federal Aviation Administration has a successful program administered by the National Aeronautics and Space Administration. This program is called the Aviation Safety Reporting System (ASRS). Established in 1975, it is anonymous and nonpunitive. A trial program for several utilities was developed by the Institute of Nuclear Power Operations which used a concept similar to the ASRS reporting process. Based on valuable lessons learned by Virginia Power during the pilot program, an effort was made in 1986 to formalize the Human Performance Evaluation System (HPES) to establish an ongoing problem-solving system for evaluating human performance. Currently, 34 domestic utilities and 3 international utilities voluntarily participate in the implementation of the HPES. Each participating utility has selected and trained personnel to evaluate events involving human error and provide corrective action recommendations to prevent recurrence. It is believed that the use of the HPES can lead to improved safety and operational availability.

  17. Context Aware Concurrent Execution Framework for Web Browser

    DEFF Research Database (Denmark)

    Saeed, Aamir; Erbad, Aiman Mahmood; Olsen, Rasmus Løvenstein

    Computing-hungry multimedia web applications need to efficiently utilize the device resources. HTML5 web workers is a non-sharing concurrency platform that enables multimedia web applications to utilize the available multi-core hardware. HTML5 web workers are implemented by major browser vendors… to facilitate concurrent execution in web clients and enhance the quality of ambitious web applications. The concurrent execution in web workers allows parallel processing using available cores at the expense of communication overhead and extra computation. The benefits of concurrent execution can be maximized… by balancing load across workers/CPU cores. This work presents load-balancing algorithms between web workers using parameters such as scheduler throughput, computation priority and game entity locality. An award-winning web-based multimedia game (raptjs.com) is used to test the performance of the load balance…
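As a language-neutral illustration of the balancing problem this record describes (the paper's actual algorithms use web-worker-specific signals such as scheduler throughput and entity locality), a greedy longest-processing-time assignment can be sketched in a few lines:

```python
# Greedy LPT load balancing: hand each task (largest estimated cost first)
# to the currently least-loaded worker. A generic sketch, not the raptjs
# algorithm; task costs and worker count are illustrative.
import heapq

def balance(task_costs, n_workers):
    """Return per-worker task lists, balancing total estimated cost."""
    heap = [(0.0, w, []) for w in range(n_workers)]   # (load, worker id, tasks)
    heapq.heapify(heap)
    for cost in sorted(task_costs, reverse=True):
        load, w, tasks = heapq.heappop(heap)          # least-loaded worker
        tasks.append(cost)
        heapq.heappush(heap, (load + cost, w, tasks))
    return [tasks for _, _, tasks in sorted(heap, key=lambda e: e[1])]

assignment = balance([7, 3, 5, 2, 4, 6, 1], n_workers=2)
loads = [sum(t) for t in assignment]
print(loads)   # [14, 14] -- total work split evenly across the two workers
```

The same pattern carries over to web workers, with message-passing cost added to each task's estimate.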

  18. MHD generator performance analysis for the Advanced Power Train study

    Science.gov (United States)

    Pian, C. C. P.; Hals, F. A.

    1984-01-01

    Comparative analyses of different MHD power train designs for early commercial MHD power plants were performed for plant sizes of 200, 500, and 1000 MWe. The work was conducted as part of the first phase of a planned three-phase program to formulate an MHD Advanced Power Train development program. This paper presents the results of the MHD generator design and part-load analyses. All of the MHD generator designs were based on burning of coal with oxygen-enriched air preheated to 1200 F. Sensitivities of the MHD generator design performance to variations in power plant size, coal type, oxygen enrichment level, combustor heat loss, channel length, and Mach number were investigated. Based on these sensitivity analyses, together with the overall plant performance and cost-of-electricity analyses, as well as reliability and maintenance considerations, a recommended MHD generator design was selected for each of the three power plants. The generators for the 200 MWe and 500 MWe power plant sizes are supersonic designs. A subsonic generator design was selected for the 1000 MWe plant. Off-design analyses of part-load operation of the supersonic channel selected for the 200 MWe power plant were also conducted. The results showed that a relatively high overall net plant efficiency can be maintained during part-load operation with a supersonic generator design.

  19. Adaptive real-time methodology for optimizing energy-efficient computing

    Science.gov (United States)

    Hsu, Chung-Hsing [Los Alamos, NM]; Feng, Wu-Chun [Blacksburg, VA]

    2011-06-28

    Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
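One way to realize the run-time calculation this record describes is the beta-model from this line of DVFS work: estimate how sensitive execution time is to CPU frequency from two timing observations, then pick the lowest frequency whose predicted slowdown stays within a budget delta. A sketch with illustrative numbers and a hypothetical frequency table:

```python
# Beta-model DVFS sketch: T(f) / T(f_max) = beta * (f_max / f) + (1 - beta).
# beta ~ 1 means CPU-bound (time scales with 1/f); beta ~ 0 means
# memory-bound (frequency barely matters). Values below are illustrative.

def estimate_beta(f_max, t_max, f_low, t_low):
    """Fit beta from measured run times at two frequencies."""
    return (t_low / t_max - 1.0) / (f_max / f_low - 1.0)

def pick_frequency(beta, f_max, delta, freqs):
    """Lowest available f whose predicted slowdown is <= 1 + delta."""
    f_star = beta * f_max / (beta + delta)        # continuous optimum
    candidates = [f for f in freqs if f >= f_star]
    return min(candidates) if candidates else f_max

freqs = [0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0]       # GHz, hypothetical table

beta_cpu = estimate_beta(2.0, 1.00, 1.0, 2.00)    # fully CPU-bound -> 1.0
beta_mem = estimate_beta(2.0, 1.00, 1.0, 1.20)    # mostly memory-bound -> 0.2

print(pick_frequency(beta_cpu, 2.0, 0.25, freqs))  # 1.6
print(pick_frequency(beta_mem, 2.0, 0.25, freqs))  # 1.0
```

A memory-bound workload tolerates a much lower clock for the same slowdown budget, which is exactly where a sensitivity-aware controller saves energy over a fixed-frequency policy.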

  20. Energy-aware system design algorithms and architectures

    CERN Document Server

    Kyung, Chong-Min

    2011-01-01

    Power consumption becomes the most important design goal in a wide range of electronic systems. There are two driving forces towards this trend: continuing device scaling and ever increasing demand of higher computing power. First, device scaling continues to satisfy Moore’s law via a conventional way of scaling (More Moore) and a new way of exploiting the vertical integration (More than Moore). Second, mobile and IT convergence requires more computing power on the silicon chip than ever. Cell phones are now evolving towards mobile PC. PCs and data centers are becoming commodities in house and a must in industry. Both supply enabled by device scaling and demand triggered by the convergence trend realize more computation on chip (via multi-core, integration of diverse functionalities on mobile SoCs, etc.) and finally more power consumption incurring power-related issues and constraints. Energy-Aware System Design: Algorithms and Architectures provides state-of-the-art ideas for low power design methods from ...

  1. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    Science.gov (United States)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm with OpenMP parallel computing technology, which was further applied to the processing for the HyperSpectral Imager of the Chinese `HJ-1' satellite. The results show that the method based on multi-core parallel computing technology can manage the multi-core CPU hardware resources competently and significantly enhance the efficiency of the spectrum reconstruction processing. If the technology is applied to parallel computing on a workstation with more cores, it will be possible to complete real-time data processing for a Fourier transform imaging spectrometer with a single computer.
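The parallel pattern here, independent per-row transforms mapped across a worker pool, can be sketched in Python as a stand-in for the paper's OpenMP loop (the thread pool pays off on multiple cores when the per-row transform is a C-backed FFT that releases the GIL, e.g. NumPy's; the naive DFT below is only for self-containment):

```python
# Parallel per-row spectrum reconstruction sketch: each interferogram row
# is transformed independently, so rows can be mapped across a worker pool,
# mirroring an OpenMP parallel-for over rows.
import cmath
from concurrent.futures import ThreadPoolExecutor

def row_spectrum(row):
    """Magnitude of the naive discrete Fourier transform of one row."""
    n = len(row)
    return [abs(sum(row[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

def reconstruct(rows, workers=4):
    """Map the per-row transform across a pool; order is preserved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row_spectrum, rows))

# Small synthetic interferogram: 8 rows of 16 samples.
rows = [[(i + j) % 5 for j in range(16)] for i in range(8)]
parallel = reconstruct(rows)
serial = [row_spectrum(r) for r in rows]   # reference result
```

Because rows share no state, the parallel result is bit-identical to the serial one; the only tuning left is chunking rows to amortize scheduling overhead.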

  2. An FPGA Based Multiprocessing CPU for Beam Synchronous Timing in CERN's SPS and LHC

    CERN Document Server

    Ballester, F J; Gras, J J; Lewis, J; Savioz, J J; Serrano, J

    2003-01-01

    The Beam Synchronous Timing system (BST) will be used around the LHC and its injector, the SPS, to broadcast timing messages and synchronize actions with the beam in different receivers. To achieve beam synchronization, the BST Master card encodes messages using the bunch clock, with a nominal value of 40.079 MHz for the LHC. These messages are produced by a set of tasks every revolution period, which is every 89 us for the LHC and every 23 us for the SPS, therefore imposing a hard real-time constraint on the system. To achieve determinism, the BST Master uses a dedicated CPU inside its main Field Programmable Gate Array (FPGA) featuring zero-delay hardware task switching and a reduced instruction set. This paper describes the BST Master card, stressing the main FPGA design, as well as the associated software, including the LynxOS driver and the tailor-made assembler.

  3. Development of web based performance analysis program for nuclear power plant turbine cycle

    International Nuclear Information System (INIS)

    Park, Hoon; Yu, Seung Kyu; Kim, Seong Kun; Ji, Moon Hak; Choi, Kwang Hee; Hong, Seong Ryeol

    2002-01-01

    Performance improvement of the turbine cycle affects economic operation of a nuclear power plant. We developed a performance analysis system for the nuclear power plant turbine cycle. The system is based on the PTC (Performance Test Code), the standard for estimating nuclear power plant performance. The system is developed using Java Web Start and JSP (Java Server Pages).

  4. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...

  5. GPU-based high performance Monte Carlo simulation in neutron transport

    Energy Technology Data Exchange (ETDEWEB)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br

    2009-07-01

    Graphics Processing Units (GPU) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming, problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)

  6. GPU-based high performance Monte Carlo simulation in neutron transport

    International Nuclear Information System (INIS)

    Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    Graphics Processing Units (GPU) are high performance co-processors originally intended to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to fields outside the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming, problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
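The kind of kernel being accelerated can be shown in miniature: a Monte Carlo estimate of uncollided transmission through a slab, where every particle history is independent, which is exactly the structure that maps well to GPU threads or CPU cores. A single-threaded sketch with illustrative cross-section values:

```python
# Monte Carlo estimate of uncollided neutron transmission through a slab:
# distance to first collision is sampled as s = -ln(U) / Sigma_t, and the
# neutron crosses a slab of thickness t if s > t. The analytic answer for
# this simple problem is exp(-Sigma_t * t), so the estimate can be checked.
import math
import random

def transmission(sigma_t, thickness, n_histories, seed=1):
    """Fraction of histories that cross the slab without colliding."""
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n_histories):
        s = -math.log(rng.random()) / sigma_t   # sampled free path (cm)
        if s > thickness:
            crossed += 1
    return crossed / n_histories

est = transmission(sigma_t=0.5, thickness=3.0, n_histories=200_000)
exact = math.exp(-0.5 * 3.0)
print(est, exact)
```

Since histories share no state, the loop parallelizes trivially: on a GPU each thread runs a batch of histories with its own RNG stream and the counts are reduced at the end.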

  7. A task-based parallelism and vectorized approach to 3D Method of Characteristics (MOC) reactor simulation for high performance computing architectures

    Science.gov (United States)

    Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.

    2016-05-01

    In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.

  8. More than a feeling: Pervasive influences of memory without awareness of retrieval

    OpenAIRE

    Voss, Joel L.; Lucas, Heather D.; Paller, Ken A.

    2012-01-01

    The subjective experiences of recollection and familiarity have featured prominently in the search for neurocognitive mechanisms of memory. However, these two explicit expressions of memory, which involve conscious awareness of memory retrieval, are distinct from an entire category of implicit expressions of memory that do not entail such awareness. This review summarizes recent evidence showing that neurocognitive processing related to implicit memory can powerfully influence the behavioral ...

  9. Trading Relationship Performance and Market Power in Food Supply Chains

    DEFF Research Database (Denmark)

    Xhoxhi, Orjon

    The development of the agri-food industry has led to a considerable increase of intermediaries' market power vis-à-vis farmers. There are studies and evidence suggesting that, due to their power, intermediaries transfer risks and unexpected costs to farmers, which compromises their innovation… and livelihood. The overall objective of this PhD study was to investigate the intermediaries' power over farmers and its effects on trading relationship performance between them. Two farm surveys were conducted; the first was carried out in the Adana region in Turkey and had an explorative focus aiming…), investigate how intermediaries' power affects farmer-intermediary trading relationship performance (paper 3) and analyse the determinants of contract farming and its effects on post-harvest losses (paper 4). The first paper investigates the determinants of intermediaries' power over farmers' margin related…

  10. Performance of automatic generation control mechanisms with large-scale wind power

    Energy Technology Data Exchange (ETDEWEB)

    Ummels, B.C.; Gibescu, M.; Paap, G.C. [Delft Univ. of Technology (Netherlands); Kling, W.L. [Transmission Operations Department of TenneT bv (Netherlands)

    2007-11-15

    The unpredictability and variability of wind power increasingly challenge real-time balancing of supply and demand in electric power systems. In liberalised markets, balancing is a responsibility jointly held by the TSO (real-time power balancing) and PRPs (energy programs). In this paper, a procedure is developed for the simulation of power system balancing and the assessment of AGC performance in the presence of large-scale wind power, using the Dutch control zone as a case study. The simulation results show that the performance of existing AGC mechanisms is adequate for keeping the ACE within acceptable bounds. At higher wind power penetrations, however, the capabilities of the generation mix are increasingly challenged and additional reserves are required at the same level. (au)
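The quantity an AGC loop regulates, the area control error (ACE), can be written out directly; sign conventions for the frequency bias vary between operators, and the numbers below are illustrative:

```python
# Area control error in a NERC-style convention:
#   ACE = (P_tie_actual - P_tie_sched) - 10 * B * (f_actual - f_sched)
# with the frequency bias B in MW per 0.1 Hz and negative by convention.
# Negative ACE means the area is under-generating; AGC raises setpoints.

def area_control_error(p_tie_actual, p_tie_sched, f_actual, f_sched, bias):
    """ACE in MW; tie-line flows in MW, frequencies in Hz, bias in MW/0.1 Hz."""
    return (p_tie_actual - p_tie_sched) - 10.0 * bias * (f_actual - f_sched)

# Example: importing 30 MW more than scheduled while frequency sags 0.02 Hz.
ace = area_control_error(p_tie_actual=-130.0, p_tie_sched=-100.0,
                         f_actual=49.98, f_sched=50.0, bias=-50.0)
print(ace)   # ~ -40 MW: raise generation
```

Wind variability enters through both terms at once, via the tie-line deviation and the frequency deviation, which is why AGC studies such as this one evaluate whether the existing reserve mix keeps ACE within its bounds.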

  11. A heterogeneous multi-core platform for low power signal processing in systems-on-chip

    DEFF Research Database (Denmark)

    Paker, Ozgun; Sparsø, Jens; Haandbæk, Niels

    2002-01-01

    This paper presents a low-power and programmable DSP architecture - a heterogeneous multiprocessor platform consisting of standard CPU/DSP cores, and a set of simple instruction set processors called mini-cores, each optimized for a particular class of algorithm (FIR, IIR, LMS, etc.). Communication is based on message passing. The mini-cores are designed as parameterized soft macros intended for a synthesis-based design flow. A 520,000-transistor 0.25µm CMOS prototype chip containing 6 mini-cores has been fabricated and tested. Its power consumption is only 50% higher than a hardwired ASIC and more…

  12. Awareness of holistic care practices by intensive care nurses in north-western Saudi Arabia.

    Science.gov (United States)

    Albaqawi, Hamdan M; Butcon, Vincent R; Molina, Roger R

    2017-08-01

    To examine awareness of holistic patient care by staff nurses in the intensive care units of hospitals in the city of Hail, Saudi Arabia. Methods: A quantitative correlational study design was used to investigate relationships between intensive care nurses' awareness of holistic practices and the nurses' latest performance review. Intensive care staff nurses (n=99) from 4 public sector hospitals in Hail were surveyed on their awareness of variables across 5 holistic domains: physiological, sociocultural, psychological, developmental, and spiritual. Data were collected between October and December 2015 using a written survey, and performance evaluations were obtained from the hospital administrations. Results were statistically analyzed and compared (numerical, percentage, Pearson's correlation, Cronbach's alpha). Results: The ICU staff nurses in Hail City were aware of the secular aspects of holistic care, and the majority had very good performance evaluations. There were no demographic trends regarding holistic awareness and nurse performance. Further, awareness of holistic care was not associated with nurse performance. Conclusion: A caring-enhancement workshop and a mentoring program for non-Saudi nurses may increase holistic care awareness and enhance its practice in the ICUs.

  13. Area and Power Modeling for Networks-on-Chip with Layout Awareness

    Directory of Open Access Journals (Sweden)

    Paolo Meloni

    2007-01-01

    Full Text Available Networks-on-Chip (NoCs) are emerging as scalable interconnection architectures, designed to support the increasing number of cores that are integrated onto a silicon die. Compared to traditional interconnects, however, NoCs still lack well-established CAD deployment tools to tackle the large number of available degrees of freedom, starting from the choice of a network topology. "Silicon-aware" optimization tools are now emerging in the literature; they select an NoC topology taking into account the tradeoff between performance and hardware cost, that is, area and power consumption. A key requirement for the effectiveness of these tools, however, is the availability of accurate analytical models for power and area. Such models are unfortunately not as available and well understood as those for traditional communication fabrics. Further, simplistic models may turn out to be totally inaccurate when applied to wire-dominated architectures; this observation demands at least a model validation step against placed and routed devices. In this work, given an NoC reference architecture, we present a flow to devise analytical models of the area occupation and power consumption of NoC switches, and propose strategies for coefficient characterization which have different tradeoffs in terms of accuracy and modeling effort. The models are parameterized on several architectural, synthesis-related, and traffic variables, resulting in maximum flexibility. We finally assess the accuracy of the models, checking whether they can also be applied to placed and routed NoC blocks.
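The characterization step described here, fitting analytical coefficients against post-layout samples, can be sketched as an ordinary least-squares fit. The model form and the sample points below are hypothetical stand-ins, not the paper's:

```python
# Fit a (hypothetical) linear switch-area model
#   area = c0 + c1 * ports + c2 * buffer_depth
# to synthetic "post-layout" samples via least squares on the normal
# equations, solved with plain Gaussian elimination.

def solve(a, b):
    """Solve an n x n linear system by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_area_model(samples):
    """samples: list of (ports, buffer_depth, area). Returns (c0, c1, c2)."""
    xs = [(1.0, p, d) for p, d, _ in samples]
    ys = [a for _, _, a in samples]
    ata = [[sum(x[i] * x[j] for x in xs) for j in range(3)] for i in range(3)]
    atb = [sum(x[i] * y for x, y in zip(xs, ys)) for i in range(3)]
    return solve(ata, atb)

# Synthetic data generated from area = 2000 + 900*ports + 150*depth (um^2).
data = [(p, d, 2000 + 900 * p + 150 * d)
        for p in (3, 4, 5, 6) for d in (2, 4, 8)]
c0, c1, c2 = fit_area_model(data)
print(round(c0), round(c1), round(c2))   # 2000 900 150
```

Real switch area is not exactly linear in port count (a crossbar grows roughly quadratically), which is why the paper validates candidate model forms against placed-and-routed blocks rather than trusting any one of them.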

  14. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  15. Performance Analysis of XCPC Powered Solar Cooling Demonstration Project

    Science.gov (United States)

    Widyolar, Bennett K.

    A solar thermal cooling system using novel non-tracking External Compound Parabolic Concentrators (XCPC) has been built at the University of California, Merced and operated for two cooling seasons. Its performance in providing power for space cooling has been analyzed. This solar cooling system is comprised of 53.3 m2 of XCPC trough collectors which are used to power a 23 kW double effect (LiBr) absorption chiller. This is the first system that combines both XCPC and absorption chilling technologies. Performance of the system was measured in both sunny and cloudy conditions, with both clean and dirty collectors. It was found that these collectors are well suited at providing thermal power to drive absorption cooling systems and that both the coinciding of available thermal power with cooling demand and the simplicity of the XCPC collectors compared to other solar thermal collectors makes them a highly attractive candidate for cooling projects.

  16. High Input Voltage, Silicon Carbide Power Processing Unit Performance Demonstration

    Science.gov (United States)

    Bozak, Karin E.; Pinero, Luis R.; Scheidegger, Robert J.; Aulisio, Michael V.; Gonzalez, Marcelo C.; Birchenough, Arthur G.

    2015-01-01

    A silicon carbide brassboard power processing unit has been developed by the NASA Glenn Research Center in Cleveland, Ohio. The power processing unit operates from two sources: a nominal 300 Volt high voltage input bus and a nominal 28 Volt low voltage input bus. The design of the power processing unit includes four low voltage, low power auxiliary supplies, and two parallel 7.5 kilowatt (kW) discharge power supplies that are capable of providing up to 15 kilowatts of total power at 300 to 500 Volts (V) to the thruster. Additionally, the unit contains a housekeeping supply, high voltage input filter, low voltage input filter, and master control board, such that the complete brassboard unit is capable of operating a 12.5 kilowatt Hall effect thruster. The performance of the unit was characterized under both ambient and thermal vacuum test conditions, and the results demonstrate exceptional performance with full power efficiencies exceeding 97%. The unit was also tested with a 12.5kW Hall effect thruster to verify compatibility and output filter specifications. With space-qualified silicon carbide or similar high voltage, high efficiency power devices, this would provide a design solution to address the need for high power electric propulsion systems.

  17. Performance assessment of a power loaded supercapacitor based on manufacturer data

    International Nuclear Information System (INIS)

    Mellincovsky, Martin; Kuperman, Alon; Lerman, Chaim; Aharon, Ilan; Reichbach, Noam; Geula, Gal; Nakash, Ronen

    2013-01-01

    Highlights: • Analytic performance of a power loaded supercapacitor is derived. • Power and energy capabilities based on manufacturer data are obtained. • Power limitations based on depth of discharge are presented. - Abstract: Analytical derivation of constant power loaded supercapacitor behavior is presented in the paper. Simple RC model based on manufacturer datasheet extracted parameters is employed. Power and energy related figures of merit are obtained from the derived expressions and compared to the datasheet provided values. It is revealed that some of the performance characteristics provided in most of the datasheets are theoretical and cannot be achieved in practice. The process of a realistic Ragone plot derivation based on the proposed method is described in the paper as well. It is shown that the lower limit of supercapacitor voltage imposes certain limits on power and energy capabilities of the device. Extended simulation and experimental results are provided in order to reinforce the proposed method and justify the selected RC model for describing the supercapacitor performance. By appropriate comparison of simulations and experiments it is proven that the selected model, while being oversimplified and low order, may be used to predict supercapacitor behavior with reasonable accuracy to perform at least an initial design
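    The constant-power discharge behavior described above can be sketched numerically with the simple series-RC model: at each step the load current i solves P = (v − iR)·i, giving i = (v − √(v² − 4RP))/(2R), and the capacitor voltage is integrated down until the terminal voltage reaches its lower limit. The parameter values below are illustrative, not taken from any datasheet:

```python
import math

def constant_power_discharge(C, R, v0, p_load, v_min, dt=0.01):
    """Euler-integrate a series-RC supercapacitor model under a constant
    power load. At each step the load current i solves P = (v - i*R)*i,
    i.e. i = (v - sqrt(v**2 - 4*R*P)) / (2*R). Returns (runtime_s,
    delivered_energy_J) until the terminal voltage reaches v_min or the
    load can no longer be sustained."""
    v, t = v0, 0.0                 # internal capacitor voltage, elapsed time
    while True:
        disc = v * v - 4.0 * R * p_load
        if disc <= 0.0:            # demanded power exceeds the deliverable maximum
            break
        i = (v - math.sqrt(disc)) / (2.0 * R)
        if v - i * R <= v_min:     # terminal voltage hit the lower limit
            break
        v -= i / C * dt            # capacitor discharges
        t += dt
    return t, p_load * t

if __name__ == "__main__":
    # Illustrative values only: 100 F, 10 mOhm, 2.7 V rated, 1.35 V cutoff
    t, e = constant_power_discharge(C=100.0, R=0.010, v0=2.7, p_load=50.0, v_min=1.35)
    print(round(t, 1), "s,", round(e, 1), "J")
```

    The delivered energy falls short of the ideal 0.5·C·(v0² − v_min²) ≈ 273 J because of resistive losses and because the cutoff applies to the terminal rather than the internal voltage, which is precisely the gap between datasheet and practical figures that the Ragone analysis quantifies.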

  18. Aiding operator performance at low power feedwater control

    International Nuclear Information System (INIS)

    Woods, D.D.

    1986-01-01

    Control of the feedwater system during low power operations (approximately 2% to 30% power) is a difficult task where poor performance (excessive trips) has a high cost to utilities. This paper describes several efforts in the human factors aspects of this task that are underway to improve feedwater control. A variety of knowledge acquisition techniques have been used to understand the details of what makes feedwater control at low power difficult and what knowledge and skill distinguishes expert operators at this task from less experienced ones. The results indicate that there are multiple factors that contribute to task difficulty

  19. Evaluation of the performance of combined cooling, heating, and power systems with dual power generation units

    International Nuclear Information System (INIS)

    Knizley, Alta A.; Mago, Pedro J.; Smith, Amanda D.

    2014-01-01

    The benefits of using a combined cooling, heating, and power system with dual power generation units (D-CCHP) are examined in nine different U.S. locations. One power generation unit (PGU) is operated at base load while the other is operated following the electric load. The waste heat from both PGUs is used for heating and for cooling via an absorption chiller. The D-CCHP configuration is studied for a restaurant benchmark building, and its performance is quantified in terms of operational cost, primary energy consumption (PEC), and carbon dioxide emissions (CDE). Cost spark spread, PEC spark spread, and CDE spark spread are examined as performance indicators for the D-CCHP system. D-CCHP system performance correlates well with spark spreads, with higher spark spreads signifying greater savings through implementation of a D-CCHP system. A new parameter, thermal difference, is introduced to investigate the relative performance of a D-CCHP system compared to a dual PGU combined heat and power system (D-CHP). Thermal difference, together with spark spread, can explain the variation in savings of a D-CCHP system over a D-CHP system for each location. The effect of carbon credits on operational cost savings with respect to the reference case is shown for selected locations. - Highlights: • We investigate benefits from using combined cooling, heating, and power systems. • A dual power generation unit configuration is considered for CCHP and CHP. • Spark spreads for cost, energy, and emissions correlate with potential savings. • Thermal difference parameter helps to explain variations in potential savings. • Carbon credits may increase cost savings where emissions savings are possible
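    The spark-spread indicator used above admits a compact illustration. A common simplified definition (assumptions: prices expressed per kWh and a single PGU electrical efficiency; the paper's exact formulation may differ) is the grid electricity price minus the fuel cost of generating one kWh on site:

```python
def cost_spark_spread(elec_price, fuel_price, pgu_efficiency):
    """Cost spark spread ($/kWh): grid electricity price minus the fuel
    cost of generating one kWh on site with a PGU of the given electrical
    efficiency. Positive values favor on-site generation. Illustrative
    definition and numbers, not the paper's data."""
    return elec_price - fuel_price / pgu_efficiency

# Example: $0.12/kWh electricity, $0.03/kWh(fuel) natural gas, 30% efficient PGU
print(round(cost_spark_spread(0.12, 0.03, 0.30), 3))  # → 0.02
```

    Analogous ratios with primary energy factors or emission factors in place of prices give the PEC and CDE spark spreads.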

  20. Performance management for nuclear power plant operators

    International Nuclear Information System (INIS)

    Fan Pengfei

    2014-01-01

    Fuel was loaded into Unit 3 of the second power plant in May 2010. The Second Operation Division stepped from production preparation and commissioning into the operation stage, and exploration of performance management was started. By means of performance evaluation, analysis and improvement, a closed loop of performance management was formed, staff enthusiasm was improved, and potential capability was inspired. The performance evaluation covers attitude, skill, efficiency, performance, teamwork sense, cooperation, etc. Quantitative appraisal was carried out through 31 objective indicators of the working process and results. According to the evaluation results and personal interviews, the indicators were modified. Through the performance evaluation, positive guidance is provided to the employees to promote the development of employees, departments and the enterprise. (authors)

  1. Operational safety performance and economical efficiency evaluation for nuclear power plants

    International Nuclear Information System (INIS)

    Liu Yachun; Zou Shuliang

    2012-01-01

    The economical efficiency of nuclear power includes a series of environmental parameters, for example, cleanliness. Nuclear security is the precondition and guarantee for its economy, and both are the direct embodiment of the social benefits of nuclear power. Through analyzing the supervision and management systems for the effective operation of nuclear power plants put forward by the International Atomic Energy Agency (IAEA), the World Association of Nuclear Operators (WANO), the U.S. Nuclear Regulatory Commission (NRC), and other organizations, a set of indices on the safety performance and economical efficiency of nuclear power is explored and established. Based on data envelopment analysis, a DEA approach is employed to evaluate the efficiency of the operation performance of several nuclear power plants, and some preliminary conclusions are drawn from analyzing the sensitivity and correlation of the threshold parameters that affect operational performance. To address the conflicts between certain security and economical indicators, a multi-objective programming model is established, where top priority is given to nuclear safety, and the investment behavior of nuclear power plants is thereby optimized. (authors)

  2. Electromagnetic interference-aware transmission scheduling and power control for dynamic wireless access in hospital environments.

    Science.gov (United States)

    Phunchongharn, Phond; Hossain, Ekram; Camorlinga, Sergio

    2011-11-01

    We study the multiple access problem for e-Health applications (referred to as secondary users) coexisting with medical devices (referred to as primary or protected users) in a hospital environment. In particular, we focus on transmission scheduling and power control of secondary users in multiple spatial reuse time-division multiple access (STDMA) networks. The objective is to maximize the spectrum utilization of secondary users and minimize their power consumption subject to the electromagnetic interference (EMI) constraints for active and passive medical devices and a minimum throughput guarantee for secondary users. The multiple access problem is formulated as a dual-objective optimization problem, which is shown to be NP-complete. We propose a joint scheduling and power control algorithm based on a greedy approach to solve the problem with much lower computational complexity. To this end, an enhanced greedy algorithm is proposed to improve the performance of the greedy algorithm by finding the optimal sequence of secondary users for scheduling. Using extensive simulations, the tradeoff in performance in terms of spectrum utilization, energy consumption, and computational complexity is evaluated for both algorithms.
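    The greedy idea of packing secondary users into STDMA slots under an aggregate EMI cap can be sketched as a first-fit assignment. Everything below (the per-user minimum power meeting its throughput guarantee, linear EMI gains, a single protected device) is a simplifying assumption; the paper's algorithm additionally optimizes the scheduling order of users.

```python
def greedy_schedule(users, emi_limit):
    """Greedily pack secondary users into STDMA time slots.

    users: list of (min_power, emi_gain) pairs, where min_power is the
    smallest transmit power meeting the user's throughput guarantee and
    min_power * emi_gain is its interference at the protected device.
    A user fits in a slot if the slot's total EMI stays within emi_limit.
    Returns a list of slots, each a list of user indices. (A sketch of
    the greedy idea only, with hypothetical inputs.)"""
    slots = []                                    # each slot: [emi_load, [user indices]]
    for idx, (p_min, gain) in enumerate(users):
        emi = p_min * gain
        for slot in slots:
            if slot[0] + emi <= emi_limit:        # reuse an existing slot
                slot[0] += emi
                slot[1].append(idx)
                break
        else:                                     # open a new slot
            slots.append([emi, [idx]])
    return [s[1] for s in slots]

users = [(0.5, 0.2), (1.0, 0.12), (0.2, 0.1), (2.0, 0.05)]
print(greedy_schedule(users, emi_limit=0.25))  # → [[0, 1, 2], [3]]
```

    Fewer slots means higher spectrum reuse, which is why the enhanced variant searches for the user ordering that packs best.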

  3. Assessing mental workload and situation awareness in the evaluation of computerized procedures in the main control room

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Chih-Wei, E-mail: yangcw@iner.gov.tw [Institute of Nuclear Energy Research, 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China); Yang, Li-Chen; Cheng, Tsung-Chieh [Institute of Nuclear Energy Research, 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China); Jou, Yung-Tsan; Chiou, Shian-Wei [Department of Industrial Engineering, Chung-Yuan Christian University, 200, Chung Pei Rd., Chung-Li 32023, Taiwan (China)

    2012-09-15

    Highlights: ► This study investigates procedure types' effects on operators' performance. ► The computer-based procedure is suggested to be implemented in the main control room. ► The computer-based procedure brings the lowest mental workload. ► And it also generates fewer errors of omission, and the highest situation awareness. ► The shift supervisor has the highest workload and the lowest situation awareness. - Abstract: Computerized procedure (CP) system has been developed in nuclear power plant (NPP) instrumentation and control (I and C) system. The system may include normal operating procedures (OPs), abnormal operating procedures (AOPs), alarm response procedures (ARPs), surveillance test procedures (STPs) and/or emergency operating procedures (EOPs). While there are many ways to evaluate computerized procedures design, the user's mental workload and situation awareness (SA) are particularly important considerations in the supervisory control of safety-critical systems. Users' mental workload and situation awareness may be influenced by human factor issues relating to computerized procedures, e.g., level of automation, dealing with (partially) unavailable I and C, switching to back-up system (e.g., paper-based procedures). Some of the positive impacts of CPs on operator performance include the following: tasks can be performed more quickly; overall workload can be reduced; cognitive workload can be minimized; fewer errors may be made in transitioning through or between procedures. However, various challenges have also been identified with CP systems. These should be addressed in the design and implementation of CPs where they are applicable. For example, narrower 'field of view' provided by CP systems than with paper-based procedures could reduce crew communications and crewmember awareness of the

  4. Assessing mental workload and situation awareness in the evaluation of computerized procedures in the main control room

    International Nuclear Information System (INIS)

    Yang, Chih-Wei; Yang, Li-Chen; Cheng, Tsung-Chieh; Jou, Yung-Tsan; Chiou, Shian-Wei

    2012-01-01

    Highlights: ► This study investigates procedure types’ effects on operators’ performance. ► The computer-based procedure is suggested to be implemented in the main control room. ► The computer-based procedure brings the lowest mental workload. ► And it also generates fewer errors of omission, and the highest situation awareness. ► The shift supervisor has the highest workload and the lowest situation awareness. - Abstract: Computerized procedure (CP) system has been developed in nuclear power plant (NPP) instrumentation and control (I and C) system. The system may include normal operating procedures (OPs), abnormal operating procedures (AOPs), alarm response procedures (ARPs), surveillance test procedures (STPs) and/or emergency operating procedures (EOPs). While there are many ways to evaluate computerized procedures design, the user's mental workload and situation awareness (SA) are particularly important considerations in the supervisory control of safety-critical systems. Users’ mental workload and situation awareness may be influenced by human factor issues relating to computerized procedures, e.g., level of automation, dealing with (partially) unavailable I and C, switching to back-up system (e.g., paper-based procedures). Some of the positive impacts of CPs on operator performance include the following: tasks can be performed more quickly; overall workload can be reduced; cognitive workload can be minimized; fewer errors may be made in transitioning through or between procedures. However, various challenges have also been identified with CP systems. These should be addressed in the design and implementation of CPs where they are applicable. For example, narrower “field of view” provided by CP systems than with paper-based procedures could reduce crew communications and crewmember awareness of the status and progress through the procedure. Based on a human factors experiment in which each participant monitored and controlled multiple simulated

  5. Context-Aware user interfaces in automation

    DEFF Research Database (Denmark)

    Olsen, Mikkel Holm

    2007-01-01

    Automation is deployed in a great range of different domains such as the chemical industry, the production of consumer goods, the production of energy (both in terms of power plants and in the petrochemical industry), transportation and several others. Through several decades the complexity... of automation systems and the level of automation have been rising. This has caused problems regarding the operator's ability to comprehend the overall situation and state of the automation system, in particular in abnormal situations. The amount of data available to the operator results in information overload... Since context-aware applications have been developed in other research areas it seems natural to analyze the findings of this research and examine how this can be applied to the domain of automation systems. By evaluating existing architectures for the development of context-aware applications we find...

  6. Distinct changes in CREB phosphorylation in frontal cortex and striatum during contingent and non-contingent performance of a visual attention task

    Directory of Open Access Journals (Sweden)

    Mirjana eCarli

    2011-10-01

    The cyclic-AMP response element binding protein (CREB) family of transcription factors has been implicated in numerous forms of behavioural plasticity. We investigated CREB phosphorylation along some nodes of corticostriatal circuitry such as frontal cortex (FC) and dorsal (caudate putamen, CPu) and ventral (nucleus accumbens, NAC) striatum in response to the contingent or non-contingent performance of the five-choice serial reaction time task (5-CSRTT) used to assess visuospatial attention. Three experimental manipulations were used: an attentional performance group (contingent, master), a group trained previously on the task but for whom the instrumental contingency coupling responding with stimulus detection and reward was abolished (non-contingent, yoked), and a control group matched for food deprivation and exposure to the test apparatus (untrained). Rats trained on the 5-CSRTT (both master and yoked) had higher levels of CREB protein in the FC, CPu and NAC compared to untrained controls. Despite the divergent behaviour of master and yoked rats, CREB activity in the FC was not substantially different. In rats performing the 5-CSRTT (master), CREB activity was completely abolished in the CPu whereas in the NAC it remained unchanged. In contrast, CREB phosphorylation in CPu and NAC increased only when the contingency changed from goal-dependent to goal-independent reinforcement (yoked). The present results indicate that up-regulation of CREB protein expression across cortical and striatal regions possibly reflects the extensive instrumental learning and performance, whereas increased CREB activity in striatal regions may signal the unexpected change in the relationship between instrumental action and reinforcement.

  7. DASH-based network performance-aware solution for personalised video delivery systems

    OpenAIRE

    Rovcanin, Lejla

    2016-01-01

    Video content is an increasingly prevalent contributor of Internet traffic. The proliferation of available video content has been fuelled by both Internet expansion and the growing power and affordability of viewing devices. Such content can be consumed anywhere and anytime, using a variety of technologies. The high data rates required for streaming video content and the large volume of requests for such content degrade network performance when devices compete for finite network bandwidth. Th...

  8. Monitoring and analyzing features of electrical power quality system performance

    OpenAIRE

    Genci Sharko; Nike Shanku

    2010-01-01

    Power quality is a set of boundaries that allows electrical systems to function in their intended manner without significant loss of performance or life. The term is used to describe electric power that drives an electrical load and the load's ability to function properly with that electric power. Without the proper quality of the power, an electrical device may malfunction, fail prematurely or not operate at all. There are many reasons why the electric power can be of poor quality and many m...

  9. Power Performance Test Report for the SWIFT Wind Turbine

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, I.; Hur, J.

    2012-12-01

    This report summarizes the results of a power performance test that NREL conducted on the SWIFT wind turbine. This test was conducted in accordance with the International Electrotechnical Commission's (IEC) standard, Wind Turbine Generator Systems Part 12: Power Performance Measurements of Electricity Producing Wind Turbines, IEC 61400-12-1 Ed.1.0, 2005-12. However, because the SWIFT is a small turbine as defined by IEC, NREL also followed Annex H that applies to small wind turbines. In these summary results, wind speed is normalized to sea-level air density.

  10. Energy Efficiency and Network Performance: A Reality Check in SDN-Based 5G Systems

    Directory of Open Access Journals (Sweden)

    Leonardo Ochoa-Aday

    2017-12-01

    The increasing power consumption and related environmental implications currently generated by large data networks have become a major concern over the last decade. Given the drastic traffic increase expected in 5G dense environments, the energy consumption problem becomes even more concerning and challenging. In this context, Software-Defined Networks (SDN), a key technology enabler for 5G systems, can be seen as an attractive solution. In these programmable networks, an energy-aware solution could be easily implemented leveraging the capabilities provided by control and data plane separation. This paper investigates the impact of energy-aware routing on network performance. To that end, we propose a novel energy-aware mechanism that reduces the number of active links in SDN with multiple controllers, considering in-band control traffic. The proposed strategy exploits knowledge of the network topology combined with traffic engineering techniques to reduce the overall power consumption. Therefore, two heuristic algorithms are designed: a static network configuration and a dynamic energy-aware routing. Significant values of switched-off links are reached in the simulations, where real topologies and demand data are used. Moreover, the obtained results confirm that crucial network parameters such as control traffic delay, data path latency, link utilization and Ternary Content Addressable Memory (TCAM) occupation are affected by the performance-agnostic energy-aware model.
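    The static-configuration heuristic, switching off links while preserving reachability, can be sketched with a plain connectivity check. This toy version ignores demands, link utilization, and in-band control traffic, all of which the proposed mechanism accounts for; the topology and switch names are hypothetical.

```python
from collections import defaultdict, deque

def connected(nodes, links):
    """BFS reachability check: True if every node is reachable from nodes[0]."""
    adj = defaultdict(list)
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        n = queue.popleft()
        for m in adj[n]:
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return len(seen) == len(nodes)

def prune_links(nodes, links):
    """Greedily switch off links as long as the topology stays connected,
    in the spirit of the paper's static energy-aware configuration."""
    active = list(links)
    for link in list(links):
        trial = [l for l in active if l != link]
        if connected(nodes, trial):
            active = trial                        # this link can sleep
    return active

nodes = ["s1", "s2", "s3", "s4"]
links = [("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s4", "s1"), ("s1", "s3")]
print(len(prune_links(nodes, links)))  # → 3 (a spanning tree remains active)
```

    Removing redundancy down to a spanning tree maximizes sleeping links but concentrates traffic on the survivors, which is exactly the energy-versus-performance tension (delay, utilization, TCAM occupation) the paper measures.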

  11. Accounting for the speed shear in wind turbine power performance measurement

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, R.

    2010-04-15

    The power curve of a wind turbine is the primary characteristic of the machine, as it is the basis of the warranty for its power production. The current IEC standard for power performance measurement only requires the measurement of the wind speed at hub height and the air density to characterise the wind field in front of the turbine. However, with the growing size of turbine rotors during the last years, the effect of the variations of the wind speed within the swept rotor area, and therefore of the power output, cannot be ignored any longer. Primary effects on the power performance are from the vertical wind shear and the turbulence intensity. The work presented in this thesis consists of the description and the investigation of a simple method to account for the wind speed shear in the power performance measurement. Ignoring this effect was shown to result in a power curve dependent on the shear condition, therefore on the season and the site. It was then proposed to use an equivalent wind speed accounting for the whole speed profile in front of the turbine. The method was first tested with aerodynamic simulations of a multi-megawatt wind turbine, which demonstrated the decrease of the scatter in the power curve. A power curve defined in terms of this equivalent wind speed would be less dependent on the shear than the standard power curve. The equivalent wind speed method was then experimentally validated with lidar measurements. Two equivalent wind speed definitions were considered, both resulting in the reduction of the scatter in the power curve. As a lidar wind profiler can measure the wind speed at several heights within the rotor span, the wind speed profile is described with more accuracy than with the power law model. The equivalent wind speed derived from measurements, including at least one measurement above hub height, resulted in a smaller scatter in the power curve than the equivalent wind speed derived from profiles extrapolated from measurements.
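    One way to build such an equivalent wind speed (assuming here a kinetic-energy-flux weighting, one of several possible definitions) is to cube-average the speeds measured at several heights, weighted by the fraction of rotor area each measurement represents:

```python
def equivalent_wind_speed(speeds, area_weights):
    """Kinetic-energy-flux equivalent wind speed over the rotor:
    v_eq = (sum_i w_i * v_i**3) ** (1/3), with area weights w_i summing
    to 1. The weights are assumed to be the fractions of swept rotor
    area represented by each measurement height (illustrative choice;
    the thesis compares more than one definition)."""
    assert abs(sum(area_weights) - 1.0) < 1e-9
    return sum(w * v**3 for v, w in zip(speeds, area_weights)) ** (1.0 / 3.0)

# Sheared profile measured by a lidar at three heights across the rotor span
print(round(equivalent_wind_speed([7.0, 8.0, 9.0], [0.25, 0.5, 0.25]), 3))  # → 8.062
```

    Note that the result (8.062 m/s) differs from the 8.0 m/s hub-height value: binning power against v_eq rather than the hub-height speed is what removes the shear-induced scatter from the power curve.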

  12. Delay-Aware Program Codes Dissemination Scheme in Internet of Everything

    Directory of Open Access Journals (Sweden)

    Yixuan Xu

    2016-01-01

    Due to recent advancements in big data, connection technologies, and smart devices, our environment is transforming into an “Internet of Everything” (IoE) environment. These smart devices can obtain new or special functions by reprogramming: upgrading their soft systems through receiving new versions of program codes. However, bulk codes dissemination suffers from large delay, energy consumption, and number of retransmissions because of the unreliability of wireless links. In this paper, a delay-aware program dissemination (DAPD) scheme is proposed to disseminate program codes in a fast, reliable, and energy-efficient manner. We observe that although total energy is limited in a wireless sensor network, there exists residual energy in nodes deployed far from the base station. Therefore, the DAPD scheme improves the performance of bulk codes dissemination through the following two aspects. (1) Since a high transmitting power can significantly improve the quality of wireless links, the transmitting power of sensors with more residual energy is enhanced to improve link quality. (2) Since the performance of correlated dissemination tends to degrade in a highly dynamic environment, link correlation is autonomously updated in DAPD during codes dissemination to maintain the improvements brought by correlated dissemination. Theoretical analysis and experimental results show that, compared with previous work, the DAPD scheme improves the dissemination performance in terms of completion time, transmission cost, and the efficiency of energy utilization.

  13. Social awareness on nuclear fuel cycle

    International Nuclear Information System (INIS)

    Tanigaki, Toshihiko

    2006-01-01

    In the present study, we surveyed public opinion regarding the nuclear fuel cycle to find out about social awareness of the nuclear fuel cycle and nuclear facilities. The study revealed that people's image of nuclear power is more familiar than their image of the nuclear fuel cycle. People tend to display more recognition of and concern towards nuclear power and reprocessing plants than towards other facilities. Comparatively speaking, they tend to perceive radioactive waste disposal facilities and nuclear power plants as being far more dangerous than reprocessing plants. It was also found that, with the exception of nuclear power plants, people do not know very much about whether nuclear fuel cycle facilities are in operation in Japan or not. The results suggest 1) that the relatively mild image of the nuclear fuel cycle is the result of the interactive effect of the highly dangerous image of nuclear power plants and the less dangerous image of reprocessing plants; and 2) that the image of a given plant (nuclear power plant, reprocessing plant, radioactive waste disposal facility) is influenced by whether the name of the plant suggests the presence of danger or not. (author)

  14. Thermal-Aware Scheduling for Future Chip Multiprocessors

    Directory of Open Access Journals (Sweden)

    Pedro Trancoso

    2007-04-01

    The increased complexity and operating frequency in current single-chip microprocessors is resulting in a decrease in the performance improvements. Consequently, major manufacturers offer chip multiprocessor (CMP) architectures in order to keep up with the expected performance gains. This architecture is successfully being introduced in many markets including that of the embedded systems. Nevertheless, the integration of several cores onto the same chip may lead to increased heat dissipation and consequently additional costs for cooling, higher power consumption, decrease of the reliability, and thermal-induced performance loss, among others. In this paper, we analyze the evolution of the thermal issues for the future chip multiprocessor architectures and show that as the number of on-chip cores increases, the thermal-induced problems will worsen. In addition, we present several scenarios that result in excessive thermal stress to the CMP chip or significant performance loss. In order to minimize or even eliminate these problems, we propose thermal-aware scheduler (TAS) algorithms. When assigning processes to cores, TAS takes their temperature and cooling ability into account in order to avoid thermal stress and at the same time improve the performance. Experimental results have shown that a TAS algorithm that considers also the temperatures of neighboring cores is able to significantly reduce the temperature-induced performance loss while at the same time, decrease the chip's temperature across many different operation and configuration scenarios.
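    The neighbor-aware flavor of TAS can be sketched as a scoring rule: place the next process on the core that minimizes its own temperature plus a weighted mean of its neighbors' temperatures. The weight alpha, the topology, and the temperature values below are illustrative assumptions, not values from the paper.

```python
def pick_core(temps, neighbors, alpha=0.5):
    """Pick the core for the next process: lowest combined score of its
    own temperature plus alpha times the mean temperature of its on-chip
    neighbors, echoing the paper's neighbor-aware TAS idea (alpha and
    the scoring function are illustrative assumptions)."""
    def score(c):
        neigh = neighbors[c]
        return temps[c] + alpha * sum(temps[n] for n in neigh) / len(neigh)
    return min(range(len(temps)), key=score)

# Four cores in a row (0-1-2-3). Core 0 is coolest but sits beside hot core 1.
temps = [55.0, 90.0, 60.0, 62.0]
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(pick_core(temps, neighbors))  # → 3: beats the cooler core 0, whose neighbor is hot
```

    The neighbor term is what prevents stacking work next to a hotspot, which is why the paper's neighbor-aware variant outperforms a scheduler that only looks at each core's own temperature.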

  15. Study of data I/O performance on distributed disk system in mask data preparation

    Science.gov (United States)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

    Data volume is getting larger every day in Mask Data Preparation (MDP). In the meantime, faster data handling is always required. MDP flow typically introduces a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs were increased, the throughput might be saturated because hard disk I/O and network speeds could be bottlenecks. So, MDP needs to invest a lot of money not only in hundreds of CPUs but also in storage and network devices to make the throughput faster. NCS would like to introduce a new distributed processing system called "NDE". NDE is a distributed disk system which makes the throughput faster without a large investment, because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.

  16. Operation Performance Evaluation of Power Grid Enterprise Using a Hybrid BWM-TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Peipei You

    2017-12-01

    Electricity market reform is in progress in China, and the operational performance of power grid enterprises is vital for its healthy and sustainable development in the current electricity market environment. In this paper, a hybrid multi-criteria decision-making (MCDM) framework for operational performance evaluation of a power grid enterprise is proposed from the perspective of sustainability. The latest MCDM method, namely the best-worst method (BWM), was employed to determine the weights of all criteria, and the technique for order preference by similarity to an ideal solution (TOPSIS) was applied to rank the operation performance of a power grid enterprise. The evaluation index system was built based on the concept of sustainability, which includes three criteria (namely economy, society, and environment) and seven sub-criteria. Four power grid enterprises were selected to perform the empirical analysis, and the results indicate that power grid enterprise A1 has the best operation performance. The proposed hybrid BWM-TOPSIS-based framework for operation performance evaluation of a power grid enterprise is effective and practical.
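    The TOPSIS step can be sketched in a few lines: vector-normalize the decision matrix, apply the BWM-derived weights, and rank alternatives by closeness to the ideal solution. The three-criteria decision matrix and weights below are made-up numbers, not the paper's data.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: vector-normalize the decision
    matrix, weight it, measure Euclidean distances to the ideal and
    anti-ideal solutions, and return closeness coefficients (higher is
    better). benefit[j] marks criterion j as benefit (True) or cost
    (False)."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical enterprises scored on profit (benefit), emissions (cost), jobs (benefit)
matrix = [[8.0, 3.0, 7.0], [6.0, 2.0, 8.0], [9.0, 5.0, 5.0]]
scores = topsis(matrix, weights=[0.5, 0.3, 0.2], benefit=[True, False, True])
print(max(range(3), key=lambda i: scores[i]))  # → 0
```

    In the paper's framework, the weights fed into this step would come from the BWM pairwise comparisons rather than being set by hand.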

  17. Control interlock and monitoring system for 80 KW IOT based RF power amplifier system at 505.812 MHz for Indus-2

    International Nuclear Information System (INIS)

    Kumar, Gautam; Deo, R.K.; Jain, M.K.; Bagre, Sunil; Hannurkar, P.R.

    2013-01-01

    For 80 kW inductive output tube (IOT) based RF power amplifier system at 505.812 MHz for Indus-2, a control, interlock and monitoring system is realized. This is to facilitate proper start-up and shutdown of the amplifier system, monitor various parameters to detect any malfunction during its operation and to bring the system in a safe stage, thereby assuring reliable operation of the amplifier system. This high power amplifier system incorporates interlocks such as cooling interlocks, various voltage and current interlocks and time critical RF interlocks. Processing of operation sequence, cooling interlocks and various voltage and current interlocks have been realized by using Siemens make S7-CPU-315-2DP (CPU) based programmable logic controller (PLC) system. While time critical or fast interlocks have been realized by using Siemens make FPGA based Boolean Co-processor FM-352-5 which operates in standalone mode. Siemens make operating panel OP277 6'' is being used as a human machine interface (HMI) device for command, data, alarm generation and process parameter monitoring. (author)

  18. Challenging the balance of power: patient empowerment.

    Science.gov (United States)

    Hewitt-Taylor, Jaquelina

    Empowering patients is a central element of nursing care, according to the RCN (2003). This article discusses the reality of changing the balance of power in health care, awareness of types of knowledge and the ways in which power may consciously or subconsciously be used. It also includes awareness of the financial and political aspects of health care and how these affect patient choice.

  19. A development of a quantitative situation awareness measurement tool: Computational Representation of Situation Awareness with Graphical Expressions (CoRSAGE)

    International Nuclear Information System (INIS)

    Yim, Ho Bin; Lee, Seung Min; Seong, Poong Hyun

    2014-01-01

    Highlights: • We proposed a quantitative situation awareness (SA) evaluation technique. • We developed a computer-based SA evaluation tool for the NPP training environment. • We introduced three rules and components to express more human-like results. • We conducted three sets of training with real plant operators. • Results showed that the tool could reasonably represent operator’s SA. - Abstract: Operator performance measures are used for multiple purposes, such as control room design, human system interface (HSI) evaluation, training, and so on. Performance measures are often focused on results; however, especially for training purposes – at least in the nuclear industry – more detailed descriptions of processes are required. Situation awareness (SA) measurements have directly or indirectly served as a complementary measure and provided descriptive insights on how to improve the performance of operators in the next training. Unfortunately, most well-developed SA measurement techniques, such as the Situation Awareness Global Assessment Technique (SAGAT), need expert opinion, which sometimes hinders the easy spread of the measurement’s application or usage. A quantitative SA measurement tool named Computational Representation of Situation Awareness with Graphical Expressions (CoRSAGE) is introduced to resolve some of these concerns. CoRSAGE is based on production rules to represent a human operator’s cognitive process of problem solving, and Bayesian inference to quantify it. The Petri Net concept is also used for graphical expressions of SA flow. Three components – inference transitions and volatile/non-volatile memory tokens – were newly developed to achieve the required functions. Training data of a Loss of Coolant Accident (LOCA) scenario for an emergency condition and an earthquake scenario for an abnormal condition, by real plant operators, were used to validate the tool. The validation result showed that CoRSAGE performed a reasonable match to other performance

  20. Choosing CPUs in an Open Market: System Performance Testing for the BaBar Online Farm

    International Nuclear Information System (INIS)

    Pavel, Tomas J

    1998-01-01

BABAR is a high-rate experiment to study CP violation in asymmetric e⁺e⁻ collisions. The BABAR Online Farm is a pool of workstations responsible for the last layer of event selection, as well as for full reconstruction of selected events and for monitoring functions. A large number of machine architectures were evaluated for use in this Online Farm. We present an overview of the results of this evaluation, which include tests of low-level OS primitives, tests of memory architecture, and tests of application-specific CPU performance. Factors of general interest to others making hardware decisions are highlighted. Performance of current BABAR reconstruction (written in C++) is found to scale fairly well with SPECint95, but with some noticeable deviations. Even for machines with similar SPEC CPU ratings, large variations in memory system performance exist. No single operating system has an overall edge in the performance of its primitives. In particular, freeware operating systems perform no worse overall than the commercial offerings

  1. Performance of PWR Nuclear power plants, up to 1985

    International Nuclear Information System (INIS)

    Muniz, A.A.

    1987-01-01

The performance of PWR nuclear power plants is studied, based on operational data up to 1985. The availability analysis was made with 793 unit-years and the reliability analysis with 5851 unit-months. The results are discussed and the availability of those nuclear power plants is estimated. (E.G.) [pt

  2. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.
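The energy savings DVFS delivers can be illustrated with the textbook dynamic-power relation P ≈ C·V²·f: for a fixed cycle count, energy scales with V² only, so running slower at lower voltage trades runtime for energy. A minimal sketch (not WorkflowSim's actual power model; the capacitance and the voltage/frequency operating points are made-up illustrative values):

```python
# Illustrative DVFS energy model: dynamic power P ~ C * V^2 * f.
# The operating points and capacitance below are made-up example values,
# not measurements of any real CPU or WorkflowSim parameters.
OPERATING_POINTS = {
    "performance": (1.20, 3.0),   # (voltage V, frequency GHz): high V/f
    "powersave":   (0.90, 1.5),   # low V/f
}
C_EFF = 1.0e-9  # effective switched capacitance (farads), illustrative

def run_workload(cycles, governor):
    """Return (runtime in s, energy in J) for a fixed-cycle workload."""
    v, f_ghz = OPERATING_POINTS[governor]
    f = f_ghz * 1e9                  # Hz
    power = C_EFF * v * v * f        # dynamic power in watts
    runtime = cycles / f             # seconds
    return runtime, power * runtime  # energy = C * V^2 * cycles

t_hi, e_hi = run_workload(3e9, "performance")
t_lo, e_lo = run_workload(3e9, "powersave")
# Same cycle count: powersave uses less energy (V^2 scaling) but runs longer.
```

Under this model the intra-host trade-off such a simulator explores is visible directly: for a fixed cycle count the energy depends on V² alone, while the runtime penalty depends only on f.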

  3. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.

  4. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    International Nuclear Information System (INIS)

    Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy

    2016-01-01

Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images and a comparison of the image quality produced by the GPU and CPU simulations are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 and 2304 cores. In the GPU simulations each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly faster than on the CPU: the 2304-core GPU performed about 64–114 times faster than the CPU, while the 384-core GPU performed about 20–31 times faster than a single CPU core. Another result shows that optimum image quality was obtained with histories from 10⁸ and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.

  5. Study on efficiency of time computation in x-ray imaging simulation base on Monte Carlo algorithm using graphics processing unit

    Energy Technology Data Exchange (ETDEWEB)

Setiani, Tia Dwi, E-mail: tiadwisetiani@gmail.com [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Suprijadi [Computational Science, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia); Haryanto, Freddy [Nuclear Physics and Biophysics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung Jalan Ganesha 10 Bandung, 40132 (Indonesia)

    2016-03-11

Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of the radiographic images and a comparison of the image quality produced by the GPU and CPU simulations are evaluated in this paper. The simulations were run on a CPU in serial, and on two GPUs with 384 and 2304 cores. In the GPU simulations each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that the simulations on the GPU were significantly faster than on the CPU: the 2304-core GPU performed about 64–114 times faster than the CPU, while the 384-core GPU performed about 20–31 times faster than a single CPU core. Another result shows that optimum image quality was obtained with histories from 10⁸ and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
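The reason a one-photon-per-core GPU mapping works is that Monte Carlo photon histories are independent. A toy 1-D illustration of the same idea (not MC-GPU itself): each photon's free path through a slab is sampled from an exponential distribution, and the transmitted fraction is checked against the Beer–Lambert law e^(−μx). The attenuation coefficient and slab thickness are arbitrary illustrative values:

```python
import math
import random

def transmitted_fraction(n_photons, mu, thickness, seed=1):
    """Toy 1-D photon transport: each photon gets an exponentially
    distributed free path (mean 1/mu) and crosses the slab if that path
    exceeds the thickness. Histories are independent, which is what makes
    a one-photon-per-core GPU mapping natural."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_photons):
        u = 1.0 - rng.random()          # u in (0, 1], avoids log(0)
        free_path = -math.log(u) / mu   # sample from Exp(mu)
        if free_path > thickness:
            transmitted += 1
    return transmitted / n_photons

est = transmitted_fraction(200_000, mu=0.5, thickness=2.0)
exact = math.exp(-0.5 * 2.0)            # Beer-Lambert prediction, ~0.368
```

Because every history is an independent draw, the loop body can be assigned to one thread per photon with no synchronization, which is exactly the parallelism the GPU timings in the abstract exploit.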

  6. High-Performance Constant Power Generation in Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

An advanced power control strategy that limits the maximum feed-in power of PV systems has been proposed, which can ensure a fast and smooth transition between maximum power point tracking and Constant Power Generation (CPG). Regardless of the solar irradiance level, high-performance and stable operation is always achieved by the proposed control strategy. It can regulate the PV output power according to any set-point, and force the PV systems to operate at the left side of the maximum power point without stability problems. Experimental results have verified the effectiveness of the proposed CPG...

  7. High performance protection circuit for power electronics applications

    Energy Technology Data Exchange (ETDEWEB)

    Tudoran, Cristian D., E-mail: cristian.tudoran@itim-cj.ro; Dădârlat, Dorin N.; Toşa, Nicoleta; Mişan, Ioan [National Institute for Research and Development of Isotopic and Molecular Technologies, 67-103 Donat, PO 5 Box 700, 400293 Cluj-Napoca (Romania)

    2015-12-23

    In this paper we present a high performance protection circuit designed for the power electronics applications where the load currents can increase rapidly and exceed the maximum allowed values, like in the case of high frequency induction heating inverters or high frequency plasma generators. The protection circuit is based on a microcontroller and can be adapted for use on single-phase or three-phase power systems. Its versatility comes from the fact that the circuit can communicate with the protected system, having the role of a “sensor” or it can interrupt the power supply for protection, in this case functioning as an external, independent protection circuit.

  8. Evaluation criteria for enhanced solar–coal hybrid power plant performance

    International Nuclear Information System (INIS)

    Zhao, Yawen; Hong, Hui; Jin, Hongguang

    2014-01-01

Attention has been directed toward hybridizing solar energy with fossil power plants since the 1990s to improve reliability and efficiency. Appropriate evaluation criteria are important in the design and optimization of solar–fossil hybrid systems. Two new criteria for evaluating the improved thermodynamic performance of a solar hybrid power plant were developed in this study, and correlations were derived to determine the main factors influencing that improvement. The proposed criteria can be used to effectively integrate solar–coal hybridization systems. Typical 100 MW–1000 MW coal-fired power plants hybridized with solar heat at approximately 300 °C, used to preheat the feed water before it enters the boiler, were evaluated using the criteria. The integration principle of solar–coal hybrid systems was also determined. The proposed evaluation criteria are simple and reasonable for solar–coal hybrid systems with multi-energy input, thus directing system performance enhancement. - Highlights: • New criteria to evaluate the solar hybrid power plant were developed. • Typical solar–coal hybrid power plants were evaluated using the criteria. • The integration principle of solar–coal hybrid systems was determined. • The benefits of the solar–coal hybrid system are enhanced at lower solar radiation

  9. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

Full Text Available Advanced encryption standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using the key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architecture offers superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.

  10. Performance enhancement using power beaming for electric propulsion earth orbital transporters

    International Nuclear Information System (INIS)

    Dagle, J.E.

    1991-01-01

    An electric propulsion Earth orbital transport vehicle (EOTV) can effectively deliver large payloads using much less propellant than chemical transfer methods. By using an EOTV instead of a chemical upper stage, either a smaller launch vehicle can be used for the same satellite mass or a larger satellite can be deployed using the same launch vehicle. However, the propellant mass savings from using the higher specific impulse of electric propulsion may not be enough to overcome the disadvantage of the added mass and cost of the electric propulsion power source. Power system limitations have been a major factor delaying the acceptance and use of electric propulsion. This paper outlines the power requirements of electric propulsion technology being developed today, including arcjets, magnetoplasmadynamic (MPD) thrusters, and ion engines. Power supply characteristics are discussed for nuclear, solar, and power-beaming systems. Operational characteristics are given for each, as are the impacts of the power supply alternative on the overall craft performance. Because of its modular nature, the power-beaming approach is able to meet the power requirements of all three electric propulsion types. Also, commonality of approach allows different electric propulsion approaches to be powered by a single power supply approach. Power beaming exhibits better flexibility and performance than on-board nuclear or solar power systems

  11. A Methodology to Reduce the Computational Effort in the Evaluation of the Lightning Performance of Distribution Networks

    Directory of Open Access Journals (Sweden)

    Ilaria Bendato

    2016-11-01

Full Text Available The estimation of the lightning performance of a power distribution network is of great importance to design its protection system against lightning. An accurate evaluation of the number of lightning events that can create dangerous overvoltages requires a huge computational effort, as it implies the adoption of a Monte Carlo procedure. Such a procedure consists of generating many different random lightning events and calculating the corresponding overvoltages. The paper proposes a methodology to deal with the problem in two computationally efficient ways: (i) finding out the minimum number of Monte Carlo runs that lead to reliable results; and (ii) setting up a procedure that bypasses the lightning field-to-line coupling problem for each Monte Carlo run. The proposed approach is shown to provide results consistent with existing approaches while exhibiting superior Central Processing Unit (CPU) time performance.
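The first idea, finding the minimum number of Monte Carlo runs that yields reliable results, is commonly implemented as a stopping rule: keep sampling until the relative standard error of the running estimate falls below a target. A generic sketch of that rule (not the authors' exact procedure; the thresholds and the example event probability are illustrative):

```python
import math
import random

def mc_until_converged(sample_event, rel_err_target=0.02,
                       min_runs=1_000, max_runs=1_000_000, seed=7):
    """Run Monte Carlo trials until the relative standard error of the
    sample mean falls below rel_err_target (or max_runs is reached).
    sample_event(rng) returns one trial outcome."""
    rng = random.Random(seed)
    n, total, total_sq = 0, 0.0, 0.0
    while n < max_runs:
        x = sample_event(rng)
        n += 1
        total += x
        total_sq += x * x
        if n >= min_runs:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            std_err = math.sqrt(var / n)
            if mean > 0 and std_err / mean < rel_err_target:
                break                     # estimate is statistically stable
    return total / n, n

# Illustrative event: a random 'overvoltage' exceeds a threshold with p = 0.3.
p_hat, runs = mc_until_converged(lambda r: 1.0 if r.random() < 0.3 else 0.0)
```

The rarer the dangerous event, the more runs the rule demands (for an indicator variable the stopping count grows like (1−p)/(p·ε²)), which is why bounding the run count, as the paper does, matters most for low-probability overvoltages.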

  12. Performance Enhancement of Power Transistors and Radiation effect

    International Nuclear Information System (INIS)

    Hassn, Th.A.A.

    2012-01-01

The main objective of this research is to study the characteristics of bipolar junction transistor devices and their performance under radiation fields and temperature, as control elements in many power circuits. We present the results of experimental measurements and analytical simulation of gamma-radiation effects on the electrical characteristics and operation of the power transistor types 2N3773 and 2N3055 (complementary silicon power transistors designed for general-purpose switching and amplifier applications). Three samples of each type were irradiated with gamma doses of 1 krad, 5 krad, 10 krad, 30 krad, and 10 Mrad. The experimental data are used to establish an analytical relation between the total absorbed dose of gamma irradiation and the corresponding effective density of generated charge in the internal structure of the transistor. The electrical parameters measured to estimate the generated defects in the power transistor are the current gain, collector current and collector–emitter leakage current; changes in these can cause the circuit to cease functioning properly. The collector current and transconductance of each device are calibrated as a function of dose; the threshold voltage and transistor gain are also affected and likewise calibrated as a function of dose. A silicon NPN power transistor, type 2N3773, intended for general-purpose, medium-current and high-power circuits, was used in this work, and its performance and characteristics are discussed under temperature and gamma radiation doses. The internal junction thermal system of the transistor is represented in terms of a junction thermal resistance (Rjth), which changes by ΔRjth due to the external conditions and the gamma doses applied. The final result from the model analysis reveals that the emitter-bias configuration is quite stable with respect to the resistance ratio RB/RE. Also the current

  13. Evaluation of Topology-Aware Broadcast Algorithms for Dragonfly Networks

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Mubarak, Misbah; Ross, Rob; Li, Jianping Kelvin; Carothers, Christopher D.; Ma, Kwan-Liu

    2016-09-12

    Two-tiered direct network topologies such as Dragonflies have been proposed for future post-petascale and exascale machines, since they provide a high-radix, low-diameter, fast interconnection network. Such topologies call for redesigning MPI collective communication algorithms in order to attain the best performance. Yet as increasingly more applications share a machine, it is not clear how these topology-aware algorithms will react to interference with concurrent jobs accessing the same network. In this paper, we study three topology-aware broadcast algorithms, including one designed by ourselves. We evaluate their performance through event-driven simulation for small- and large-sized broadcasts (in terms of both data size and number of processes). We study the effect of different routing mechanisms on the topology-aware collective algorithms, as well as their sensitivity to network contention with other jobs. Our results show that while topology-aware algorithms dramatically reduce link utilization, their advantage in terms of latency is more limited.
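The baseline that topology-aware broadcasts are measured against is the classic binomial tree, which reaches P processes in ⌈log₂ P⌉ communication rounds; the topology-aware variants the paper studies additionally map this tree onto Dragonfly groups. A topology-oblivious sketch of the round structure:

```python
def binomial_broadcast_rounds(num_procs):
    """Binomial-tree broadcast from rank 0: in round k, every rank r < 2**k
    that already holds the message sends it to rank r + 2**k.
    Returns {rank: round_in_which_it_received_the_message}."""
    recv_round = {0: 0}
    k = 0
    while len(recv_round) < num_procs:
        for r in list(recv_round):        # snapshot: this round's arrivals wait
            dest = r + (1 << k)
            if dest < num_procs and dest not in recv_round:
                recv_round[dest] = k + 1
        k += 1
    return recv_round

rounds = binomial_broadcast_rounds(12)
# 12 ranks are covered in ceil(log2(12)) = 4 rounds.
```

Each round doubles the number of informed ranks, which is why the latency bound is logarithmic; a Dragonfly-aware variant would choose the per-round partners so that early rounds stay inside a group and cross-group links are traversed as few times as possible.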

  14. Location-Aware Cross-Layer Design Using Overlay Watermarks

    Directory of Open Access Journals (Sweden)

    Paul Ho

    2007-04-01

Full Text Available A new orthogonal frequency division multiplexing (OFDM) system embedded with overlay watermarks for location-aware cross-layer design is proposed in this paper. One major advantage of the proposed system is the multiple functionalities the overlay watermark provides, which includes a cross-layer signaling interface, a transceiver identification for position-aware routing, as well as its basic role as a training sequence for channel estimation. Wireless terminals are typically battery powered and have limited wireless communication bandwidth. Therefore, efficient collaborative signal processing algorithms that consume less energy for computation and less bandwidth for communication are needed. Transceiver aware of its location can also improve the routing efficiency by selective flooding or selective forwarding data only in the desired direction, since in most cases the location of a wireless host is unknown. In the proposed OFDM system, location information of a mobile for efficient routing can be easily derived when a unique watermark is associated with each individual transceiver. In addition, cross-layer signaling and other interlayer interactive information can be exchanged with a new data pipe created by modulating the overlay watermarks. We also study the channel estimation and watermark removal techniques at the physical layer for the proposed overlay OFDM. Our channel estimator iteratively estimates the channel impulse response and the combined signal vector from the overlay OFDM signal. Cross-layer design that leads to low-power consumption and more efficient routing is investigated.

  15. Power Constrained High-Level Synthesis of Battery Powered Digital Systems

    DEFF Research Database (Denmark)

    Nielsen, Sune Fallgaard; Madsen, Jan

    2003-01-01

We present a high-level synthesis algorithm solving the combined scheduling, allocation and binding problem, minimizing area under both latency and maximum power per clock-cycle constraints. Our approach eliminates the large power spikes, resulting in an increased battery lifetime, a property of utmost importance for battery powered embedded systems. Our approach extends the partial-clique partitioning algorithm by introducing power awareness through a heuristic algorithm which bounds the design space to those of power feasible schedules. We have applied our algorithm on a set of dataflow graphs
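The per-clock-cycle power cap at the heart of this formulation can be illustrated with a much simpler greedy list scheduler that defers ready operations whenever scheduling them would exceed the cycle's power budget. This is only a sketch of the constraint, not the paper's power-aware partial-clique algorithm; it assumes single-cycle operations, acyclic dependencies, and that no single operation exceeds the budget:

```python
def power_constrained_schedule(ops, deps, power, p_max):
    """Greedy list scheduling under a per-cycle power cap.
    ops: operation ids (iteration order breaks ties); deps: op -> set of ops
    that must complete in an earlier cycle; power: op -> power draw;
    p_max: per-cycle power budget. Returns op -> cycle."""
    schedule, done = {}, set()
    cycle = 0
    while len(schedule) < len(ops):
        budget = p_max
        for op in ops:
            if op in schedule or not deps.get(op, set()) <= done:
                continue                 # already placed, or deps not finished
            if power[op] <= budget:      # defer op if it would spike power
                schedule[op] = cycle
                budget -= power[op]
        done |= {op for op, c in schedule.items() if c == cycle}
        cycle += 1
    return schedule

ops = ["a", "b", "c", "d"]
deps = {"d": {"a", "b"}}                 # hypothetical dataflow graph
power = {"a": 3, "b": 3, "c": 3, "d": 2}
sched = power_constrained_schedule(ops, deps, power, p_max=6)
```

With a budget of 6, operation "c" is ready in cycle 0 but is pushed to cycle 1 so the cycle's total draw stays at 6; that deferral is exactly the spike elimination the abstract describes, traded against latency.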

  16. Power/performance trade-offs in real-time SDRAM command scheduling

    OpenAIRE

    Goossens, S.L.M.; Chandrasekar, K.; Akesson, K.B.; Goossens, K.G.W.

    2016-01-01

Real-time safety-critical systems should provide hard bounds on an application's performance. SDRAM controllers used in this domain should therefore have a bounded worst-case bandwidth, response time, and power consumption. Existing works on real-time SDRAM controllers only consider a narrow range of memory devices, and do not evaluate how their schedulers' performance varies across memory generations, nor how the scheduling algorithm influences power usage. The extent to which the number of ...

  17. Quantized Visual Awareness

    Directory of Open Access Journals (Sweden)

    W Alexander Escobar

    2013-11-01

Full Text Available The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons, in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits, and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  18. Text Comprehension Mediates Morphological Awareness, Syntactic Processing, and Working Memory in Predicting Chinese Written Composition Performance

    Science.gov (United States)

    Guan, Connie Qun; Ye, Feifei; Wagner, Richard K.; Meng, Wanjin; Leong, Che Kan

    2014-01-01

The goal of the present study was to test opposing views about four issues concerning predictors of individual differences in Chinese written composition: (a) Whether morphological awareness, syntactic processing, and working memory represent distinct and measurable constructs in Chinese or are just manifestations of general language ability; (b) whether they are important predictors of Chinese written composition, and if so, the relative magnitudes and independence of their predictive relations; (c) whether observed predictive relations are mediated by text comprehension; and (d) whether these relations vary or are developmentally invariant across three years of writing development. Based on analyses of the performance of students in grades 4 (n = 246), 5 (n = 242) and 6 (n = 261), the results supported morphological awareness, syntactic processing, and working memory as distinct yet correlated abilities that made independent contributions to predicting Chinese written composition, with working memory as the strongest predictor. However, predictive relations were mediated by text comprehension. The final model accounted for approximately 75 percent of the variance in Chinese written composition. The results were largely developmentally invariant across the three grades from which participants were drawn. PMID:25530630

  19. Medium power hydrogen arcjet performance

    Science.gov (United States)

    Curran, Francis M.; Bullock, S. R.; Haag, Thomas W.; Sarmiento, Charles J.; Sankovic, John M.

    1991-01-01

    An experimental investigation was performed to evaluate hydrogen arcjet operating characteristics in the range of 1 to 4 kW. A series of nozzles were operated in modular laboratory thrusters to examine the effects of geometric parameters such as constrictor diameter and nozzle divergence angle. Each nozzle was tested over a range of current and mass flow rates to explore stability and performance. In the range of mass flow rates and power levels tested, specific impulse values between 650 and 1250 sec were obtained at efficiencies between 30 and 40 percent. The performance of the two larger half angle (20, 15 deg) nozzles was similar for each of the two constrictor diameters tested. The nozzles with the smallest half angle (10 deg) were difficult to operate. A restrike mode of operation was identified and described. Damage in the form of melting was observed in the constrictor region of all the nozzle inserts tested. Arcjet ignition was also difficult in many tests and a glow discharge mode that prevents starting was identified.

  20. Postactivation potentiation: effect of various recovery intervals on bench press power performance.

    Science.gov (United States)

    Ferreira, Sandra Lívia de Assis; Panissa, Valéria Leme Gonçalves; Miarka, Bianca; Franchini, Emerson

    2012-03-01

Postactivation potentiation (PAP) is a strategy used to improve performance in power activities. The aim of this study was to determine whether power during the bench press exercise is increased when preceded by 1 repetition maximum (1RM) in the same exercise, and to determine which time interval optimizes the PAP response. For this, 11 healthy male subjects (age, 25 ± 4 years; height, 178 ± 6 cm; body mass, 74 ± 8 kg; bench press 1RM, 76 ± 19 kg) underwent 6 sessions. Two control sessions were conducted to determine both bench press 1RM and power (6 repetitions at 50% 1RM). The 4 experimental sessions were composed of a 1RM exercise followed by power sets with different recovery intervals (1, 3, 5, and 7 minutes), performed on different days, and determined randomly. Power values were measured via Peak Power equipment (Cefise, Nova Odessa, São Paulo, Brazil). The conditions were compared using an analysis of variance with repeated measures, followed by a Tukey test, with the significance level set at p < 0.05. It was concluded that a preceding 1RM set can potentiate power performance in the bench press and that such a strategy could be applied as an interesting alternative to enhance performance in tasks aimed at increasing upper-body power.

  1. A general framework for performance guaranteed green data center networking

    OpenAIRE

    Wang, Ting; Xia, Yu; Muppala, Jogesh; Hamdi, Mounir; Foufou, Sebti

    2014-01-01

    From the perspective of resource allocation and routing, this paper aims to save as much energy as possible in data center networks. We present a general framework, based on the blocking island paradigm, to try to maximize the network power conservation and minimize sacrifices of network performance and reliability. The bandwidth allocation mechanism together with power-aware routing algorithm achieve a bandwidth guaranteed tighter network. Besides, our fast efficient heuristics for allocatin...

  2. Research and development into power reactor fuel performance

    International Nuclear Information System (INIS)

    Notley, M.J.F.

    1983-07-01

The nuclear fuel in a power reactor must perform reliably during normal operation, and the consequences of abnormal events must be researched and assessed. The present highly reliable operation of natural UO₂ fuel in the CANDU power reactors has reduced the need for further work in this area; however, a core of expertise must be retained for purposes such as training new staff, retaining the capability to react to unforeseen circumstances, and participating in the commercial development of new ideas. The assessment of fuel performance during accidents requires research into many aspects of materials, fuel and fission product behaviour, and the consolidation of that knowledge into computer codes used to evaluate the consequences of any particular accident. This work is growing in scope; much is known from out-reactor work at temperatures up to about 1500 °C, but the need for in-reactor verification and investigation of higher-temperature accidents has necessitated the construction of a major new in-reactor test loop and the initiation of the associated out-reactor support programs. Since many of the programs on normal and accident-related performance are generic in nature, they will be applicable to advanced fuel cycles. Work will therefore be gradually transferred from the present, committed power reactor system to support the next generation of thorium-based reactor cycles

  3. Conflict-related activity in the caudal anterior cingulate cortex in the absence of awareness

    Science.gov (United States)

    Ursu, Stefan; Clark, Kristi A.; Aizenstein, Howard J.; Stenger, V. Andrew; Carter, Cameron S.

    2009-01-01

    The caudal anterior cingulate cortex (cACC) is thought to be involved in performance monitoring, as conflict and error-related activity frequently co-localize in this area. Recent results suggest that these effects may be differentially modulated by awareness. To clarify the role of awareness in performance monitoring by the cACC, we used rapid event-related fMRI to examine the cACC activity while subjects performed a dual task: a delayed recognition task and a serial response task (SRT) with an implicit probabilistic learning rule (i.e. the stimulus location followed a probabilistic sequence of which the subjects were unaware). Task performance confirmed that the location sequence was learned implicitly. Even though we found no evidence of awareness for the presence of the sequence, imaging data revealed increased cACC activity during correct trials which violated the sequence (high conflict), relative to trials when stimuli followed the sequence (low conflict). Errors made with awareness also activated the same brain region. These results suggest that the performance monitoring function of the cACC extends beyond detection of errors made with or without awareness, and involves detection of multiple responses even when they are outside of awareness. PMID:19026710

  4. MUMAX: A new high-performance micromagnetic simulation tool

    International Nuclear Information System (INIS)

    Vansteenkiste, A.; Van de Wiele, B.

    2011-01-01

    We present MUMAX, a general-purpose micromagnetic simulation tool running on graphical processing units (GPUs). MUMAX is designed for high-performance computations and specifically targets large simulations, where speedups of over a factor of 100 can be obtained compared to the CPU-based OOMMF program developed at NIST. MUMAX aims to be general and broadly applicable. It solves the classical Landau-Lifshitz equation taking into account the magnetostatic, exchange and anisotropy interactions, thermal effects and spin-transfer torque. Periodic boundary conditions can optionally be imposed. A spatial discretization using finite differences in two or three dimensions can be employed. MUMAX is publicly available as open-source software and can thus be freely used and extended by the community. Due to its high computational performance, MUMAX should open up the possibility of running extensive simulations that would be nearly inaccessible with typical CPU-based simulators. - Highlights: → Novel, open-source micromagnetic simulator on GPU hardware. → Speedup of ≈100× compared to other widely used tools. → Extensively validated against standard problems. → Makes previously infeasible simulations accessible.
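
For reference, the classical Landau-Lifshitz equation that MUMAX integrates can be written in its Landau-Lifshitz form (the symbols are the standard ones — unit magnetization m, effective field H_eff, gyromagnetic ratio γ and damping constant α — and are not taken from the abstract):

```latex
\frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t}
  = -\frac{\gamma}{1+\alpha^{2}}\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
    -\frac{\alpha\gamma}{1+\alpha^{2}}\,
     \mathbf{m}\times\bigl(\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}\bigr)
```

Here H_eff collects the magnetostatic, exchange, anisotropy, thermal and spin-transfer-torque contributions mentioned in the abstract; the finite-difference discretization evaluates this field on a regular 2D or 3D grid of cells.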

  5. Human Performance at the Perry Nuclear Power Plant

    International Nuclear Information System (INIS)

    Rabe, Alan W.

    1998-01-01

    Provides a description of human performance training for plant workers as implemented at the Perry Nuclear Power Plant. Practical concepts regarding the training are presented as well as a demonstration of some of the training material. Concepts are drawn from INPO, Reason and Deming. The paper encourages the use of site-wide and individual organizational unit training in human performance management techniques. (author)

  6. Effect of traditional resistance and power training using rated perceived exertion for enhancement of muscle strength, power, and functional performance.

    Science.gov (United States)

    Tiggemann, Carlos Leandro; Dias, Caroline Pieta; Radaelli, Regis; Massa, Jéssica Cassales; Bortoluzzi, Rafael; Schoenell, Maira Cristina Wolf; Noll, Matias; Alberton, Cristine Lima; Kruel, Luiz Fernando Martins

    2016-04-01

    The present study compared the effects of 12 weeks of traditional resistance training and power training, using rated perceived exertion (RPE) to determine training intensity, on improvements in strength, muscle power, and the ability to perform functional tasks in older women. Thirty healthy elderly women (60-75 years) were randomly assigned to a traditional resistance training group (TRT; n = 15) or a power training group (PT; n = 15). Participants trained twice a week for 12 weeks using six exercises. The training protocol was designed to ascertain that participants exercised at an RPE of 13-18 (on a 6-20 scale). Maximal dynamic strength, muscle power, and functional performance of lower limb muscles were assessed. Maximal dynamic strength increased significantly with training for the leg press (≈58 %) and knee extension (≈20 %). Muscle power also increased with training (≈27 %), as did functional performance after the training period (≈13 %). Both training regimens were effective in improving maximal strength, muscle power, and functional performance of lower limbs in elderly women.

  7. Awareness and Self-Awareness for Multi-Robot Organisms

    OpenAIRE

    Kernbach, Serge

    2011-01-01

    Awareness and self-awareness are two different notions related to knowing the environment and oneself. In a general context, the mechanism of self-awareness belongs to a class of so-called "self-issues" (self-* or self-star): self-adaptation, self-repairing, self-replication, self-development or self-recovery. The self-* issues are connected in many ways to adaptability and evolvability, to the emergence of behavior and to the controllability of long-term developmental processes. Self-* are ei...

  8. Anatomy of event and human performance management in nuclear power plants

    International Nuclear Information System (INIS)

    Wang Jinhua

    2014-01-01

    This article analyzes the occurrence mechanism of events in nuclear power plants, and explains the four factors of human errors and the relations among them, then probes into the occurrence mechanism and characteristics of human errors in nuclear power plants. Moreover, the article clarifies that the principle of human performance training in nuclear power plants is all-member training, and that the implementation approach is to develop different human performance tools for different staff categories as workers, knowledge workers and supervisors, which are categorized based on characteristics of work of different staff. (author)

  9. Performance analysis of joint diversity combining, adaptive modulation, and power control schemes

    KAUST Repository

    Qaraqe, Khalid A.

    2011-01-01

    Adaptive modulation and diversity combining represent very important adaptive solutions for future generations of wireless communication systems. Indeed, in order to improve the performance and the efficiency of these systems, these two techniques have been recently used jointly in new schemes named joint adaptive modulation and diversity combining (JAMDC) schemes. Considering the problem of finding low hardware complexity, bandwidth-efficient, and processing-power efficient transmission schemes for a downlink scenario and capitalizing on some of these recently proposed JAMDC schemes, we propose and analyze in this paper three joint adaptive modulation, diversity combining, and power control (JAMDCPC) schemes where a constant-power variable-rate adaptive modulation technique is used with an adaptive diversity combining scheme and a common power control process. More specifically, the modulation constellation size, the number of combined diversity paths, and the needed power level are jointly determined to achieve the highest spectral efficiency with the lowest possible processing power consumption quantified in terms of the average number of combined paths, given the fading channel conditions and the required bit error rate (BER) performance. In this paper, the performance of these three JAMDCPC schemes is analyzed in terms of their spectral efficiency, processing power consumption, and error-rate performance. Selected numerical examples show that these schemes considerably increase the spectral efficiency of the existing JAMDC schemes with a slight increase in the average number of combined paths for the low signal-to-noise ratio range while maintaining compliance with the BER performance and a low radiated power which yields to a substantial decrease in interference to co-existing users and systems. © 2011 IEEE.
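
The joint selection described above — largest constellation, fewest combined paths, then just enough power to meet the BER target — can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the function names are invented, ideal maximal-ratio combining is assumed, and the M-QAM error rate uses the well-known approximation BER ≈ 0.2·exp(−1.5·SNR/(M−1)).

```python
import math

def ber_mqam(snr, M):
    """Approximate BER of M-QAM in AWGN at linear SNR (a standard
    exponential approximation, valid for moderate-to-high SNR)."""
    return 0.2 * math.exp(-1.5 * snr / (M - 1))

def jamdcpc_select(branch_snr, max_paths, target_ber, constellations=(64, 16, 4)):
    """Hypothetical JAMDCPC mode selection: pick the largest constellation
    and fewest combined paths meeting the BER target, then scale transmit
    power down to just meet it (the power-control step)."""
    for M in constellations:                  # prefer high spectral efficiency
        for L in range(1, max_paths + 1):     # prefer few combined paths
            combined = L * branch_snr         # ideal MRC: SNRs add
            if ber_mqam(combined, M) <= target_ber:
                # minimum combined SNR that still meets the target for this M
                needed = -(M - 1) / 1.5 * math.log(5 * target_ber)
                return M, L, needed / combined  # power back-off factor <= 1
    return None  # outage: no mode meets the target
```

For example, with a per-branch SNR of 10 (linear) and a 10⁻³ BER target over at most four branches, this sketch settles on 4-QAM with two combined paths and roughly half the nominal transmit power.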

  10. Development of High Performance Cooling Modules in Notebook PC's

    Science.gov (United States)

    Tanahashi, Kosei

    The CPU power consumption in Notebook PCs is increasing every year. Video chips and HDDs also continually use more power for higher performance. In addition, since miniaturization is desired, the mounting of components is becoming more and more dense. Accordingly, the cooling mechanisms are increasingly important: the cooling modules have to dissipate larger amounts of heat under the same environmental conditions. High-capacity cooling capability is therefore needed, while low cost and high reliability must be retained. Available cooling methods include air cooling, water cooling, and heat conduction. Air cooling transfers heat via a cooling fan, often in combination with a heat pipe. Water cooling uses water to carry heat to the back of the display, which offers a comparatively large cooling area. The heat conduction method transfers heat by thermal conduction to the case. This article describes the development of new and comparatively efficient cooling devices offering low cost and high reliability for the air cooling system. As one of the development techniques, the heat resistance and performance are measured for various parts and layouts, and each cooling system is evaluated in the same measurement environment. With regard to the fans, an optimal shape of the fan blades that maximizes air flow is found using CFD simulation, and prototypes were built and tested.

  11. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...

  12. Phonemic awareness as a pathway to number transcoding

    Directory of Open Access Journals (Sweden)

    Júlia Beatriz Lopes-Silva

    2014-01-01

    Full Text Available Although verbal and numerical abilities have a well-established interaction, the impact of phonological processing on numeric abilities remains elusive. The aim of this study is to investigate the role of phonemic awareness in number processing and to explore its association with other functions such as working memory and magnitude processing. One hundred seventy-two children in 2nd grade to 4th grade were evaluated in terms of their intelligence, number transcoding, phonemic awareness, verbal and visuospatial working memory and number sense (nonsymbolic magnitude comparison performance. All of the children had normal intelligence. Among these measurements of magnitude processing, working memory and phonemic awareness, only the last was retained in regression and path models predicting transcoding ability. Phonemic awareness mediated the influence of verbal working memory on number transcoding. The evidence suggests that phonemic awareness significantly affects number transcoding. Such an association is robust and should be considered in cognitive models of both dyslexia and dyscalculia.

  13. Susceptible-infected-recovered epidemics in random networks with population awareness

    Science.gov (United States)

    Wu, Qingchu; Chen, Shufang

    2017-10-01

    The influence of epidemic information-based awareness on the spread of infectious diseases on networks cannot be ignored. Within the effective degree modeling framework, we discuss the susceptible-infected-recovered model in complex networks with general awareness and general degree distribution. By performing a linear stability analysis, the conditions for epidemic outbreak can be deduced and the results of previous research can be further extended. Results show that local awareness can significantly suppress epidemic spreading on complex networks by raising the epidemic threshold, and such effects are closely related to the formulation of the awareness functions. In addition, our results suggest that recovered information-based awareness has no effect on the critical condition of epidemic outbreak.
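
A toy simulation can illustrate the qualitative claim that local awareness suppresses spreading by lowering the effective per-contact transmission probability. This is a discrete-time sketch on a crude random graph, not the paper's effective-degree model; the graph construction, parameter names and the awareness function are all illustrative assumptions.

```python
import random

def sir_with_awareness(n=500, k=6, beta=0.3, gamma=0.2,
                       awareness=0.5, steps=50, seed=1):
    """Discrete-time SIR on an ad-hoc random graph. A susceptible node's
    per-neighbour infection probability beta is damped by a factor
    (1 - awareness)**i, where i is its number of infectious neighbours:
    local awareness grows with locally visible infection. Returns the
    number of nodes ever infected (final epidemic size)."""
    rng = random.Random(seed)
    # Build a simple random graph: each node gets at least k random neighbours.
    adj = [set() for _ in range(n)]
    for u in range(n):
        while len(adj[u]) < k:
            v = rng.randrange(n)
            if v != u:
                adj[u].add(v)
                adj[v].add(u)
    state = ['S'] * n
    state[rng.randrange(n)] = 'I'          # single random index case
    for _ in range(steps):
        nxt = state[:]                      # synchronous update
        for u in range(n):
            if state[u] == 'S':
                i = sum(1 for v in adj[u] if state[v] == 'I')
                if i:
                    beta_eff = beta * (1 - awareness) ** i
                    if rng.random() < 1 - (1 - beta_eff) ** i:
                        nxt[u] = 'I'
            elif state[u] == 'I' and rng.random() < gamma:
                nxt[u] = 'R'
        state = nxt
    return state.count('I') + state.count('R')
```

Sweeping `awareness` from 0 toward 1 while holding the other parameters fixed shows the final size shrinking, which is the qualitative effect (a raised epidemic threshold) that the paper derives analytically.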

  14. Students multicultural awareness

    Directory of Open Access Journals (Sweden)

    F.I Soekarman

    2016-12-01

    Full Text Available Multicultural awareness is the foundation of communication: it involves the ability to stand back from ourselves and become aware of our cultural values, beliefs and perceptions. Multicultural awareness becomes central when we have to interact with people from other cultures. People see, interpret and evaluate things in different ways; what is considered appropriate behaviour in one culture is frequently inappropriate in another. This research uses a descriptive-quantitative methodology to identify students' level of multicultural awareness, examined specifically by gender and academic year. Random and purposive random sampling was used to select 650 university students. The study found multicultural awareness to be high in 11.4% of students, medium in 84.1% and low in 4.5%. Further, there is a significant difference in the level of multicultural awareness based on gender and academic year. These findings cannot be generalized because of the limited sample and ethnicity; wider research is needed before they can be generalized and before efforts to develop and improve multicultural awareness can be recommended for optimizing services.

  15. The Impact of Power Switching Devices on the Thermal Performance of a 10 MW Wind Power NPC Converter

    DEFF Research Database (Denmark)

    Ma, Ke; Blaabjerg, Frede

    2012-01-01

    Power semiconductor switching devices play an important role in the performance of high power wind energy generation systems. The state-of-the-art device choices in the wind power application as reported in the industry include IGBT modules, IGBT press-pack and IGCT press-pack. Because...

  16. Organization, structure, and performance in the US nuclear power industry

    International Nuclear Information System (INIS)

    Lester, R.K.

    1986-01-01

    Several propositions are advanced concerning the effects of industry organization and structure on the economic performance of the American commercial nuclear power industry. Both the electric utility industry and the nuclear power plant supply industry exhibit a relatively high degree of horizontal disaggregation; the latter is also characterized by an absence of vertical integration. The impact of each of these factors on construction and operating performance is discussed. Evidence is presented suggesting that the combination of horizontal and vertical disaggregation in the industry has had a significant adverse effect on economic performance. The relationship between industrial structure and regulatory behavior is also discussed. 43 references, 4 figures, 9 tables

  17. Validity of linear encoder measurement of sit-to-stand performance power in older people.

    Science.gov (United States)

    Lindemann, U; Farahmand, P; Klenk, J; Blatzonis, K; Becker, C

    2015-09-01

    To investigate construct validity of linear encoder measurement of sit-to-stand performance power in older people by showing associations with relevant functional performance and physiological parameters. Cross-sectional study. Movement laboratory of a geriatric rehabilitation clinic. Eighty-eight community-dwelling, cognitively unimpaired older women (mean age 78 years). Sit-to-stand performance power and leg power were assessed using a linear encoder and the Nottingham Power Rig, respectively. Gait speed was measured on an instrumented walkway. Maximum quadriceps and hand grip strength were assessed using dynamometers. Mid-thigh muscle cross-sectional area of both legs was measured using magnetic resonance imaging. Associations of sit-to-stand performance power with power assessed by the Nottingham Power Rig, maximum gait speed and muscle cross-sectional area were r=0.646, r=0.536 and r=0.514, respectively. A linear regression model explained 50% of the variance in sit-to-stand performance power including muscle cross-sectional area (p=0.001), maximum gait speed (p=0.002), and power assessed by the Nottingham Power Rig (p=0.006). Construct validity of linear encoder measurement of sit-to-stand power was shown at functional level and morphological level for older women. This measure could be used in routine clinical practice as well as in large-scale studies. DRKS00003622. Copyright © 2015 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
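
As an illustration of what a linear-encoder power measure involves, the sketch below derives velocity and acceleration from sampled vertical displacement and computes mechanical power on the centre of mass. The paper does not specify its algorithm; the function name and the model P = m·(g + a)·v are assumptions for illustration only.

```python
def sts_power(positions, dt, body_mass, g=9.81):
    """Estimate peak power (W) during a sit-to-stand transfer from
    linear-encoder position samples (metres, sampled every dt seconds).
    Velocity and acceleration come from finite differences; power is the
    force on the centre of mass times its velocity, P = m * (g + a) * v."""
    vel = [(positions[i + 1] - positions[i]) / dt
           for i in range(len(positions) - 1)]
    acc = [(vel[i + 1] - vel[i]) / dt for i in range(len(vel) - 1)]
    # Pair each acceleration sample with the velocity at the same instant.
    power = [body_mass * (g + a) * v for v, a in zip(vel[1:], acc)]
    return max(power)
```

For a 70 kg subject rising at a constant 1 m/s (zero acceleration), this reduces to m·g·v ≈ 687 W, which is a useful sanity check on the units.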

  18. Interoceptive awareness in experienced meditators.

    Science.gov (United States)

    Khalsa, Sahib S; Rudrauf, David; Damasio, Antonio R; Davidson, Richard J; Lutz, Antoine; Tranel, Daniel

    2008-07-01

    Attention to internal body sensations is practiced in most meditation traditions. Many traditions state that this practice results in increased awareness of internal body sensations, but scientific studies evaluating this claim are lacking. We predicted that experienced meditators would display performance superior to that of nonmeditators on heartbeat detection, a standard noninvasive measure of resting interoceptive awareness. We compared two groups of meditators (Tibetan Buddhist and Kundalini) to an age- and body mass index-matched group of nonmeditators. Contrary to our prediction, we found no evidence that meditators were superior to nonmeditators in the heartbeat detection task, across several sessions and respiratory modulation conditions. Compared to nonmeditators, however, meditators consistently rated their interoceptive performance as superior and the difficulty of the task as easier. These results provide evidence against the notion that practicing attention to internal body sensations, a core feature of meditation, enhances the ability to sense the heartbeat at rest.

  19. Relationship between strength, power and balance performance in seniors.

    Science.gov (United States)

    Muehlbauer, Thomas; Besemer, Carmen; Wehrle, Anja; Gollhofer, Albert; Granacher, Urs

    2012-01-01

    Deficits in strength, power and balance represent important intrinsic risk factors for falls in seniors. The purpose of this study was to investigate the relationship between variables of lower extremity muscle strength/power and balance, assessed under various task conditions. Twenty-four healthy and physically active older adults (mean age: 70 ± 5 years) were tested for their isometric strength (i.e. maximal isometric force of the leg extensors) and muscle power (i.e. countermovement jump height and power) as well as for their steady-state (i.e. unperturbed standing, 10-meter walk), proactive (i.e. Timed Up & Go test, Functional Reach Test) and reactive (i.e. perturbed standing) balance. Balance tests were conducted under single (i.e. standing or walking alone) and dual task conditions (i.e. standing or walking plus cognitive and motor interference task). Significant positive correlations were found between measures of isometric strength and muscle power of the lower extremities (r values ranged between 0.608 and 0.720). In contrast, hardly any significant correlations were observed between measures of strength/power and balance (i.e. no significant association in 20 out of 21 cases). Additionally, no significant correlations were found between measures of steady-state, proactive and reactive balance, or between balance tests performed under single and dual task conditions (all p > 0.05). The predominately nonsignificant correlations between different types of balance imply that balance performance is task specific in healthy and physically active seniors. Further, strength, power and balance as well as balance under single and dual task conditions seem to be independent of each other and may have to be tested and trained complementarily. Copyright © 2012 S. Karger AG, Basel.
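
The r values quoted above are Pearson product-moment correlations. For readers who want to reproduce such an analysis on their own strength/power/balance data, a minimal implementation is:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two
    equal-length sequences of measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance sum
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))           # sd sums
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```

A perfectly linear positive relationship gives r = 1, a perfectly inverse one r = -1; significance testing (the p values elided in the abstract) requires an additional step such as a t-test on r.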

  20. Does post-operative knee awareness differ between knees in bilateral simultaneous total knee arthroplasty? Predictors of high or low knee awareness

    DEFF Research Database (Denmark)

    Nielsen, Katrine Abildgaard; Thomsen, Morten Grove; Latifi, Roshan

    2016-01-01

    PURPOSE: To evaluate the difference in post-operative knee awareness between knees in patients undergoing bilateral simultaneous total knee arthroplasty (TKA) and to assess factors predicting high or low knee awareness. METHODS: This study was conducted on 99 bilateral simultaneous TKAs performed...... at our institution from 2008 to 2012. All patients received one set of questionnaires [Forgotten Joint Score (FJS) and Oxford Knee Score (OKS)] for each knee. Based on the FJS, the patients' knees were divided into two groups: "best" and "worst" knees. The median of the absolute difference in FJS and OKS...... within each patient was calculated. Multivariate linear regression was performed to identify factors affecting FJS. RESULTS: The difference between knees was 1 point (CI 0-5) for the FJS and 1 point (CI 0-2) for the OKS. The FJS for females increased (decreasing awareness) with increasing age. Males had...