WorldWideScience

Sample records for heterogeneous cloud workloads

  1. Evolutionary Multiobjective Query Workload Optimization of Cloud Data Warehouses

    Dokeroglu, Tansel; Sert, Seyyit Alper; Cinar, Muhammet Serkan

    2014-01-01

    With the advent of Cloud databases, query optimizers need to find Pareto-optimal solutions in terms of response time and monetary cost. Our novel approach minimizes both objectives by deploying alternative virtual resources and query plans, making use of the virtual resource elasticity of the Cloud. We propose an exact multiobjective branch-and-bound algorithm and a robust multiobjective genetic algorithm for the optimization of distributed data warehouse query workloads on the Cloud. To investigate the effectiveness of our approach, we incorporate the devised algorithms into a prototype system. Finally, through several experiments conducted with different workloads and virtual resource configurations, we report notable findings on alternative deployments as well as the advantages and disadvantages of the multiobjective algorithms we propose. PMID:24892048
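
    As an illustration of the Pareto-optimality criterion this record relies on, the following sketch (not the authors' implementation; all deployment labels and numbers are invented) filters candidate (query plan, virtual resource) deployments down to the non-dominated set over response time and monetary cost:

```python
# Illustrative sketch: Pareto filtering over two objectives (response time, cost).
from typing import List, Tuple

Candidate = Tuple[str, float, float]  # (deployment label, response_time_s, cost_usd)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if a is at least as good as b in both objectives and strictly better in one."""
    return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

if __name__ == "__main__":
    plans = [("2 small VMs", 120.0, 0.40),
             ("1 large VM", 80.0, 0.55),
             ("4 small VMs", 70.0, 0.80),
             ("1 small VM", 200.0, 0.20),
             ("2 large VMs", 85.0, 0.90)]  # dominated by "1 large VM"
    for label, t, c in pareto_front(plans):
        print(f"{label}: {t:.0f} s, ${c:.2f}")
```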

  2. Clean Energy Use for Cloud Computing Federation Workloads

    Yahav Biran

    2017-08-01

    Cloud providers seek to maximize their market share. Traditionally, they deploy datacenters with sufficient capacity to accommodate their entire computing demand while maintaining geographical affinity to their customers. Achieving these goals by a single cloud provider is increasingly unrealistic from a cost-of-ownership perspective. Moreover, underutilized datacenters place an increasing demand on electricity, and their carbon emissions are a growing factor in the cost of cloud provider datacenters. Cloud-based systems may be classified into two categories: serving systems and analytical systems. We studied two primary workload types, on-demand video streaming as a serving system and MapReduce jobs as an analytical system, and suggest two distinct energy-mix strategies for processing those workloads. The recognition that on-demand video streaming now constitutes the bulk of traffic to Internet consumers provides a path to mitigate rising energy demand. On-demand video is usually served through Content Delivery Networks (CDNs), often scheduled in backend and edge datacenters. This publication describes a CDN deployment solution that utilizes green energy to supply the on-demand streaming workload. A cross-cloud provider collaboration will allow cloud providers to both operate near their customers and reduce operational costs, primarily by lowering the datacenter deployments per provider ratio. Our approach optimizes cross-datacenter deployment. Specifically, we model an optimized CDN-edge instance allocation system that maximizes, under a set of realistic constraints, green energy utilization. The architecture of this cross-cloud coordinator service is based on Ubernetes, an open source container cluster manager that is a federation of Kubernetes clusters. It is shown how, under reasonable constraints, it can reduce the projected datacenter carbon emissions growth by 22% from the currently reported consumption. We also suggest operating

  3. Workload Classification & Software Energy Measurement for Efficient Scheduling on Private Cloud Platforms

    Smith, James W.; Sommerville, Ian

    2011-01-01

    At present there are a number of barriers to creating an energy-efficient workload scheduler for a Private Cloud-based data center. Firstly, the relationship between different workloads and power consumption must be investigated. Secondly, current hardware-based solutions to providing energy usage statistics are unsuitable in warehouse-scale data centers where low cost and scalability are desirable properties. In this paper we discuss the effect of different workloads on server power consumpt...

  4. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Yanbing Liu

    2014-01-01

    To resolve the imbalance of resources and workloads at data centers, as well as the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance among virtual machines, promoting improved utilization of resources in the entire data center.
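
    A minimal sketch of the threshold-driven migration trigger described above, assuming simple upper/lower utilization bounds and substituting a naive moving-average forecast for the paper's cloud-model time-series predictor; host names, histories, and thresholds are hypothetical:

```python
# Hypothetical WAM-style trigger: forecast per-host load, migrate from the most
# overloaded host to the least loaded one.
from statistics import mean

UPPER, LOWER = 0.85, 0.20  # assumed utilization bounds per host

def forecast(history, window=3):
    """Placeholder predictor: moving average of recent utilization samples."""
    return mean(history[-window:])

def select_migration(hosts):
    """Pick a source host whose predicted load exceeds UPPER and the least
    loaded destination, mirroring the strategy's outline."""
    predicted = {h: forecast(u) for h, u in hosts.items()}
    overloaded = [h for h, p in predicted.items() if p > UPPER]
    if not overloaded:
        return None
    source = max(overloaded, key=predicted.get)
    destination = min((h for h in predicted if h != source), key=predicted.get)
    return source, destination, predicted

if __name__ == "__main__":
    utilization = {"host-A": [0.70, 0.88, 0.93],
                   "host-B": [0.35, 0.30, 0.28],
                   "host-C": [0.55, 0.60, 0.58]}
    print(select_migration(utilization))
```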

  5. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; Buncic, P; De, K; Oleynik, D; Petrosyan, A; Jha, S; Mount, R; Porter, R J; Read, K F; Wells, J C; Vaniachine, A

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10^2) sites, O(10^5) cores, O(10^8) jobs per year, O(10^3) users, and ATLAS data volume is O(10^17) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center 'Kurchatov Institute' together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the

  6. Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds

    Li, Rui; Chen, Lei; Li, Wen-Syan

    Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. The current Hadoop (and many other systems) requires users to configure cloud infrastructures via programs and APIs, and such configuration is fixed during runtime. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job as well as for a workload consisting of multiple jobs running concurrently, aiming at maximum throughput using a minimum set of processors.

  7. Hipster: hybrid task manager for latency-critical cloud workloads

    Nishtala, Rajiv; Carpenter, Paul M.; Petrucci, Vinicius; Martorell Bofill, Xavier

    2017-01-01

    In 2013, U.S. data centers accounted for 2.2% of the country's total electricity consumption, a figure that is projected to increase rapidly over the next decade. Many important workloads are interactive, and they demand strict levels of quality-of-service (QoS) to meet user expectations, making it challenging to reduce power consumption due to increasing performance demands. This paper introduces Hipster, a technique that combines heuristics and reinforcement learning to manage latency-crit...

  8. A Holistic Approach for Collaborative Workload Execution in Volunteer Clouds

    Sebastio, Stefano; Amoretti, Michele; Lluch Lafuente, Alberto

    2018-01-01

    The demand for provisioning, using, and maintaining distributed computational resources is growing hand in hand with the quest for ubiquitous services. Centralized infrastructures such as cloud computing systems provide suitable solutions for many applications, but their scalability could be limi...

  9. Online Cloud Offloading Using Heterogeneous Enhanced Remote Radio Heads

    Shnaiwer, Yousef N.; Sorour, Sameh; Sadeghi, Parastoo; Al-Naffouri, Tareq Y.

    2018-01-01

    This paper studies the cloud offloading gains of using heterogeneous enhanced remote radio heads (eRRHs) and dual-interface clients in fog radio access networks (F-RANs). First, the cloud offloading problem is formulated as a collection

  10. Coarse-Grain QoS-Aware Dynamic Instance Provisioning for Interactive Workload in the Cloud

    Jianxiong Wan

    2014-01-01

    The cloud computing paradigm provides Internet service providers (ISPs) with a new approach to deliver their services at lower cost. ISPs can rent virtual machines from the Infrastructure-as-a-Service (IaaS) provided by the cloud rather than purchasing them. In addition, commercial cloud providers (CPs) offer diverse VM instance rental services in various time granularities, which provide another opportunity for ISPs to reduce cost. We investigate a Coarse-grain QoS-aware Dynamic Instance Provisioning (CDIP) problem for interactive workload in the cloud from the perspective of ISPs. We formulate the CDIP problem as an optimization problem where the objective is to minimize the VM instance rental cost and the constraint is the percentile delay bound. Since Internet traffic shows a strong self-similar property, it is hard to get an analytical form of the percentile delay constraint. To address this issue, we propose a lookup table structure together with a learning algorithm to estimate the performance of the instance provisioning policy. This approach is further extended with two function approximations to enhance the scalability of the learning algorithm. We also present an efficient dynamic instance provisioning algorithm, which takes full advantage of the rental service diversity, to determine the instance rental policy. Extensive simulations are conducted to validate the effectiveness of the proposed algorithms.
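
    The lookup-table idea can be sketched as follows, under the assumption of a single load bucketing, a running-average delay estimate, and invented prices and bounds; this is not the paper's exact learning algorithm:

```python
# Hypothetical lookup table: (load bucket, instance count) -> estimated delay,
# updated online and used to rent the cheapest count meeting the delay bound.
from collections import defaultdict

ALPHA = 0.1          # learning rate for the running estimate
DELAY_BOUND = 0.200  # assumed percentile delay bound, seconds
HOURLY_PRICE = 0.05  # assumed price per instance-hour

table = defaultdict(lambda: float("inf"))

def update(load_bucket: int, n_instances: int, observed_delay: float) -> None:
    """Exponentially weighted update of the estimated delay."""
    key = (load_bucket, n_instances)
    old = table[key]
    table[key] = observed_delay if old == float("inf") else (1 - ALPHA) * old + ALPHA * observed_delay

def provision(load_bucket: int, max_instances: int = 20) -> int:
    """Cheapest instance count whose estimated delay meets the bound."""
    for n in range(1, max_instances + 1):
        if table[(load_bucket, n)] <= DELAY_BOUND:
            return n
    return max_instances  # no estimate meets the bound yet; be conservative

if __name__ == "__main__":
    update(load_bucket=3, n_instances=4, observed_delay=0.30)
    update(load_bucket=3, n_instances=6, observed_delay=0.15)
    n = provision(load_bucket=3)
    print(f"rent {n} instances, cost {n * HOURLY_PRICE:.2f} $/h")
```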

  11. Online Cloud Offloading Using Heterogeneous Enhanced Remote Radio Heads

    Shnaiwer, Yousef N.

    2018-02-12

    This paper studies the cloud offloading gains of using heterogeneous enhanced remote radio heads (eRRHs) and dual-interface clients in fog radio access networks (F-RANs). First, the cloud offloading problem is formulated as a collection of independent sets selection problem over a network coding graph, and its NP-hardness is shown. Therefore, a computationally simple online heuristic algorithm is proposed that maximizes cloud offloading by finding an efficient schedule of coded file transmissions from the eRRHs and the cloud base station (CBS). Furthermore, a lower bound on the average number of required CBS channels to serve all clients is derived. Simulation results show that our proposed framework, which uses both network coding and a heterogeneous F-RAN setting, enhances cloud offloading as compared to conventional homogeneous F-RANs with network coding.

  12. Cloud-Based Parameter-Driven Statistical Services and Resource Allocation in a Heterogeneous Platform on Enterprise Environment

    Sungju Lee

    2016-09-01

    A cloud-based, parameter-driven statistical service is fundamental for enterprise users and has had a substantial impact on companies worldwide. In this paper, we demonstrate statistical analysis for certain data-related criteria applied to the cloud server for a comparison of results. In addition, we present a statistical analysis and cloud-based resource allocation method for a heterogeneous platform environment by performing a data and information analysis with consideration of the application workload and the server capacity, and subsequently propose a service prediction model using polynomial regression. In particular, our aim is to provide stable service in a given large-scale enterprise cloud computing environment. The virtual machines (VMs) for cloud-based services are assigned to each server with a special methodology to satisfy the uniform utilization distribution model. It is also implemented between users and the platform, which is a main idea of our cloud computing system. Based on the experimental results, we confirm that our prediction model can provide sufficient resources for statistical services to large-scale users while satisfying the uniform utilization distribution.
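
    A hedged sketch of the polynomial-regression prediction step, using numpy's polyfit in place of whatever fitting procedure the paper actually employs; the workload/utilization samples and the sizing rule are invented:

```python
# Fit a low-degree polynomial mapping workload to CPU utilization, then size VMs.
import numpy as np

# Hypothetical training data: workload level vs. observed CPU utilization (%).
workload = np.array([100, 200, 400, 800, 1200, 1600])
cpu_util = np.array([8.0, 15.0, 31.0, 58.0, 79.0, 96.0])

coeffs = np.polyfit(workload, cpu_util, deg=2)   # fit a degree-2 polynomial
model = np.poly1d(coeffs)

predicted = model(1000)                          # predict utilization at load 1000
vms_needed = int(np.ceil(predicted / 70.0))      # assume a 70% utilization target per VM
print(f"predicted CPU: {predicted:.1f}%, VMs to allocate: {vms_needed}")
```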

  13. Resource allocation in heterogeneous cloud radio access networks: advances and challenges

    Dahrouj, Hayssam; Douik, Ahmed S.; Dhifallah, Oussama Najeeb; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    , becomes a necessity. By connecting all the base stations from different tiers to a central processor (referred to as the cloud) through wire/wireline backhaul links, the heterogeneous cloud radio access network, H-CRAN, provides an open, simple

  14. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model to run on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  15. Services Recommendation System based on Heterogeneous Network Analysis in Cloud Computing

    Junping Dong; Qingyu Xiong; Junhao Wen; Peng Li

    2014-01-01

    Resources are provided mainly in the form of services in cloud computing. In this distributed environment, finding the needed services efficiently and accurately is an urgent problem. Services are the intermediary of the cloud platform; they are connected by many service providers and requesters and construct a complex heterogeneous network. The traditional recommendation systems only consider the functional and non-functi...

  16. Towards Media Intercloud Standardization Evaluating Impact of Cloud Storage Heterogeneity

    Aazam, Mohammad; St-Hilaire, Marc; Huh, Eui-Nam

    2016-01-01

    Digital media has been increasing very rapidly, contributing to cloud computing's gain in popularity. Cloud computing provides ease of management of large amounts of data and resources. With many devices communicating over the Internet and with rapidly increasing user demands, solitary clouds have to communicate with other clouds to fulfill the demands and discover services elsewhere. This scenario is called intercloud computing or cloud federation. Intercloud computing still lacks standard ar...

  17. Heterogeneous Data Storage Management with Deduplication in Cloud Computing

    Yan, Zheng; Zhang, Lifang; Ding, Wenxiu; Zheng, Qinghua

    2017-01-01

    Cloud storage as one of the most important services of cloud computing helps cloud users break the bottleneck of restricted resources and expand their storage without upgrading their devices. In order to guarantee the security and privacy of cloud users, data are always outsourced in an encrypted form. However, encrypted data could incur much waste of cloud storage and complicate data sharing among authorized users. We are still facing challenges on encrypted data storage and management with ...

  18. Workload Balancing on Heterogeneous Systems: A Case Study of Sparse Grid Interpolation

    Muraraşu, Alin

    2012-01-01

    Multi-core parallelism and accelerators are becoming common features of today’s computer systems, as they allow for computational power without sacrificing energy efficiency. Due to heterogeneity, tuning for each type of compute unit and adequate load balancing is essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression. Its performance critical part is the interpolation routine used for decompression. Results show that our load balancing scheme allows for an efficient acceleration of interpolation on heterogeneous systems containing multi-core CPUs and GPUs.
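
    The dynamic load-balancing idea can be illustrated with a shared work queue from which simulated CPU and GPU workers pull chunks as they become idle, so the faster device naturally takes a larger share; the interpolation kernel is replaced by a sleep and the device speeds are made-up numbers, not the paper's measurements:

```python
# Dynamic balancing sketch: idle workers pull the next chunk from a shared queue.
import queue
import threading
import time

chunks = queue.Queue()
for chunk_id in range(32):
    chunks.put(chunk_id)

processed = {"cpu": 0, "gpu": 0}

def worker(name: str, seconds_per_chunk: float) -> None:
    while True:
        try:
            chunks.get_nowait()
        except queue.Empty:
            return
        time.sleep(seconds_per_chunk)   # stands in for the interpolation kernel
        processed[name] += 1

threads = [threading.Thread(target=worker, args=("cpu", 0.02)),
           threading.Thread(target=worker, args=("gpu", 0.005))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(processed)   # the faster "gpu" worker ends up with most of the chunks
```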

  19. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    X. Shi

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  20. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may have different behavior on different computing infrastructure and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful on certain kind of spatiotemporal computation. This is the same situation in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, Field Programmable Gate Array (FPGA) may be a better solution for better energy efficiency when the performance of computation could be similar or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates all of GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  1. Development of a Survivable Cloud Multi-Robot Framework for Heterogeneous Environments

    Isaac Osunmakinde

    2014-10-01

    Cloud robotics is a paradigm that allows for robots to offload computationally intensive and data storage requirements into the cloud by providing a secure and customizable environment. The challenge for cloud robotics is the inherent problem of cloud disconnection. A major assumption made in the development of the current cloud robotics frameworks is that the connection between the cloud and the robot is always available. However, for multi-robots working in heterogeneous environments, the connection between the cloud and the robots cannot always be guaranteed. This work serves to assist with the challenge of disconnection in cloud robotics by proposing a survivable cloud multi-robotics (SCMR) framework for heterogeneous environments. The SCMR framework leverages the combination of a virtual ad hoc network formed by robot-to-robot communication and a physical cloud infrastructure formed by robot-to-cloud communications. The quality of service (QoS) on the SCMR framework was tested and validated by determining the optimal energy utilization and time of response (ToR) on drivability analysis with and without cloud connection. The design trade-off, reflected in the results, is between the computation energy for robot execution and the offloading energy for cloud execution.

  2. Service workload patterns for QoS-driven cloud resource management

    Zhang, Li; Zhang, Yichuan; Jamshidi, Pooyan; Xu, Lei; Pahl, Claus

    2015-01-01

    Cloud service providers negotiate SLAs for customer services they offer based on the reliability of performance and availability of their lower-level platform infrastructure. While availability management is more mature, performance management is less reliable. To support a continuous approach covering the initial static infrastructure configuration as well as dynamic reconfiguration and auto-scaling, an accurate and efficient solution is required. We propose a prediction techni...

  3. The impact of horizontal heterogeneities, cloud fraction, and cloud dynamics on warm cloud effective radii and liquid water path from CERES-like Aqua MODIS retrievals

    D. Painemal; P. Minnis; S. Sun-Mack

    2013-01-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES Edition 4 algorithms are averaged at the CERES footprint resolution (~ 20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean...

  4. Leveraging Cloud Heterogeneity for Cost-Efficient Execution of Parallel Applications

    Roloff, Eduardo; Diener, Matthias; Diaz Carreño, Emmanuell; Gaspary, Luciano Paschoal; Navaux, Philippe O.A.

    2017-01-01

    Public cloud providers offer a wide range of instance types, with different processing and interconnection speeds, as well as varying prices. Furthermore, the tasks of many parallel applications show different computational demands due to load imbalance. These differences can be exploited for improving the cost efficiency of parallel applications in many cloud environments by matching application requirements to instance types. In this paper, we introduce the concept of heterogeneous cloud sy...

  5. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can address the limited scalability of centralized data. In order to reduce the generation of intermediate data and error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets under a cloud platform is proposed. The algorithm performs eigenvalue processing by using Householder tridiagonalization and QR factorization to calculate the error component of the heterogeneous database associated with the public key, obtaining the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
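
    For orientation only, a plain single-node PCA via covariance eigendecomposition (numpy's eigh relies internally on Householder-style reductions); the distributed, error-component handling described in the abstract is not reproduced, and the data partitions are synthetic:

```python
# Single-node PCA sketch over two synthetic "heterogeneous" partitions.
import numpy as np

rng = np.random.default_rng(0)
part_a = rng.normal(0.0, 1.0, size=(100, 5))   # hypothetical partition A
part_b = rng.normal(5.0, 3.0, size=(80, 5))    # hypothetical partition B, different scale
data = np.vstack([part_a, part_b])

centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]               # sort descending
components = eigvecs[:, order[:2]]              # top-2 principal components
projected = centered @ components

explained = eigvals[order[:2]] / eigvals.sum()
print("explained variance ratio:", np.round(explained, 3))
```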

  6. Parameterizing the competition between homogeneous and heterogeneous freezing in ice cloud formation – polydisperse ice nuclei

    D. Barahona

    2009-08-01

    This study presents a comprehensive ice cloud formation parameterization that computes the ice crystal number, size distribution, and maximum supersaturation from precursor aerosol and ice nuclei. The parameterization provides an analytical solution of the cloud parcel model equations and accounts for the competition between homogeneous and heterogeneous freezing, and between heterogeneous freezing in different modes. The diversity of heterogeneous nuclei is described through a nucleation spectrum function which is allowed to follow any form (i.e., derived from classical nucleation theory or from observations). The parameterization reproduces the predictions of a detailed numerical parcel model over a wide range of conditions, and several expressions for the nucleation spectrum. The average error in ice crystal number concentration was −2.0±8.5% for conditions of pure heterogeneous freezing, and 4.7±21% when both homogeneous and heterogeneous freezing were active. The formulation presented is fast and free from requirements of numerical integration.

  7. The impact of horizontal heterogeneities, cloud fraction, and cloud dynamics on warm cloud effective radii and liquid water path from CERES-like Aqua MODIS retrievals

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-05-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES Edition 4 algorithms are averaged at the CERES footprint resolution (~ 20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8 - re2.1 differences are positive for small Hσ and LWP above 50 g m-2, and negative (up to -4 μm) for larger Hσ. Thus, re3.8 - re2.1 differences are more likely to reflect biases associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.
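
    The heterogeneity index used in these retrievals reduces to a one-line computation, the ratio of the standard deviation to the mean of the 0.64 μm reflectance within a footprint; a small sketch with invented reflectance values:

```python
# H_sigma = std / mean of the 0.64 um reflectance within a footprint (values made up).
import numpy as np

reflectance_064um = np.array([0.42, 0.47, 0.51, 0.38, 0.45, 0.60, 0.35, 0.49])
h_sigma = reflectance_064um.std() / reflectance_064um.mean()
print(f"H_sigma = {h_sigma:.3f}")   # larger values indicate a more heterogeneous scene
```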

  8. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concept of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets

  9. Heuristic Data Placement for Data-Intensive Applications in Heterogeneous Cloud

    Qing Zhao

    2016-01-01

    Data placement is an important issue that aims at reducing the cost of inter-node data transfers in the cloud, especially for data-intensive applications, in order to improve the performance of the entire cloud system. This paper proposes an improved data placement algorithm for heterogeneous cloud environments. In the initialization phase, a data clustering algorithm based on data dependency clustering and recursive partitioning is presented, incorporating both data size and fixed-position constraints. Then a heuristic tree-to-tree data placement strategy is applied so that frequent data movements occur on high-bandwidth channels. Simulation results show that, compared with two classical strategies, this strategy can effectively reduce the amount of data transmission and its time consumption during execution.
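
    A heavily simplified, hypothetical sketch of dependency-driven placement: datasets frequently used together are greedily co-located subject to capacity, standing in for the paper's recursive partitioning and tree-to-tree matching; all names, sizes, and capacities are invented:

```python
# Greedy co-location of strongly dependent datasets (toy stand-in for the paper's method).
from itertools import combinations

tasks = {"t1": {"d1", "d2"}, "t2": {"d1", "d2", "d3"}, "t3": {"d3", "d4"}, "t4": {"d4", "d5"}}
sizes = {"d1": 2, "d2": 3, "d3": 4, "d4": 2, "d5": 1}   # dataset sizes (TB)
capacity = {"dc1": 6, "dc2": 6}                          # per-data-center capacity (TB)

# Count how often each pair of datasets appears in the same task.
dependency = {}
for used in tasks.values():
    for a, b in combinations(sorted(used), 2):
        dependency[(a, b)] = dependency.get((a, b), 0) + 1

placement = {}
used_space = {dc: 0 for dc in capacity}

# Place the most strongly dependent pairs first, keeping partners together when possible.
for (a, b), _count in sorted(dependency.items(), key=lambda kv: -kv[1]):
    for d in (a, b):
        if d in placement:
            continue
        partner = b if d == a else a
        preferred = [placement[partner]] if partner in placement else []
        fallback = sorted(capacity, key=lambda dc: used_space[dc])
        for dc in preferred + fallback:
            if used_space[dc] + sizes[d] <= capacity[dc]:
                placement[d] = dc
                used_space[dc] += sizes[d]
                break

print(placement)
```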

  10. Heterogeneous condensation of ice mantle around silicate core grain in molecular cloud

    Hasegawa, H.

    1984-01-01

    Interstellar water ice grains are observed in the cold and dense regions such as molecular clouds, HII regions and protostellar objects. The water ice is formed from gas phase during the cooling stage of cosmic gas with solid grain surfaces of high temperature silicate minerals. It is a question whether the ice is formed through the homogeneous condensation process (as the ice alone) or the heterogeneous one (as the ice around the pre-existing high temperature mineral grains). (author)

  11. Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment

    Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James

    2013-01-01

    In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...

  12. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-01-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. ...

  13. Heterogeneous Formation of Polar Stratospheric Clouds - Part 1: Nucleation of Nitric Acid Trihydrate (NAT)

    Hoyle, C. R.; Engel, I.; Luo, B. P.; Pitts, M. C.; Poole, L. R.; Grooss, J.-U.; Peter, T.

    2013-01-01

    Satellite-based observations during the Arctic winter of 2009/2010 provide firm evidence that, in contrast to the current understanding, the nucleation of nitric acid trihydrate (NAT) in the polar stratosphere does not only occur on preexisting ice particles. In order to explain the NAT clouds observed over the Arctic in mid-December 2009, a heterogeneous nucleation mechanism is required, occurring via immersion freezing on the surface of solid particles, likely of meteoritic origin. For the first time, a detailed microphysical modelling of this NAT formation pathway has been carried out. Heterogeneous NAT formation was calculated along more than sixty thousand trajectories, ending at Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) observation points. Comparing the optical properties of the modelled NAT with these observations enabled a thorough validation of a newly developed NAT nucleation parameterisation, which has been built into the Zurich Optical and Microphysical box Model (ZOMM). The parameterisation is based on active site theory, is simple to implement in models and provides substantial advantages over previous approaches which involved a constant rate of NAT nucleation in a given volume of air. It is shown that the new method is capable of reproducing observed polar stratospheric clouds (PSCs) very well, despite the varied conditions experienced by air parcels travelling along the different trajectories. In a companion paper, ZOMM is applied to a later period of the winter, when ice PSCs are also present, and it is shown that the observed PSCs are also represented extremely well under these conditions.

  14. Heterogeneous ice nucleation activity of bacteria: new laboratory experiments at simulated cloud conditions

    O. Möhler

    2008-10-01

    The ice nucleation activities of five different Pseudomonas syringae, Pseudomonas viridiflava and Erwinia herbicola bacterial species and of Snomax™ were investigated in the temperature range between −5 and −15°C. Water suspensions of these bacteria were directly sprayed into the cloud chamber of the AIDA facility of Forschungszentrum Karlsruhe at a temperature of −5.7°C. At this temperature, about 1% of the Snomax™ cells induced immersion freezing of the spray droplets before the droplets evaporated in the cloud chamber. The living cells did not induce any detectable immersion freezing in the spray droplets at −5.7°C. After evaporation of the spray droplets the bacterial cells remained as aerosol particles in the cloud chamber and were exposed to typical cloud formation conditions in experiments with expansion cooling to about −11°C. During these experiments, the bacterial cells first acted as cloud condensation nuclei to form cloud droplets. Then, only a minor fraction of the cells acted as heterogeneous ice nuclei either in the condensation or the immersion mode. The results indicate that the bacteria investigated in the present study are mainly ice active in the temperature range between −7 and −11°C with an ice nucleation (IN) active fraction of the order of 10^−4. In agreement with previous literature results, the ice nucleation efficiency of Snomax™ cells was much larger, with an IN active fraction of 0.2 at temperatures around −8°C.

  15. Parameterizing the competition between homogeneous and heterogeneous freezing in cirrus cloud formation – monodisperse ice nuclei

    D. Barahona

    2009-01-01

    We present a parameterization of cirrus cloud formation that computes the ice crystal number and size distribution under the presence of homogeneous and heterogeneous freezing. The parameterization is very simple to apply and is derived from the analytical solution of the cloud parcel equations, assuming that the ice nuclei population is monodisperse and chemically homogeneous. In addition to the ice distribution, an analytical expression is provided for the limiting ice nuclei number concentration that suppresses ice formation from homogeneous freezing. The parameterization is evaluated against a detailed numerical parcel model, and reproduces numerical simulations over a wide range of conditions with an average error of 6±33%. The parameterization also compares favorably against other formulations that require some form of numerical integration.

  16. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Qi Qi

    2016-01-01

    Mobile cloud computing (MCC), which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When a complicated multitask application is offloaded to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to user mobility, the provided resources exhibit different performance characteristics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, choosing the appropriate offload destination and orchestrating the resources for multiple tasks is a challenging problem. This paper realizes programmable resource provisioning for heterogeneous, energy-constrained computing environments, where a software-defined controller is responsible for resource orchestration, offloading, and migration. The resource orchestration is formulated as a multiobjective optimization problem over the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain the approximate optimal solutions. Simulation results show that the solutions for all of our studied cases almost can hit Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.
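
    A minimal particle swarm optimization sketch in the spirit of the orchestration step above: particles encode a candidate load split across sites, and the fitness is an assumed weighted sum of energy, cost, and unavailability; the objective terms, weights, and bounds are illustrative placeholders, not the paper's model:

```python
# Toy PSO over a 3-site load split with a weighted-sum fitness.
import random

random.seed(1)
DIM, N_PARTICLES, ITERS = 3, 20, 60           # fractions of load sent to 3 hypothetical sites
W, C1, C2 = 0.7, 1.5, 1.5                      # standard PSO coefficients

def fitness(x):
    # Hypothetical per-site energy/cost/availability characteristics.
    energy = sum(f * e for f, e in zip(x, (1.0, 0.6, 0.9)))
    cost = sum(f * c for f, c in zip(x, (0.4, 0.9, 0.5)))
    unavail = sum(f * u for f, u in zip(x, (0.02, 0.05, 0.01)))
    penalty = abs(sum(x) - 1.0) * 10.0         # fractions should sum to 1
    return 0.4 * energy + 0.4 * cost + 0.2 * unavail + penalty

def clamp(v):
    return min(1.0, max(0.0, v))

pos = [[random.random() for _ in range(DIM)] for _ in range(N_PARTICLES)]
vel = [[0.0] * DIM for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)

for _ in range(ITERS):
    for i in range(N_PARTICLES):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = clamp(pos[i][d] + vel[i][d])
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
            if fitness(pbest[i]) < fitness(gbest):
                gbest = pbest[i][:]

print("best split:", [round(v, 3) for v in gbest], "fitness:", round(fitness(gbest), 4))
```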

  17. Resource allocation in heterogeneous cloud radio access networks: advances and challenges

    Dahrouj, Hayssam

    2015-06-01

    Base station densification is increasingly used by network operators to provide better throughput and coverage performance to mobile subscribers in dense data traffic areas. Such densification is progressively diffusing the move from traditional macrocell base stations toward heterogeneous networks with diverse cell sizes (e.g., microcell, picocell, femtocell) and diverse radio access technologies (e.g., GSM, CDMA, and LTE). The coexistence of the different network entities brings an additional set of challenges, particularly in terms of the provisioning of high-speed communications and the management of wireless interference. Resource sharing between different entities, largely incompatible in conventional systems due to the lack of interconnections, becomes a necessity. By connecting all the base stations from different tiers to a central processor (referred to as the cloud) through wire/wireline backhaul links, the heterogeneous cloud radio access network, H-CRAN, provides an open, simple, controllable, and flexible paradigm for resource allocation. This article discusses challenges and recent developments in H-CRAN design. It proposes promising resource allocation schemes in H-CRAN: coordinated scheduling, hybrid backhauling, and multicloud association. Simulation results show how the proposed strategies provide appreciable performance improvement compared to methods from recent literature. © 2015 IEEE.

  18. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-10-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8-re2.1 differences are positive for small Hσ and LWP above 45 g m-2, and negative (up to -4 μm) for larger Hσ. While re3.8-re2.1 differences in homogeneous scenes are qualitatively consistent with in situ microphysical observations over the region of study, negative differences - particularly evinced in mean regional maps - are more likely to reflect the dominant bias associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.

  19. Contributions of Heterogeneous Ice Nucleation, Large-Scale Circulation, and Shallow Cumulus Detrainment to Cloud Phase Transition in Mixed-Phase Clouds with NCAR CAM5

    Liu, X.; Wang, Y.; Zhang, D.; Wang, Z.

    2016-12-01

    Mixed-phase clouds consisting of both liquid and ice water occur frequently at high latitudes and in mid-latitude storm track regions. This type of cloud has been shown to play a critical role in the surface energy balance, surface air temperature, and sea ice melting in the Arctic. Cloud phase partitioning between liquid and ice water determines the cloud optical depth of mixed-phase clouds because of the distinct optical properties of liquid and ice hydrometeors. The representation and simulation of cloud phase partitioning in state-of-the-art global climate models (GCMs) are associated with large biases. In this study, the cloud phase partitioning in mixed-phase clouds simulated from the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against satellite observations. Observation-based supercooled liquid fraction (SLF) is calculated from CloudSat, MODIS and CPR radar detected liquid and ice water paths for clouds with cloud-top temperatures between -40 and 0°C. Sensitivity tests with CAM5 are conducted for different heterogeneous ice nucleation parameterizations with respect to aerosol influence (Wang et al., 2014), different phase transition temperatures for detrained cloud water from shallow convection (Kay et al., 2016), and different CAM5 model configurations (free-run versus nudged winds and temperature, Zhang et al., 2015). A classical nucleation theory-based ice nucleation parameterization in mixed-phase clouds increases the SLF especially at temperatures colder than -20°C, and significantly improves the model agreement with observations in the Arctic. The change of transition temperature for detrained cloud water increases the SLF at higher temperatures and improves the SLF mostly over the Southern Ocean. Even with the improved SLF from the ice nucleation and shallow cumulus detrainment, the low SLF biases in some regions can only be improved through the improved circulation with the nudging technique. Our study highlights the challenges of

  20. Impacts of Subgrid Heterogeneous Mixing between Cloud Liquid and Ice on the Wegener-Bergeron-Findeisen Process and Mixed-phase Clouds in NCAR CAM5

    Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.

    2017-12-01

    Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) with the single-column and weather forecast configurations and evaluate the model performance against observation data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. We find that, like most global climate models, CAM5 poorly simulates the phase partitioning in mixed-phase clouds, significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible solution to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in the CAM5 model has been achieved by applying a stochastic perturbation to the time scale of the WBF process, relevant to both ice and snow, to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced in the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of perturbations.

  1. Application of physical adsorption thermodynamics to heterogeneous chemistry on polar stratospheric clouds

    Elliott, Scott; Turco, Richard P.; Toon, Owen B.; Hamill, Patrick

    1991-01-01

    Laboratory isotherms for the binding of several nonheterogeneously active atmospheric gases and for HCl to water ice are translated into adsorptive equilibrium constants and surface enthalpies. Extrapolation to polar conditions through the Clausius Clapeyron relation yields coverage estimates below the percent level for N2, Ar, CO2, and CO, suggesting that the crystal faces of type II stratospheric cloud particles may be regarded as clean with respect to these species. For HCl, and perhaps HF and HNO3, estimates rise to several percent, and the adsorbed layer may offer acid or proton sources alternate to the bulk solid for heterogeneous reactions with stratospheric nitrates. Measurements are lacking for many key atmospheric molecules on water ice, and almost entirely for nitric acid trihydrate as substrate. Adsorptive equilibria enter into gas to particle mass flux descriptions, and the binding energy determines rates for desorption of, and encounter between, potential surface reactants.

  2. Cirrus cloud mimic surfaces in the laboratory: organic acids, bases and NOx heterogeneous reactions

    Sodeau, J.; Oriordan, B.

    2003-04-01

    There are a variety of biogenic and anthropogenic sources for the simple carboxylic acids to be found in the troposphere, giving rise to levels as high as 45 ppb in certain urban areas. In this regard it is of note that ants of genus Formica produce some 10 Tg of formic acid each year; some ten times that produced by industry. The expected sinks are those generally associated with tropospheric chemistry, the major routes studied to date being wet and dry deposition. No studies have been carried out hitherto on the role of water-ice surfaces in the atmospheric chemistry of carboxylic acids, and the purpose of this paper is to indicate their potential function in the heterogeneous release of atmospheric species such as HONO. The deposition of formic acid on a water-ice surface was studied using FT-RAIR spectroscopy over a range of temperatures between 100 and 165 K. In all cases ionization to the formate (and oxonium) ions was observed. The results were confirmed by TPD (Temperature Programmed Desorption) measurements, which indicated that two distinct surface species adsorb to the ice. Potential reactions between the formic acid/formate ion surface and nitrogen dioxide were subsequently investigated by FT-RAIRS. Co-deposition experiments showed that N2O3 and the NO+ ion (associated with water) were formed as products. A mechanism is proposed to explain these results, which involves direct reaction between the organic acid and nitrogen dioxide. Similar experiments involving acetic acid also indicate ionization on a water-ice surface. The results are put into the context of atmospheric chemistry potentially occurring on cirrus cloud surfaces.

  3. Sophisticated Online Learning Scheme for Green Resource Allocation in 5G Heterogeneous Cloud Radio Access Networks

    Alqerm, Ismail

    2018-01-23

    5G is the upcoming evolution for the current cellular networks that aims at satisfying the future demand for data services. Heterogeneous cloud radio access networks (H-CRANs) are envisioned as a new trend of 5G that exploits the advantages of heterogeneous and cloud radio access networks to enhance spectral and energy efficiency. Remote radio heads (RRHs) are small cells utilized to provide high data rates for users with high quality of service (QoS) requirements, while a high power macro base station (BS) is deployed for coverage maintenance and low QoS users service. Inter-tier interference between macro BSs and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRANs. Therefore, we propose an efficient resource allocation scheme using online learning, which mitigates interference and maximizes energy efficiency while maintaining QoS requirements for all users. The resource allocation includes resource blocks (RBs) and power. The proposed scheme is implemented using two approaches: centralized, where the resource allocation is processed at a controller integrated with the baseband processing unit, and decentralized, where macro BSs cooperate to achieve an optimal resource allocation strategy. To foster the performance of such a sophisticated scheme with model-free learning, we consider users' priority in RB allocation and a compact state representation learning methodology to improve the speed of convergence and account for the curse of dimensionality during the learning process. The proposed scheme, including both approaches, is implemented using a software defined radios testbed. The obtained results and simulation results confirm that the proposed resource allocation solution in H-CRANs increases the energy efficiency significantly and maintains users' QoS.

  4. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    D. Painemal

    2013-10-01

    The impact of horizontal heterogeneities, liquid water path (LWP, from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8–re2.1 differences are positive for small Hσ and LWP above 45 g m−2, and negative (up to −4 μm) for larger Hσ. While re3.8–re2.1 differences in homogeneous scenes are qualitatively consistent with in situ microphysical observations over the region of study, negative differences – particularly evinced in mean regional maps – are more likely to reflect the dominant bias associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.

  5. A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds

    Li Pan

    2016-03-01

    Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs). Additionally, in order to fulfill the divergent service requirements from multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, in this paper, we first formulate such a revenue maximization problem during VM admission control as a multidimensional knapsack problem, which is known to be NP-hard to solve. Then, we propose to use a cross-entropy-based optimization approach to address this revenue maximization problem, by obtaining a near-optimal eligible set for the provider to accept into its data centers, from the waiting VM service requests in the system. Finally, through extensive experiments and measurements in a simulated environment with the settings of VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing cloud providers' revenue in a public cloud computing environment.
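
    A hedged sketch of the cross-entropy method applied to a toy single-dimension version of this admission problem: acceptance decisions are sampled from per-request Bernoulli probabilities, which are then updated from the highest-revenue feasible samples; the request data and CE parameters are invented:

```python
# Cross-entropy method on a toy knapsack-style admission problem.
import random

random.seed(42)
revenue = [9, 7, 6, 5, 4, 4, 3, 2]          # revenue per VM request
demand  = [5, 4, 4, 3, 3, 2, 2, 1]          # capacity units per request
CAPACITY = 12
N_SAMPLES, ELITE, ITERS, SMOOTH = 200, 20, 30, 0.7

def value(accept):
    """Total revenue of an accept/reject vector, or -1 if it exceeds capacity."""
    used = sum(d for d, a in zip(demand, accept) if a)
    if used > CAPACITY:
        return -1
    return sum(r for r, a in zip(revenue, accept) if a)

probs = [0.5] * len(revenue)
best, best_val = [0] * len(revenue), 0       # rejecting everything is always feasible

for _ in range(ITERS):
    samples = [[1 if random.random() < p else 0 for p in probs] for _ in range(N_SAMPLES)]
    samples.sort(key=value, reverse=True)
    elite = samples[:ELITE]
    if value(elite[0]) > best_val:
        best, best_val = elite[0], value(elite[0])
    # Update each acceptance probability toward its frequency in the elite set.
    for j in range(len(probs)):
        freq = sum(s[j] for s in elite) / ELITE
        probs[j] = SMOOTH * freq + (1 - SMOOTH) * probs[j]

print("accepted requests:", [i for i, a in enumerate(best) if a], "revenue:", best_val)
```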

  6. Influences of cloud heterogeneity on cirrus optical properties retrieved from the visible and near-infrared channels of MODIS/SEVIRI for flat and optically thick cirrus clouds

    Zhou, Yongbo; Sun, Xuejin; Zhang, Riwei; Zhang, Chuanliang; Li, Haoran; Zhou, Junhao; Li, Shaohui

    2017-01-01

    The influences of three-dimensional radiative effects and horizontal heterogeneity effects on the retrieval of cloud optical thickness (COT) and effective diameter (De) for cirrus clouds are explored with the SHDOM radiative transfer model. The stochastic cirrus clouds are generated by the Cloudgen model based on Atmospheric Radiation Measurement program data. Incorporating a new ice cloud spectral model, we evaluate the retrieval errors for two solar zenith angles (SZAs) (30° and 60°), four solar azimuth angles (0°, 45°, 90°, and 180°), and two sensor settings (Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Aqua and Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard METEOSAT-8). The domain-averaged relative error of COT (μ) ranges from −24.1% to −1.0% (SZA = 30°) and from −11.6% to 3.3% (SZA = 60°), with the uncertainty within 7.5%–12.5% (SZA = 30°) and 20.0%–27.5% (SZA = 60°). For the SZA of 60° only, the relative error and uncertainty are parameterized as linear functions of the retrieved COT, providing a basis to correct the retrieved COT and estimate its uncertainty. In addition, De is overestimated by 0.7–15.0 μm on the domain average, with the corresponding uncertainty within 6.7–26.5 μm. The retrieval errors show no discernible dependence on solar azimuth angle due to the flat tops and full coverage of the cirrus samples. The results are valid only for the two samples and for the specific spatial resolution of the radiative transfer simulations. Highlights: • The retrieved cloud optical properties for 3-D cirrus clouds are evaluated. • The cloud optical thickness and uncertainty could be corrected and estimated. • On the domain average, the effective diameter of ice crystal is overestimated. • The optical properties show no obvious dependence on the solar azimuth angle.
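
    As a hedged illustration of how such a linear error parameterization could be used to correct a retrieval: if the fitted relative error is eps(tau) = a*tau + b, the corrected optical thickness is roughly tau/(1 + eps). The coefficients and the function name correct_cot below are placeholders, not the values fitted in the study.

        def correct_cot(cot_retrieved, a=-0.05, b=-0.02):
            """Bias-correct a retrieved cloud optical thickness, assuming a fitted
            linear relative error eps(tau) = a * tau + b (placeholder coefficients)."""
            eps = a * cot_retrieved + b
            return cot_retrieved / (1.0 + eps)

        # e.g. correct_cot(2.0) -> 2.0 / (1 - 0.12) ≈ 2.27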

  7. StackInsights: Cognitive Learning for Hybrid Cloud Readiness

    Qiao, Mu; Bathen, Luis; Génot, Simon-Pierre; Lee, Sunhwan; Routray, Ramani

    2017-01-01

    Hybrid cloud is an integrated cloud computing environment utilizing a mix of public cloud, private cloud, and on-premise traditional IT infrastructures. Workload awareness, defined as a detailed full range understanding of each individual workload, is essential in implementing the hybrid cloud. While it is critical to perform an accurate analysis to determine which workloads are appropriate for on-premise deployment versus which workloads can be migrated to a cloud off-premise, the assessment...

  8. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    Hillman, Benjamin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marchand, Roger T. [Univ. of Washington, Seattle, WA (United States); Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2017-08-01

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR- and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
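
    For readers unfamiliar with the overlap assumption named above, the sketch below shows one common formulation of maximum-random overlap (adjacent cloudy layers overlap maximally, layers separated by clear air overlap randomly), often attributed to Geleyn and Hollingsworth; it is an illustration only, not the simulator's actual code.

        def total_cloud_cover_max_random(frac, eps=1e-9):
            """Total column cloud cover under maximum-random overlap.
            frac: layer cloud fractions ordered from top to bottom."""
            clear = 1.0
            prev = 0.0
            for c in frac:
                clear *= (1.0 - max(c, prev)) / (1.0 - min(prev, 1.0 - eps))
                prev = c
            return 1.0 - clear

        # Three contiguous layers: adjacent layers overlap maximally,
        # so the column cover equals the largest layer fraction here.
        print(total_cloud_cover_max_random([0.3, 0.3, 0.6]))   # -> 0.6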

  9. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m−2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16–0.93 W m−2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  10. Security of Heterogeneous Content in Cloud Based Library Information Systems Using an Ontology Based Approach

    Mihai DOINEA

    2014-01-01

    Full Text Available As in any domain that involves the use of software, library information systems take advantage of cloud computing. The paper highlights the main aspects of cloud-based systems, describing some public solutions provided by the most important players on the market. Topics related to content security in cloud-based services are tackled in order to emphasize the requirements that must be met by these types of systems. A cloud-based implementation of a Library Information System is presented, and some adjacent tools that are used together with it to provide digital content and metadata links are described. In a cloud-based Library Information System, security is approached by means of ontologies. Aspects such as content security in terms of digital rights are presented and a methodology for security optimization is proposed.

  11. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays Internet applications have become more complicated, so that a mobile device needs more computing resources for shorter execution times, but it is restricted by its limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In this paper, we formulate the offloading problem between the mobile device and the cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with the resource requirements corresponding to the applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
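
    A hedged sketch of the kind of decision such offloading algorithms make: compare the estimated local execution time with the transfer-plus-cloud time over each candidate link and pick the cheaper option. The timing model, the function name should_offload, and all numbers below are illustrative assumptions, not the MOTM/METC formulations.

        def should_offload(task_cycles, data_bits, local_cps, links, cloud_cps):
            """Return (offload?, best_link) under a simple time model.
            links: dict name -> bandwidth in bit/s; *_cps: CPU cycles per second."""
            t_local = task_cycles / local_cps
            # offload time = upload time over the link + execution time in the cloud
            t_offload = {name: data_bits / bw + task_cycles / cloud_cps
                         for name, bw in links.items()}
            best_link = min(t_offload, key=t_offload.get)
            return t_offload[best_link] < t_local, best_link

        # Made-up example: a 2 Gcycle task with 5 MB of state, Wi-Fi vs. LTE uplinks
        print(should_offload(2e9, 5 * 8e6, 1e9, {"wifi": 50e6, "lte": 10e6}, 20e9))
        # -> (True, 'wifi'): offloading over Wi-Fi beats local execution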

  12. Enhanced machine learning scheme for energy efficient resource allocation in 5G heterogeneous cloud radio access networks

    Alqerm, Ismail

    2018-02-15

    Heterogeneous cloud radio access networks (H-CRAN) are a new trend of 5G that aims to leverage the advantages of heterogeneous and cloud radio access networks. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service (QoS) requirements, while high power macro base stations (BSs) are deployed for coverage maintenance and support of low QoS users. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such a scheme with model-free learning, we consider users' priority in resource block (RB) allocation and a compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.

  13. The global influence of dust mineralogical composition on heterogeneous ice nucleation in mixed-phase clouds

    Hoose, C; Lohmann, U; Erdin, R; Tegen, I

    2008-01-01

    Mineral dust is the dominant natural ice nucleating aerosol. Its ice nucleation efficiency depends on the mineralogical composition. We show the first sensitivity studies with a global climate model and a three-dimensional dust mineralogy. Results show that, depending on the dust mineralogical composition, coating with soluble material from anthropogenic sources can lead to quasi-deactivation of natural dust ice nuclei. This effect counteracts the increased cloud glaciation by anthropogenic black carbon particles. The resulting aerosol indirect effect through the glaciation of mixed-phase clouds by black carbon particles is small (+0.1 W m−2 in the shortwave top-of-the-atmosphere radiation in the northern hemisphere).

  14. Heterogeneous access and processing of EO-Data on a Cloud based Infrastructure delivering operational Products

    Niggemann, F.; Appel, F.; Bach, H.; de la Mar, J.; Schirpke, B.; Dutting, K.; Rucker, G.; Leimbach, D.

    2015-04-01

    To address the challenges of effective data handling faced by Small and Medium Sized Enterprises (SMEs), a cloud-based infrastructure for accessing and processing Earth Observation (EO) data has been developed within the project APPS4GMES (www.apps4gmes.de). To provide homogeneous multi-mission data access, an Input Data Portal (IDP) has been implemented on this infrastructure. The IDP consists of an Open Geospatial Consortium (OGC) conformant catalogue, a consolidation module for format conversion, and an OGC-conformant ordering framework. Metadata from various EO sources with different standards are harvested, transformed to the OGC-conformant Earth Observation Product standard, and inserted into the catalogue by a Metadata Harvester. The IDP can be searched, and the harvested datasets ordered, by the services implemented on the cloud infrastructure. Different land-surface services have been realised by the project partners using the implemented IDP and cloud infrastructure. The results of these are customer-ready products as well as pre-products (e.g. atmospherically corrected EO data) serving as a basis for other services. Within the IDP, automated access to ESA's Sentinel-1 Scientific Data Hub has been implemented, so that searching and downloading of the SAR data can be performed in an automated way. With the implementation of the Sentinel-1 Toolbox and in-house software, processing of the datasets for further use, for example for Vista's snow monitoring that delivers input to the flood forecast services, can also be performed in an automated way. For performance tests of the cloud environment, a sophisticated model-based atmospheric correction and pre-classification service has been implemented. The tests comprised automated, synchronised processing of one entire Landsat 8 (LS-8) coverage of Germany and performance comparisons to standard desktop systems. The results of these tests, showing a performance improvement by a factor of six, proved the high flexibility and

  15. The workload of fishermen

    Østergaard, Helle; Jepsen, Jørgen Riis; Berg-Beckhoff, Gabriele

    2016-01-01

    -reported occupational and health data. Questions covering the physical workload were related to seven different work situations and a score summing up the workload was developed for the analysis of the relative impact on different groups of fishermen. Results: Almost all fishermen (96.2%) were familiar to proper...... health. To address the specific areas of fishing with the highest workload, future investments in assistive devices to ease the demanding work and reduce the workload, should particularly address deckhands and less mechanized vessels....

  16. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time

  17. DEVELOPMENT OF A HETEROGENIC DISTRIBUTED ENVIRONMENT FOR SPATIAL DATA PROCESSING USING CLOUD TECHNOLOGIES

    A. S. Garov

    2016-06-01

    Full Text Available We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  18. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for processing of spatial data which integrates web-, desktop- and mobile platforms and combines volunteer computing model and public cloud possibilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to required data volume and computing power, while keeping infrastructure costs at minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using an innovative software environment the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested based on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize the research and representation of results on a new technology level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information for which we suggest implementation of a user interface as an advanced front-end, e.g., for virtual globe system.

  19. KONGMING: Performance Prediction in the Cloud via Multidimensional Interference Surrogates

    Bowen, Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Casas-Guix, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bagchi, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-15

    As more and more applications are deployed in the cloud, it is important for both the user and the operator of the cloud that the resources of the cloud are utilized efficiently. Virtualization and workload consolidation techniques are pervasively applied in the cloud to increase resource utilization while providing isolated execution environments for different users. While virtualization hides the architectural details of the underlying hardware, it can also increase the variability in application execution times due to heterogeneity in available hardware, and interference from other applications sharing the same hardware resources. This both reduces the productivity of cloud platforms and limits the degree to which software colocation can be used to increase their efficiency.

  20. The CTTC 5G End-to-End Experimental Platform : Integrating Heterogeneous Wireless/Optical Networks, Distributed Cloud, and IoT Devices

    Munoz, Raul; Mangues-Bafalluy, Josep; Vilalta, Ricard; Verikoukis, Christos; Alonso-Zarate, Jesus; Bartzoudis, Nikolaos; Georgiadis, Apostolos; Payaro, Miquel; Perez-Neira, Ana; Casellas, Ramon; Martinez, Ricardo; Nunez-Martinez, Jose; Requena Esteso, Manuel; Pubill, David; Font-Bach, Oriol

    2016-01-01

    The Internet of Things (IoT) will facilitate a wide variety of applications in different domains, such as smart cities, smart grids, industrial automation (Industry 4.0), smart driving, assistance of the elderly, and home automation. Billions of heterogeneous smart devices with different application requirements will be connected to the networks and will generate huge aggregated volumes of data that will be processed in distributed cloud infrastructures. On the other hand, there is also a gen...

  1. Model simulations with COSMO-SPECS: impact of heterogeneous freezing modes and ice nucleating particle types on ice formation and precipitation in a deep convective cloud

    K. Diehl

    2018-03-01

    Full Text Available In deep convective clouds, heavy rain is often formed involving the ice phase. Simulations were performed using the 3-D cloud resolving model COSMO-SPECS with detailed spectral microphysics including parameterizations of homogeneous and three heterogeneous freezing modes. The initial conditions were selected to result in a deep convective cloud reaching 14 km of altitude with strong updrafts up to 40 m s−1. At such altitudes with corresponding temperatures below −40 °C the major fraction of liquid drops freezes homogeneously. The goal of the present model simulations was to investigate how additional heterogeneous freezing will affect ice formation and precipitation although its contribution to total ice formation may be rather low. In such a situation small perturbations that do not show significant effects at first sight may trigger cloud microphysical responses. Effects of the following small perturbations were studied: (1) additional ice formation via immersion, contact, and deposition modes in comparison to solely homogeneous freezing, (2) contact and deposition freezing in comparison to immersion freezing, and (3) small fractions of biological ice nucleating particles (INPs) in comparison to higher fractions of mineral dust INPs. The results indicate that the modification of precipitation proceeds via the formation of larger ice particles, which may be supported by direct freezing of larger drops, the growth of pristine ice particles by riming, and by nucleation of larger drops by collisions with pristine ice particles. In comparison to the reference case with homogeneous freezing only, such small perturbations due to additional heterogeneous freezing rather affect the total precipitation amount. It is more likely that the temporal development and the local distribution of precipitation are affected by such perturbations. This results in a gradual increase in precipitation at early cloud stages instead of a strong increase at

  2. School Nurse Workload.

    Endsley, Patricia

    2017-02-01

    The purpose of this scoping review was to survey the most recent (5 years) acute care, community health, and mental health nursing workload literature to understand themes and research avenues that may be applicable to school nursing workload research. The search for empirical and nonempirical literature was conducted using search engines such as Google Scholar, PubMed, CINAHL, and Medline. Twenty-nine empirical studies and nine nonempirical articles were selected for inclusion. Themes that emerged consistent with school nurse practice include patient classification systems, environmental factors, assistive personnel, missed nursing care, and nurse satisfaction. School nursing is a public health discipline and population studies are an inherent research priority but may overlook workload variables at the clinical level. School nurses need a consistent method of population assessment, as well as evaluation of appropriate use of assistive personnel and school environment factors. Assessment of tasks not directly related to student care and professional development must also be considered in total workload.

  3. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    Llamas, Ramón Medrano; Megino, Fernando Harald Barreiro; Cinquilli, Mattia; Kucharczyk, Katarzyna; Denis, Marek Kamil

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
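
    As a hedged illustration of what "use of the cloud APIs directly by the WMS" can look like, the sketch below boots a worker VM through the OpenStack API using the openstacksdk Python client; the cloud name, server name and all UUIDs are placeholders, and this is not the experiments' actual provisioning code.

        import openstack

        # Connect using a named cloud from clouds.yaml (placeholder name)
        conn = openstack.connect(cloud="cern-private-cloud")

        # Boot one worker VM; image, flavor and network IDs are placeholders
        server = conn.compute.create_server(
            name="atlas-worker-001",
            image_id="IMAGE_UUID",
            flavor_id="FLAVOR_UUID",
            networks=[{"uuid": "NETWORK_UUID"}],
        )
        server = conn.compute.wait_for_server(server)   # block until ACTIVE
        print(server.status)

    A workload management system would typically wrap such calls in a loop that scales the number of workers with the length of its job queue.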

  4. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  5. A Parameterization for Land-Atmosphere-Cloud Exchange (PLACE): Documentation and Testing of a Detailed Process Model of the Partly Cloudy Boundary Layer over Heterogeneous Land.

    Wetzel, Peter J.; Boone, Aaron

    1995-07-01

    This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock. Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these cases while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases. Sensitivity tests were also run using anemometer-level forcing provided by the Project for Inter-comparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to subgrid variability of certain parameters is demonstrated, particularly to parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity of certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models. Were

  6. DDM Workload Emulation

    Vigne, R.; Schikuta, E.; Garonne, V.; Stewart, G.; Barisits, M.; Beermann, T.; Lassnig, M.; Serfon, C.; Goossens, L.; Nairz, A.; Atlas Collaboration

    2014-06-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources is putting additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various sources (e.g. analysing the central file catalogue logs). Finally a description of the implemented emulation framework, used for stress-testing Rucio, is given.
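
    A small sketch of the kind of scaling step described above: derive per-operation rates from catalogue logs over an observation window and multiply by the expected growth factor to obtain target rates for the emulator. The log format, the function name scaled_rates, and the growth factor are illustrative assumptions, not the paper's actual analysis.

        from collections import Counter

        def scaled_rates(log_lines, window_seconds, growth=3.5):
            """Count operations per type in a log window and scale to the expected future load.
            Assumes each non-empty line starts with an operation name, e.g. 'register ...'."""
            counts = Counter(line.split()[0] for line in log_lines if line.strip())
            return {op: growth * n / window_seconds for op, n in counts.items()}

        # e.g. scaled_rates(open("dq2_catalog.log"), 3600) -> operations/second per type
        # used to drive a load generator against the new system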

  7. DDM Workload Emulation

    Vigne, R; The ATLAS collaboration; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources is putting additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from vario...

  8. DDM workload emulation

    Vigne, R; Schikuta, E; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources is putting additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various sources (e.g. analysing the central file catalogue logs). Finally a description of the implemented emulation framework, used for stress-testing Rucio, is given.

  9. DDM Workload Emulation

    Vigne, R; The ATLAS collaboration; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources is putting additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from vario...

  10. Characterization of aerosol photooxidation flow reactors: heterogeneous oxidation, secondary organic aerosol formation and cloud condensation nuclei activity measurements

    A. T. Lambe

    2011-03-01

    Full Text Available Motivated by the need to develop instrumental techniques for characterizing organic aerosol aging, we report on the performance of the Toronto Photo-Oxidation Tube (TPOT) and Potential Aerosol Mass (PAM) flow tube reactors under a variety of experimental conditions. The PAM system was designed with a lower surface-area-to-volume (SA/V) ratio to minimize wall effects; the TPOT reactor was designed to study heterogeneous aerosol chemistry where wall loss can be independently measured. The following studies were performed: (1) transmission efficiency measurements for CO2, SO2, and bis(2-ethylhexyl) sebacate (BES) particles, (2) H2SO4 yield measurements from the oxidation of SO2, (3) residence time distribution (RTD) measurements for CO2, SO2, and BES particles, (4) aerosol mass spectra, O/C and H/C ratios, and cloud condensation nuclei (CCN) activity measurements of BES particles exposed to OH radicals, and (5) aerosol mass spectra, O/C and H/C ratios, CCN activity, and yield measurements of secondary organic aerosol (SOA) generated from gas-phase OH oxidation of m-xylene and α-pinene. OH exposures ranged from (2.0 ± 1.0) × 10^10 to (1.8 ± 0.3) × 10^12 molec cm−3 s. Where applicable, data from the flow tube reactors are compared with published results from the Caltech smog chamber. The TPOT yielded narrower RTDs. However, its transmission efficiency for SO2 was lower than that for the PAM. Transmission efficiency for BES and H2SO4 particles was size-dependent and was similar for the two flow tube designs. Oxidized BES particles had similar O/C and H/C ratios and CCN activity at OH exposures greater than 10^11 molec cm−3 s, but different CCN activity at lower OH exposures. The O/C ratio, H/C ratio, and yield of m-xylene and α-pinene SOA was strongly affected by reactor design and

  11. Workload management in the EMI project

    Andreetto, Paolo; Bertocco, Sara; Dorigo, Alvise; Frizziero, Eric; Gianelle, Alessio; Sgaravatto, Massimo; Zangrando, Luigi; Capannini, Fabio; Cecchi, Marco; Mezzadri, Massimo; Prelz, Francesco; Rebatto, David; Monforte, Salvatore; Kretsis, Aristotelis

    2012-01-01

    The EU-funded project EMI, now at its second year, aims at providing a unified, high quality middleware distribution for e-Science communities. Several aspects of workload management over diverse distributed computing environments are being tackled by the EMI roadmap: enabling seamless access to both HTC and HPC computing services, implementing a commonly agreed framework for the execution of parallel computations and supporting interoperability models between Grids and Clouds. In addition, a rigorous requirements collection process, involving the WLCG and various NGIs across Europe, ensures that the EMI stack remains committed to serving actual needs. With this background, the gLite Workload Management System (WMS), the meta-scheduler service delivered by EMI, is augmenting its functionality and scheduling models according to the aforementioned project roadmap and the numerous requirements collected over the first project year. This paper describes present and future work on the EMI WMS, reporting on design changes, implementation choices and the long-term vision.

  12. Rework the workload.

    O'Bryan, Linda; Krueger, Janelle; Lusk, Ruth

    2002-03-01

    Kindred Healthcare, Inc., the nation's largest full-service network of long-term acute care hospitals, initiated a 3-year strategic plan to re-evaluate its workload management system. Here, we follow the project's most important and difficult phase: designing and implementing the patient classification system.

  13. DIRAC optimized workload management

    Paterson, S K

    2008-01-01

    The LHCb DIRAC Workload and Data Management System employs advanced optimization techniques in order to dynamically allocate resources. The paradigms realized by DIRAC, such as late binding through the Pilot Agent approach, have proven to be highly successful. For example, this has allowed the principles of workload management to be applied not only at the time of user job submission to the Grid but also to optimize the use of computing resources once jobs have been acquired. Along with the central application of job priorities, DIRAC minimizes the system response time for high priority tasks. This paper will describe the recent developments to support Monte Carlo simulation, data processing and distributed user analysis in a consistent way across disparate compute resources including individual PCs, local batch systems, and the Worldwide LHC Computing Grid. The Grid environment is inherently unpredictable and whilst short-term studies have proven to deliver high job efficiencies, the system performance over ...

  14. Workload measurement: diagnostic imaging

    Nuss, Wayne [The Prince Charles Hospital, Chermside, QLD (Australia). Dept. of Medical Imaging

    1993-06-01

    Departments of medical imaging, as with many other service departments in the health industry, are being asked to develop performance indicators. No longer are they assured that annual budget allocations will be forthcoming without justification or some output measurement indicators that will substantiate a claim for a reasonable share of resources. The human resource is the most valuable and the most expensive to any department. This paper provides a brief overview of the research and implementation of a radiographer workload measurement system that was commenced in the Brisbane North Health Region. 2 refs., 10 tabs.

  15. WBDOC Weekly Workload Status Report

    Social Security Administration — Weekly reports of workloads processed in the Wilkes Barre Data Operation Center. Reports on quantities of work received, processed, pending and average processing...

  16. Replicated Computations Results (RCR) report for “A holistic approach for collaborative workload execution in volunteer clouds”

    Vandin, Andrea

    2018-01-01

    “A Holistic Approach for Collaborative Workload Execution in Volunteer Clouds” [3] proposes a novel approach to task scheduling in volunteer clouds. Volunteer clouds are decentralized cloud systems based on collaborative task execution, where clients voluntarily share their own unused computational...

  17. A Model of Student Workload

    Bowyer, Kyle

    2012-01-01

    Student workload is a contributing factor to students deciding to withdraw from their study before completion of the course, at significant cost to students, institutions and society. The aim of this paper is to create a basic workload model for a group of undergraduate students studying business law units at Curtin University in Western…

  18. Workload Control with Continuous Release

    Phan, B. S. Nguyen; Land, M. J.; Gaalman, G. J. C.

    2009-01-01

    Workload Control (WLC) is a production planning and control concept which is suitable for the needs of make-to-order job shops. Release decisions based on the workload norms form the core of the concept. This paper develops continuous time WLC release variants and investigates their due date

  19. Technology Trends in Cloud Infrastructure

    CERN. Geneva

    2018-01-01

    Cloud computing is growing at an exponential pace with an increasing number of workloads being hosted in mega-scale public clouds such as Microsoft Azure. Designing and operating such large infrastructures requires not only a significant capital spend for provisioning datacenters, servers, networking and operating systems, but also R&D investments to capitalize on disruptive technology trends and emerging workloads such as AI/ML. This talk will cover the various infrastructure innovations being implemented in large scale public clouds and opportunities/challenges ahead to deliver the next generation of scale computing. About the speaker: Kushagra Vaid is the general manager and distinguished engineer for Hardware Infrastructure in the Microsoft Azure division. He is accountable for the architecture and design of compute and storage platforms, which are the foundation for Microsoft’s global cloud-scale services. He and his team have successfully delivered four generations of hyperscale cloud hardwar...

  20. Memory and subjective workload assessment

    Staveland, L.; Hart, S.; Yeh, Y. Y.

    1986-01-01

    Recent research suggested subjective introspection of workload is not based upon specific retrieval of information from long term memory, and only reflects the average workload that is imposed upon the human operator by a particular task. These findings are based upon global ratings of workload for the overall task, suggesting that subjective ratings are limited in ability to retrieve specific details of a task from long term memory. To clarify the limits memory imposes on subjective workload assessment, the difficulty of task segments was varied and the workload of specified segments was retrospectively rated. The ratings were retrospectively collected on the manipulations of three levels of segment difficulty. Subjects were assigned to one of two memory groups. In the Before group, subjects knew before performing a block of trials which segment to rate. In the After group, subjects did not know which segment to rate until after performing the block of trials. The subjective ratings, RTs (reaction times), and MTs (movement times) were compared within and between groups. Performance measures and subjective evaluations of workload reflected the experimental manipulations. Subjects were sensitive to different difficulty levels, and recalled the average workload of task components. Cueing did not appear to help recall, and memory group differences possibly reflected variations in the groups of subjects, or an additional memory task.

  1. Psychological workload and body weight

    Overgaard, Dorthe; Gyntelberg, Finn; Heitmann, Berit L

    2004-01-01

    BACKGROUND: According to Karasek's Demand/Control Model, workload can be conceptualized as job strain, a combination of psychological job demands and control in the job. High job strain may result from high job demands combined with low job control. AIM: To give an overview of the literature on the association between obesity and psychological workload. METHOD: We carried out a review of the associations between psychological workload and body weight in men and women. In total, 10 cross-sectional studies were identified. RESULTS: The review showed little evidence of a general association between ... adjustment for education. For women, there was no evidence of a consistent association. CONCLUSION: The reviewed articles were not supportive of any associations between psychological workload and either general or abdominal obesity. Future epidemiological studies in this field should be prospective ...

  2. Workload analyse of assembling process

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with a workload analysis of a largely manual assembling technology for a roller bearing assembly process, executed in a big company with integrated bearing manufacturing processes. In this analysis the delay sampling technique has been used to identify and divide all bearing assemblers' activities, and to obtain information about the share of the 480-minute working day that workers devote to each activity. The study shows some ways to increase process productivity without supplementary investments and also indicates that process automation could be the solution to reach maximum productivity.
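
    A small illustration of the delay (work) sampling arithmetic used in such studies: the share of random observation snapshots in which an activity is seen, multiplied by the shift length, estimates the time spent on that activity. The function name activity_minutes and all numbers below are invented for illustration.

        def activity_minutes(observations, shift_minutes=480):
            """observations: dict activity -> number of random snapshots in which it was seen."""
            total = sum(observations.values())
            return {act: shift_minutes * n / total for act, n in observations.items()}

        # Invented example: 120 snapshots over one 480-minute shift
        print(activity_minutes({"assembling": 70, "material handling": 25, "idle/delay": 25}))
        # -> roughly 280, 100 and 100 minutes respectively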

  3. The workload analysis in welding workshop

    Wahyuni, D.; Budiman, I.; Tryana Sembiring, M.; Sitorus, E.; Nasution, H.

    2018-03-01

    This research was conducted in a welding workshop which produces doors, fences, canopies, etc., according to customers' orders. The symptoms of excessive workload were seen in employees' complaints, requests for additional employees, and late completion times (11 of 28 orders were late, and 7 customers complained). The top management of the workshop assumed that the employees' workload was still within a tolerable limit. Therefore, a workload analysis was required to determine the number of employees needed. The workload was measured using a physiological method and workload analysis. The results of this research can be used by the workshop for better workload management.

  4. TideWatch: Fingerprinting the cyclicality of big data workloads

    Williams, Daniel W.

    2014-04-01

    Intrinsic to 'big data' processing workloads (e.g., iterative MapReduce, Pregel, etc.) are cyclical resource utilization patterns that are highly synchronized across different resource types as well as the workers in a cluster. In Infrastructure as a Service settings, cloud providers do not exploit this characteristic to better manage VMs because they view VMs as 'black boxes.' We present TideWatch, a system that automatically identifies cyclicality and similarity in running VMs. TideWatch predicts period lengths of most VMs in Hadoop workloads within 9% of actual iteration boundaries and successfully classifies up to 95% of running VMs as participating in the appropriate Hadoop cluster. Furthermore, we show how TideWatch can be used to improve the timing of VM migrations, reducing both migration time and network impact by over 50% when compared to a random approach. © 2014 IEEE.
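
    TideWatch's own method is not detailed in the abstract; the sketch below shows one standard way to estimate the period of a cyclic utilization trace with autocorrelation, using synthetic data. The function name estimate_period and the trace are illustrative assumptions.

        import numpy as np

        def estimate_period(trace, min_lag=2):
            """Estimate the dominant period (in samples) of a utilization trace via autocorrelation."""
            x = np.asarray(trace, dtype=float)
            x = x - x.mean()
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..N-1
            return int(np.argmax(ac[min_lag:]) + min_lag)        # skip the trivial small-lag peak

        # Synthetic CPU trace with a 20-sample iteration cycle plus noise
        t = np.arange(400)
        cpu = 50 + 30 * np.sin(2 * np.pi * t / 20) + np.random.randn(400)
        print(estimate_period(cpu))   # typically prints 20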

  5. Single-Pilot Workload Management

    Rogers, Jason; Williams, Kevin; Hackworth, Carla; Burian, Barbara; Pruchnicki, Shawn; Christopher, Bonny; Drechsler, Gena; Silverman, Evan; Runnels, Barry; Mead, Andy

    2013-01-01

    Integrated glass cockpit systems place a heavy cognitive load on pilots (Burian & Dismukes, 2007). Researchers from the NASA Ames Flight Cognition Lab and the FAA Flight Deck Human Factors Lab examined task and workload management by single pilots. This poster describes pilot performance regarding programming a reroute while at cruise and meeting a waypoint crossing restriction on the initial descent.

  6. Curriculum Change Management and Workload

    Alkahtani, Aishah

    2017-01-01

    This study examines the ways in which Saudi teachers have responded or are responding to the challenges posed by a new curriculum. It also deals with issues relating to workload demands which affect teachers' performance when they apply a new curriculum in a Saudi Arabian secondary school. In addition, problems such as scheduling and sharing space…

  7. Monday Morning Workload Reports (FY15 - 17)

    Department of Veterans Affairs — The Monday Morning Workload Report (MMWR) displays a snapshot of the Veterans Benefits Administration’s (VBA) workload as of a specified date, typically the previous...

  8. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    Medrano Llamas, Ramón; Kucharczyk, Katarzyna; Denis, Marek Kamil; Cinquilli, Mattia

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one placed in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain th...

  9. Image selection as a service for cloud computing environments

    Filepp, Robert; Shwartz, Larisa; Ward, Christopher; Kearney, Robert D.; Cheng, Karen; Young, Christopher C.; Ghosheh, Yanal

    2010-01-01

    Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately very little information is provided to the users to enable them to make intelligent choices. We believe

  10. Operator Workload: Comprehensive Review and Evaluation of Operator Workload Methodologies

    1989-06-01


  11. gLExec Integration with the ATLAS PanDA Workload Management System

    Edward Karavakis; The ATLAS collaboration; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Maarten Litmaath; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    The ATLAS Experiment at the Large Hadron Collider has collected data during Run 1 and is ready to collect data in Run 2. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. At any given time, there are more than 150,000 concurrent jobs running and about a million jobs are submitted on a daily basis on behalf of thousands of physicists within the ATLAS collaboration. The Production and Distributed Analysis (PanDA) workload management system has proved to be a key component of ATLAS and plays a crucial role in the success of the large-scale distributed computing as it is the sole system for distributed processing of Grid jobs across the collaboration since October 2007. ATLAS user jobs are executed on worker nodes by pilots sent to the sites by pilot factories. This pilot architecture has greatly improved job reliability and although it has clear advantages, such as making the working environment homogeneous by hiding any potential heterogeneities, the ...

  12. Context-aware distributed cloud computing using CloudScheduler

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
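
    A hedged illustration of the context-aware idea of picking the closest instance of a required service (for example, a data federation endpoint) from latencies measured at VM boot time; the function name nearest_endpoint, the endpoint names, and the numbers are placeholders, not the system's actual logic.

        def nearest_endpoint(latencies_ms):
            """Pick the reachable service endpoint with the lowest measured latency."""
            reachable = {ep: ms for ep, ms in latencies_ms.items() if ms is not None}
            if not reachable:
                raise RuntimeError("no reachable endpoint")
            return min(reachable, key=reachable.get)

        # Placeholder measurements a newly booted VM might collect (None = probe failed)
        print(nearest_endpoint({"dynafed.site-a.example": 110.0,
                                "dynafed.site-b.example": 35.0,
                                "dynafed.site-c.example": None}))
        # -> "dynafed.site-b.example"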

  13. Characterizing Energy per Job in Cloud Applications

    Thi Thao Nguyen Ho

    2016-12-01

    Full Text Available Energy efficiency is a major research focus in sustainable development and is becoming even more critical in information technology (IT) with the introduction of new technologies, such as cloud computing and big data, that attract more business users and generate more data to be processed. While many proposals have been presented to optimize power consumption at a system level, the increasing heterogeneity of current workloads requires a finer analysis at the application level to enable adaptive behaviors and to reduce global energy usage. In this work, we focus on batch applications running on virtual machines in the context of data centers. We analyze the application characteristics, model their energy consumption and quantify the energy per job. The analysis focuses on evaluating the efficiency of applications in terms of performance and energy consumed per job, in particular when shared resources are used and the hosts on which the virtual machines are running are heterogeneous in terms of energy profiles, with the aim of identifying the best combinations in the use of resources.
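
    A minimal sketch of the energy-per-job metric discussed above: integrate sampled power over the run and divide by the number of completed jobs. The function name energy_per_job, the sampling interval, and the numbers are invented for illustration, not the paper's measurement setup.

        def energy_per_job(power_watts, interval_s, jobs_completed):
            """Trapezoidal integration of power samples -> energy in joules, then per-job average."""
            energy_j = sum((a + b) / 2.0 * interval_s
                           for a, b in zip(power_watts, power_watts[1:]))
            return energy_j / jobs_completed

        # Invented example: power sampled every 10 s during a batch run that finished 4 jobs
        print(energy_per_job([180, 220, 240, 230, 190], 10, 4))   # -> 2187.5 joules per job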

  14. ATLAS cloud R and D

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  15. ATLAS Cloud R&D

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  16. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Fauzi Akhmad

    2016-01-01

    Cloud computing is a service model that packages basic computing resources, accessible on demand over the Internet and hosted in data centers. Data center architectures in cloud computing environments are heterogeneous and distributed, composed of clusters of networked servers with different computing capacities on different physical machines. Fluctuating demand for and availability of cloud services can be handled by abstracting the data center with virtualization technology. A virtual machine (VM) represents computing resources that can be dynamically allocated and reallocated on demand. This study investigates VM consolidation as an energy-conservation measure in a private cloud computing system, targeting the optimization of the VM selection policy and of VM migration within the consolidation procedure. In a cloud data center, VM instances hosting particular types of application services require different levels of computing resources. Unbalanced resource usage among VMs on physical servers can be reduced by using live VM migration to achieve workload balancing. A practical approach was taken in developing an OpenStack-based cloud computing environment, integrating VM placement selection with OpenStack Neat VM consolidation. The CPU time of each VM is sampled to derive its average CPU utilization in MHz over a given period: the difference between the current and the previous CPU time reading is multiplied by the maximum CPU frequency and divided by the elapsed time between the two readings.
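
    The utilization estimate described in the last sentence can be sketched as follows. The sample structure and function name are hypothetical, and seconds are used as the time unit purely for readability; this is not the paper's code.

```python
# Hedged sketch of the per-VM average CPU utilization (in MHz) between two samples.
from dataclasses import dataclass

@dataclass
class CpuSample:
    wall_clock_s: float   # timestamp of the reading, in seconds
    cpu_time_s: float     # cumulative CPU time consumed by the VM, in seconds

def average_utilization_mhz(prev: CpuSample, curr: CpuSample, max_freq_mhz: float) -> float:
    """Average CPU utilization in MHz over the interval between two samples."""
    cpu_delta = curr.cpu_time_s - prev.cpu_time_s        # CPU seconds used in the interval
    wall_delta = curr.wall_clock_s - prev.wall_clock_s   # length of the interval
    if wall_delta <= 0:
        raise ValueError("samples must be strictly ordered in time")
    return max_freq_mhz * cpu_delta / wall_delta

# Example: a VM that used 12 CPU-seconds over a 60-second window on a 2600 MHz core.
prev = CpuSample(wall_clock_s=0.0, cpu_time_s=100.0)
curr = CpuSample(wall_clock_s=60.0, cpu_time_s=112.0)
print(average_utilization_mhz(prev, curr, max_freq_mhz=2600.0))  # -> 520.0 MHz
```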

  17. Cloud Computing Trace Characterization and Synthetic Workload Generation

    2013-03-01

    Olio is a Web 2.0 benchmark primarily intended for learning Web 2.0 technologies and for evaluating its three implementations (PHP, Java EE, and RubyOnRails (ROR)). Olio is well documented, but assumes prerequisite knowledge of the setup and operation of Apache web servers and MySQL databases. The Faban harness supports numerous servers such as Apache httpd, Sun Java System Web, Portal and Mail Servers, Oracle RDBMS, memcached, and others [18].

  18. The CMS workload management system

    Cinquilli, M. [CERN; Evans, D. [Fermilab; Foulkes, S. [Fermilab; Hufnagel, D. [Fermilab; Mascheroni, M. [CERN; Norman, M. [UC, San Diego; Maxa, Z. [Caltech; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Riahi, H. [INFN, Perugia; Ryu, S. [Fermilab; Spiga, D. [CERN; Vaandering, E. [Fermilab; Wakefield, Stuart [Imperial Coll., London; Wilkinson, R. [Caltech

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager), a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).

  19. The CMS workload management system

    Cinquilli, M; Mascheroni, M; Spiga, D; Evans, D; Foulkes, S; Hufnagel, D; Ryu, S; Vaandering, E; Norman, M; Maxa, Z; Wilkinson, R; Melo, A; Metson, S; Riahi, H; Wakefield, S

    2012-01-01

    CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis and T0 codebases so as to reduce duplicated functionality and make best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager); a task queue for parcelling up and distributing work (WorkQueue) and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).

  20. OCCI-Compliant Cloud Configuration Simulation

    Ahmed-Nacer , Mehdi; Gaaloul , Walid; Tata , Samir

    2017-01-01

    In recent years, many organizations, such as Amazon, Google, and Microsoft, have accelerated the development of their cloud computing ecosystems. This rapid development has created a plethora of cloud resource management interfaces for provisioning, supervising, and managing cloud resources. Thus, there is an obvious need for the standardization of cloud resource management interfaces to cope with the prevalent issues of heterogeneity, integration, and portability. To this end, Open Cloud Com...

  1. Measuring perceived mental workload in children.

    Laurie-Rose, Cynthia; Frey, Meredith; Ennis, Aristi; Zamary, Amanda

    2014-01-01

    Little is known about the mental workload, or psychological costs, associated with information processing tasks in children. We adapted the highly regarded NASA Task Load Index (NASA-TLX) multidimensional workload scale (Hart & Staveland, 1988) to test its efficacy for use with elementary school children. We developed 2 types of tasks, each with 2 levels of demand, to draw differentially on resources from the separate subscales of workload. In Experiment 1, our participants were both typical and school-labeled gifted children recruited from 4th and 5th grades. Results revealed that task type elicited different workload profiles, and task demand directly affected the children's experience of workload. In general, gifted children experienced less workload than typical children. Objective response time and accuracy measures provide evidence for the criterion validity of the workload ratings. In Experiment 2, we applied the same method with 1st- and 2nd-grade children. Findings from Experiment 2 paralleled those of Experiment 1 and support the use of NASA-TLX with even the youngest elementary school children. These findings contribute to the fledgling field of educational ergonomics and attest to the innovative application of workload research. Such research may optimize instructional techniques and identify children at risk for experiencing overload.

  2. PanDA Beyond ATLAS: Workload Management for Data Intensive Science

    Schovancova, J; The ATLAS collaboration; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    The PanDA Production ANd Distributed Analysis system has been developed by ATLAS to meet the experiment's requirements for a data-driven workload management system for production and distributed analysis processing capable of operating at LHC data processing scale. After 7 years of impressively successful PanDA operation in ATLAS there are also other experiments which can benefit from PanDA in the Big Data challenge, with several at various stages of evaluation and adoption. The new project "Next Generation Workload Management and Analysis System for Big Data" is extending PanDA to meet the needs of other data intensive scientific applications in HEP, astro-particle and astrophysics communities, bio-informatics and other fields as a general solution to large scale workload management. PanDA can utilize dedicated or opportunistic computing resources such as grids, clouds, and High Performance Computing facilities, and is being extended to leverage next generation intelligent networks in automated workflow mana...

  3. Workload modelling for data-intensive systems

    Lassnig, Mario

    This thesis presents a comprehensive study built upon the requirements of a global data-intensive system, built for the ATLAS Experiment at CERN's Large Hadron Collider. First, a scalable method is described to capture distributed data management operations in a non-intrusive way. These operations are collected into a globally synchronised sequence of events, the workload. A comparative analysis of this new data-intensive workload against existing computational workloads is conducted, leading to the discovery of the importance of descriptive attributes in the operations. Existing computational workload models only consider the arrival rates of operations, however, in data-intensive systems the correlations between attributes play a central role. Furthermore, the detrimental effect of rapid correlated arrivals, so called bursts, is assessed. A model is proposed that can learn burst behaviour from captured workload, and in turn forecast potential future bursts. To help with the creation of a full representative...

  4. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
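
    A toy illustration of the kind of fitness evaluation described above is sketched below: it scores a task-to-VM assignment by its makespan and by the degree of imbalance across VMs. The weighting and the imbalance formula are common conventions assumed for illustration; they are not taken from the SASOS paper.

```python
# Hedged sketch of a makespan-plus-imbalance fitness for a task-to-VM schedule.
def vm_completion_times(schedule, task_lengths_mi, vm_mips):
    """schedule[i] = index of the VM that task i is assigned to."""
    times = [0.0] * len(vm_mips)
    for task, vm in enumerate(schedule):
        times[vm] += task_lengths_mi[task] / vm_mips[vm]
    return times

def fitness(schedule, task_lengths_mi, vm_mips, w_makespan=0.5, w_imbalance=0.5):
    times = vm_completion_times(schedule, task_lengths_mi, vm_mips)
    makespan = max(times)
    avg = sum(times) / len(times)
    imbalance = (max(times) - min(times)) / avg if avg > 0 else 0.0
    # In practice the two terms would be normalized to comparable scales; lower is better.
    return w_makespan * makespan + w_imbalance * imbalance

# Example: 4 tasks (in million instructions) on 2 VMs (in MIPS).
print(fitness([0, 1, 0, 1], task_lengths_mi=[4000, 6000, 2000, 1000], vm_mips=[1000, 2000]))
```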

  5. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Mohammed Abdullahi

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  6. A Heuristic Task Scheduling Algorithm for Heterogeneous Virtual Clusters

    Weiwei Lin

    2016-01-01

    Cloud computing provides on-demand computing and storage services with high performance and high scalability. However, the rising energy consumption of cloud data centers has become a prominent problem. In this paper, we first introduce an energy-aware framework for task scheduling in virtual clusters. The framework consists of a task resource requirements prediction module, an energy estimate module, and a scheduler with a task buffer. Secondly, based on this framework, we propose a virtual machine power efficiency-aware greedy scheduling algorithm (VPEGS). As a heuristic algorithm, VPEGS estimates task energy by considering factors including task resource demands, VM power efficiency, and server workload before scheduling tasks in a greedy manner. We simulated a heterogeneous VM cluster and conducted experiments to evaluate the effectiveness of VPEGS. Simulation results show that VPEGS effectively reduced total energy consumption by more than 20% without producing large scheduling overheads. With a similar heuristic ideology, it outperformed Min-Min and RASA with respect to energy saving by about 29% and 28%, respectively.
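
    A hedged sketch of a greedy, power-efficiency-aware placement in the spirit of the description above follows: for each task, estimate the energy each candidate VM would add and pick the cheapest. The energy model (task CPU demand divided by VM power efficiency, scaled by current host load) and all names are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative greedy, energy-aware task placement (not the actual VPEGS algorithm).
def estimated_energy(task_mi, vm_power_eff_mips_per_w, host_load_factor):
    """Rough energy proxy: work over power efficiency, inflated by host load."""
    return (task_mi / vm_power_eff_mips_per_w) * (1.0 + host_load_factor)

def greedy_schedule(tasks_mi, vms):
    """vms: list of dicts with 'power_eff' (MIPS/W) and 'host_load' (0..1)."""
    assignment = []
    for task in tasks_mi:
        costs = [estimated_energy(task, vm["power_eff"], vm["host_load"]) for vm in vms]
        best = min(range(len(vms)), key=costs.__getitem__)
        assignment.append(best)
        vms[best]["host_load"] += 0.05  # crude load update after each placement
    return assignment

vms = [{"power_eff": 8.0, "host_load": 0.2}, {"power_eff": 5.0, "host_load": 0.1}]
print(greedy_schedule([2000, 4000, 1000], vms))  # e.g. -> [0, 0, 0]
```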

  7. Impact of workload induced stress on the professional effectiveness

    PROF EKWUEME

    aids, evaluation of students, learning motivation, classroom management, supervision of co-curricular activities and ... of workload. KEYWORDS: Stress, Workload, Professional effectiveness, Teachers, Cross River State ... determining the relationship between workload ... adapted to cope with the stress that could have.

  8. Patient Safety Incidents and Nursing Workload 1

    Carlesi, Katya Cuadros; Padilha, Kátia Grillo; Toffoletto, Maria Cecília; Henriquez-Roldán, Carlos; Juan, Monica Andrea Canales

    2017-01-01

    ABSTRACT Objective: to identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. Method: quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28), and for the other services we used the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. Results: 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found between the workload variables (r = 0.9611 to r = 0.9919) and the rate of falls (r = 0.8770). The medication error rates, mechanical containment incidents and self-removal of invasive devices were not correlated with the workload. Conclusions: the workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload. PMID:28403334

  9. Patient Safety Incidents and Nursing Workload

    Katya Cuadros Carlesi

    ABSTRACT Objective: to identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. Method: quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28), and for the other services we used the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. Results: 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found between the workload variables (r = 0.9611 to r = 0.9919) and the rate of falls (r = 0.8770). The medication error rates, mechanical containment incidents and self-removal of invasive devices were not correlated with the workload. Conclusions: the workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload.

  10. Dynamic Extensions of Batch Systems with Cloud Resources

    Hauth, T; Quast, G; Büge, V; Scheurer, A; Kunze, M; Baun, C

    2011-01-01

    Compute clusters use Portable Batch Systems (PBS) to distribute workload among individual cluster machines. To extend standard batch systems to Cloud infrastructures, a new service monitors the number of queued jobs and keeps track of the price of available resources. This meta-scheduler dynamically adapts the number of Cloud worker nodes according to the requirement profile. Two different worker node topologies are presented and tested on the Amazon EC2 Cloud service.
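
    A minimal sketch of such a meta-scheduler decision step is given below; the threshold-based policy, parameter names and numbers are assumptions for illustration and do not reproduce the system described in the record.

```python
# Hedged sketch: decide how many cloud worker nodes to keep, from queue depth and price.
def desired_cloud_nodes(queued_jobs, jobs_per_node, spot_price, max_price, max_nodes):
    if spot_price > max_price:                 # too expensive: drain the cloud extension
        return 0
    needed = -(-queued_jobs // jobs_per_node)  # ceiling division
    return min(max_nodes, needed)

def scaling_action(current_nodes, target_nodes):
    if target_nodes > current_nodes:
        return f"boot {target_nodes - current_nodes} cloud worker node(s)"
    if target_nodes < current_nodes:
        return f"terminate {current_nodes - target_nodes} idle cloud worker node(s)"
    return "no change"

target = desired_cloud_nodes(queued_jobs=37, jobs_per_node=4,
                             spot_price=0.08, max_price=0.10, max_nodes=20)
print(scaling_action(5, target))  # -> "boot 5 cloud worker node(s)"
```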

  11. Mental workload in decision and control

    Sheridan, T. B.

    1979-01-01

    This paper briefly reviews the problems of defining and measuring the 'mental workload' of aircraft pilots and other human operators of complex dynamic systems. Of the alternative approaches the author indicates a clear preference for the use of subjective scaling. Some recent experiments from MIT and elsewhere are described which utilize subjective mental workload scales in conjunction with human decision and control tasks in the laboratory. Finally a new three-dimensional mental workload rating scale, under current development for use by IFR aircraft pilots, is presented.

  12. State of science: mental workload in ergonomics.

    Young, Mark S; Brookhuis, Karel A; Wickens, Christopher D; Hancock, Peter A

    2015-01-01

    Mental workload (MWL) is one of the most widely used concepts in ergonomics and human factors and represents a topic of increasing importance. Since modern technology in many working environments imposes ever more cognitive demands upon operators while physical demands diminish, understanding how MWL impinges on performance is increasingly critical. Yet, MWL is also one of the most nebulous concepts, with numerous definitions and dimensions associated with it. Moreover, MWL research has had a tendency to focus on complex, often safety-critical systems (e.g. transport, process control). Here we provide a general overview of the current state of affairs regarding the understanding, measurement and application of MWL in the design of complex systems over the last three decades. We conclude by discussing contemporary challenges for applied research, such as the interaction between cognitive workload and physical workload, and the quantification of workload 'redlines' which specify when operators are approaching or exceeding their performance tolerances.

  13. [Nursing workloads and working conditions: integrative review].

    Schmoeller, Roseli; Trindade, Letícia de Lima; Neis, Márcia Binder; Gelbcke, Francine Lima; de Pires, Denise Elvira Pires

    2011-06-01

    This study reviews theoretical production concerning workloads and working conditions for nurses. For that, an integrative review was carried out using scientific articles, theses and dissertations indexed in two Brazilian databases, the Virtual Health Care Library (Biblioteca Virtual de Saúde) and the Digital Database of Dissertations (Banco Digital de Teses), over the last ten years. From 132 identified studies, 27 were selected. Results indicate that workloads are responsible for professional weariness and affect the occurrence of work accidents and health problems. In order to keep workloads adequate, studies indicate strategies such as having an adequate number of employees, continuing education, and better working conditions. The challenge is to continue research that reveals more precisely the relationships between workloads, working conditions, and the health of the nursing team.

  14. ATLAS Cloud R&D

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  15. Community Cloud Computing

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  16. Adaptation in cloud resource configuration:a survey

    Hummaida, Abdul R.; Paton, Norman W.; Sakellariou, Rizos

    2016-01-01

    With increased demand for computing resources at a lower cost by end-users, cloud infrastructure providers need to find ways to protect their revenue. To achieve this, infrastructure providers aim to increase revenue and lower operational costs. A promising approach to addressing these challenges is to modify the assignment of resources to workloads. This can be used, for example, to consolidate existing workloads; the new capability can be used to serve new requests or alternatively unused r...

  17. Hidden in the Clouds: New Ideas in Cloud Computing

    CERN. Geneva

    2013-01-01

    Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the fullest degree. Please bring questions and opinions, and be ready to share both!   Bio: S...

  18. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-pr...

  19. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    Kazi Masudul Alam

    2015-09-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs to capture surrounding information using the in-vehicle and Smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects the cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems.

  20. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor-hub to capture surrounding information using the in-vehicle and Smartphone sensors and later publish them for the consumers. A cloud centric cyber-physical system better describes the SIoV model where physical sensing-actuation process affects the cloud based service sharing or computation in a feedback loop or vice versa. The cyber based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems. PMID:26389905

  1. DIRAC pilot framework and the DIRAC Workload Management System

    Casajus, Adrian; Graciani, Ricardo; Paterson, Stuart; Tsaregorodtsev, Andrei

    2010-01-01

    DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot Jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, Pilot Jobs make it possible to delay the scheduling decision until the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC Workload Management System provides a single scheduling mechanism for jobs with very different profiles. To achieve an overall optimisation, it organizes pending jobs in task queues, both for individual users and for production activities. Task queues are created from jobs having similar requirements. Following the VO policy, a priority is assigned to each task queue. Pilot submission and subsequent job matching are based on these priorities, following a statistical approach.
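
    One simple reading of that statistical, priority-driven matching is sketched below: each task queue's chance of being matched is proportional to its priority. The function and queue names are hypothetical and this is not DIRAC code; the actual matching logic may differ.

```python
# Hedged sketch: pick a task queue with probability proportional to its priority.
import random

def pick_task_queue(task_queues):
    """task_queues: dict mapping queue name -> priority (non-negative number)."""
    names = list(task_queues)
    weights = [task_queues[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

queues = {"user_short": 1.0, "user_long": 2.0, "production_mc": 7.0}
print(pick_task_queue(queues))  # production jobs are matched most often
```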

  2. DIRAC pilot framework and the DIRAC Workload Management System

    Casajus, Adrian; Graciani, Ricardo [Universitat de Barcelona (Spain); Paterson, Stuart [CERN (Switzerland); Tsaregorodtsev, Andrei, E-mail: adria@ecm.ub.e, E-mail: graciani@ecm.ub.e, E-mail: stuart.paterson@cern.c, E-mail: atsareg@in2p3.f [CPPM Marseille (France)

    2010-04-01

    DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot Jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, Pilot Jobs make it possible to delay the scheduling decision until the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC Workload Management System provides a single scheduling mechanism for jobs with very different profiles. To achieve an overall optimisation, it organizes pending jobs in task queues, both for individual users and for production activities. Task queues are created from jobs having similar requirements. Following the VO policy, a priority is assigned to each task queue. Pilot submission and subsequent job matching are based on these priorities, following a statistical approach.

  3. Integration Of PanDA Workload Management System With Supercomputers

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  4. Exploiting Virtualization and Cloud Computing in ATLAS

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  5. Have the 'black clouds' cleared with new residency programme regulations?

    Schissler, A J; Einstein, A J

    2016-06-01

    For decades, residents believed to work harder have been referred to as having a 'black cloud'. Residency training programmes recently instituted changes to improve physician wellness and achieve comparable clinical workload. All Internal Medicine residents in the internship class of 2014 at Columbia were surveyed to assess for the ongoing presence of 'black cloud' trainees. While some residents are still thought to have this designation, they did not have a greater workload when compared to their peers. © 2016 Royal Australasian College of Physicians.

  6. Continuous measures of situation awareness and workload

    Droeivoldsmo, Asgeir; Skraaning, Gyrd jr.; Sverrbo, Mona; Dalen, Joergen; Grimstad, Tone; Andresen, Gisle

    1998-03-01

    This report presents methods for continuous measures for Situation Awareness and Workload. The objective has been to identify, develop and test the new measures, and compare them to instruments that require interruptions of scenarios. The new measures are: (1) the Visual Indicator of Situation Awareness (VISA); where Situation Awareness is scored from predefined areas of visual interest critical for solving scenarios. Visual monitoring of areas was recorded by eye-movement tracking. (2) Workload scores reflected by Extended Dwell Time (EDT) and the operator Activity Level. EDT was calculated from eye-movement data files, and the activity level was estimated from simulator logs. Using experimental data from the 1996 CASH NRC Alarm study and the 1997 Human Error Analysis Project/ Human-Centred Automation study, the new measurement techniques have been tested and evaluated on a preliminary basis. The results showed promising relationships between the new continuous measures of situation awareness and workload, and established instruments based upon scenario interruptions. (author)

  7. Assessing physician job satisfaction and mental workload.

    Boultinghouse, Oscar W; Hammack, Glenn G; Vo, Alexander H; Dittmar, Mary Lynne

    2007-12-01

    Physician job satisfaction and mental workload were evaluated in a pilot study of five physicians engaged in a telemedicine practice at The University of Texas Medical Branch at Galveston Electronic Health Network. Several previous studies have examined physician satisfaction with specific telemedicine applications; however, few have attempted to identify the underlying factors that contribute to physician satisfaction or lack thereof. One factor that has been found to affect well-being and functionality in the workplace, particularly with regard to human interaction with complex systems and tasks as seen in telemedicine, is mental workload. Workload is generally defined as the "cost" to a person for performing a complex task or tasks; however, prior to this study, it was unexplored as a variable that influences physician satisfaction. Two measures of job satisfaction were used: The Job Descriptive Index and the Job In General scales. Mental workload was evaluated by means of the National Aeronautics and Space Administration Task Load Index. The measures were administered by means of Web-based surveys and were given twice over a 6-month period. Nonparametric statistical analyses revealed that physician job satisfaction was generally high relative to that of the general population and other professionals. Mental workload scores associated with the practice of telemedicine in this environment are also high, and appeared stable over time. In addition, they are commensurate with scores found in individuals practicing tasks with elevated information-processing demands, such as quality control engineers and air traffic controllers. No relationship was found between the measures of job satisfaction and mental workload.

  8. Cloud Governance

    Berthing, Hans Henrik

    This presentation describes the benefits and value of using Cloud Computing. It also draws on results from a number of international ISACA analyses of Cloud Computing.

  9. Dynamic workload balancing of parallel applications with user-level scheduling on the Grid

    Korkhov, Vladimir V; Krzhizhanovskaya, Valeria V

    2009-01-01

    This paper suggests a hybrid resource management approach for efficient parallel distributed computing on the Grid. It operates on both application and system levels, combining user-level job scheduling with dynamic workload balancing algorithm that automatically adapts a parallel application to the heterogeneous resources, based on the actual resource parameters and estimated requirements of the application. The hybrid environment and the algorithm for automated load balancing are described, the influence of resource heterogeneity level is measured, and the speedup achieved with this technique is demonstrated for different types of applications and resources.
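
    The core idea of adapting the work distribution to measured resource parameters can be illustrated with a small sketch: split the total workload among workers in proportion to their effective speed, so faster nodes receive larger chunks. The proportional-split rule, the speed metric and the names below are assumptions for illustration rather than the paper's actual algorithm.

```python
# Hedged sketch: proportional workload partitioning across heterogeneous workers.
def partition_workload(total_work_units, worker_speeds):
    """Return per-worker work shares proportional to each worker's measured speed."""
    total_speed = sum(worker_speeds)
    shares = [round(total_work_units * s / total_speed) for s in worker_speeds]
    shares[-1] += total_work_units - sum(shares)  # absorb rounding error on the last worker
    return shares

# Example: 1000 grid cells split over three nodes benchmarked at 1.0x, 2.5x, 4.0x speed.
print(partition_workload(1000, [1.0, 2.5, 4.0]))  # -> [133, 333, 534]
```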

  10. Resource Management in Mobile Cloud Computing

    Andrei IONESCU

    2015-01-01

    Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, involving vastly different architectures, and the process is outside the scope of human users. Using these resources from applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  11. Reducing feedback requirements of workload control

    Henrich, Peter; Land, Martin; van der Zee, Durk; Gaalman, Gerard

    2004-01-01

    The workload control concept is known as a robust shop floor control concept. It is especially suited for the dynamic environment of small- and medium-sized enterprises (SMEs) within the make-to-order sector. Before orders are released to the shop floor, they are collected in an ‘order pool’. To

  12. Workload Management Strategies for Online Educators

    Crews, Tena B.; Wilkinson, Kelly; Hemby, K. Virginia; McCannon, Melinda; Wiedmaier, Cheryl

    2008-01-01

    With increased use of online education, both students and instructors are adapting to the online environment. Online educators must adjust to the change in responsibilities required to teach online, as it is quite intensive during the designing, teaching, and revising stages. The purpose of this study is to examine and update workload management…

  13. CHROMagar Orientation Medium Reduces Urine Culture Workload

    Manickam, Kanchana; Karlowsky, James A.; Adam, Heather; Lagacé-Wiens, Philippe R. S.; Rendina, Assunta; Pang, Paulette; Murray, Brenda-Lee

    2013-01-01

    Microbiology laboratories continually strive to streamline and improve their urine culture algorithms because of the high volumes of urine specimens they receive and the modest numbers of those specimens that are ultimately considered clinically significant. In the current study, we quantitatively measured the impact of the introduction of CHROMagar Orientation (CO) medium into routine use in two hospital laboratories and compared it to conventional culture on blood and MacConkey agars. Based on data extracted from our Laboratory Information System from 2006 to 2011, the use of CO medium resulted in a 28% reduction in workload for additional procedures such as Gram stains, subcultures, identification panels, agglutination tests, and biochemical tests. The average number of workload units (one workload unit equals 1 min of hands-on labor) per urine specimen was significantly reduced (P < 0.0001; 95% confidence interval [CI], 0.5326 to 1.047) from 2.67 in 2006 (preimplementation of CO medium) to 1.88 in 2011 (postimplementation of CO medium). We conclude that the use of CO medium streamlined the urine culture process and increased bench throughput by reducing both workload and turnaround time in our laboratories. PMID:23363839

  14. Dynamic workload peak detection for slack management

    Milutinovic, A.; Goossens, Kees; Smit, Gerardus Johannes Maria; Kuper, Jan; Kuper, J.

    2009-01-01

    This paper presents an analytical study of dynamism and of the possibilities for slack exploitation through dynamic power management. We introduce a specific workload decomposition method for the work required by (streaming) applications processing data tokens (e.g. video frames) with work behaviour patterns

  15. Perceived Time as a Measure of Mental Workload

    Hertzum, Morten; Holmegaard, Kristin Due

    2013-01-01

    The mental workload imposed by systems is important to their operation and usability. Consequently, researchers and practitioners need reliable, valid, and easy-to-administer methods for measuring mental workload. The ratio of perceived time to clock time appears to be such a method, yet mental...... is a performance-related rather than task-related dimension of mental workload. We find a higher perceived time ratio for timed than untimed tasks. According to subjective workload ratings and pupil-diameter measurements the timed tasks impose higher mental workload. This finding contradicts the prospective...... paradigm, which asserts that perceived time decreases with increasing mental workload. We also find a higher perceived time ratio for solved than unsolved tasks, while subjective workload ratings indicate lower mental workload for the solved tasks. This finding shows that the relationship between...

  16. Trust Model to Enhance Security and Interoperability of Cloud Environment

    Li, Wenjuan; Ping, Lingdi

    Trust is one of the most important means to improve security and enable interoperability of current heterogeneous independent cloud platforms. This paper first analyzes several trust models used in large and distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based. It places one cloud provider's resource nodes in the same domain and sets a trust agent. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for them. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.

  17. Characterization and Architectural Implications of Big Data Workloads

    Wang, Lei; Zhan, Jianfeng; Jia, Zhen; Han, Rui

    2015-01-01

    Big data areas are expanding in a fast way in terms of increasing workloads and runtime systems, and this situation imposes a serious challenge to workload characterization, which is the foundation of innovative system and architecture design. The previous major efforts on big data benchmarking either propose a comprehensive but large set of workloads, or only select a few workloads according to so-called popularity, which may lead to partial or even biased observations. In this paper, o...

  18. Cloud Provider Capacity Augmentation Through Automated Resource Bartering

    Gohera, Syeda ZarAfshan; Bloodsworth, Peter; Rasool, Raihan Ur; McClatchey, Richard

    2018-01-01

    Growing interest in Cloud Computing places a heavy workload on cloud providers which is becoming increasingly difficult for them to manage with their primary datacenter infrastructures. Resource limitations can make providers vulnerable to significant reputational damage and it often forces customers to select services from the larger, more established companies, sometimes at a higher price. Funding limitations, however, commonly prevent emerging and even established providers from making con...

  19. Longwave indirect effect of mineral dusts on ice clouds

    Q. Min

    2010-08-01

    In addition to microphysical changes in clouds, changes in the nucleation processes of ice clouds due to aerosols would result in substantial changes in cloud top temperature, as mildly supercooled clouds are glaciated through heterogeneous nucleation processes. Measurements from multiple sensors on multiple observing platforms over the Atlantic Ocean show that the cloud effective temperature increases with mineral dust loading with a slope of +3.06 °C per unit aerosol optical depth. The macrophysical changes in ice cloud top distributions as a consequence of mineral dust-cloud interaction exert a strong cooling effect (up to 16 Wm−2) of thermal infrared radiation on cloud systems. Induced changes of ice particle size by mineral dusts influence cloud emissivity and play a minor role in modulating the outgoing longwave radiation for optically thin ice clouds. Such a strong cooling forcing of thermal infrared radiation would have significant impacts on cloud systems and subsequently on climate.

  20. Managing Teacher Workload: Work-Life Balance and Wellbeing

    Bubb, Sara; Earley, Peter

    2004-01-01

    This book is divided into three sections. In the First Section, entitled "Wellbeing and Workload", the authors examine teacher workload and how teachers spend their time. Chapter 1 focuses on what the causes and effects of excessive workload are, especially in relation to wellbeing, stress and, crucially, recruitment and retention?…

  1. Workload Measurement in Human Autonomy Teaming: How and Why?

    Shively, Jay

    2016-01-01

    This is an invited talk on autonomy and workload for an AFRL Blue Sky workshop sponsored by the Florida Institute for Human Machine Studies. The presentation reviews various metrics of workload and how to move forward with measuring workload in a human-autonomy teaming environment.

  2. Workload based order acceptance in job shop environments

    Ebben, Mark; Hans, Elias W.; Olde Weghuis, F.M.; Olde Weghuis, F.M.

    2005-01-01

    In practice, order acceptance and production planning are often functionally separated. As a result, order acceptance decisions are made without considering the actual workload in the production system, or by only regarding the aggregate workload. We investigate the importance of a good workload

  3. Integration of Panda Workload Management System with supercomputers

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
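
    A hedged illustration of the light-weight MPI wrapper idea follows; it is not the actual PanDA pilot code, and the payload script name is a placeholder. Each MPI rank simply launches one independent, single-threaded payload in its own working directory.

```python
# Sketch of an MPI wrapper that fans out single-threaded payloads, one per rank.
# Requires mpi4py; ../run_payload.sh is a hypothetical stand-in for the real job script.
import os
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

workdir = f"work_{rank:05d}"            # private working directory per rank
os.makedirs(workdir, exist_ok=True)
result = subprocess.run(["../run_payload.sh", f"--seed={rank}"],
                        cwd=workdir, capture_output=True, text=True)

# Collect exit codes on rank 0 so the wrapper can report a single overall status.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    print("payload exit codes:", codes)
```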

  4. Cloud Computing

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  5. Defining Inter-Cloud Architecture for Interoperability and Integration

    Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.J.; Laat, C. de

    2012-01-01

    This paper presents on-going research to develop the Inter-Cloud Architecture that should address problems in multi-provider multi-domain heterogeneous Cloud based applications integration and interoperability, including integration and interoperability with legacy infrastructure services. Cloud

  6. Combining Quick-Turnaround and Batch Workloads at Scale

    Matthews, Gregory A.

    2012-01-01

    NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node IB cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.

  7. Shift manager workload assessment - A case study

    Berntson, K.; Kozak, A.; Malcolm, J. S.

    2006-01-01

    In early 2003, Bruce Power restarted two of its previously laid up units in the Bruce A generating station, Units 3 and 4. However, due to challenges relating to the availability of personnel with active Shift Manager licenses, an alternate shift structure was proposed to ensure the safe operation of the station. This alternate structure resulted in a redistribution of responsibility, and a need to assess the resulting changes in workload. Atomic Energy of Canada Limited was contracted to perform a workload assessment based on the new shift structure, and to provide recommendations, if necessary, to ensure Shift Managers had sufficient resources available to perform their required duties. This paper discusses the performance of that assessment, and lessons learned as a result of the work performed during the Restart project. (authors)

  8. Exploring Individual Differences in Workload Assessment

    2014-12-26

    recall their workload accurately. However, it has been shown that the bias shown in subjective ratings can actually provide insight into significant...or subconsciously and embark on load shedding, postponing a task to permit another decision action to be completed in a required timeframe (Smith...or slow heart rate or unique physiological measure will not add unnecessary bias to the data. Individual baseline measures are typically taken at the

  9. Workload, flow, and telepresence during teleoperation

    Draper, J.V. [Oak Ridge National Lab., TN (United States); Blair, L.M. [Human Machine Interfaces, Inc., Knoxville, TN (United States)

    1996-04-01

    There is much speculation about the relations among workload, flow, telepresence, and performance during teleoperation, but few data that provide evidence concerning them. This paper presents results of an investigation conducted during completion of a pipe cutting task using a teleoperator at ORNL. Results show support for the hypothesis that telepresence is related to expenditure of attentional resources, and some support for the hypothesis that telepresence is related to flow. The discussion examines the results from an attentional resources perspective on teleoperation.

  10. Identifying Dwarfs Workloads in Big Data Analytics

    Gao, Wanling; Luo, Chunjie; Zhan, Jianfeng; Ye, Hainan; He, Xiwen; Wang, Lei; Zhu, Yuqing; Tian, Xinhui

    2015-01-01

    Big data benchmarking is particularly important and provides applicable yardsticks for evaluating booming big data systems. However, wide coverage and great complexity of big data computing impose big challenges on big data benchmarking. How can we construct a benchmark suite using a minimum set of units of computation to represent diversity of big data analytics workloads? Big data dwarfs are abstractions of extracting frequently appearing operations in big data computing. One dwarf represen...

  11. Measurement of Workload: Physics, Psychophysics, and Metaphysics

    Gopher, D.

    1984-01-01

    The present paper reviews the results of two experiments in which workload analysis was conducted based upon performance measures, brain evoked potentials and magnitude estimations of subjective load. The three types of measures were jointly applied to the description of the behavior of subjects in a wide battery of experimental tasks. Data analysis shows both instances of association and dissociation between types of measures. A general conceptual framework and methodological guidelines are proposed to account for these findings.

  12. Forecasting Workload for Defense Logistics Agency Distribution

    2014-12-01

    [Abstract not available in this excerpt; the record's front matter references Defense Logistics Agency Distribution workload and monthly Distribution Depot (DD) sales for the four primary supply chains (Avn, Land, Maritime, Ind HW), modeled with linear regression.]

  13. Workload, flow, and telepresence during teleoperation

    Draper, J.V.; Blair, L.M.

    1996-01-01

    There is much speculation about the relations among workload, flow, telepresence, and performance during teleoperation, but few data that provide evidence concerning them. This paper presents results of an investigation conducted during completion of a pipe cutting task using a teleoperator at ORNL. Results show support for the hypothesis that telepresence is related to expenditure of attentional resources, and some support for the hypothesis that telepresence is related to flow. The discussion examines the results from an attentional resources perspective on teleoperation.

  14. Survey of Methods to Assess Workload

    1979-08-01

    thesis study which had to do with the effect of binaural beats upon performance (2) found out there was a subjectively experienced quality of beats ...were forced to conclude that the neural mechanism by which binaural beats influenced performance is not open to correct subjective evaluation. In terms of...methods for developing indices of pilot workload, FAA Report (FAA-AN-77-15), July 1977. 2. R. E. The effect of binaural beats on performance, J

  15. Relationship between workload and mind-wandering in simulated driving.

    Yuyu Zhang

    Full Text Available Mental workload and mind-wandering are highly related to driving safety. This study investigated the relationship between mental workload and mind-wandering while driving. Participants (N = 40) were asked to perform a car-following task in a driving simulator, and to report whether they had experienced mind-wandering upon hearing a tone. After driving, participants reported their workload using the NASA Task Load Index (TLX). Results revealed a relationship between workload and mind-wandering from two different perspectives. First, there was a negative correlation between workload and mind-wandering (r = -0.459, p < 0.01) across individuals. Second, from a temporal perspective, workload and mind-wandering frequency increased significantly over task time and were positively correlated. Together, these findings contribute to understanding the roles of workload and mind-wandering in driving.

  16. Pilot Workload and Speech Analysis: A Preliminary Investigation

    Bittner, Rachel M.; Begault, Durand R.; Christopher, Bonny R.

    2013-01-01

    Prior research has questioned the effectiveness of speech analysis for measuring the stress, workload, truthfulness, or emotional state of a talker. The question remains regarding the utility of speech analysis for restricted vocabularies such as those used in aviation communications. A part-task experiment was conducted in which participants performed Air Traffic Control read-backs in different workload environments. Participants' subjective workload and the speech qualities of fundamental frequency (F0) and articulation rate were evaluated. A significant increase in subjective workload rating was found for high-workload segments. F0 was found to be significantly higher during high workload, while articulation rates were found to be significantly slower. No correlation was found between subjective workload and F0 or articulation rate.

  17. Performance of different radiotherapy workload models

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models with a mean |observed - predicted| of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would have uses for patient scheduling, and describing workloads and case complexity.
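
    The accuracy figures quoted above, mean |observed - predicted| and the share of treatment times predicted within 5 minutes, can be computed from raw timing data along the following lines; the arrays are placeholders, not the study data.

```python
# Hedged sketch: evaluating a workload model's predicted treatment times
# against observed times (minutes). The numbers are illustrative only.
import numpy as np

observed = np.array([12.0, 18.5, 9.0, 25.0, 14.5])    # placeholder data
predicted = np.array([10.5, 20.0, 9.5, 21.0, 15.0])   # e.g. a BTE-style model's output

abs_err = np.abs(observed - predicted)
mean_abs_err = abs_err.mean()                 # analogous to mean |observed - predicted|
within_5_min = (abs_err <= 5.0).mean() * 100  # % of treatments predicted within 5 min

print(f"mean |obs - pred| = {mean_abs_err:.2f} min, within 5 min: {within_5_min:.1f}%")
```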

  18. Redundant VoD Streaming Service in a Private Cloud: Availability Modeling and Sensitivity Analysis

    Rosangela Maria De Melo; Maria Clara Bezerra; Jamilson Dantas; Rubens Matos; Ivanildo José De Melo Filho; Paulo Maciel

    2014-01-01

    For several years cloud computing has been generating considerable debate and interest within IT corporations. Since cloud computing environments provide storage and processing systems that are adaptable, efficient, and straightforward, thereby enabling rapid infrastructure modifications to be made according to constantly varying workloads, organizations of every size and type are migrating to web-based cloud supported solutions. Due to the advantages of the pay-per-use ...

  19. Cloud Cover

    Schaffhauser, Dian

    2012-01-01

    This article features a major statewide initiative in North Carolina that is showing how a consortium model can minimize risks for districts and help them exploit the advantages of cloud computing. Edgecombe County Public Schools in Tarboro, North Carolina, intends to exploit a major cloud initiative being refined in the state and involving every…

  20. Cloud Control

    Ramaswami, Rama; Raths, David; Schaffhauser, Dian; Skelly, Jennifer

    2011-01-01

    For many IT shops, the cloud offers an opportunity not only to improve operations but also to align themselves more closely with their schools' strategic goals. The cloud is not a plug-and-play proposition, however--it is a complex, evolving landscape that demands one's full attention. Security, privacy, contracts, and contingency planning are all…

  1. Eleven quick tips for architecting biomedical informatics workflows with cloud computing.

    Cole, Brian S; Moore, Jason H

    2018-03-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.

  2. Evolution of the ATLAS PanDA workload management system for exascale computational science

    Maeno, T; Klimentov, A; Panitkin, S; Schovancova, J; Wenaus, T; Yu, D; De, K; Nilsson, P; Oleynik, D; Petrosyan, A; Vaniachine, A

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  3. Cloud Computing Fundamentals

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  4. ATLAS computing activities and developments in the Italian Grid cloud

    Rinaldi, L; Ciocca, C; K, M; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to the evolution of the ATLAS Computing Model.

  5. ATLAS Cloud Computing R&D project

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  6. Cloud Computing

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  7. Radionuclide exercise ventriculography and levels of workload

    Wynchank, S.

    1982-01-01

    The wealth of useful information made available from the utilization of radionuclide cardiological investigations by non-invasive means is outlined and reasons for investigating results obtained under conditions of increased heart workload are explained. The lack of an accepted protocol for the determination of exercise levels is noted. A format for obtaining increasing heart loads dependent on increasing pulse rate is offered, with justification. Exercise radionuclide ventriculography examinations can be conducted which are simple, reproducible and allow appropriate levels of stress in patients who can benefit from such investigations

  8. Fatigue and workload among Danish fishermen

    Remmen, Line Nørgaard; Herttua, Kimmo; Riss-Jepsen, Jørgen

    2017-01-01

    . Highest levels of fatigue were observed among fishermen at Danish seiners (mean 10.21), and fatigue scores decreased with more days at sea. However, none of these results were significant. Adjusted analyses showed that physical workload was significantly related to general fatigue (b = 0.20, 95% CI: 0...... was additionally significantly associated to the levels of physical and mental fatigue. Fishermen had a lower average score for all fatigue dimensions compared to those seen in general Danish working population. Prospective studies are required to assess whether the identified associations are causal....

  9. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which results in workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  10. Cloud Computing

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...... the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production......), for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  11. Mobile Clouds

    Fitzek, Frank; Katz, Marcos

    A mobile cloud is a cooperative arrangement of dynamically connected communication nodes sharing opportunistic resources. In this book, authors provide a comprehensive and motivating overview of this rapidly emerging technology. The book explores how distributed resources can be shared by mobile...... users in very different ways and for various purposes. The book provides many stimulating examples of resource-sharing applications. Enabling technologies for mobile clouds are also discussed, highlighting the key role of network coding. Mobile clouds have the potential to enhance communications...... performance, improve utilization of resources and create flexible platforms to share resources in very novel ways. Energy efficient aspects of mobile clouds are discussed in detail, showing how being cooperative can bring mobile users significant energy saving. The book presents and discusses multiple...

  12. Academic workload management towards learning, components of academic work

    Ocvirk, Aleksandra; Trunk Širca, Nada

    2013-01-01

    This paper deals with attributing time value to academic workload from the point of view of an HEI, management of teaching and an individual. We have conducted a qualitative study aimed at analysing documents on academic workload in terms of its definition, and at analysing the attribution of time value to components of academic work in relation to the proportion of workload devoted to teaching in the sense of ensuring quality and effectiveness of learning, and in relation to financial implic...

  13. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes, depending on the current load of each machine in the heterogeneous computing system.
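
    A minimal sketch of this idea, assuming the scheduler can see each machine's currently available capacity, might size the subtasks as follows (machine names and load figures are hypothetical):

```python
# Hedged sketch: split a workload of N homogeneous units into per-machine
# subtasks whose sizes reflect each machine's currently available capacity.
def decompose_workload(total_units, available_capacity):
    """available_capacity maps machine name -> free capacity (arbitrary units)."""
    total_capacity = sum(available_capacity.values())
    shares = {}
    assigned = 0
    for machine, cap in available_capacity.items():
        units = int(total_units * cap / total_capacity)
        shares[machine] = units
        assigned += units
    # Hand any rounding remainder to the machine with the most free capacity.
    best = max(available_capacity, key=available_capacity.get)
    shares[best] += total_units - assigned
    return shares

# Hypothetical snapshot of current load on a heterogeneous system.
print(decompose_workload(1000, {"nodeA": 8.0, "nodeB": 2.0, "nodeC": 5.0}))
```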

  14. Cloud Computing Concepts for Academic Collaboration

    K.K. Jabbour

    2013-05-01

    Full Text Available The aim of this paper is to explain how cloud computing technologies improve academic collaboration. To accomplish that, we have to explore the current trend of the global computer network field. During the past few years, technology has evolved in many ways; many valuable web applications and services have been introduced to internet users. Social networking, synchronous/asynchronous communication, on-line video conferencing, and wikis are just a few examples of the web technologies that have altered the way people interact nowadays. By utilizing some of the latest web tools and services and combining them with the most recent semantic Cloud Computing techniques, a wide and growing array of technology services and applications is provided, highly specialized or distinctive to individuals or to educational campuses. Therefore, cloud computing can facilitate a new way of worldwide academic collaboration and introduce students to new and different ways that can help them manage massive workloads.

  15. Heterogeneous reactors

    Moura Neto, C. de; Nair, R.P.K.

    1979-08-01

    The microscopic study of a cell is meant for the determination of the infinite multiplication factor of the cell, which is given by the four-factor formula k∞ = ηεpf, written out below. The analysis of a homogeneous reactor is similar to that of a heterogeneous reactor, but each factor of the four-factor formula cannot be calculated by the formulas developed in the case of a homogeneous reactor. A great number of methods have been developed for the calculation of heterogeneous reactors, and some of them are discussed. (Author) [pt
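
    In conventional reactor-physics notation the four-factor formula reads:

```latex
% Four-factor formula for the infinite multiplication factor
k_\infty = \eta \, \varepsilon \, p \, f
% where
%   \eta        -- neutrons produced per neutron absorbed in the fuel (reproduction factor)
%   \varepsilon -- fast fission factor
%   p           -- resonance escape probability
%   f           -- thermal utilization factor
```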

  16. Individual differences and subjective workload assessment - Comparing pilots to nonpilots

    Vidulich, Michael A.; Pandit, Parimal

    1987-01-01

    Results by two groups of subjects, pilots and nonpilots, for two subjective workload assessment techniques (the SWAT and NASA-TLX tests) intended to evaluate individual differences in the perception and reporting of subjective workload are compared with results obtained for several traditional personality tests. The personality tests were found to discriminate between the groups while the workload tests did not. It is concluded that although the workload tests may provide useful information with respect to the interaction between tasks and personality, they are not effective as pure tests of individual differences.

  17. How the workload impacts on cognitive cooperation: A pilot study.

    Sciaraffa, Nicolina; Borghini, Gianluca; Arico, Pietro; Di Flumeri, Gianluca; Toppi, Jlenia; Colosimo, Alfredo; Bezerianos, Anastatios; Thakor, Nitish V; Babiloni, Fabio

    2017-07-01

    Cooperation degradation can be seen as one of the main causes of human errors. Poor cooperation could arise from aberrant mental processes, such as mental overload, that negatively affect the user's performance. Using different levels of difficulty in a cooperative task, we combined behavioural, subjective and neurophysiological data with the aim to i) quantify the mental workload under which the crew was operating, ii) evaluate the degree of their cooperation, and iii) assess the impact of the workload demands on the cooperation levels. The combination of such data showed that high workload demand impacted significantly on the performance, workload perception, and degree of cooperation.

  18. Measuring workload in collaborative contexts: trait versus state perspectives.

    Helton, William S; Funke, Gregory J; Knott, Benjamin A

    2014-03-01

    In the present study, we explored the state versus trait aspects of measures of task and team workload in a disaster simulation. There is often a need to assess workload in both individual and collaborative settings. Researchers in this field often use the NASA Task Load Index (NASA-TLX) as a global measure of workload by aggregating the NASA-TLX's component items. Using this practice, one may overlook the distinction between traits and states. Fifteen dyadic teams (11 inexperienced, 4 experienced) completed five sessions of a tsunami disaster simulator. After every session, individuals completed a modified version of the NASA-TLX that included team workload measures. We then examined the workload items from a between-subjects and a within-subjects perspective. Between-subjects and within-subjects correlations among the items indicated that the workload items are more independent within subjects (as states) than between subjects (as traits). Correlations between the workload items and simulation performance were also different at the trait and state levels. Workload may behave differently at the trait (between-subjects) and state (within-subjects) levels. Researchers interested in workload measurement as a state should take a within-subjects perspective in their analyses.
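
    The between-subjects versus within-subjects distinction can be made concrete with a short sketch on synthetic data (not the study data): trait-level correlation uses each subject's mean scores, while state-level correlation uses scores centered within each subject.

```python
# Hedged sketch: between-subjects vs. within-subjects correlation of two
# workload items measured repeatedly per person. Data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rows = []
for subject in range(15):
    trait = rng.normal(50, 15)              # stable individual tendency
    for session in range(5):
        state = rng.normal(0, 10)           # session-to-session fluctuation
        item_a = trait + state + rng.normal(0, 3)
        item_b = trait - state + rng.normal(0, 3)
        rows.append({"subject": subject, "item_a": item_a, "item_b": item_b})
df = pd.DataFrame(rows)

# Trait (between-subjects) level: correlate per-subject means.
means = df.groupby("subject")[["item_a", "item_b"]].mean()
r_between = means["item_a"].corr(means["item_b"])

# State (within-subjects) level: remove each subject's mean, then correlate.
centered = df[["item_a", "item_b"]] - df.groupby("subject")[["item_a", "item_b"]].transform("mean")
r_within = centered["item_a"].corr(centered["item_b"])

print(f"between-subjects r = {r_between:.2f}, within-subjects r = {r_within:.2f}")
```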

  19. AVOCLOUDY: a simulator of volunteer clouds

    Sebastio, Stefano; Amoretti, Michele; Lluch Lafuente, Alberto

    2015-01-01

    The increasing demand of computational and storage resources is shifting users toward the adoption of cloud technologies. Cloud computing is based on the vision of computing as utility, where users no more need to buy machines but simply access remote resources made available on-demand by cloud...... application, intelligent agents constitute a feasible technology to add autonomic features to cloud operations. Furthermore, the volunteer computing paradigm—one of the Information and Communications Technology (ICT) trends of the last decade—can be pulled alongside traditional cloud approaches...... management solutions before their deployment in the production environment. However, currently available simulators of cloud platforms are not suitable to model and analyze such heterogeneous, large-scale, and highly dynamic systems. We propose the AVOCLOUDY simulator to fill this gap. This paper presents...

  20. The gLite Workload Management System

    Marco, Cecchi; Fabio, Capannini; Alvise, Dorigo; Antonia, Ghiselli; Alessio, Gianelle; Francesco, Giacomini; Elisabetta, Molinari; Salvatore, Monforte; Alessandro, Maraschini; Luca, Petronzio

    2010-01-01

    The gLite Workload Management System (WMS) represents a key entry point to high-end services available on a Grid. Designed as part of the European Grid within the six-year-long EU-funded EGEE project, now in its third phase, the WMS is meant to provide reliable and efficient distribution and management of end-user requests. This service basically translates user requirements and preferences into specific operations and decisions - dictated by the general status of all other Grid services - while taking responsibility for bringing requests to successful completion. The WMS has become a reference implementation of the 'early binding' approach to meta-scheduling as a neat, Grid-aware solution, able to optimise resource access and to satisfy requests for computation together with data. Several added-value features are provided for job submission, and different job types are supported, from simple batch jobs to a variety of compound types. In this paper we outline what has been achieved to provide adequate workload management components suitable for deployment in a production-quality Grid, covering the design and development of the gLite WMS and focusing on the most recently achieved results.

  1. Physical workload and thoughts of retirement.

    Perkiö-Mäkelä, Merja; Hirvonen, Maria

    2012-01-01

    The aim of this paper is to present Finnish employees' opinions on continuing work until retirement pension and after the age of 63, and to find out whether physical workload is related to these opinions. Altogether 39% of men and 40% of women had never had thoughts of early retirement, and 59% of both men and women claimed that they would consider working beyond the age of 63. Own health (20%); financial gain such as salary and better pension (19%); meaningful, interesting and challenging work (15%); flexible working hours or part-time work (13%); lighter workload (13%); good work community (8%); and good work environment (6%) were stated as factors affecting the decision to continue working after the age of 63. Employees whose work involved low physical workload had fewer thoughts of early retirement and had considered continuing work after the age of 63 more often than those whose work involved high physical loads. Own health in particular was stated as a reason to consider continuing work by employees whose work was physically demanding.

  2. Evaluating the Influence of the Client Behavior in Cloud Computing.

    Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or group of Web services to scenarios where the workload takes the form of bursts. The client entity is included in the CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system.

  3. Patient Safety Incidents and Nursing Workload.

    Carlesi, Katya Cuadros; Padilha, Kátia Grillo; Toffoletto, Maria Cecília; Henriquez-Roldán, Carlos; Juan, Monica Andrea Canales

    2017-04-06

    to identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28); for the other services, we used the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found between the workload variables (r = 0.9611 to r = 0.9919) and the rate of falls (r = 0.8770). The medication error rates, mechanical containment incidents and self-removal of invasive devices were not correlated with the workload. The workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload.

  4. Soft Clouding

    Søndergaard, Morten; Markussen, Thomas; Wetton, Barnabas

    2012-01-01

    Soft Clouding is a blended concept, which describes the aim of a collaborative and transdisciplinary project. The concept is a metaphor implying a blend of cognitive, embodied interaction and semantic web. Furthermore, it is a metaphor describing our attempt of curating a new semantics of sound...... archiving. The Soft Clouding Project is part of LARM - a major infrastructure combining research in and access to sound and radio archives in Denmark. In 2012 the LARM infrastructure will consist of more than 1 million hours of radio, combined with metadata who describes the content. The idea is to analyse...... the concept of ‘infrastructure’ and ‘interface’ on a creative play with the fundamentals of LARM (and any sound archive situation combining many kinds and layers of data and sources). This paper will present and discuss the Soft clouding project from the perspective of the three practices and competencies...

  5. Physiological Indicators of Workload in a Remotely Piloted Aircraft Simulation

    2015-10-01

    cognitive workload. That is, both cognitive underload and overload can negatively impact performance (Young & Stanton, 2002). One solution to... Toward preventing performance decrements associated with mental overload in remotely piloted aircraft (RPA) operations, the current research investigated the feasibility of using physiological measures to assess cognitive workload. Two RPA operators were

  6. Situation awareness and workload in complex tactical environments

    Veltman, J.A.

    1999-01-01

    The paper provides an example of a method to get insight into workload changes over time, executed tasks and situation awareness (SA) in complex task environments. The method is applied to measure the workload of a helicopter crew. The method has three components: 1) task analysis, 2) video

  7. Remuneration, workload, and allocation of time in general practice.

    Berg, M.J. van den; Westert, G.P.; Groenewegen, P.P.; Bakker, D.H. de; Zee, J. van der

    2006-01-01

    Background: General Practitioners (GPs) can cope with workload by, among other things, spending more hours in patient care or spending less time per patient. The way GPs are paid might affect the way they cope with workload. From an economic point of view, capitation payment is an incentive to

  8. Quantifying the Workload of Subject Bibliographers in Collection Development.

    Metz, Paul

    1991-01-01

    Discussion of the role of subject bibliographers in collection development activities focuses on an approach developed at Virginia Polytechnic Institute and State University to provide a formula for estimating the collection development workload of subject bibliographers. Workload standards and matrix models of organizational structures are discussed, and…

  9. All Things Being Equal: Observing Australian Individual Academic Workloads

    Dobele, Angela; Rundle-Thiele, Sharyn; Kopanidis, Foula; Steel, Marion

    2010-01-01

    The achievement of greater gender equity within Australian universities is a significant issue for both the quality and the strength of Australian higher education. This paper contributes to our knowledge of academic workloads, observing individual workloads in business faculties. A multiple case study method was employed to observe individual…

  10. Workload demand in police officers during mountain bike patrols

    Takken, T.; Ribbink, A.; Heneweer, H.; Moolenaar, H.; Wittink, H.

    2009-01-01

    To the authors' knowledge this is the first paper that has used the training impulse (TRIMP) 'methodology' to calculate workload demand. It is believed that this is a promising method to calculate workload in a range of professions in order to understand the relationship between work demands and
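
    The abstract does not say which TRIMP variant was used; one widely cited formulation is Banister's TRIMP, sketched below with placeholder heart-rate values.

```python
# Hedged sketch of Banister's TRIMP (one common training-impulse formulation).
import math

def banister_trimp(duration_min, hr_ex, hr_rest, hr_max, male=True):
    """TRIMP = duration * fractional heart-rate reserve * sex-specific weighting."""
    dhr = (hr_ex - hr_rest) / (hr_max - hr_rest)   # fractional heart-rate reserve
    weight = 0.64 * math.exp(1.92 * dhr) if male else 0.86 * math.exp(1.67 * dhr)
    return duration_min * dhr * weight

# Placeholder values for one 120-minute patrol segment.
print(round(banister_trimp(120, hr_ex=135, hr_rest=60, hr_max=190), 1))
```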

  11. TASKILLAN II - Pilot strategies for workload management

    Segal, Leon D.; Wickens, Christopher D.

    1990-01-01

    This study focused on the strategies used by pilots in managing their workload level, and their subsequent task performance. Sixteen licensed pilots flew 42 missions on a helicopter simulation, and were evaluated on their performance of the overall mission as well as of individual tasks. Pilots were divided into four groups, defined by the presence or absence of scheduling control over tasks and the availability of intelligence concerning the type and stage of difficulties imposed during the flight. Results suggest that intelligence supported strategies that yielded significantly higher performance levels, while scheduling control seemed to have no impact on performance. Both difficulty type and the stage of difficulty impacted performance significantly, with the strongest effects for time stress and difficulties imposed late in the flight.

  12. Cloud Chamber

    Gfader, Verina

    Cloud Chamber takes its roots in a performance project, titled The Guests 做东, devised by Verina Gfader for the 11th Shanghai Biennale, ‘Why Not Ask Again: Arguments, Counter-arguments, and Stories’. Departing from the inclusion of the biennale audience to write a future folk tale, Cloud Chamber......: fiction and translation and translation through time; post literacy; world picturing-world typing; and cartographic entanglements and expressions of subjectivity; through the lens a social imaginary of worlding or cosmological quest. Art at its core? Contributions by Nikos Papastergiadis, Rebecca Carson...

  13. A simplified method for assessing cytotechnologist workload.

    Vaickus, Louis J; Tambouret, Rosemary

    2014-01-01

    Examining cytotechnologist workflow and how it relates to job performance and patient safety is important in determining guidelines governing allowable workloads. This report discusses the development of a software tool that significantly simplifies the process of analyzing cytotechnologist workload while simultaneously increasing the quantity and resolution of the data collected. The program runs in Microsoft Excel and minimizes manual data entry and data transcription by automating as many tasks as is feasible. Data show the cytotechnologists tested were remarkably consistent in the amount of time it took them to screen a cervical cytology (Gyn) or a nongynecologic cytology (Non-Gyn) case and that this amount of time was directly proportional to the number of slides per case. Namely, the time spent per slide did not differ significantly in Gyn versus Non-Gyn cases (216 ± 3.4 seconds and 235 ± 24.6 seconds, respectively; P=.16). There was no significant difference in the amount of time needed to complete a Gyn case between the morning and the evening (314 ± 4.7 seconds and 312 ± 7.1 seconds; P=.39), but a significantly increased time spent screening Non-Gyn cases (slide-adjusted) in the afternoon hours (323 ± 20.1 seconds and 454 ± 67.6 seconds; P=.027), which was largely the result of significantly increased time spent on prescreening activities such as checking the electronic medical record (62 ± 6.9 seconds and 145 ± 36 seconds; P=.006). This Excel-based data collection tool generates highly detailed data in an unobtrusive manner and is highly customizable to the individual working environment and clinical climate. © 2013 American Cancer Society.

  14. Heterogeneous Gossip

    Frey, Davide; Guerraoui, Rachid; Kermarrec, Anne-Marie; Koldehofe, Boris; Mogensen, Martin; Monod, Maxime; Quéma, Vivien

    Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination.

  15. Workload-Aware Indexing of Continuously Moving Objects

    Tzoumas, Kostas; Yiu, Man Lung; Jensen, Christian Søndergaard

    2009-01-01

    structures can easily become performance bottlenecks. We address the need for indexing that is adaptive to the workload characteristics, called workload-aware, in order to cover the space in between maintaining an accurate index, and having no index at all. Our proposal, QU-Trade, extends R-tree type...... indexing and achieves workload-awareness by controlling the underlying index’s filtering quality. QU-Trade safely drops index updates, increasing the overlap in the index when the workload is update-intensive, and it restores the filtering capabilities of the index when the workload becomes query......-intensive. This is done in a non-uniform way in space so that the quality of the index remains high in frequently queried regions, while it deteriorates in frequently updated regions. The adaptation occurs online, without the need for a learning phase. We apply QU-Trade to the R-tree and the TPR-tree, and we offer...
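
    As a rough illustration of the general idea of workload-aware indexing (this is not the QU-Trade algorithm itself), an index can track per-region query and update counts and inflate an object's stored bounding region in update-heavy regions, so that small movements require no index write at the cost of coarser filtering:

```python
# Hedged sketch of the general idea, not QU-Trade itself: regions where updates
# dominate queries get extra slack around stored positions, trading filtering
# quality for fewer index writes.
class AdaptiveCell:
    def __init__(self):
        self.queries = 0
        self.updates = 0

    def update_heavy(self, ratio=4.0):
        return self.updates > ratio * max(self.queries, 1)

def expansion_for(cell, slack=5.0):
    """Return how much to inflate an object's stored bounding region in this cell."""
    cell.updates += 1
    # More slack where updates dominate: small moves stay inside the stored
    # region, so no index write is needed, at the cost of coarser filtering.
    return slack if cell.update_heavy() else 0.0

cell = AdaptiveCell()
cell.queries = 2          # pretend two queries have touched this cell
pads = [expansion_for(cell) for _ in range(20)]
print("padding per update:", pads[:12])   # becomes non-zero once update-heavy
```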

  16. Using Psychophysiological Sensors to Assess Mental Workload During Web Browsing.

    Jimenez-Molina, Angel; Retamal, Cristian; Lira, Hernan

    2018-02-03

    Knowledge of the mental workload induced by a Web page is essential for improving users' browsing experience. However, continuously assessing the mental workload during a browsing task is challenging. To address this issue, this paper leverages the correlation between stimuli and physiological responses, which are measured with high-frequency, non-invasive psychophysiological sensors over very short time windows. An experiment was conducted to identify levels of mental workload through the analysis of pupil dilation measured by an eye-tracking sensor. In addition, a method was developed to classify mental workload by appropriately combining different signals (electrodermal activity (EDA), electrocardiogram, photoplethysmography (PPG), electroencephalogram (EEG), temperature and pupil dilation) obtained with non-invasive psychophysiological sensors. The results show that the Web browsing task involves four levels of mental workload. Also, by combining all the sensors, the efficiency of the classification reaches 93.7%.
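
    A hedged sketch of such multi-sensor fusion (not the authors' pipeline; the features and data are synthetic) concatenates per-window features from the different signals and trains a standard classifier:

```python
# Hedged sketch: combine per-window features from several psychophysiological
# signals and classify the mental-workload level. Data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_windows = 200

# Hypothetical per-window features: EDA level, mean heart rate (ECG/PPG),
# EEG band power, skin temperature, mean pupil diameter.
X = rng.normal(size=(n_windows, 5))
y = rng.integers(0, 4, size=n_windows)        # four workload levels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```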

  17. A computerized multidimensional measurement of mental workload via handwriting analysis.

    Luria, Gil; Rosenblum, Sara

    2012-06-01

    The goal of this study was to test the effect of mental workload on handwriting behavior and to identify characteristics of low versus high mental workload in handwriting. We hypothesized differences between handwriting under three different load conditions and tried to establish a profile that integrated these indicators. Fifty-six participants wrote three numerical progressions of varying difficulty on a digitizer attached to a computer so that we could evaluate their handwriting behavior. Differences were found in temporal, spatial, and angular velocity handwriting measures, but no significant differences were found for pressure measures. Using data reduction, we identified three clusters of handwriting, two of which differentiated well according to the three mental workload conditions. We concluded that handwriting behavior is affected by mental workload and that each measure provides distinct information, so that they present a comprehensive indicator of mental workload.

  18. Strategic workload management and decision biases in aviation

    Raby, Mireille; Wickens, Christopher D.

    1994-01-01

    Thirty pilots flew three simulated landing approaches under conditions of low, medium, and high workload. Workload conditions were created by varying time pressure and external communications requirements. Our interest was in how the pilots strategically managed or adapted to the increasing workload. We independently assessed the pilot's ranking of the priority of different discrete tasks during the approach and landing. Pilots were found to sacrifice some aspects of primary flight control as workload increased. For discrete tasks, increasing workload increased the amount of time in performing the high priority tasks, decreased the time in performing those of lowest priority, and did not affect duration of performance episodes or optimality of scheduling of tasks of any priority level. Individual differences analysis revealed that high-performing subjects scheduled discrete tasks earlier in the flight and shifted more often between different activities.

  19. Academic context and perceived mental workload of psychology students.

    Rubio-Valdehita, Susana; López-Higes, Ramón; Díaz-Ramiro, Eva

    2014-01-01

    The excessive workload of university students is an academic stressor. Consequently, it is necessary to evaluate and control the workload in education. This research applies the NASA-TLX scale as a measure of workload. The objectives of this study were: (a) to measure the workload levels of a sample of 367 psychology students, (b) to group students according to their positive or negative perception of academic context (AC), and (c) to analyze the effects of AC on workload. To assess the perceived AC, we used an ad hoc questionnaire designed according to the Demand-Control-Social Support and Effort-Reward Imbalance models. Using cluster analysis, participants were classified into two groups (positive versus negative context). The differences between groups show that a positive AC improves performance (p ...); student autonomy and result satisfaction were relevant dimensions of the AC (p < .001 in all cases).

  20. Balancing nurses' workload in hospital wards: study protocol of developing a method to manage workload.

    van den Oetelaar, W F J M; van Stel, H F; van Rhenen, W; Stellato, R K; Grolman, W

    2016-11-10

    Hospitals pursue different goals at the same time: excellent service to their patients, good quality care, operational excellence, and retaining employees. This requires a good balance between patient needs and nursing staff. One way to ensure a proper fit between patient needs and nursing staff is to work with a workload management method. In our view, a nursing workload management method needs to have the following characteristics: easy to interpret; limited additional registration; applicable to different types of hospital wards; supported by nurses; covering all activities of nurses; and suitable for prospective planning of nursing staff. At present, no such method is available. The research follows several steps to arrive at a workload management method for staff nurses. First, a list of patient characteristics relevant to care time will be composed by performing a Delphi study among staff nurses. Next, a time study of nurses' activities will be carried out. The two can be combined to estimate care time per patient group and the time nurses spend on non-patient-related activities. These two estimates can then be combined and compared with available nursing resources: this gives an estimate of nurses' workload. The research will take place in an academic hospital in the Netherlands. Six surgical wards will be included, with a capacity of 15-30 beds each. The study protocol was submitted to the Medical Ethical Review Board of the University Medical Center (UMC) Utrecht and received a positive advice, protocol number 14-165/C. This method will be developed in close cooperation with staff nurses and ward management. The strong involvement of the end users will contribute to broader support of the results. The method we will develop may also be useful for planning purposes; this is a strong advantage compared with existing methods, which tend to focus on retrospective analysis. Published by the BMJ Publishing Group Limited.
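
    The intended combination of estimates can be sketched as follows, with hypothetical patient groups, care times and roster figures: estimated direct care time plus non-patient-related time is set against rostered nursing hours.

```python
# Hedged sketch of the intended combination: estimated care time per patient
# group plus non-patient-related time, set against rostered nursing time.
def ward_workload(patients_per_group, care_min_per_group, overhead_min, rostered_min):
    """Return estimated workload as a fraction of available nursing time."""
    direct = sum(patients_per_group[g] * care_min_per_group[g]
                 for g in patients_per_group)
    return (direct + overhead_min) / rostered_min

# Hypothetical shift on a surgical ward: patient groups A-C, minutes per shift.
load = ward_workload(
    patients_per_group={"A": 6, "B": 10, "C": 4},
    care_min_per_group={"A": 90, "B": 45, "C": 20},
    overhead_min=300,                 # time on non-patient-related activities
    rostered_min=4 * 8 * 60,          # four nurses on an 8-hour shift
)
print(f"estimated workload: {load:.0%} of available nursing time")
```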

  1. Measuring the effects of heterogeneity on distributed systems

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis of such systems assumes homogeneity. This assumption of homogeneity has been driven mainly by the resulting simplicity in modeling and analysis. A simulation study is presented that investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results, which indicate that random scheduling may be as good as a more complex scheduler, the algorithm studied here is shown to be consistently better than a random scheduler. This conclusion is more pronounced at high workloads as well as at high levels of heterogeneity.

  2. School Nurse Workload: A Scoping Review of Acute Care, Community Health, and Mental Health Nursing Workload Literature

    Endsley, Patricia

    2017-01-01

    The purpose of this scoping review was to survey the most recent (5 years) acute care, community health, and mental health nursing workload literature to understand themes and research avenues that may be applicable to school nursing workload research. The search for empirical and nonempirical literature was conducted using search engines such as…

  3. Are Cloud Environments Ready for Scientific Applications?

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  4. Cloud computing.

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  5. Cloud Computing

    IAS Admin

    2014-03-01

    There are several types of services available on a cloud. We describe ... CPU speed has been doubling every 18 months at constant cost. Besides this ... Plain text (e.g., email) may be read by anyone who is able to access it.

  6. Mental workload during brain-computer interface training.

    Felton, Elizabeth A; Williams, Justin C; Vanderheiden, Gregg C; Radwin, Robert G

    2012-01-01

    It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor-disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities. Mental workload of brain-computer interfaces (BCI) can be evaluated with the NASA Task Load Index (TLX). The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities. The data can inform the design of BCIs that will have greater usability.
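
    For reference, the standard weighted NASA-TLX score combines the six subscale ratings (0-100) with weights from the 15 pairwise comparisons (each weight 0-5, weights summing to 15); the ratings and weights below are illustrative only.

```python
# Hedged sketch of the standard weighted NASA-TLX score. Ratings run 0-100;
# weights come from the 15 pairwise comparisons and must sum to 15.
def nasa_tlx(ratings, weights):
    assert set(ratings) == set(weights) and sum(weights.values()) == 15
    return sum(ratings[s] * weights[s] for s in ratings) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(f"overall weighted workload: {nasa_tlx(ratings, weights):.1f}")  # 0-100 scale
```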

  7. Front-line ordering clinicians: matching workforce to workload.

    Fieldston, Evan S; Zaoutis, Lisa B; Hicks, Patricia J; Kolb, Susan; Sladek, Erin; Geiger, Debra; Agosto, Paula M; Boswinkel, Jan P; Bell, Louis M

    2014-07-01

    Matching workforce to workload is particularly important in healthcare delivery, where an excess of workload for the available workforce may negatively impact processes and outcomes of patient care and resident learning. Hospitals currently lack a means to measure and match dynamic workload and workforce factors. This article describes our work to develop and obtain consensus for use of an objective tool to dynamically match the front-line ordering clinician (FLOC) workforce to clinical workload in a variety of inpatient settings. We undertook development of a tool to represent hospital workload and workforce based on literature reviews, discussions with clinical leadership, and repeated validation sessions. We met with physicians and nurses from every clinical care area of our large, urban children's hospital at least twice. We successfully created a tool in a matrix format that is objective and flexible and can be applied to a variety of settings. We presented the tool in 14 hospital divisions and received widespread acceptance among physician, nursing, and administrative leadership. The hospital uses the tool to identify gaps in FLOC coverage and guide staffing decisions. Hospitals can better match workload to workforce if they can define and measure these elements. The Care Model Matrix is a flexible, objective tool that quantifies the multidimensional aspects of workload and workforce. The tool, which uses multiple variables that are easily modifiable, can be adapted to a variety of settings. © 2014 Society of Hospital Medicine.

  8. Activity-based differentiation of pathologists' workload in surgical pathology.

    Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M

    2009-06-01

    Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in types and numbers of specimens handled or protocols used will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring the pathologists' workload that can take into account the changes mentioned above. The diagnostic process was analyzed and broken up into separate activities. The time needed to perform these activities was measured. Based on linear regression analysis, for each activity, the time needed was calculated as a function of the number of slides or blocks involved. The total pathologists' time required for a range of specimens was calculated based on standard protocols and validated by comparing to actually measured workload. Cutting up, microscopic procedures and dictating turned out to be highly correlated to number of blocks and/or slides per specimen. Calculated workload per type of specimen was significantly correlated to the actually measured workload. Modeling pathologists' workload based on formulas that calculate workload per type of specimen as a function of the number of blocks and slides provides a basis for a comprehensive, yet flexible, activity-based costing system for pathology.
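
    The abstract's activity-based model can be illustrated with a short sketch: the time for each activity is a linear function of the number of blocks or slides, and the workload of a specimen is the sum over the activities of its standard protocol. All coefficients and the example specimen below are hypothetical placeholders, not values from the study.

    ```python
    # Hypothetical per-activity time models (minutes) of the form
    # time = intercept + slope * units, where units is blocks or slides.
    ACTIVITY_MODELS = {
        "cut_up":     {"intercept": 2.0, "slope": 1.5, "unit": "blocks"},
        "microscopy": {"intercept": 3.0, "slope": 2.0, "unit": "slides"},
        "dictation":  {"intercept": 1.0, "slope": 0.5, "unit": "slides"},
    }

    def specimen_workload(n_blocks, n_slides):
        """Total pathologist time (minutes) for one specimen."""
        units = {"blocks": n_blocks, "slides": n_slides}
        return sum(m["intercept"] + m["slope"] * units[m["unit"]]
                   for m in ACTIVITY_MODELS.values())

    # e.g. a specimen yielding 4 blocks and 6 slides
    print(specimen_workload(n_blocks=4, n_slides=6))  # 27.0 minutes
    ```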

  9. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient for almost all cloud computing systems. Through virtual environments, a cloud provider is able to run the variety of operating systems needed by each cloud user. Virtualization can improve reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given, and then the service and deployment models are introduced. Security issues and challenges in the implementation of cloud computing are identified and analyzed. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  10. Psychophysical workload in the operating room: primary surgeon versus assistant.

    Rieger, Annika; Fenger, Sebastian; Neubert, Sebastian; Weippert, Matthias; Kreuzfeld, Steffi; Stoll, Regina

    2015-07-01

    Working in the operating room is characterized by high demands and overall workload of the surgical team. Surgeons often report that they feel more stressed when operating as a primary surgeon than when assisting, which has been confirmed in recent studies. In this study, intra-individual workload was assessed in both intraoperative functions using a multidimensional approach that combined objective and subjective measures in a realistic work setting. Surgeons' intraoperative psychophysiologic workload was assessed through a mobile health system. 25 surgeons agreed to take part in the 24-hour monitoring by giving their written informed consent. The mobile health system contained a sensor electronic module integrated in a chest belt, measuring physiological parameters such as heart rate (HR), breathing rate (BR), and skin temperature. Subjective workload was assessed pre- and postoperatively using an electronic version of the NASA-TLX on a smartphone. The smartphone served as a communication unit and transferred objective and subjective measures to a communication server where data were stored and analyzed. Working as a primary surgeon did not result in higher workload. Neither NASA-TLX ratings nor physiological workload indicators were related to intraoperative function. In contrast, length of surgeries had a significant impact on intraoperative physical demands and on the NASA-TLX sum score (p < 0.01; η² = 0.287). Intra-individual workload differences do not relate to the intraoperative role of surgeons when length of surgery is considered as a covariate. An intelligent operating management that considers the length of surgeries by implementing short breaks could contribute to the optimization of intraoperative workload and the preservation of surgeons' health. The value of mobile health systems for continuous psychophysiologic workload assessment was shown.

  11. Cloud management and security

    Abbadi, Imad M

    2014-01-01

    Written by an expert with over 15 years' experience in the field, this book establishes the foundations of Cloud computing, building an in-depth and diverse understanding of the technologies behind Cloud computing. In this book, the author begins with an introduction to Cloud computing, presenting fundamental concepts such as analyzing Cloud definitions, Cloud evolution, Cloud services, Cloud deployment types and highlighting the main challenges. Following on from the introduction, the book is divided into three parts: Cloud management, Cloud security, and practical examples. Part one presents the main components constituting the Cloud and federated Cloud infrastructure(e.g., interactions and deployment), discusses management platforms (resources and services), identifies and analyzes the main properties of the Cloud infrastructure, and presents Cloud automated management services: virtual and application resource management services. Part two analyzes the problem of establishing trustworthy Cloud, discuss...

  12. Subjective workload and individual differences in information processing abilities

    Damos, D. L.

    1984-01-01

    This paper describes several experiments examining the source of individual differences in the experience of mental workload. Three sources of such differences were examined: information processing abilities, timesharing abilities, and personality traits/behavior patterns. On the whole, there was little evidence that individual differences in information processing abilities or timesharing abilities are related to perceived differences in mental workload. However, individuals with strong Type A coronary prone behavior patterns differed in both single- and multiple-task performance from individuals who showed little evidence of such a pattern. Additionally, individuals with a strong Type A pattern showed some dissociation between objective performance and the experience of mental workload.

  13. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among increasing users of HPC systems creates bursty behavior in underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining the user bursty behavior is important for HPC workloads prediction and scheduling, which has direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in real-world system. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission time, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally-efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied on several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of identified user groups, we show the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.
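
    The core idea of grouping users by when they submit jobs, rather than by the resource demands of those jobs, can be illustrated with a toy co-submission count: users whose jobs repeatedly fall into the same time window are linked. This is only a sketch of the intuition; the window size and data are assumed, and the offline and online algorithms from the thesis are not reproduced here.

    ```python
    from collections import defaultdict
    from itertools import combinations

    # Toy job log: (user, submission time in seconds).
    jobs = [("u1", 10), ("u2", 12), ("u3", 300), ("u1", 305), ("u3", 9000), ("u2", 8990)]

    WINDOW = 60  # seconds; users submitting within the same window are linked (assumed)

    def co_submission_counts(jobs, window=WINDOW):
        buckets = defaultdict(set)
        for user, t in jobs:
            buckets[int(t // window)].add(user)
        counts = defaultdict(int)
        for users in buckets.values():
            for a, b in combinations(sorted(users), 2):
                counts[(a, b)] += 1
        return dict(counts)

    print(co_submission_counts(jobs))  # {('u1', 'u2'): 1, ('u1', 'u3'): 1}
    ```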

  14. Cloud time

    Lockwood, Dean

    2012-01-01

    The ‘Cloud’, hailed as a new digital commons, a utopia of collaborative expression and constant connection, actually constitutes a strategy of vitalist post-hegemonic power, which moves to dominate immanently and intensively, organizing our affective political involvements, instituting new modes of enclosure, and, crucially, colonizing the future through a new temporality of control. The virtual is often claimed as a realm of invention through which capitalism might be cracked, but it is precisely here that power now thrives. Cloud time, in service of security and profit, assumes all is knowable. We bear witness to the collapse of both past and future virtuals into a present dedicated to the exploitation of the spectres of both.

  15. Workload composition of the organic horticulture.

    Abrahão, R F; Ribeiro, I A V; Tereso, M J A

    2012-01-01

    This project aimed to characterize the physical workload of organic horticulture by determining the frequency of exposure of operators to several activity categories. To do this, the PATH method (Posture, Activities, Tools and Handling) was adapted for use in the context of agricultural work. The approach included an evaluation of the physical effort demanded to perform the tasks in the work systems, based on a systematic sampling of work situations with synchronized monitoring of the heart rate; a characterization of the posture repertoire adopted by workers by adapting the OWAS method; an identification of painful body areas using the Corlett diagram; and a subjective evaluation of perceived effort using the RPE Borg scale. The results of the individual assessments were cross-correlated and explained through observation of the work activity. Postural demands were more significant than cardiovascular demands for the studied tasks, and correlated positively with the expressions of bodily discomfort. It is expected that, besides the knowledge obtained of the physical effort demanded by organic horticulture, this project will be useful for the development of new technologies directed at minimizing the difficulties of human work and raising work productivity.

  16. The gLite workload management system

    Andreetto, P; Andreozzi, S; Cecchi, M; Ciaschini, V; Dorise, A; Giacomini, F; Gianelle, A; Guarise, A; Lops, R; Martelli, V; Marzolla, M; Mezzadri, M; Molinari, E; Monforte, S; Avellino, G; Beco, S; Cavallini, A; Grandinetti, U; Krop, A; Maraschini, A

    2008-01-01

    The gLite Workload Management System (WMS) is a collection of components that provide the service responsible for distributing and managing tasks across computing and storage resources available on a Grid. The WMS basically receives requests of job execution from a client, finds the required appropriate resources, then dispatches and follows the jobs until completion, handling failure whenever possible. Other than single batch-like jobs, compound job types handled by the WMS are Directed Acyclic Graphs (a set of jobs where the input/output/execution of one or more jobs may depend on one or more other jobs), Parametric Jobs (multiple jobs with one parametrized description), and Collections (multiple jobs with a common description). Jobs are described via a flexible, high-level Job Definition Language (JDL). New functionality was recently added to the system (use of Service Discovery for obtaining new service endpoints to be contacted, automatic sandbox files archival/compression and sharing, support for bulk-submission and bulk-matchmaking). Intensive testing and troubleshooting made it possible to dramatically increase both the job submission rate and service stability. Future developments of the gLite WMS will be focused on reducing external software dependency and improving portability, robustness and usability

  17. CERN Computing Colloquium | Hidden in the Clouds: New Ideas in Cloud Computing | 30 May

    2013-01-01

    by Dr. Shevek (NEBULA) Thursday 30 May 2013 from 2 p.m. to 4 p.m. at CERN ( 40-S2-D01 - Salle Dirac ) Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the ful...

  18. Defining inter-cloud architecture for interoperability and integration

    Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.; de Laat, C.; Zimmermann, W.; Lee, Y.W.; Demchenko, Y.

    2012-01-01

    This paper presents an on-going research to develop the Inter-Cloud Architecture, which addresses the architectural problems in multi-provider multi-domain heterogeneous cloud based applications integration and interoperability, including integration and interoperability with legacy infrastructure

  19. Lxcloud: a prototype for an internal cloud in HEP. Experiences and lessons learned

    Goasguen, Sebastien; Moreira, Belmiro; Roche, Ewan; Schwickerath, Ulrich

    2012-01-01

    Born out of the desire to virtualize our batch compute farm CERN has developed an internal cloud known as lxcloud. Since December 2010 it has been used to run a small but sufficient part of our batch workload thus allowing operational and development experience to be gained. Recently, this service has evolved to a public cloud allowing selected physics users an alternate way of accessing resources.

  20. The hipster approach for improving cloud system efficiency

    Nishtala, Rajiv; Carpenter, Paul Matthew; Petrucci, Vinicius; Martorell Bofill, Xavier

    2017-01-01

    In 2013, U.S. data centers accounted for 2.2% of the country’s total electricity consumption, a figure that is projected to increase rapidly over the next decade. Many important data center workloads in cloud computing are interactive, and they demand strict levels of quality-of-service (QoS) to meet user expectations, making it challenging to optimize power consumption along with increasing performance demands. This article introduces Hipster, a technique that combines heuristics and rein...

  1. Experimental Analysis on Autonomic Strategies for Cloud Elasticity

    Dupont , Simon; Lejeune , Jonathan; Alvares , Frederico; Ledoux , Thomas

    2015-01-01

    In spite of the indubitable advantages of elasticity in Cloud infrastructures, some technical and conceptual limitations are still to be considered. For instance, resource start-up time is generally too long to react to unexpected workload spikes. Also, the billing cycles' granularity of existing pricing models may cause consumers to suffer from partial usage waste. We advocate that the software layer can take part in the elasticity process as the overhead of software...

  2. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  3. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  4. Early experience on using glideinWMS in the cloud

    Andrews, W; Dost, J; Martin, T; McCrea, A; Pi, H; Sfiligoi, I; Würthwein, F; Bockelman, B; Weitzel, D; Bradley, D; Frey, J; Livny, M; Tannenbaum, T; Evans, D; Fisk, I; Holzman, B; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Cloud computing is steadily gaining traction both in commercial and research worlds, and there seems to be significant potential to the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both what code changes were needed to make it work in the cloud world, as well as what architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  5. Early experience on using glidein WMS in the cloud

    Andrews, W. [UC, San Diego; Bockelman, B. [Nebraska U.; Bradley, D. [Wisconsin U., Madison; Dost, J. [UC, San Diego; Evans, D. [Fermilab; Fisk, I. [Fermilab; Frey, J. [Wisconsin U., Madison; Holzman, B. [Fermilab; Livny, M. [Wisconsin U., Madison; Martin, T. [UC, San Diego; McCrea, A. [UC, San Diego; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Pi, H. [UC, San Diego; Sfiligoi, I. [UC, San Diego; Sheldon, P. [Vanderbilt U.; Tannenbaum, T. [Wisconsin U., Madison; Tiradani, A. [Fermilab; Wurthwein, F. [UC, San Diego; Weitzel, D. [Nebraska U.

    2011-01-01

    Cloud computing is steadily gaining traction both in commercial and research worlds, and there seems to be significant potential to the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both what code changes were needed to make it work in the cloud world, as well as what architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  6. Evaluation of Mental Workload among ICU Ward's Nurses

    Mohsen Mohammadi

    2015-12-01

    Conclusion: Various performance obstacles are correlated with nurses' workload, affirming the significance of nursing work system characteristics. Interventions are recommended based on the results of this study in the work settings of nurses in ICUs.

  7. Using Statistical Process Control Methods to Classify Pilot Mental Workloads

    Kudo, Terence

    2001-01-01

    .... These include cardiac, ocular, respiratory, and brain activity measures. The focus of this effort is to apply statistical process control methodology on different psychophysiological features in an attempt to classify pilot mental workload...

  8. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: Pilots in the fully automated condition were observed to have on average, larger saccadic movements in contrast to the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior and so ostensibly visual attention distribution in the cockpit for approach to land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.
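
    One simple (and here assumed) way to turn raw gaze samples into the saccade-length index described above is to segment the trace with a velocity threshold and average the amplitude of the detected saccades. The sampling rate, threshold and example trace are placeholders, not the study's processing pipeline.

    ```python
    import math

    def mean_saccade_length(gaze, fs=60.0, velocity_threshold=100.0):
        """Mean saccade amplitude from a gaze trace.

        gaze: list of (x, y) positions in degrees of visual angle.
        fs: sampling rate in Hz (assumed).
        velocity_threshold: deg/s above which a sample belongs to a saccade (assumed).
        """
        saccades, current, in_saccade = [], 0.0, False
        for (x0, y0), (x1, y1) in zip(gaze, gaze[1:]):
            step = math.hypot(x1 - x0, y1 - y0)
            if step * fs > velocity_threshold:        # sample-to-sample velocity
                current += step
                in_saccade = True
            elif in_saccade:
                saccades.append(current)
                current, in_saccade = 0.0, False
        if in_saccade:
            saccades.append(current)
        return sum(saccades) / len(saccades) if saccades else 0.0

    # Hypothetical trace: fixation, one large saccade, fixation.
    trace = [(0, 0), (0.05, 0), (5.0, 0), (10.0, 0), (10.02, 0)]
    print(round(mean_saccade_length(trace), 2))  # 9.95
    ```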

  9. Simple grain mill boosts production and eases women's workload ...

    ... grain mill boosts production and eases women's workload. 11 January 2013. ... It aims to increase the production, improve the processing, develop new ... farmer societies, women's self-help groups, and the food-processing industry.

  10. Empirical investigation of workloads of operators in advanced control rooms

    Kim, Yochan; Jung, Wondea; Kim, Seunghwan

    2014-01-01

    This paper compares the workloads of operators in a computer-based control room of an advanced power reactor (APR 1400) nuclear power plant to investigate the effects from the changes in the interfaces in the control room. The cognitive-communicative-operative activity framework was employed to evaluate the workloads of the operator's roles during emergency operations. The related data were obtained by analyzing the tasks written in the procedures and observing the speech and behaviors of the reserved operators in a full-scope dynamic simulator for an APR 1400. The data were analyzed using an F-test and a Duncan test. It was found that the workloads of the shift supervisors (SSs) were larger than other operators and the operative activities of the SSs increased owing to the computer-based procedure. From these findings, methods to reduce the workloads of the SSs that arise from the computer-based procedure are discussed. (author)
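
    For readers unfamiliar with the statistics mentioned above, a minimal sketch of the F-test step using SciPy is shown below; the per-role workload scores are invented, and Duncan's post-hoc test (not available in SciPy) is omitted.

    ```python
    from scipy.stats import f_oneway

    # Hypothetical per-scenario workload scores for three operator roles.
    shift_supervisor = [72, 80, 77, 85, 79]
    reactor_operator = [55, 60, 58, 62, 57]
    turbine_operator = [50, 54, 49, 56, 52]

    # One-way ANOVA (F-test) across roles; a post-hoc comparison such as the
    # Duncan test would normally follow to locate the differing groups.
    f_stat, p_value = f_oneway(shift_supervisor, reactor_operator, turbine_operator)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    ```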

  11. Essentials of cloud computing

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms; Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading; Cloud Computing Fundamentals; Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  12. Hybrid resource provisioning for clouds

    Rahman, Mahfuzur; Graham, Peter

    2012-01-01

    Flexible resource provisioning, the assignment of virtual machines (VMs) to physical machines, is a key requirement for cloud computing. To achieve 'provisioning elasticity', the cloud needs to manage its available resources on demand. A priori, static VM provisioning introduces no runtime overhead but fails to deal with unanticipated changes in resource demands. Dynamic provisioning addresses this problem but introduces runtime overhead. To reduce VM management overhead so more useful work can be done, and to also avoid sub-optimal provisioning, we propose a hybrid approach that combines static and dynamic provisioning. The idea is to adapt a good initial static placement of VMs in response to evolving load characteristics, using live migration, as long as the overhead of doing so is low and the effectiveness is high. When this is no longer so, we trigger a revised static placement. (Thus, we are essentially applying local multi-objective optimization to tune a global optimization with reduced overhead.) This approach requires a complicated migration decision algorithm based on current and predicted future workloads, power consumption and memory usage in the host machines, as well as network burst characteristics for the various possible VM multiplexings (combinations of VMs on a host). A further challenge is to identify those characteristics of the dynamic provisioning that should trigger static re-provisioning.
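
    A highly simplified sketch of the decision logic outlined above: keep adapting the current placement through live migration while its expected overhead is low and its expected benefit is high, and otherwise trigger a revised static placement. The thresholds and the cost/benefit estimators are placeholders, not the paper's migration decision algorithm.

    ```python
    # Placeholder thresholds (assumed, not from the paper).
    MAX_MIGRATION_COST = 0.2   # tolerable overhead of a migration
    MIN_EXPECTED_GAIN = 0.1    # minimum load-imbalance reduction worth migrating for

    def provisioning_step(estimate_cost, estimate_gain, live_migrate, recompute_static):
        """One control-loop step of a hybrid static/dynamic provisioner."""
        if estimate_cost() <= MAX_MIGRATION_COST and estimate_gain() >= MIN_EXPECTED_GAIN:
            live_migrate()        # cheap, effective local adaptation
        else:
            recompute_static()    # overhead too high or benefit too low: re-place globally

    # Toy usage with stubbed estimators and actions.
    provisioning_step(lambda: 0.05, lambda: 0.3,
                      lambda: print("migrating one VM"),
                      lambda: print("computing a revised static placement"))
    ```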

  13. GPs' perceptions of workload in England: a qualitative interview study.

    Croxson, Caroline Hd; Ashdown, Helen F; Hobbs, Fd Richard

    2017-02-01

    GPs report the lowest levels of morale among doctors, job satisfaction is low, and the GP workforce is diminishing. Workload is frequently cited as negatively impacting on commitment to a career in general practice, and many GPs report that their workload is unmanageable. To gather an in-depth understanding of GPs' perceptions and attitudes towards workload. All GPs working within NHS England were eligible. Advertisements were circulated via regional GP e-mail lists and national social media networks in June 2015. Of those GPs who responded, a maximum-variation sample was selected until data saturation was reached. Semi-structured, qualitative interviews were conducted. Data were analysed thematically. In total, 171 GPs responded, and 34 were included in this study. GPs described an increase in workload over recent years, with current working days being long and intense, raising concerns over the wellbeing of GPs and patients. Full-time partnership was generally not considered to be possible, and many participants felt workload was unsustainable, particularly given the diminishing workforce. Four major themes emerged to explain increased workload: increased patient needs and expectations; a changing relationship between primary and secondary care; bureaucracy and resources; and the balance of workload within a practice. Continuity of care was perceived as being eroded by changes in contracts and working patterns to deal with workload. This study highlights the urgent need to address perceived lack of investment and clinical capacity in general practice, and suggests that managing patient expectations around what primary care can deliver, and reducing bureaucracy, have become key issues, at least until capacity issues are resolved. © British Journal of General Practice 2017.

  14. Patient Workload Profile: National Naval Medical Center (NNMC), Bethesda, MD.

    1980-06-01

    WESTEC Services Inc., San Diego, CA. W. T. Rasmussen, H. W. ... Provides site workload data for the National Naval Medical Center (NNMC) within the following functional support areas: Patient Appointment ... on managing medical and patient data, thereby offering the health care provider and administrator more powerful capabilities in dealing with and

  15. The Effects of Workload Transitions in a Multitasking Environment

    2016-09-13

    The Effects of Workload Transitions in a Multitasking Environment. Margaret A. Bowers, James C. ... Approved for public release. ... as well as performance in a complex multitasking environment. The results of the NASA TLX and shortened DSSQ did not provide support for the position

  16. Nursing workload for cancer patients under palliative care

    Fuly, Patrícia dos Santos Claro; Pires, Livia Márcia Vidal; Souza, Claudia Quinto Santos de; Oliveira, Beatriz Guitton Renaud Baptista de; Padilha, Katia Grillo

    2016-01-01

    Abstract OBJECTIVE To verify the nursing workload required by cancer patients undergoing palliative care and possible associations between the demographic and clinical characteristics of the patients and the nursing workload. METHOD This is a quantitative, cross-sectional, prospective study developed in the Connective Bone Tissue (TOC) clinics of Unit II of the Brazilian National Cancer Institute José Alencar Gomes da Silva with patients undergoing palliative care. RESULTS Analysis of 197 ...

  17. Evaluation of Mental Workload among ICU Ward's Nurses.

    Mohammadi, Mohsen; Mazloumi, Adel; Kazemi, Zeinab; Zeraati, Hojat

    2015-01-01

    High level of workload has been identified among stressors of nurses in intensive care units (ICUs). The present study investigated nursing workload and identified the performance obstacles influencing it in ICUs. This cross-sectional study was conducted, in 2013, on 81 nurses working in ICUs in Imam Khomeini Hospital in Tehran, Iran. NASA-TLX was applied for assessment of workload. Moreover, the ICUs Performance Obstacles Questionnaire was used to identify performance obstacles associated with ICU nursing. Physical demand (mean=84.17) was perceived as the most important dimension of workload by nurses. The most critical performance obstacles affecting workload included: difficulty in finding a place to sit down, hectic workplace, disorganized workplace, poor-conditioned equipment, waiting for using a piece of equipment, spending much time seeking for supplies in the central stock, poor quality of medical materials, delay in getting medications, unpredicted problems, disorganized central stock, outpatient surgery, spending much time dealing with family needs, late, inadequate, and useless help from nurse assistants, and ineffective morning rounds (P-value<0.05). Various performance obstacles are correlated with nurses' workload, affirming the significance of nursing work system characteristics. Interventions are recommended based on the results of this study in the work settings of nurses in ICUs.

  18. EFFECTIVE INDICES FOR MONITORING MENTAL WORKLOAD WHILE PERFORMING MULTIPLE TASKS.

    Hsu, Bin-Wei; Wang, Mao-Jiun J; Chen, Chi-Yuan; Chen, Fang

    2015-08-01

    This study identified several physiological indices that can accurately monitor mental workload while participants performed multiple tasks with the strategy of maintaining stable performance and maximizing accuracy. Thirty male participants completed three 10-min. simulated multitasks: MATB (Multi-Attribute Task Battery) with three workload levels. Twenty-five commonly used mental workload measures were collected, including heart rate, 12 HRV (heart rate variability), 10 EEG (electroencephalography) indices (α, β, θ, α/θ, θ/β from O1-O2 and F4-C4), and two subjective measures. Analyses of index sensitivity showed that two EEG indices, θ and α/θ (F4-C4), one time-domain HRV-SDNN (standard deviation of inter-beat intervals), and four frequency-domain HRV: VLF (very low frequency), LF (low frequency), %HF (percentage of high frequency), and LF/HF were sensitive to differentiate high workload. EEG α/θ (F4-C4) and LF/HF were most effective for monitoring high mental workload. LF/HF showed the highest correlations with other physiological indices. EEG α/θ (F4-C4) showed strong correlations with subjective measures across different mental workload levels. Operation strategy would affect the sensitivity of EEG α (F4-C4) and HF.
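
    The two indices singled out above, EEG alpha/theta and HRV LF/HF, are both band-power ratios, so a small sketch of how such ratios are commonly computed is given below (Welch power spectra integrated over conventional band limits). The band limits, sampling rates and channel choice are assumptions, not the study's exact processing.

    ```python
    from scipy.signal import welch
    from scipy.integrate import trapezoid

    def band_power(signal, fs, low, high):
        """Power of `signal` in the [low, high] Hz band via Welch's method."""
        freqs, psd = welch(signal, fs=fs, nperseg=min(256, len(signal)))
        mask = (freqs >= low) & (freqs <= high)
        return trapezoid(psd[mask], freqs[mask])

    def workload_ratios(rr, fs_rr, eeg, fs_eeg):
        """LF/HF from an evenly resampled RR-interval series and alpha/theta
        from one EEG channel (e.g. an F4-C4 bipolar derivation)."""
        lf = band_power(rr, fs_rr, 0.04, 0.15)      # conventional LF band
        hf = band_power(rr, fs_rr, 0.15, 0.40)      # conventional HF band
        alpha = band_power(eeg, fs_eeg, 8.0, 13.0)
        theta = band_power(eeg, fs_eeg, 4.0, 8.0)
        return {"LF/HF": lf / hf, "alpha/theta": alpha / theta}
    ```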

  19. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also drives up cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically.
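
    The flavor of a remaining-utilization-aware placement can be conveyed with a tiny best-fit sketch: pick the host whose remaining capacity after placement is smallest while still leaving a safety margin, so hosts are packed tightly without overloading. This is an illustration under assumed data structures and a 10% headroom, not the published RUA algorithm.

    ```python
    def place_vm(hosts, vm_demand, headroom=0.1):
        """Place a VM on the host with the least remaining utilization after
        placement, provided that remainder stays above `headroom` (assumed)."""
        best = None
        for host in hosts:
            remaining = host["capacity"] - host["used"] - vm_demand
            if remaining >= headroom * host["capacity"]:
                if best is None or remaining < best[1]:
                    best = (host, remaining)
        if best is None:
            raise RuntimeError("no host can accept the VM without risking overload")
        best[0]["used"] += vm_demand
        return best[0]["name"]

    hosts = [{"name": "h1", "capacity": 1.0, "used": 0.6},
             {"name": "h2", "capacity": 1.0, "used": 0.2}]
    print(place_vm(hosts, vm_demand=0.25))  # h1 (tightest fit with headroom left)
    ```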

  20. Cloud Computing, Tieto Cloud Server Model

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of. Which model suits the company best, what should be taken into account before moving to the cloud, what is the cloud broker's role, and what does a SWOT analysis of the cloud look like? To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  1. Efficient workload management in geographically distributed data centers leveraging autoregressive models

    Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo

    2016-10-01

    The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high bandwidth connections are speeding up the success and popularity of Cloud systems, which is making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of their computational infrastructure, and/or the desire to provide uniform access times to the infrastructure from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise of electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and with time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve as diverse business goals as: the reduction of costs, energy consumption and carbon emissions, the satisfaction of performance constraints, the adherence to Service Level Agreement established with users, etc. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs some key parameters related to business objectives, among which, the price of electricity, the carbon emission rate, the balance of load among the data centers etc. For example, the energy costs can be reduced by using a "follow the moon" approach, e.g. by migrating the workload to data centers where the price of electricity is lower at that time. Our approach uses data about historical usage of the data centers and data about environmental conditions to predict, with the help of regressive models, the values of the
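
    A toy version of the kind of per-data-center fitness function described above, weighing a few of the listed parameters and routing the workload to the highest-scoring site. The weights, parameters and values are invented for illustration; the paper's actual fitness function is not reproduced here.

    ```python
    # Assumed weights: lower price, lower carbon rate and lower load are all better.
    WEIGHTS = {"electricity_price": -0.5, "carbon_rate": -0.3, "load": -0.2}

    def fitness(dc):
        """Higher is better: cheap, clean, lightly loaded data centers win."""
        return sum(WEIGHTS[k] * dc[k] for k in WEIGHTS)

    data_centers = [
        {"name": "dc-eu", "electricity_price": 0.30, "carbon_rate": 0.2, "load": 0.7},
        {"name": "dc-us", "electricity_price": 0.12, "carbon_rate": 0.5, "load": 0.4},
    ]
    print(max(data_centers, key=fitness)["name"])  # dc-us
    ```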

  2. Blue skies for CLOUD

    2006-01-01

    Through the recently approved CLOUD experiment, CERN will soon be contributing to climate research. Tests are being performed on the first prototype of CLOUD, an experiment designed to assess cosmic radiation influence on cloud formation.

  3. A comparison of policies on nurse faculty workload in the United States.

    Ellis, Peggy A

    2013-01-01

    This article describes nurse faculty workload policies from across the nation in order to assess current practice. There is a well-documented shortage of nursing faculty leading to an increase in workload demands. Increases in faculty workload result in difficulties with work-life balance and dissatisfaction, threatening to make nursing education less attractive to young faculty. In order to begin an examination of faculty workload in nursing, existing workloads must be known. Faculty workload data were solicited from nursing programs nationwide and analyzed to determine the current workloads. The most common faculty teaching workload reported overall for nursing is 12 credit hours per semester; however, some variations exist. Consideration should be given to the multiple components of the faculty workload. Research is needed to address the most effective and efficient workload allocation for nursing faculty.

  4. Quantitative assessment of workload and stressors in clinical radiation oncology.

    Mazur, Lukasz M; Mosaly, Prithima R; Jackson, Marianne; Chang, Sha X; Burkhardt, Katharin Deschesne; Adams, Robert D; Jones, Ellen L; Hoyle, Lesley; Xu, Jing; Rockwell, John; Marks, Lawrence B

    2012-08-01

    Workload level and sources of stressors have been implicated as sources of error in multiple settings. We assessed workload levels and sources of stressors among radiation oncology professionals. Furthermore, we explored the potential association between workload and the frequency of reported radiotherapy incidents by the World Health Organization (WHO). Data collection was aimed at various tasks performed by 21 study participants from different radiation oncology professional subgroups (simulation therapists, radiation therapists, physicists, dosimetrists, and physicians). Workload was assessed using National Aeronautics and Space Administration Task-Load Index (NASA TLX). Sources of stressors were quantified using observational methods and segregated using a standard taxonomy. Comparisons between professional subgroups and tasks were made using analysis of variance ANOVA, multivariate ANOVA, and Duncan test. An association between workload levels (NASA TLX) and the frequency of radiotherapy incidents (WHO incidents) was explored (Pearson correlation test). A total of 173 workload assessments were obtained. Overall, simulation therapists had relatively low workloads (NASA TLX range, 30-36), and physicists had relatively high workloads (NASA TLX range, 51-63). NASA TLX scores for physicians, radiation therapists, and dosimetrists ranged from 40-52. There was marked intertask/professional subgroup variation (P<.0001). Mental demand (P<.001), physical demand (P=.001), and effort (P=.006) significantly differed among professional subgroups. Typically, there were 3-5 stressors per cycle of analyzed tasks with the following distribution: interruptions (41.4%), time factors (17%), technical factors (13.6%), teamwork issues (11.6%), patient factors (9.0%), and environmental factors (7.4%). A positive association between workload and frequency of reported radiotherapy incidents by the WHO was found (r = 0.87, P value=.045). Workload level and sources of stressors vary

  5. Quantitative Assessment of Workload and Stressors in Clinical Radiation Oncology

    Mazur, Lukasz M.; Mosaly, Prithima R.; Jackson, Marianne; Chang, Sha X.; Burkhardt, Katharin Deschesne; Adams, Robert D.; Jones, Ellen L.; Hoyle, Lesley; Xu, Jing; Rockwell, John; Marks, Lawrence B.

    2012-01-01

    Purpose: Workload level and sources of stressors have been implicated as sources of error in multiple settings. We assessed workload levels and sources of stressors among radiation oncology professionals. Furthermore, we explored the potential association between workload and the frequency of reported radiotherapy incidents by the World Health Organization (WHO). Methods and Materials: Data collection was aimed at various tasks performed by 21 study participants from different radiation oncology professional subgroups (simulation therapists, radiation therapists, physicists, dosimetrists, and physicians). Workload was assessed using National Aeronautics and Space Administration Task-Load Index (NASA TLX). Sources of stressors were quantified using observational methods and segregated using a standard taxonomy. Comparisons between professional subgroups and tasks were made using analysis of variance ANOVA, multivariate ANOVA, and Duncan test. An association between workload levels (NASA TLX) and the frequency of radiotherapy incidents (WHO incidents) was explored (Pearson correlation test). Results: A total of 173 workload assessments were obtained. Overall, simulation therapists had relatively low workloads (NASA TLX range, 30-36), and physicists had relatively high workloads (NASA TLX range, 51-63). NASA TLX scores for physicians, radiation therapists, and dosimetrists ranged from 40-52. There was marked intertask/professional subgroup variation (P<.0001). Mental demand (P<.001), physical demand (P=.001), and effort (P=.006) significantly differed among professional subgroups. Typically, there were 3-5 stressors per cycle of analyzed tasks with the following distribution: interruptions (41.4%), time factors (17%), technical factors (13.6%), teamwork issues (11.6%), patient factors (9.0%), and environmental factors (7.4%). A positive association between workload and frequency of reported radiotherapy incidents by the WHO was found (r = 0.87, P value=.045

  6. Quantitative Assessment of Workload and Stressors in Clinical Radiation Oncology

    Mazur, Lukasz M., E-mail: lukasz_mazur@ncsu.edu [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina (United States); Industrial Extension Service, North Carolina State University, Raleigh, North Carolina (United States); Biomedical Engineering, North Carolina State University, Raleigh, North Carolina (United States); Mosaly, Prithima R. [Industrial Extension Service, North Carolina State University, Raleigh, North Carolina (United States); Jackson, Marianne; Chang, Sha X.; Burkhardt, Katharin Deschesne; Adams, Robert D.; Jones, Ellen L.; Hoyle, Lesley [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina (United States); Xu, Jing [Industrial Extension Service, North Carolina State University, Raleigh, North Carolina (United States); Rockwell, John; Marks, Lawrence B. [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina (United States)

    2012-08-01

    Purpose: Workload level and sources of stressors have been implicated as sources of error in multiple settings. We assessed workload levels and sources of stressors among radiation oncology professionals. Furthermore, we explored the potential association between workload and the frequency of reported radiotherapy incidents by the World Health Organization (WHO). Methods and Materials: Data collection was aimed at various tasks performed by 21 study participants from different radiation oncology professional subgroups (simulation therapists, radiation therapists, physicists, dosimetrists, and physicians). Workload was assessed using National Aeronautics and Space Administration Task-Load Index (NASA TLX). Sources of stressors were quantified using observational methods and segregated using a standard taxonomy. Comparisons between professional subgroups and tasks were made using analysis of variance ANOVA, multivariate ANOVA, and Duncan test. An association between workload levels (NASA TLX) and the frequency of radiotherapy incidents (WHO incidents) was explored (Pearson correlation test). Results: A total of 173 workload assessments were obtained. Overall, simulation therapists had relatively low workloads (NASA TLX range, 30-36), and physicists had relatively high workloads (NASA TLX range, 51-63). NASA TLX scores for physicians, radiation therapists, and dosimetrists ranged from 40-52. There was marked intertask/professional subgroup variation (P<.0001). Mental demand (P<.001), physical demand (P=.001), and effort (P=.006) significantly differed among professional subgroups. Typically, there were 3-5 stressors per cycle of analyzed tasks with the following distribution: interruptions (41.4%), time factors (17%), technical factors (13.6%), teamwork issues (11.6%), patient factors (9.0%), and environmental factors (7.4%). A positive association between workload and frequency of reported radiotherapy incidents by the WHO was found (r = 0.87, P value=.045

  7. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Sara Ouahabi

    2014-12-01

    The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers' needs. Due to its advantages, the Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. For education, the cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable in an inter-university Cloud is not always easy. Diffusion of pedagogical content on the Cloud by different E-Learning institutions leads to heterogeneous content, which influences the quality of teaching that universities offer to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor to model heterogeneous content in the cloud. The new approach that we followed focuses on keeping Educational Cloud content interoperable for teachers and learners, and facilitates the identification, reuse, sharing, and adaptation of teaching and learning resources in the Cloud.

  8. Moving towards Cloud Security

    Edit Szilvia Rubóczki; Zoltán Rajnai

    2015-01-01

    Cloud computing hosts and delivers many different services via the Internet. There are many reasons why people opt for using cloud resources. Cloud development is increasing fast while many related services lag behind, for example mass awareness of cloud security. The new generation uploads videos and pictures to cloud storage without a second thought, but only a few know about data privacy, data management and the ownership of data stored in the cloud. In an enterprise environment th...

  9. Workload Balancing on Heterogeneous Systems: A Case Study of Sparse Grid Interpolation

    Muraraşu, Alin; Weidendorfer, Josef; Bode, Arndt

    2012-01-01

    load balancing is essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression

  10. Cloud-Top Entrainment in Stratocumulus Clouds

    Mellado, Juan Pedro

    2017-01-01

    Cloud entrainment, the mixing between cloudy and clear air at the boundary of clouds, constitutes one paradigm for the relevance of small scales in the Earth system: By regulating cloud lifetimes, meter- and submeter-scale processes at cloud boundaries can influence planetary-scale properties. Understanding cloud entrainment is difficult given the complexity and diversity of the associated phenomena, which include turbulence entrainment within a stratified medium, convective instabilities driven by radiative and evaporative cooling, shear instabilities, and cloud microphysics. Obtaining accurate data at the required small scales is also challenging, for both simulations and measurements. During the past few decades, however, high-resolution simulations and measurements have greatly advanced our understanding of the main mechanisms controlling cloud entrainment. This article reviews some of these advances, focusing on stratocumulus clouds, and indicates remaining challenges.

  11. Cloud Infrastructure & Applications - CloudIA

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University establishes a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies, by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and mentions our early experiences in building a private cloud using an existing infrastructure.

  12. The Magellan Final Report on Cloud Computing

    Coghlan, Susan; Yelick, Katherine

    2011-12-21

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid- range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  13. Silicon Photonics Cloud (SiCloud)

    DeVore, P. T. S.; Jiang, Y.; Lynch, M.

    2015-01-01

    Silicon Photonics Cloud (SiCloud.org) is the first silicon photonics interactive web tool. Here we report new features of this tool including mode propagation parameters and mode distribution galleries for user specified waveguide dimensions and wavelengths.

  14. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  15. File-System Workload on a Scientific Multiprocessor

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  16. Nursing workloads in family health: implications for universal access.

    de Pires, Denise Elvira Pires; Machado, Rosani Ramos; Soratto, Jacks; Scherer, Magda dos Anjos; Gonçalves, Ana Sofia Resque; Trindade, Letícia Lima

    2016-01-01

    To identify the workloads of nursing professionals of the Family Health Strategy, considering their implications for the effectiveness of universal access. Qualitative study with nursing professionals of the Family Health Strategy of the South, Central West and North regions of Brazil, using methodological triangulation. For the analysis, resources of the Atlas.ti software and Thematic Content Analysis were associated, and the data were interpreted based on the labor process and workloads as theoretical approaches. The way of working in the Family Health Strategy has predominantly resulted in an increase in the workloads of the nursing professionals, with emphasis on work overload, excess of demand, problems in the physical infrastructure of the units and failures in the care network, which hinders its effectiveness as a preferred strategy to achieve universal access to health. On the other hand, teamwork, affinity for the work performed, bond with the user, and effectiveness of the assistance contributed to reducing their workloads. Investments in elements that reduce the nursing workloads, such as changes in working conditions and management, can contribute to the effectiveness of the Family Health Strategy and achieving the goal of universal access to health.

  17. Evaluating the Efficacy of the Cloud for Cluster Computation

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
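
    A quick back-of-the-envelope check of the HPL figures quoted above (a sketch only; the per-core peak is merely inferred from the reported 2 TFLOPS and 70% efficiency, it is not stated independently in the paper):

```python
# Rough efficiency arithmetic for the 240-core EC2 cluster described above.
achieved_tflops = 2.0     # reported High-Performance Linpack result
efficiency = 0.70         # reported fraction of theoretical peak
cores = 240

theoretical_tflops = achieved_tflops / efficiency
per_core_gflops = theoretical_tflops * 1000.0 / cores

print(f"theoretical peak ~ {theoretical_tflops:.2f} TFLOPS")
print(f"implied per-core peak ~ {per_core_gflops:.1f} GFLOPS/core")
```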

  18. [Analysis on workload for hospital DOTS service].

    Nagata, Yoko; Urakawa, Minako; Kobayashi, Noriko; Kato, Seiya

    2014-04-01

    A directly observed treatment short course (DOTS) trial was launched in Japan in the late 1990s and targeted patients with social depression in urban areas. Based on these findings, the Ministry of Health, Labour and Welfare established the Japanese DOTS Strategy in 2003, which is a comprehensive support service ensuring the adherence of tuberculosis patients to drug administration. DOTS services are initially provided at the hospital to patients with infectious tuberculosis who are hospitalized according to the Infectious Diseases Control Law. After being discharged from the hospital, the patients are referred to a public health center. However, a survey conducted in 2008 indicated that not all patients receive appropriate DOTS services at some hospitals. In the present study, we aimed to evaluate the protocols and workload of DOTS at hospitals that are actively involved in tuberculosis medical practice, including DOTS, to assess whether the hospital DOTS services were adequate. We reviewed a series of articles on hospital DOTS from a Japanese journal on nursing for tuberculosis patients and identified 25 activities regarding the hospital DOTS service. These 25 items were then classified into 3 categories: health education to patients, support for adherence, and coordination with the health center. In total, 20 hospitals that had > 20 authorized tuberculosis beds were selected--while considering the geographical balance, schedule of this survey, etc.--from 33 hospitals where an ex-trainee of the tuberculosis control expert training program in the Research Institute of Tuberculosis (RIT) was working and 20 hospitals that had collaborated with our previous survey on tuberculosis medical facilities. All the staff associated with the DOTS service were asked to record the total working time as well as the time spent for each activity. The data were collected and analyzed at the RIT. The working times for each activity of the DOTS service for nurses, pharmacists

  19. The research of the availability at cloud service systems

    Demydov, Ivan; Klymash, Mykhailo; Kharkhalis, Zenoviy; Strykhaliuk, Bohdan; Komada, Paweł; Shedreyeva, Indira; Targeusizova, Aliya; Iskakova, Aigul

    2017-08-01

    This paper is devoted to the numerical investigation of the availability of cloud service systems. Criteria and constraint calculations were performed, and the results were analyzed for the purpose of synthesizing distributed service platforms based on a cloud service-oriented architecture, such as variations of the availability and system performance indices over a defined set of the main parameters. The synthesis method has been numerically generalized to account for the type of service workload, represented statistically through the Hurst parameter, for each integrated service that must be implemented within the service delivery platform; the platform itself is synthesized by structurally matching virtual machines, combining elementary servicing components into a best-of-breed solution. The restrictions following from Amdahl's Law demonstrate the necessity of clustering cloud networks, which makes it possible to break the complex dynamic network into separate segments, simplifying access to the resources of virtual machines and, in general, to the "clouds", simplifying the complex topological structure and enhancing overall system performance. Overall, the proposed approaches and obtained results numerically justify and algorithmically describe the process of structural and functional synthesis of efficient distributed service platforms which, during configuration and operation, can adapt to a dynamic environment in terms of the comprehensive range of services and the pulsing workload of nomadic users.
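
    The Amdahl's Law restriction mentioned above can be made concrete with the textbook speedup bound; the parallel fraction used below is purely illustrative and is not a value taken from the paper:

```python
# Amdahl's Law: the speedup of a task whose fraction p parallelizes perfectly
# over n workers is bounded by 1 / ((1 - p) + p / n). The bound saturates as
# n grows, which is the motivation given above for clustering cloud networks.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 32, 128, 1024):
    print(f"n = {n:5d}  speedup = {amdahl_speedup(0.9, n):5.2f}")  # p = 0.9 is assumed
```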

  20. NASA TLX: software for assessing subjective mental workload.

    Cao, Alex; Chintamani, Keshav K; Pandya, Abhilash K; Ellis, R Darin

    2009-02-01

    The NASA Task Load Index (TLX) is a popular technique for measuring subjective mental workload. It relies on a multidimensional construct to derive an overall workload score based on a weighted average of ratings on six subscales: mental demand, physical demand, temporal demand, performance, effort, and frustration level. A program for implementing a computerized version of the NASA TLX is described. The software version assists in simplifying collection, postprocessing, and storage of raw data. The program collects raw data from the subject and calculates the weighted (or unweighted) workload score, which is output to a text file. The program can also be tailored to a specific experiment using a simple input text file, if desired. The program was designed in Visual Studio 2005 and is capable of running on a Pocket PC with Windows CE or on a PC with Windows 2000 or higher. The NASA TLX program is available for free download.
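
    A minimal sketch of the scoring rule the abstract describes (six subscale ratings combined into a weighted average); the example ratings and weights are hypothetical, and the 0-100 rating scale with 15 pairwise-comparison weights follows the standard NASA-TLX procedure rather than anything specific to this software:

```python
# NASA-TLX overall score: weighted average of six subscale ratings (0-100).
# Weights come from 15 pairwise comparisons between subscales, so they are
# integers 0-5 summing to 15; omitting them gives the unweighted ("raw") TLX.
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def tlx_score(ratings, weights=None):
    if weights is None:
        return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)
    assert sum(weights[s] for s in SUBSCALES) == 15, "weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}   # hypothetical
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}
print(tlx_score(ratings), tlx_score(ratings, weights))
```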

  1. Nursing workload in a trauma intensive care unit

    Luana Loppi Goulart

    2014-06-01

    Full Text Available Severely injured patients with multiple and conflicting injuries confront nursing professionals in critical care units with care management challenges. The goal of the present study is to evaluate nursing workload and verify the correlation between workload and the APACHE II severity index. It is a descriptive study, conducted in the Trauma Intensive Care Unit of a teaching hospital. We used the Nursing Activities Score and APACHE II as instruments. The sample comprised 32 patients, of which most were male, young adults, presenting polytrauma, coming from the Reference Emergency Unit, in surgical treatment, and discharged from the ICU. The average obtained on the Nursing Activities Score instrument was 72% during hospitalization periods. The data displayed a moderate correlation between workload and patient severity. In other words, the higher the score, the higher the patient’s mortality risk. doi: 10.5216/ree.v16i2.22922.

  2. Is aerobic workload positively related to ambulatory blood pressure?

    Korshøj, Mette; Clays, Els; Lidegaard, Mark

    2016-01-01

    workload and ambulatory blood pressure (ABP) are lacking. The aim was to explore the relationship between objectively measured relative aerobic workload and ABP. METHODS: A total of 116 cleaners aged 18-65 years were included after informed consent was obtained. A portable device (Spacelabs 90217......) was mounted for 24-h measurements of ABP, and an Actiheart was mounted for 24-h heart rate measurements to calculate relative aerobic workload as percentage of relative heart rate reserve. A repeated-measure multi-adjusted mixed model was applied for analysis. RESULTS: A fully adjusted mixed model...... of measurements throughout the day showed significant positive relations (p ABP and 0.30 ± 0.04 mmHg (95 % CI 0.22-0.38 mmHg) in diastolic ABP. Correlations between...

  3. Multi tenancy for cloud-based in-memory column databases: workload management and data placement

    Schaffner, Jan

    2014-01-01

    With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at a low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi tenancy," a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi tenancy on the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. For efficiently

  4. Training improves laparoscopic tasks performance and decreases operator workload.

    Hu, Jesse S L; Lu, Jirong; Tan, Wee Boon; Lomanto, Davide

    2016-05-01

    It has been postulated that increased operator workload during task performance may increase fatigue and surgical errors. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is a validated tool for self-assessment for workload. Our study aims to assess the relationship of workload and performance of novices in simulated laparoscopic tasks of different complexity levels before and after training. Forty-seven novices without prior laparoscopic experience were recruited in a trial to investigate whether training improves task performance as well as mental workload. The participants were tested on three standard tasks (ring transfer, precision cutting and intracorporeal suturing) in increasing complexity based on the Fundamentals of Laparoscopic Surgery (FLS) curriculum. Following a period of training and rest, participants were tested again. Test scores were computed from time taken and time penalties for precision errors. Test scores and NASA-TLX scores were recorded pre- and post-training and analysed using paired t tests. One-way repeated measures ANOVA was used to analyse differences in NASA-TLX scores between the three tasks. NASA-TLX score was lowest with ring transfer and highest with intracorporeal suturing. This was statistically significant in both pre-training (p NASA-TLX scores mirror the changes in test scores for the three tasks. Workload scores decreased significantly after training for all three tasks (ring transfer = 2.93, p NASA-TLX score is an accurate reflection of the complexity of simulated laparoscopic tasks in the FLS curriculum. This also correlates with the relationship of test scores between the three tasks. Simulation training improves both performance score and workload score across the tasks.

  5. Dynamic electronic institutions in agent oriented cloud robotic systems.

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress upon developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent oriented development of cloud robotic systems. The trade-view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions, the process of formation, reformation and dissolution of institutions is automated, leading to run time adaptations in groups of agents. DEIs in agent oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  6. The CLOUD experiment

    Maximilien Brice

    2006-01-01

    The Cosmics Leaving Outdoor Droplets (CLOUD) experiment as shown by Jasper Kirkby (spokesperson). Kirkby shows a sketch to illustrate the possible link between galactic cosmic rays and cloud formations. The CLOUD experiment uses beams from the PS accelerator at CERN to simulate the effect of cosmic rays on cloud formations in the Earth's atmosphere. It is thought that cosmic ray intensity is linked to the amount of low cloud cover due to the formation of aerosols, which induce condensation.

  7. BUSINESS INTELLIGENCE IN CLOUD

    Celina M. Olszak

    2014-01-01

    The paper reviews and critiques current research on Business Intelligence (BI) in cloud. This review highlights that organizations face various challenges using BI cloud. The research objectives for this study are a conceptualization of the BI cloud issue, as well as an investigation of some benefits and risks from BI cloud. The study was based mainly on a critical analysis of literature and some reports on BI cloud usage. The results of this research can be used by IT and business leaders ...

  8. Cloud Robotics Platforms

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. Cloud provides unlimited computation power, memory, storage and especially collaboration opportunity. Cloud-enabled robots are divided into two categories as standalone and networked robots. This article surveys cloud robotic platforms, standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM and monitoring.

  9. Classification Systems for Individual Differences in Multiple-task Performance and Subjective Estimates of Workload

    Damos, D. L.

    1984-01-01

    Human factors practitioners often are concerned with mental workload in multiple-task situations. Investigations of these situations have demonstrated repeatedly that individuals differ in their subjective estimates of workload. These differences may be attributed in part to individual differences in definitions of workload. However, after allowing for differences in the definition of workload, there are still unexplained individual differences in workload ratings. The relation between individual differences in multiple-task performance, subjective estimates of workload, information processing abilities, and the Type A personality trait were examined.

  10. Evaluation of mental workload on digital maintenance systems in nuclear power plants

    Hwang, S. L.; Huang, F. H.; Lin, J. C.; Liang, G. F.; Yenn, T. C.; Hsu, C. C.

    2006-01-01

    The purpose of this study is to evaluate operators' mental workload when dealing with digital maintenance systems in Nuclear Power Plants. First of all, according to the factors affecting mental workload, a questionnaire was designed to evaluate the mental workload of maintenance operators at the Second Nuclear Power Plant (NPP) in Taiwan. Then, sixteen maintenance engineers of the Second NPP participated in the questionnaire survey. The results indicated that the mental workload was lower in digital systems than in analog systems. Finally, a mental workload model based on a neural network technique was developed to predict the workload of maintenance operators in digital maintenance systems. (authors)

  11. Using the NASA Task Load Index to Assess Workload in Electronic Medical Records.

    Hudson, Darren; Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Electronic medical records (EMRs) have been expected to decrease health professional workload. The NASA Task Load Index has become an important tool for assessing workload in many domains. However, its application in assessing the impact of an EMR on nurses' workload has remained to be explored. In this paper we report the results of a study of workload and explore the utility of applying the NASA Task Load Index to assess the impact of an EMR at the end of its lifecycle on nurses' workload. It was found that mental and temporal demands contributed most to the workload. Further work along these lines is recommended.

  12. Cloud Processed CCN Suppress Stratus Cloud Drizzle

    Hudson, J. G.; Noble, S. R., Jr.

    2017-12-01

    Conversion of sulfur dioxide to sulfate within cloud droplets increases the sizes and decreases the critical supersaturation, Sc, of cloud residual particles that had nucleated the droplets. Since other particles remain at the same sizes and Sc, a size and Sc gap is often observed. Hudson et al. (2015) showed higher cloud droplet concentrations (Nc) in stratus clouds associated with bimodal high-resolution CCN spectra from the DRI CCN spectrometer compared to clouds associated with unimodal CCN spectra (not cloud processed). Here we show that CCN spectral shape (bimodal or unimodal) affects all aspects of stratus cloud microphysics and drizzle. Panel A shows mean differential cloud droplet spectra that have been divided according to traditional slopes, k, of the 131 measured CCN spectra in the Marine Stratus/Stratocumulus Experiment (MASE) off the Central California coast. k is generally high within the supersaturation, S, range of stratus clouds (< 0.5%). Because cloud processing decreases Sc of some particles, it reduces k. Panel A shows higher concentrations of small cloud droplets apparently grown on lower k CCN than clouds grown on higher k CCN. At small droplet sizes the concentrations follow the k order of the legend, black, red, green, blue (lowest to highest k). Above 13 µm diameter the lines cross and the hierarchy reverses so that blue (highest k) has the highest concentrations followed by green, red and black (lowest k). This reversed hierarchy continues into the drizzle size range (panel B) where the most drizzle drops, Nd, are in clouds grown on the least cloud-processed CCN (blue), while clouds grown on the most processed CCN (black) have the lowest Nd. Suppression of stratus cloud drizzle by cloud processing is an additional 2nd indirect aerosol effect (IAE) that, along with the enhancement of the 1st IAE by higher Nc (panel A), is above and beyond the original IAE. However, further similar analysis is needed in other cloud regimes to determine if MASE was

  13. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    Spiegel, J.K.; Aemisegger, F.; Scholl, M.; Wienhold, F.G.; Collett, J.L.; Lee, T.; van Pinxteren, D.; Mertes, S.; Tilgner, A.; Herrmann, H.; Werner, Roland A.; Buchmann, N.; Eugster, W.

    2012-01-01

    In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to be in isotopic equilibrium immediately with the surrounding water vapor. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets have been collected at different moments in time, delivering isotope ratios from different collection times.

  14. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    J. K. Spiegel

    2012-10-01

    Full Text Available In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to be in isotopic equilibrium immediately with the surrounding water vapor. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets have been collected at different moments in time, delivering isotope ratios from different collection times.

  15. Assessing Clinical Trial-Associated Workload in Community-Based Research Programs Using the ASCO Clinical Trial Workload Assessment Tool.

    Good, Marjorie J; Hurley, Patricia; Woo, Kaitlin M; Szczepanek, Connie; Stewart, Teresa; Robert, Nicholas; Lyss, Alan; Gönen, Mithat; Lilenbaum, Rogerio

    2016-05-01

    Clinical research program managers are regularly faced with the quandary of determining how much of a workload research staff members can manage while they balance clinical practice and still achieve clinical trial accrual goals, maintain data quality and protocol compliance, and stay within budget. A tool was developed to measure clinical trial-associated workload, to apply objective metrics toward documentation of work, and to provide clearer insight to better meet clinical research program challenges and aid in balancing staff workloads. A project was conducted to assess the feasibility and utility of using this tool in diverse research settings. Community-based research programs were recruited to collect and enter clinical trial-associated monthly workload data into a web-based tool for 6 consecutive months. Descriptive statistics were computed for self-reported program characteristics and workload data, including staff acuity scores and number of patient encounters. Fifty-one research programs that represented 30 states participated. Median staff acuity scores were highest for staff with patients enrolled in studies and receiving treatment, relative to staff with patients in follow-up status. Treatment trials typically resulted in higher median staff acuity, relative to cancer control, observational/registry, and prevention trials. Industry trials exhibited higher median staff acuity scores than trials sponsored by the National Institutes of Health/National Cancer Institute, academic institutions, or others. The results from this project demonstrate that trial-specific acuity measurement is a better measure of workload than simply counting the number of patients. The tool was shown to be feasible and useable in diverse community-based research settings. Copyright © 2016 by American Society of Clinical Oncology.

  16. Relationship between cloud radiative forcing, cloud fraction and cloud albedo, and new surface-based approach for determining cloud albedo

    Y. Liu; W. Wu; M. P. Jensen; T. Toto

    2011-01-01

    This paper focuses on three interconnected topics: (1) quantitative relationship between surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo; (2) surface-based approach for measuring cloud albedo; (3) multiscale (diurnal, annual and inter-annual) variations and covariations of surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo. An analytical expression is first derived to quantify the relationship between cloud radiative forcing, cloud fractio...

  17. Role of adenosine in regulating the heterogeneity of skeletal muscle blood flow during exercise in humans

    Heinonen, Ilkka; Nesterov, Sergey V; Kemppainen, Jukka

    2007-01-01

    Evidence from both animal and human studies suggests that adenosine plays a role in the regulation of exercise hyperemia in skeletal muscle. We tested whether adenosine also plays a role in the regulation of blood flow (BF) distribution and heterogeneity among and within quadriceps femoris (QF... receptor blockade. BF heterogeneity within muscles was calculated from 16-mm(3) voxels in BF images and heterogeneity among the muscles from the mean values of the four QF compartments. Mean BF in the whole QF and its four parts increased, and heterogeneity decreased with workload both without... and with theophylline (P... heterogeneity among the QF muscles, yet blockade increased within-muscle BF heterogeneity in all four QF muscles (P = 0.03). Taken together, these results show that BF becomes less heterogeneous with increasing...

  18. Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers

    Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto

    2017-10-01

    Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing their fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time and leads to a situation where the resources are under-utilized. These scheduling strategies have been identified by the INDIGO-DataCloud project as being too simplistic for accommodating scientific workloads in an efficient way, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we will present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.
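
    A toy sketch (not INDIGO code) contrasting the first-come, first-served behaviour criticised above with a fair-share ordering, in which the pending request of the project that has consumed the least so far is scheduled next; project names and request sizes are invented for illustration:

```python
# Fair-share dispatch: instead of serving requests strictly in arrival order,
# pick the pending request whose project has the smallest accumulated usage.
class FairShareQueue:
    def __init__(self):
        self.usage = {}      # project -> resources consumed so far
        self.pending = []    # (project, requested cores), in arrival order

    def submit(self, project, cores):
        self.pending.append((project, cores))

    def next(self):
        idx = min(range(len(self.pending)),
                  key=lambda i: self.usage.get(self.pending[i][0], 0.0))
        project, cores = self.pending.pop(idx)
        self.usage[project] = self.usage.get(project, 0.0) + cores
        return project, cores

q = FairShareQueue()
q.submit("projA", 4); q.submit("projA", 4); q.submit("projB", 2)
print([q.next() for _ in range(3)])   # projB is served before projA's second request
```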

  19. Cloud CCN feedback

    Hudson, J.G.

    1992-01-01

    Cloud microphysics affects cloud albedo, precipitation efficiency and the extent of cloud feedback in response to global warming. Compared to other cloud parameters, microphysics is unique in its large range of variability and the fact that much of the variability is anthropogenic. Probably the most important determinant of cloud microphysics is the spectrum of cloud condensation nuclei (CCN), which displays considerable variability and has a large anthropogenic component. When analyzed in combination, three field observation projects display the interrelationship between CCN and cloud microphysics. CCN were measured with the Desert Research Institute (DRI) instantaneous CCN spectrometer. Cloud microphysical measurements were obtained with the National Center for Atmospheric Research Lockheed Electra. Since CCN and cloud microphysics each affect the other, a positive feedback mechanism can result

  20. Reasons for adopting technological innovations reducing physical workload in bricklaying

    Jong, A.M. de; Vink, P.; Kroon, J.C.A. de

    2003-01-01

    In this paper the adoption of technological innovations to improve the work of bricklayers and bricklayers' assistants is evaluated. Two studies were performed among 323 subjects to determine the adoption of the working methods, the perceived workload, experiences with the working methods, and the

  1. HIV infection, tuberculosis and workload in a general paediatric ward

    South African Journal of Child Health ... To describe the impact of HIV infection and tuberculosis on the workload of a general paediatric ward at Red Cross War Memorial Children's Hospital in 2007. Methods. Prospective descriptive surveillance of the patient composition of a general paediatric ward over a 1-year period.

  2. A participatory ergonomics approach to reduce mental and physical workload

    Vink, P.; Peeters, M.; Grundemann, R.W.M.; Smulders, P.G.W.; Kompier, M.A.J.; Dul, J.

    1995-01-01

    A step-by-step approach to better work, aimed at reducing mental and physical workload in office work, is evaluated. This approach is based on a strong commitment of the management in the enterprise, and on as much direct worker participation as possible. After every step the workers proposed how to

  3. Bitwise dimensional co-clustering for analytical workloads

    S. Baumann (Stephan); P.A. Boncz (Peter); K.-U. Sattler

    2016-01-01

    Analytical workloads in data warehouses often include heavy joins where queries involve multiple fact tables in addition to the typical star-patterns, dimensional grouping and selections. In this paper we propose a new processing and storage framework called Bitwise Dimensional

  4. Bitwise dimensional co-clustering for analytical workloads

    Baumann, Stephan; Boncz, Peter; Sattler, Kai Uwe

    2016-01-01

    Analytical workloads in data warehouses often include heavy joins where queries involve multiple fact tables in addition to the typical star-patterns, dimensional grouping and selections. In this paper we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC)

  5. Workloads in Australian emergency departments: a descriptive study.

    Lyneham, Joy; Cloughessy, Liz; Martin, Valmai

    2008-07-01

    This study aimed to identify the current workload of clinical nurses, managers and educators in Australian Emergency Departments according to the classification of the department. Additionally, the relationship of experienced to inexperienced clinical staff was examined. A descriptive research method was used, utilising a survey distributed to 394 Australian emergency departments, with a 21% response rate. Nursing workloads were calculated and a ratio of nurse to patient was established. The ratios included nurse to patient, management and educators to clinical staff. Additionally the percentage of junior to senior clinical staff was also calculated. Across all categories of emergency departments the mean nurse:patient ratios were 1:15 (am shift), 1:7 (pm shift) and 1:4 (night shift). During this period an average of 17.1% of attendances were admitted to hospital. There were 27 staff members for each manager and 23.3 clinical staff for each educator. The percentage of junior staff rostered ranged from 10% to 38%. Emergency nurses cannot work under such pressure as it may compromise the care given to patients and consequently have a negative effect on the nurse personally. However, emergency nurses are dynamically adjusting to the workload. Such conditions as described in this study could give rise to burnout and attrition of experienced emergency nurses as they cannot resolve the conflict between workload and providing quality nursing care.

  6. Respiratory sinus arrhythmia as a measure of cognitive workload.

    Muth, Eric R; Moss, Jason D; Rosopa, Patrick J; Salley, James N; Walker, Alexander D

    2012-01-01

    The current standard for measuring cognitive workload is the NASA Task-load Index (TLX) questionnaire. Although this measure has a high degree of reliability, diagnosticity, and sensitivity, a reliable physiological measure of cognitive workload could provide a non-invasive, objective measure of workload that could be tracked in real or near real-time without interrupting the task. This study investigated changes in respiratory sinus arrhythmia (RSA) during seven different sub-sections of a proposed selection test for Navy aviation and compared them to changes reported on the NASA-TLX. 201 healthy participants performed the seven tasks of the Navy's Performance Based Measure. RSA was measured during each task and the NASA-TLX was administered after each task. Multi-level modeling revealed that RSA significantly predicted NASA-TLX scores. A moderate within-subject correlation was also found between RSA and NASA TLX scores. The findings support the potential development of RSA as a real-time measure of cognitive workload. Copyright © 2011. Published by Elsevier B.V.

  7. Measuring Workload Weak Resilience Signals at a Rail Control Post

    Siegel, A.W.; Schraagen, J.M.C.

    2014-01-01

    OCCUPATIONAL APPLICATIONS This article describes an observational study at a rail control post to measure workload weak resilience signals. A weak resilience signal indicates a possible degradation of a system's resilience, which is defined as the ability of a complex socio-technical system to cope

  8. Pilot workload evaluated with subjective and physiological measures

    Veltman, J.A.; Gaillard, A.W.K.

    1993-01-01

    The aim of the present study is to validate different measures for mental workload. Ten aspirant fighter jet pilots flew several scenarios in a flight simulator. The scenarios were divided into segments with different levels of task load. During the flight, heart rate, respiration and blood pressure

  9. Estimation of the workload correlation in a Markov fluid queue

    Kaynar, B.; Mandjes, M.R.H.

    2013-01-01

    This paper considers a Markov fluid queue, focusing on the correlation function of the stationary workload process. A simulation-based computation technique is proposed, which relies on a coupling idea. Then an upper bound on the variance of the resulting estimator is given, which reveals how the

  10. Nonparametric inference from the M/G/1 workload

    Hansen, Martin Bøgsted; Pitts, Susan M.

    2006-01-01

    Consider an M/G/1 queue with unknown service-time distribution and unknown traffic intensity ρ. Given systematically sampled observations of the workload, we construct estimators of ρ and of the service-time distribution function, and we study asymptotic properties of these estimators....

  11. Nonparametric inference from the M/G/1 workload

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Consider an M/G/1 queue with unknown service-time distribution and unknown traffic intensity ρ. Given systematically sampled observations of the workload, we construct estimators of ρ and of the service-time distribution function, and we study asymptotic properties of these estimators....
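
    A small simulation sketch of the setting in the two records above (not the authors' estimator): in a stationary M/G/1 queue the workload is positive exactly when the server is busy, which happens with probability ρ, so the fraction of positive sampled workload observations already gives a simple estimate of the traffic intensity. Parameter values below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mean_service, horizon = 0.7, 1.0, 200_000.0   # true rho = lam * mean_service = 0.7

# Poisson arrivals with exponential service times (any service law would do).
n = rng.poisson(lam * horizon)
arrivals = np.concatenate(([0.0], np.sort(rng.uniform(0.0, horizon, n))))
services = np.concatenate(([0.0], rng.exponential(mean_service, n)))

# Workload just after each arrival (Lindley-type recursion in continuous time).
v_after = np.empty(n + 1)
v, prev_t = 0.0, 0.0
for i, (t, s) in enumerate(zip(arrivals, services)):
    v = max(v - (t - prev_t), 0.0) + s
    v_after[i], prev_t = v, t

# Systematic sampling of the workload every 5 time units.
sample_times = np.arange(1.0, horizon, 5.0)
idx = np.searchsorted(arrivals, sample_times, side="right") - 1
workload = np.maximum(v_after[idx] - (sample_times - arrivals[idx]), 0.0)

print("estimated rho:", (workload > 0).mean())   # close to 0.7 for a long horizon
```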

  12. Effects of life event stress, exercise workload, hardiness and coping ...

    Effects of life event stress, exercise workload, hardiness and coping style on susceptibility to the common cold. GA Struwig, M Papaikonomou, P Kruger. Abstract. No Abstract. South African Journal for Physical, Health Education, Recreation and Dance Vol. 12(4) 2006: pp. 369-383.

  13. Comparison of physical workload in four Gari -frying working ...

    All physical labour requires physical exertion which indicates the level of physical workload involved. This paper examines the energy expenditure in four working postures of gari-frying (garification) workers in southwestern Nigeria. The postures include sitting-beside (SB), sitting-in-front (SF), ...

  14. Simple grain mill boosts production and eases women's workload ...

    2013-01-11

    Jan 11, 2013 ... Simple grain mill boosts production and eases women's workload ... Farmers also like the design because, unlike other machines, it can be easily adjusted for different millet varieties and sizes. ... Local manufacturing. Discussions have begun with local entrepreneurs to manufacture the grain mill, which ...

  15. Work and workload of Dutch primary care midwives in 2010.

    Wiegers, T.A.; Warmelink, J.C.; Spelten, E.R.; Klomp, G.M.T.; Hutton, E.K.

    2014-01-01

    Objective: To re-assess the work and workload of primary care midwives in the Netherlands. Background: In the Netherlands most midwives work in primary care as independent practitioners in a midwifery practice with two or more colleagues. Each practice provides 24/7 care coverage through office

  16. The effect of inclement weather on trauma orthopaedic workload.

    Cashman, J P

    2012-01-31

    BACKGROUND: Climate change models predict increasing frequency of extreme weather. One of the challenges hospitals face is how to make sure they have adequate staffing at various times of the year. AIMS: The aim of this study was to examine the effect of this severe inclement weather on hospital admissions, operative workload and cost in the Irish setting. We hypothesised that there is a direct relationship between cold weather and workload in a regional orthopaedic trauma unit. METHODS: Trauma orthopaedic workload in a regional trauma unit was examined over 2 months between December 2009 and January 2010. This corresponded with a period of severe inclement weather. RESULTS: We identified a direct correlation between the drop in temperature and increase in workload, with a corresponding increase in demand on resources. CONCLUSIONS: Significant cost savings could be made if these injuries were prevented. While the information contained in this study is important in the context of resource planning and staffing of hospital trauma units, it also highlights the vulnerability of the Irish population to wintery weather.

  17. Development of a nursing workload measurement instrument in burn care

    Jong, A.E.; Leeman, J.; Middelkoop, E.

    2009-01-01

    Existing workload measurement instruments fail to represent specific nursing activities in a setting where patients are characterized by a diversity of cause, location, extent and depth of burns, of age and of history. They also do not include educational levels and appropriate time standards. The

  18. Workload Characterization of a Leadership Class Storage Cluster

    Kim, Youngjae [ORNL; Gunasekaran, Raghul [ORNL; Shipman, Galen M [ORNL; Dillow, David A [ORNL; Zhang, Zhe [ORNL; Settlemyer, Bradley W [ORNL

    2010-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled by a Pareto distribution.
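
    A sketch of the modelling step mentioned above: fit a Pareto shape parameter to observed request inter-arrival times by maximum likelihood and draw a synthetic trace from the fitted law. The numbers are invented; nothing here reproduces the Spider data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for measured inter-arrival times (seconds) with a heavy tail.
observed = 0.01 * (1.0 + rng.pareto(a=1.8, size=10_000))

x_m = observed.min()                                   # scale (left endpoint)
alpha = observed.size / np.log(observed / x_m).sum()   # MLE of the Pareto shape

# Synthesize a new workload trace of inter-arrival times from the fitted law.
synthetic = x_m * (1.0 + rng.pareto(a=alpha, size=1_000))
print(f"alpha_hat = {alpha:.2f}, mean synthetic inter-arrival = {synthetic.mean():.4f} s")
```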

  19. Activity-based differentiation of pathologists' workload in surgical pathology

    Meijer, G.A.; Oudejans, J.J.; Koevoets, J.J.M.; Meijer, C.J.L.M.

    2009-01-01

    Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in types and numbers of specimens handled or protocols used will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a

  20. Benchmarking transaction and analytical processing systems: the creation of a mixed workload benchmark and its application

    Bog, Anja

    2014-01-01

    This book introduces a new benchmark for hybrid database systems, gauging the effect of adding OLAP to an OLTP workload and analyzing the impact of commonly used optimizations in historically separate OLTP and OLAP domains in mixed-workload scenarios.

  1. Analysis and modeling of social influence in high performance computing workloads

    Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies

  2. Workload assessment of surgeons: correlation between NASA TLX and blinks.

    Zheng, Bin; Jiang, Xianta; Tien, Geoffrey; Meneghetti, Adam; Panton, O Neely M; Atkins, M Stella

    2012-10-01

    Blinks are known as an indicator of visual attention and mental stress. In this study, surgeons' mental workload was evaluated utilizing a paper assessment instrument (National Aeronautics and Space Administration Task Load Index, NASA TLX) and by examining their eye blinks. Correlation between these two assessments was reported. Surgeons' eye motions were video-recorded using a head-mounted eye-tracker while the surgeons performed a laparoscopic procedure on a virtual reality trainer. Blink frequency and duration were computed using computer vision technology. The level of workload experienced during the procedure was reported by surgeons using the NASA TLX. A total of 42 valid videos were recorded from 23 surgeons. After blinks were computed, videos were divided into two groups based on the blink frequency: infrequent group (≤ 6 blinks/min) and frequent group (more than 6 blinks/min). Surgical performance (measured by task time and trajectories of tool tips) was not significantly different between these two groups, but NASA TLX scores were significantly different. Surgeons who blinked infrequently reported a higher level of frustration (46 vs. 34, P = 0.047) and higher overall level of workload (57 vs. 47, P = 0.045) than those who blinked more frequently. The correlation coefficients (Pearson test) between NASA TLX and the blink frequency and duration were -0.17 and 0.446. Reduction of blink frequency and shorter blink duration matched the increasing level of mental workload reported by surgeons. The value of using eye-tracking technology for assessment of surgeon mental workload was shown.

  3. Unsupervised classification of operator workload from brain signals

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin

    2016-06-01

    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.
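
    A minimal numerical sketch of the supervised CSP step named above (not the authors' pipeline): the spatial filters are generalized eigenvectors of the two class covariance matrices, and log band-power of the filtered signals then feeds any simple classifier. Array shapes and the number of filters are assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples), one per class."""
    cov = lambda trials: np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)
    c_a, c_b = cov(trials_a), cov(trials_b)
    # Generalized eigenproblem  C_a w = lambda (C_a + C_b) w.
    vals, vecs = eigh(c_a, c_a + c_b)
    order = np.argsort(vals)
    keep = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]  # both extremes
    return vecs[:, keep].T                      # (n_filters, n_channels)

def csp_features(trials, filters):
    power = np.array([np.var(filters @ x, axis=1) for x in trials])
    return np.log(power / power.sum(axis=1, keepdims=True))   # per-trial log band-power
```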

  4. Designing workload analysis questionnaire to evaluate needs of employees

    Astuti, Rahmaniyah Dwi; Navi, Muhammad Abdu Haq

    2018-02-01

    A mismatch between workload and work capacity is one of the main obstacles to achieving optimal results. In office settings, workload is difficult to determine because the work is not repetitive. Employees work towards targets set for a working period. At the end of the period, employee performance is usually evaluated to assess staffing needs. The aim of this study is to design a workload questionnaire tool to evaluate the efficiency level of a position, as an indicator of staffing needs, based on the Indonesian State Employment Agency regulation on workload analysis. This research is applied to State-Owned Enterprise PT. X by determining 3 positions as a pilot project. Position A is held by 2 employees, position B is held by 7 employees, and position C is held by 6 employees. From the calculation result, position A has an efficiency level of 1.33 or "very good", position B has an efficiency level of 1.71 or "enough", and position C has an efficiency level of 1.03 or "very good". The tool suggests that position A needs 3 people, position B 5 people, and position C 6 people. The difference between the current number of employees and the calculated need is then analyzed by interviewing the employees to gather their personal perceptions. It can be concluded that this workload evaluation tool can be used as an alternative way to evaluate staffing needs in offices.
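
    A hedged sketch of the kind of calculation behind the abstract's efficiency level and suggested headcount; the exact formula of the Indonesian regulation is not given above, so this assumes the common definition of workload in person-hours divided by the effective hours available to the current staff. All numbers are hypothetical:

```python
import math

def efficiency_level(workload_hours, staff_count, effective_hours_per_person):
    # >1 means the position is loaded beyond the capacity of its current staff.
    return workload_hours / (staff_count * effective_hours_per_person)

def suggested_headcount(workload_hours, effective_hours_per_person):
    return math.ceil(workload_hours / effective_hours_per_person)

workload, staff, effective = 3750.0, 2, 1250.0         # hypothetical yearly figures
print(efficiency_level(workload, staff, effective))    # 1.5 -> understaffed position
print(suggested_headcount(workload, effective))        # 3 people needed
```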

  5. Hybrid cloud for dummies

    Hurwitz, Judith; Halper, Fern; Kirsch, Dan

    2012-01-01

    Understand the cloud and implement a cloud strategy for your business Cloud computing enables companies to save money by leasing storage space and accessing technology services through the Internet instead of buying and maintaining equipment and support services. Because it has its own unique set of challenges, cloud computing requires careful explanation. This easy-to-follow guide shows IT managers and support staff just what cloud computing is, how to deliver and manage cloud computing services, how to choose a service provider, and how to go about implementation. It also covers security and

  6. Secure cloud computing

    Jajodia, Sushil; Samarati, Pierangela; Singhal, Anoop; Swarup, Vipin; Wang, Cliff

    2014-01-01

    This book presents a range of cloud computing security challenges and promising solution paths. The first two chapters focus on practical considerations of cloud computing. In Chapter 1, Chandramouli, Iorga, and Chokani describe the evolution of cloud computing and the current state of practice, followed by the challenges of cryptographic key management in the cloud. In Chapter 2, Chen and Sion present a dollar cost model of cloud computing and explore the economic viability of cloud computing with and without security mechanisms involving cryptographic mechanisms. The next two chapters addres

  7. Clouds of Venus

    Knollenberg, R G [Particle Measuring Systems, Inc., 1855 South 57th Court, Boulder, Colorado 80301, U.S.A.; Hansen, J [National Aeronautics and Space Administration, New York (USA). Goddard Inst. for Space Studies; Ragent, B [National Aeronautics and Space Administration, Moffett Field, Calif. (USA). Ames Research Center; Martonchik, J [Jet Propulsion Lab., Pasadena, Calif. (USA); Tomasko, M [Arizona Univ., Tucson (USA)

    1977-05-01

    The current state of knowledge of the Venusian clouds is reviewed. The visible clouds of Venus are shown to be quite similar to low level terrestrial hazes of strong anthropogenic influence. Possible nucleation and particle growth mechanisms are presented. The Pioneer Venus experiments that emphasize cloud measurements are described and their expected findings are discussed in detail. The results of these experiments should define the cloud particle composition, microphysics, thermal and radiative heat budget, rough dynamical features and horizontal and vertical variations in these and other parameters. This information should be sufficient to initialize cloud models which can be used to explain the cloud formation, decay, and particle life cycle.

  8. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some of the bioinformatics functionalities including sequence alignment, active site pose prediction and protein ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users using container based virtualization, OpenVz.

  9. SIMPLE HEURISTIC ALGORITHM FOR DYNAMIC VM REALLOCATION IN IAAS CLOUDS

    Nikita A. Balashov

    2018-03-01

    Full Text Available The rapid development of cloud technologies and their high prevalence in both commercial and academic areas have stimulated active research in the domain of optimal cloud resource management. One of the most active research directions is dynamic virtual machine (VM) placement optimization in clouds built on the Infrastructure-as-a-Service model. This kind of research may pursue different goals, with energy-aware optimization being the most common, as it addresses an urgent problem of green cloud computing: reducing energy consumption by data centers. In this paper we present a new heuristic algorithm for dynamic reallocation of VMs based on an approach presented in one of our previous works. In the algorithm we apply a 2-rank strategy to classify VMs and servers as highly or lowly active and solve four tasks: VM classification, host classification, forming a VM migration map, and VM migration. By dividing all of the VMs and servers into two classes, we attempt to reduce the risk of hardware overloads under overcommitment conditions and to reduce the influence of the occurring overloads on the performance of the cloud VMs. The presented algorithm was developed based on the workload profile of the JINR cloud (a scientific private cloud) with the goal of maximizing its usage, but it can also be applied in both public and private commercial clouds to organize the simultaneous use of different SLA and QoS levels in the same cloud environment by giving each VM rank its own level of overcommitment.
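
    A toy sketch (not the JINR implementation) of the 2-rank idea described above: VMs and hosts are classified as highly or lowly active by a utilisation threshold, and a migration map then moves lowly active VMs off overloaded hosts onto the least loaded one. Thresholds and figures are invented:

```python
ACTIVE_THRESHOLD = 0.25   # assumed cut between "highly" and "lowly" active VMs
HOST_OVERLOAD = 0.85      # assumed overload level under overcommitment

def classify(utilisation, threshold=ACTIVE_THRESHOLD):
    return "high" if utilisation >= threshold else "low"

def migration_map(vms, hosts):
    """vms: {name: (host, cpu_util)};  hosts: {name: cpu_util}."""
    targets = sorted(hosts, key=hosts.get)          # least loaded hosts first
    moves = {}
    for vm, (host, util) in vms.items():
        if hosts[host] >= HOST_OVERLOAD and classify(util) == "low":
            moves[vm] = (host, next(t for t in targets if t != host))
    return moves

vms = {"vm1": ("h1", 0.05), "vm2": ("h1", 0.70), "vm3": ("h2", 0.10)}
hosts = {"h1": 0.92, "h2": 0.40, "h3": 0.15}
print(migration_map(vms, hosts))   # only the lowly active vm1 leaves the hot host h1
```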

  10. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D T; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate to improve the

  11. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    Wei-Chun Chung

    Full Text Available Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce.We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard.CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate

  12. Radiative properties of clouds

    Twomey, S.

    1993-01-01

    The climatic effects of condensation nuclei in the formation of cloud droplets and the subsequent role of the cloud droplets as contributors to the planetary short-wave albedo is emphasized. Microphysical properties of clouds, which can be greatly modified by the degree of mixing with cloud-free air from outside, are discussed. The effect of clouds on visible radiation is assessed through multiple scattering of the radiation. Cloudwater or ice absorbs more with increasing wavelength in the near-infrared region, with water vapor providing the stronger absorption over narrower wavelength bands. Cloud thermal infrared absorption can be solely related to liquid water content at least for shallow clouds and clouds in the early development state. Three-dimensional general circulation models have been used to study the climatic effect of clouds. It was found for such studies (which did not consider variations in cloud albedo) that the cooling effects due to the increase in planetary short-wave albedo from clouds were offset by heating effects due to thermal infrared absorption by the cloud. Two permanent direct effects of increased pollution are discussed in this chapter: (a) an increase of absorption in the visible and near infrared because of increased amounts of elemental carbon, which gives rise to a warming effect climatically, and (b) an increased optical thickness of clouds due to increasing cloud droplet number concentration caused by increasing cloud condensation nuclei number concentration, which gives rise to a cooling effect climatically. An increase in cloud albedo from 0.7 to 0.87 produces an appreciable climatic perturbation of cooling up to 2.5 K at the ground, using a hemispheric general circulation model. Effects of pollution on cloud thermal infrared absorption are negligible

  13. Workload and job satisfaction among general practitioners: a review of the literature.

    Groenewegen, P.P.; Hutten, J.B.F.

    1991-01-01

    The workload of general practitioners (GPs) is an important issue in health care systems with capitation payment for GPs services. This article reviews the literature on determinants and consequences of workload and job satisfaction of GPs. Determinants of workload are located on the demand side

  14. Role of Academic Managers in Workload and Performance Management of Academic Staff: A Case Study

    Graham, Andrew T.

    2016-01-01

    This small-scale case study focused on academic managers to explore the ways in which they control the workload of academic staff and the extent to which they use the workload model in performance management of academic staff. The links that exist between the workload and performance management were explored to confirm or refute the conceptual…

  15. The associations between psychosocial workload and mental health complaints in different age groups

    Zoer, I.; Ruitenburg, M. M.; Botje, D.; Frings-Dresen, M. H. W.; Sluiter, J. K.

    2011-01-01

    The objective of the present study was to explore associations between psychosocial workload and mental health complaints in different age groups. A questionnaire was sent to 2021 employees of a Dutch railway company. Six aspects of psychosocial workload (work pressure, mental workload, emotional

  16. The Use of the Dynamic Solution Space to Assess Air Traffic Controller Workload

    D'Engelbronner, J.G.; Mulder, M.; Van Paassen, M.M.; De Stigter, S.; Huisman, H.

    2010-01-01

    Air traffic capacity is mainly bound by air traffic controller workload. In order to effectively find solutions for this problem, off-line pre-experimental workload assessment methods are desirable. In order to better understand the workload associated with air traffic control, previous research

  17. ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION

    2016-03-24

    ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION (AFIT-ENV-MS-16-M-166).

  18. The associations between psychosocial workload and mental health complaints in different age groups.

    Zoer, I.; Ruitenburg, M.M.; Botje, D.; Frings-Dresen, M.H.W.; Sluiter, J.K.

    2011-01-01

    The objective of the present study was to explore associations between psychosocial workload and mental health complaints in different age groups. A questionnaire was sent to 2021 employees of a Dutch railway company. Six aspects of psychosocial workload (work pressure, mental workload, emotional

  19. Nursing Workload and the Changing Health Care Environment: A Review of the Literature

    Neill, Denise

    2011-01-01

    Changes in the health care environment have impacted nursing workload, quality of care, and patient safety. Traditional nursing workload measures do not guarantee efficiency, nor do they adequately capture the complexity of nursing workload. Review of the literature indicates nurses perceive the quality of their work has diminished. Research has…

  20. The performance of workload control concepts in job shops : Improving the release method

    Land, MJ; Gaalman, GJC

    1998-01-01

    A specific class of production control concepts for job shops is based on the principles of workload control. Practitioners emphasise the importance of workload control. However, order release methods that reduce the workload on the shop floor show poor due date performance in job shop simulations.

  1. Moving towards Cloud Security

    Edit Szilvia Rubóczki

    2015-01-01

    Full Text Available Cloud computing hosts and delivers many different services via the Internet. There are many reasons why people opt for using cloud resources. Cloud development is advancing quickly while related concerns, for example mass awareness of cloud security, lag behind. The new generation uploads videos and pictures to cloud storage without a second thought, but only a few know about data privacy, data management and the ownership of data stored in the cloud. In an enterprise environment users have to know the rules of cloud usage, yet they often have little knowledge even of traditional IT security. It is therefore important to measure the level of their knowledge and to evolve the training system so that it develops security awareness. The article argues for new metrics and algorithms for measuring the security awareness of corporate users and employees that include the requirements of emerging cloud security.

  2. Cloud Computing for radiologists.

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  3. Cloud Computing for radiologists

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  4. Cloud computing for radiologists

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets free radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  5. Beating the tyranny of scale with a private cloud configured for Big Data

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, ranging from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end
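
    As a hedged sketch of the checksumming workload mentioned above, the following Python fragment fans per-file SHA-256 checksums out over a pool of worker processes; the archive path and worker count are placeholders, and a run over a hundred million files would in practice be batched through the batch cluster rather than a single script:

    import hashlib
    from concurrent.futures import ProcessPoolExecutor
    from pathlib import Path

    def checksum(path, chunk_bytes=1 << 20):
        """Stream one file through SHA-256 so memory use stays flat for very large files."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk_bytes), b""):
                digest.update(block)
        return str(path), digest.hexdigest()

    def checksum_tree(root="/archive/data", workers=16):
        """Fan the files of an archive tree out across worker processes."""
        files = [p for p in Path(root).rglob("*") if p.is_file()]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return dict(pool.map(checksum, files))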

  6. Marine cloud brightening

    Latham, John; Bower, Keith; Choularton, Tom; Coe, Hugh; Connolly, Paul; Cooper, Gary; Craft, Tim; Foster, Jack; Gadian, Alan; Galbraith, Lee; Iacovides, Hector; Johnston, David; Launder, Brian; Leslie, Brian; Meyer, John

    2012-01-01

    The idea behind the marine cloud-brightening (MCB) geoengineering technique is that seeding marine stratocumulus clouds with copious quantities of roughly monodisperse sub-micrometre sea water particles might significantly enhance the cloud droplet number concentration, and thereby the cloud albedo and possibly longevity. This would produce a cooling, which general circulation model (GCM) computations suggest could—subject to satisfactory resolution of technical and scientific problems identi...

  7. Cloud computing strategies

    Chorafas, Dimitris N

    2011-01-01

    A guide to managing cloud projects, Cloud Computing Strategies provides the understanding required to evaluate the technology and determine how it can be best applied to improve business and enhance your overall corporate strategy. Based on extensive research, it examines the opportunities and challenges that loom in the cloud. It explains exactly what cloud computing is, what it has to offer, and calls attention to the important issues management needs to consider before passing the point of no return regarding financial commitments.

  8. Towards Indonesian Cloud Campus

    Thamrin, Taqwan; Lukman, Iing; Wahyuningsih, Dina Ika

    2013-01-01

    Nowadays, Cloud Computing is the most discussed term in business and academic environments. A cloud campus has many benefits, such as on-demand access to file storage, e-mail, databases, educational resources, research applications and tools anywhere, for faculty, administrators, staff, students and other users in the university. Furthermore, a cloud campus reduces universities' IT complexity and cost. This paper discusses the implementation of an Indonesian cloud campus and the various opportunities and benefits...

  9. Cloud Infrastructure Security

    Velev , Dimiter; Zlateva , Plamena

    2010-01-01

    Cloud computing can help companies accomplish more by eliminating the physical bonds between an IT infrastructure and its users. Users can purchase services from a cloud environment that could allow them to save money and focus on their core business. At the same time certain concerns have emerged as potential barriers to rapid adoption of cloud services such as security, privacy and reliability. Usually the information security professiona...

  10. Cloud services in organization

    FUXA, Jan

    2013-01-01

    The work deals with the definition of the word cloud computing, cloud computing models, types, advantages, disadvantages, and comparing SaaS solutions such as: Google Apps and Office 365 in the area of electronic communications. The work deals with the use of cloud computing in the corporate practice, both good and bad practice. The following section describes the methodology for choosing the appropriate cloud service organization. Another part deals with analyzing the possibilities of SaaS i...

  11. Orchestrating Your Cloud Orchestra

    Hindle, Abram

    2015-01-01

    Cloud computing potentially ushers in a new era of computer music performance with exceptionally large computer music instruments consisting of 10s to 100s of virtual machines which we propose to call a `cloud-orchestra'. Cloud computing allows for the rapid provisioning of resources, but to deploy such a complicated and interconnected network of software synthesizers in the cloud requires a lot of manual work, system administration knowledge, and developer/operator skills. This is a barrier ...

  12. Cloud security mechanisms

    2014-01-01

    Cloud computing has brought great benefits in cost and flexibility for provisioning services. The greatest challenge of cloud computing remains however the question of security. The current standard tools in access control mechanisms and cryptography can only partly solve the security challenges of cloud infrastructures. In the recent years of research in security and cryptography, novel mechanisms, protocols and algorithms have emerged that offer new ways to create secure services atop cloud...

  13. Cloud computing for radiologists

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  14. Cloud Robotics Model

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  15. Genomics With Cloud Computing

    Sukhamrit Kaur; Sandeep Kaur

    2015-01-01

    Abstract Genomics is the study of the genome, which produces large amounts of data and therefore requires large storage and computation power. These issues are addressed by cloud computing, which provides various cloud platforms for genomics. These platforms offer many services to users, such as easy access to data, easy sharing and transfer, storage of hundreds of terabytes, and more computational power. Some cloud platforms are Google Genomics, DNAnexus and Globus Genomics. Various features of cloud computin...

  16. Chargeback for cloud services.

    Baars, T.; Khadka, R.; Stefanov, H.; Jansen, S.; Batenburg, R.; Heusden, E. van

    2014-01-01

    With pay-per-use pricing models, elastic scaling of resources, and the use of shared virtualized infrastructures, cloud computing offers more efficient use of capital and agility. To leverage the advantages of cloud computing, organizations have to introduce cloud-specific chargeback practices.
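
    A hedged Python sketch of the simplest form such a chargeback practice can take - metered usage multiplied by unit rates per consuming department; the meters and rates are invented for the example:

    # Hypothetical unit rates for metered resources.
    RATES = {"vcpu_hours": 0.04, "ram_gb_hours": 0.005, "storage_gb_months": 0.02, "requests_m": 0.40}

    def chargeback(usage_by_department):
        """Return the pay-per-use charge for each department from its metered usage."""
        bills = {}
        for department, usage in usage_by_department.items():
            bills[department] = round(sum(RATES[meter] * amount for meter, amount in usage.items()), 2)
        return bills

    usage = {
        "radiology": {"vcpu_hours": 1200, "ram_gb_hours": 4800, "storage_gb_months": 500, "requests_m": 3},
        "research":  {"vcpu_hours": 300,  "ram_gb_hours": 1200, "storage_gb_months": 2000, "requests_m": 1},
    }
    print(chargeback(usage))   # {'radiology': 83.2, 'research': 58.4}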

  17. On CLOUD nine

    2009-01-01

    The team from the CLOUD experiment - the world’s first experiment using a high-energy particle accelerator to study the climate - were on cloud nine after the arrival of their new three-metre diameter cloud chamber. This marks the end of three years’ R&D and design, and the start of preparations for data taking later this year.

  18. Cloud Computing Explained

    Metz, Rosalyn

    2010-01-01

    While many talk about the cloud, few actually understand it. Three organizations' definitions come to the forefront when defining the cloud: Gartner, Forrester, and the National Institute of Standards and Technology (NIST). Although both Gartner and Forrester provide definitions of cloud computing, the NIST definition is concise and uses…

  19. Greening the Cloud

    van den Hoed, Robert; Hoekstra, Eric; Procaccianti, G.; Lago, P.; Grosso, Paola; Taal, Arie; Grosskop, Kay; van Bergen, Esther

    The cloud has become an essential part of our daily lives. We use it to store our documents (Dropbox), to stream our music and films (Spotify and Netflix) and without giving it any thought, we use it to work on documents in the cloud (Google Docs). The cloud forms a massive storage and processing

  20. Security in the cloud.

    Degaspari, John

    2011-08-01

    As more provider organizations look to the cloud computing model, they face a host of security-related questions. What are the appropriate applications for the cloud, what is the best cloud model, and what do they need to know to choose the best vendor? Hospital CIOs and security experts weigh in.

  1. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  2. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), MIRA supercomputer at Argonne Leadership Computing Facilities (ALCF), Supercomputer at the National Research Center Kurchatov Institute , IT4 in Ostrava and others). Current approach utilizes modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs multi-core worker nodes. This implementation

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  4. Coordinated Energy Management in Heterogeneous Processors

    Indrani Paul

    2014-01-01

    Full Text Available This paper examines energy management in a heterogeneous processor consisting of an integrated CPU–GPU for high-performance computing (HPC) applications. Energy management for HPC applications is challenged by their uncompromising performance requirements and complicated by the need for coordinating energy management across distinct core types – a new and less understood problem. We examine the intra-node CPU–GPU frequency sensitivity of HPC applications on tightly coupled CPU–GPU architectures as the first step in understanding power and performance optimization for a heterogeneous multi-node HPC system. The insights from this analysis form the basis of a coordinated energy management scheme, called DynaCo, for integrated CPU–GPU architectures. We implement DynaCo on a modern heterogeneous processor and compare its performance to a state-of-the-art power- and performance-management algorithm. DynaCo improves measured average energy-delay squared (ED2) product by up to 30% with less than 2% average performance loss across several exascale and other HPC workloads.
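
    For readers unfamiliar with the metric, the following Python sketch shows how an energy-delay-squared comparison between two power-management policies can be computed; the measurements are purely illustrative and not taken from the paper:

    def ed2(energy_joules, runtime_seconds):
        """Energy-delay-squared product: penalizes slowdowns more heavily than energy savings."""
        return energy_joules * runtime_seconds ** 2

    # Hypothetical measurements for one HPC kernel under two policies.
    baseline = ed2(energy_joules=1200.0, runtime_seconds=10.0)     # reference governor
    coordinated = ed2(energy_joules=1000.0, runtime_seconds=10.1)  # coordinated CPU-GPU policy

    improvement = 1.0 - coordinated / baseline
    print(f"ED2 improvement: {improvement:.1%}")   # ~15% in this made-up example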

  5. Federated Access Control in Heterogeneous Intercloud Environment: Basic Models and Architecture Patterns

    Demchenko, Y.; Ngo, C.; de Laat, C.; Lee, C.

    2014-01-01

    This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environment. The proposed research contributes to the further definition of Intercloud Federation Framework (ICFF)

  6. Job scheduling in a heterogenous grid environment

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-02-11

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
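
    A toy Python sketch of the kind of migration decision described above - scoring candidate sites by machine performance, queue availability, and the time needed to move the job's data; the site fields and numbers are invented for illustration:

    def transfer_time(data_gb, bandwidth_gbps):
        """Seconds needed to move a job's input and output data to a remote site."""
        return (data_gb * 8.0) / bandwidth_gbps

    def estimated_turnaround(job, site):
        """Queue wait + scaled run time + data movement, all in seconds."""
        run_time = job["baseline_runtime_s"] / site["relative_speed"]
        move_time = transfer_time(job["data_gb"], site["bandwidth_gbps"])
        return site["queue_wait_s"] + run_time + move_time

    def pick_site(job, sites):
        """Migrate the job to whichever site minimizes estimated turnaround."""
        return min(sites, key=lambda s: estimated_turnaround(job, s))

    job = {"baseline_runtime_s": 3600, "data_gb": 50}
    sites = [
        {"name": "local",  "relative_speed": 1.0, "queue_wait_s": 7200, "bandwidth_gbps": 100.0},
        {"name": "remote", "relative_speed": 1.5, "queue_wait_s": 600,  "bandwidth_gbps": 2.0},
    ]
    print(pick_site(job, sites)["name"])   # "remote": the shorter queue outweighs the data transfer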

  7. Crew workload-management strategies - A critical factor in system performance

    Hart, Sandra G.

    1989-01-01

    This paper reviews the philosophy and goals of the NASA/USAF Strategic Behavior/Workload Management Program. The philosophical foundation of the program is based on the assumption that an improved understanding of pilot strategies will clarify the complex and inconsistent relationships observed among objective task demands and measures of system performance and pilot workload. The goals are to: (1) develop operationally relevant figures of merit for performance, (2) quantify the effects of strategic behaviors on system performance and pilot workload, (3) identify evaluation criteria for workload measures, and (4) develop methods of improving pilots' abilities to manage workload extremes.

  8. Mission control of multiple unmanned aerial vehicles: a workload analysis.

    Dixon, Stephen R; Wickens, Christopher D; Chang, Dervon

    2005-01-01

    With unmanned aerial vehicles (UAVs), 36 licensed pilots flew both single-UAV and dual-UAV simulated military missions. Pilots were required to navigate each UAV through a series of mission legs in one of the following three conditions: a baseline condition, an auditory autoalert condition, and an autopilot condition. Pilots were responsible for (a) mission completion, (b) target search, and (c) systems monitoring. Results revealed that both the autoalert and the autopilot automation improved overall performance by reducing task interference and alleviating workload. The autoalert system benefited performance both in the automated task and mission completion task, whereas the autopilot system benefited performance in the automated task, the mission completion task, and the target search task. Practical implications for the study include the suggestion that reliable automation can help alleviate task interference and reduce workload, thereby allowing pilots to better handle concurrent tasks during single- and multiple-UAV flight control.

  9. Single Pilot Workload Management During Cruise in Entry Level Jets

    Burian, Barbara K.; Pruchnicki, Shawn; Christopher, Bonny; Silverman, Evan; Hackworth, Carla; Rogers, Jason; Williams, Kevin; Drechsler, Gena; Runnels, Barry; Mead, Andy

    2013-01-01

    Advanced technologies and automation are important facilitators of single pilot operations, but they also contribute to the workload management challenges faced by the pilot. We examined task completion, workload management, and automation use in an entry level jet (ELJ) flown by single pilots. Thirteen certificated Cessna Citation Mustang (C510-S) pilots flew an instrument flight rules (IFR) experimental flight in a Cessna Citation Mustang simulator. At one point participants had to descend to meet a crossing restriction prior to a waypoint and prepare for an instrument approach into an un-towered field while facilitating communication from a lost pilot who was flying too low for ATC to hear. Four participants experienced some sort of difficulty with regard to meeting the crossing restriction and almost half (n=6) had problems associated with the instrument approach. Additional errors were also observed including eight participants landing at the airport with an incorrect altimeter setting.

  10. Measurement of nurses' workload in an oncology outpatient clinic

    Célia Alves de Souza

    2014-02-01

    Full Text Available The growing demand and the degree of patient care in oncological outpatient services, as well as the complexity of treatment have had an impact on the workload of nurses. This study aimed at measuring the workload and productivity of nurses in an oncological outpatient service. An observational study using a work sampling technique was conducted and included seven nurses working in an oncological outpatient service in the south-eastern region of Brazil. A total of 1,487 intervention or activity samples were obtained. Nurses used 43.2% of their time on indirect care, 33.2% on direct care, 11.6% on associated activities, and 12% on personal activities. Their mean productivity was 88.0%. The findings showed that nurses in this service spend most of their time in indirect care activities. Moreover, the productivity index in this study was above that recommended in the literature.
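
    The work-sampling technique referred to above amounts to counting randomly timed observations per activity category; the short Python sketch below derives time shares and a productivity figure from hypothetical tallies chosen to be consistent with the percentages reported in the abstract:

    # Hypothetical observation tallies, consistent with the reported shares (n = 1,487).
    samples = {"indirect care": 642, "direct care": 494, "associated activities": 173, "personal": 178}

    total = sum(samples.values())
    shares = {activity: count / total for activity, count in samples.items()}

    # Productivity taken here as the share of observed time spent on work-related activities.
    productivity = 1.0 - shares["personal"]

    for activity, share in shares.items():
        print(f"{activity}: {share:.1%}")
    print(f"productivity: {productivity:.1%}")   # ~88%, matching the figure reported above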

  11. Reducing Concurrency Bottlenecks in Parallel I/O Workloads

    Manzanares, Adam C. [Los Alamos National Laboratory; Bent, John M. [Los Alamos National Laboratory; Wingate, Meghan [Los Alamos National Laboratory

    2011-01-01

    To enable high performance parallel checkpointing we introduced the Parallel Log Structured File System (PLFS). PLFS is middleware interposed on the file system stack to transform concurrent writing of one application file into many non-concurrently written component files. The promising effectiveness of PLFS makes it important to examine its performance for workloads other than checkpoint capture, notably the different ways that state snapshots may be later read, to make the case for using PLFS in the Exascale I/O stack. Reading a PLFS file involved reading each of its component files. In this paper we identify performance limitations on broader workloads in an early version of PLFS, specifically the need to build and distribute an index for the overall file, and the pressure on the underlying parallel file system's metadata server, and show how PLFS's decomposed components architecture can be exploited to alleviate bottlenecks in the underlying parallel file system.
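
    To make the decomposed-write idea concrete, here is a hedged Python sketch (not PLFS code) in which each writing process appends to its own component file and records an index entry mapping logical offsets back to that component; a later read must merge every per-process index, which is exactly the kind of index-building cost the paper identifies:

    import json, os

    def write_chunk(logdir, rank, logical_offset, data):
        """Each process appends to its own component file and logs an index record."""
        component = os.path.join(logdir, f"data.{rank}")
        with open(component, "ab") as f:
            physical_offset = f.tell()
            f.write(data)
        with open(os.path.join(logdir, f"index.{rank}"), "a") as idx:
            idx.write(json.dumps({"logical": logical_offset, "physical": physical_offset,
                                  "length": len(data), "component": component}) + "\n")

    def read_range(logdir, logical_offset, length):
        """Reading requires merging every per-process index to locate the bytes."""
        records = []
        for name in os.listdir(logdir):
            if name.startswith("index."):
                with open(os.path.join(logdir, name)) as idx:
                    records.extend(json.loads(line) for line in idx)
        out = bytearray(length)
        for r in records:
            lo = max(logical_offset, r["logical"])
            hi = min(logical_offset + length, r["logical"] + r["length"])
            if lo < hi:
                with open(r["component"], "rb") as f:
                    f.seek(r["physical"] + (lo - r["logical"]))
                    out[lo - logical_offset:hi - logical_offset] = f.read(hi - lo)
        return bytes(out)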

  12. Medical Resident Workload at a Multidisciplinary Hospital in Iran

    Anahita Sadeghi

    2014-12-01

    Full Text Available Introduction: Medical resident workload has been shown to be associated with learning efficiency and patient satisfaction. However, there is limited evidence about it in developing countries. This study aimed to evaluate the medical resident workload in a multidisciplinary teaching hospital in Tehran, Iran. Methods: All medical residents at Shariati Hospital, a teaching hospital affiliated with Tehran University of Medical Science, who were working between November and December 2011 were enrolled in this cross-sectional study. A self-reported questionnaire was used to gather information about their duty hours (including daily activities and shifts) and financial issues. Results: 135 (52.5%) out of 257 residents responded to the questionnaire. 72 (53.3%) residents were in surgical departments and 63 (46.7%) were in non-surgical departments. Mean duty hours per month were significantly higher in surgical (350.8±76.7) than non-surgical (300.6±74.2) departments (p=0.001). Three cardiology (a non-surgical group) residents (5.7%) and 30 residents (41%) in surgical groups (p<0.001) declared a number of “on-calls in the hospital” more than the approved number in the curriculum. The majority of residents (97.8%) declared that their salary was not sufficient to manage their lives and they needed other financial resources. Conclusion: Medical residents at teaching hospitals in Iran suffer from high workloads and low income. There is a need to reduce medical resident workload and increase salary to improve work-life balance and finances.

  13. Investigating Facial Electromyography as an Indicator of Cognitive Workload

    2017-02-22

    operator’s ability to perform at the level required to prevent hazardous consequences (Young & Stanton, 2002). Cognitive overload and underload can both...the operator’s performance to lessen performance abatement induced by cognitive overload or underload (Wilson & Russell, 2007; Hoepf, Middendorf...

  14. Modeling Workload Impact in Multiple Unmanned Vehicle Supervisory Control

    2010-01-01

    task (e.g., replanning the path of a UV because of an emergent target). Compared to more common measures of workload (e.g., pupil dilation, NASA TLX ...utilization (p=.005). [Figure: wait times due to attention inefficiencies (sec) versus utilization (%).]

  15. The study of postural workload in assembly of furniture upholstery

    Marek Lasota Andrzej; Hankiewicz Krzysztof

    2017-01-01

    Worker productivity is affected by Work-related Musculoskeletal Disorders (WRMSDs), which are a common cause of health problems and sick leave and can result in decreased quality of work and increased absenteeism. The objective of this study is to evaluate and investigate the postural workload of sewing machine operators in the assembly of upholstery in a furniture factory by using the Ovako Working Posture Analysing System (OWAS) with sampling. The results indicated that posture code ...

  16. Workload of Attending Physicians at an Academic Center in Taiwan

    Hsueh-Fen Chen

    2010-08-01

    Conclusion: This study found that work hours among departments differed significantly and that physicians in surgical departments spend the longest hours in clinical work. Those in administrative positions are most involved in clinical work. However, work hours do not definitely represent work intensity, and to define the workload by working hours may be inappropriate for some departments. This possible difference between work hours and work intensity merits further consideration.

  17. CLOUD STORAGE SERVICES

    Yan, Cheng

    2017-01-01

    Cloud computing is a hot topic in recent research and applications because it is widely used in various fields. Up to now, Google, Microsoft, IBM, Amazon and other well-known companies have proposed their own cloud computing applications and regard cloud computing as one of their most important strategies for the future. Cloud storage is the lower layer of a cloud computing system, supporting the services of the layers above it. At the same time, it is an effective way to store and manage heavy...

  18. Cloud Computing Quality

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing was, and will remain, a new way of providing Internet services and computing. This computing approach is based on many existing services, such as the Internet, grid computing and Web services. As a system, cloud computing aims to provide on-demand services that are more acceptable in terms of price and infrastructure. It is the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics it offers. It is a theoretical paper. Keywords: cloud computing, QoS, quality of cloud computing

  19. Benchmarking Cloud Storage Systems

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  20. The Magellanic clouds

    1989-01-01

    As the two galaxies nearest to our own, the Magellanic Clouds hold a special place in studies of the extragalactic distance scale, of stellar evolution and the structure of galaxies. In recent years, results from the South African Astronomical Observatory (SAAO) and elsewhere have shown that it is possible to begin understanding the three dimensional structure of the Clouds. Studies of Magellanic Cloud Cepheids have continued, both to investigate the three-dimensional structure of the Clouds and to learn more about Cepheids and their use as extragalactic distance indicators. Other research undertaken at SAAO includes studies on Nova LMC 1988 no 2 and red variables in the Magellanic Clouds

  1. Cloud Computing Bible

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  2. Eleven quick tips for architecting biomedical informatics workflows with cloud computing

    Moore, Jason H.

    2018-01-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction. PMID:29596416

  3. Eleven quick tips for architecting biomedical informatics workflows with cloud computing.

    Brian S Cole

    2018-03-01

    Full Text Available Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.

  4. Assessment of mental workload and academic motivation in medical students.

    Atalay, Kumru Didem; Can, Gulin Feryal; Erdem, Saban Remzi; Muderrisoglu, Ibrahim Haldun

    2016-05-01

    To investigate the level of correlation and direction of linearity between academic motivation and subjective workload. The study was conducted at Baskent University School of Medicine, Ankara, Turkey, from December 2013 to February 2014, and comprised Phase 5 and Phase 6 medical students. Subjective workload level was determined by using the National Aeronautics and Space Administration Task Load Index scale that was adapted to Turkish. Academic motivation values were obtained with the help of the Academic Motivation Scale university form. SPSS 17 was used for statistical analysis. Of the total 105 subjects, 65 (62%) students were in Phase 5 and 40 (38%) were in Phase 6. Of the Phase 5 students, 18 (27.7%) were boys and 47 (72.3%) were girls, while of the Phase 6 students, 16 (40%) were boys and 24 (60%) were girls. There were significant differences in Phase 5 and Phase 6 students for mental effort (p=0.00) and physical effort (p=0.00). The highest correlation in Phase 5 was between mental effort and intrinsic motivation (r=0.343). For Phase 6, the highest correlation was between effort and amotivation (r=-0.375). Subjective workload affected academic motivation in medical students.

  5. Modelling of cirrus clouds – Part 2: Competition of different nucleation mechanisms

    P. Spichtinger

    2009-04-01

    Full Text Available We study the competition of two different freezing mechanisms (homogeneous and heterogeneous freezing) in the same environment for cold cirrus clouds. To this goal we use the recently developed and validated ice microphysics scheme (Spichtinger and Gierens, 2009a) which distinguishes between ice classes according to their formation process. We investigate cases with purely homogeneous ice formation and compare them with cases where background ice nuclei in varying concentration heterogeneously form ice prior to homogeneous nucleation. We perform additionally a couple of sensitivity studies regarding threshold humidity for heterogeneous freezing, uplift speed, and ambient temperature, and we study the influence of random motions induced by temperature fluctuations in the clouds. We find three types of cloud evolution, homogeneously dominated, heterogeneously dominated, and a mixed type where neither nucleation process dominates. The latter case is prone to long-lasting in-cloud ice supersaturation of the order 30% and more.

  6. CLOUD COMPUTING SECURITY

    Ştefan IOVAN

    2016-05-01

    Full Text Available Cloud computing represents software applications offered as a service online, as well as the software and hardware components in the data center. When services are offered broadly to any type of client, we are dealing with a public cloud. In the other case, in which a cloud is exclusively available to one organization and is not open to the public, it is considered a private cloud [1]. There is also a third type, called hybrid, in which a user or an organization may use services available in both the public and the private cloud. One of the main challenges of cloud computing is to build trust and offer information privacy in every aspect of the services offered by cloud computing. The variety of existing standards, like the lack of clarity in sustainability certification, is not much help in building trust. Question marks also appear regarding the efficiency of traditional security means applied in the cloud domain. Besides the economic and technological advantages offered by the cloud, there are also some advantages in the security area when information is migrated to the cloud. Shared resources available in the cloud include monitoring, the use of "best practices" and technology for an advanced security level, above all in the solutions adopted by the majority of small and medium businesses, big companies and even some governmental organizations [2].

  7. Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds

    Megino, Fernando H Barreiro; Jones, Robert; Llamas, Ramón Medrano; Ster, Daniel van der; Kucharczyk, Katarzyna

    2014-01-01

    The recent paradigm shift toward cloud computing in IT, and general interest in 'Big Data' in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R and D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula – the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40.000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, is providing a great insight into the cloud computing industry and highlighted several challenges that are being tackled in order to ease the export of the scientific workloads to the cloud environments.

  8. Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds

    Barreiro Megino, Fernando H.; Jones, Robert; Kucharczyk, Katarzyna; Medrano Llamas, Ramón; van der Ster, Daniel

    2014-06-01

    The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula - the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40.000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, is providing a great insight into the cloud computing industry and highlighted several challenges that are being tackled in order to ease the export of the scientific workloads to the cloud environments.

  9. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    M. Schnaiter

    2016-04-01

    Full Text Available This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the −40 to −60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds and the degree of this complexity is dependent on the available water vapor during the crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4 / H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of Laboratoire de Métérologie et Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  10. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated. PMID:24737962
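
    A minimal sketch of the proactive, temperature-aware placement idea described in this record, assuming a naive linear temperature predictor and invented server data; it is an illustration, not the authors' implementation:

        # Hypothetical sketch: place a VM on the Server Machine (SM) whose predicted
        # temperature stays farthest below its maximum threshold temperature.

        def predict_temperature(current_temp, extra_load, heating_coeff=0.5):
            # Assumed linear predictor: temperature rises with the added load.
            return current_temp + heating_coeff * extra_load

        def schedule_vm(vm_load, servers):
            """servers: list of dicts with 'name', 'current_temp' and 'max_temp'."""
            candidates = []
            for sm in servers:
                predicted = predict_temperature(sm["current_temp"], vm_load)
                if predicted < sm["max_temp"]:          # the threshold is never reached
                    candidates.append((sm["max_temp"] - predicted, sm["name"]))
            if not candidates:
                return None                             # defer placement: all SMs too hot
            return max(candidates)[1]                   # largest thermal headroom wins

        servers = [
            {"name": "sm1", "current_temp": 55.0, "max_temp": 70.0},
            {"name": "sm2", "current_temp": 62.0, "max_temp": 70.0},
        ]
        print(schedule_vm(vm_load=10.0, servers=servers))   # -> 'sm1'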

  11. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Supriya Kinger

    2014-01-01

    Full Text Available Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.

  12. Prediction based proactive thermal virtual machine scheduling in green clouds.

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features, but it still has some shortcomings, such as relatively high operating costs and environmental hazards like increasing carbon footprints. These hazards can be reduced to some extent by efficient scheduling of Cloud resources. The working temperature at which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers the current and maximum threshold temperatures of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that the maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which do not consider the current temperature of nodes before making scheduling decisions. Thus, a reduction in the need for cooling systems in a Cloud environment has been obtained and validated.

  13. Mobile Cloud Computing for Telemedicine Solutions

    Mihaela GHEORGHE

    2014-01-01

    Full Text Available Mobile Cloud Computing is a significant technology which combines emerging domains such as mobile computing and cloud computing, and which has led to one of the most challenging and innovative trends in the IT industry. It is still at an early stage of development, but its main characteristics, advantages and range of services, provided by an internet-based cluster system, have a strong impact on the development of telemedicine solutions for overcoming the wide challenges the medical system is confronted with. Mobile Cloud integrates cloud computing into the mobile environment and has the advantage of overcoming obstacles related to performance (e.g. battery life, storage, and bandwidth), environment (e.g. heterogeneity, scalability, availability) and security (e.g. reliability and privacy) which are commonly present at the mobile computing level. In this paper, I will present a comprehensive overview of mobile cloud computing including definitions, services and the use of this technology for developing telemedicine applications.

  14. SMART POINT CLOUD: DEFINITION AND REMAINING CHALLENGES

    F. Poux

    2016-10-01

    Full Text Available Dealing with coloured point clouds acquired from a terrestrial laser scanner, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises with the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data, associated with the heterogeneity and temporality of such datasets, is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is presented. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain-based reflection. This contribution serves as the first step for the realisation of a comprehensive smart point cloud data structure.

  15. Resource Sharing in Heterogeneous and Cloud Radio Access Networks

    Zakrzewska, Anna; Iversen, Villy Bæk

    2012-01-01

    be improved. However, we identify the benefit of individual operators and show that it’s not equal but highly depends on the initial network dimensioning. Furthermore, we demonstrate that under specific conditions the blocking probability in an area is lower than for fully accessible system and therefore...

  16. Temperature Dependence in Homogeneous and Heterogeneous Nucleation

    McGraw R. L.; Winkler, P. M.; Wagner, P. E.

    2017-08-01

    Heterogeneous nucleation on stable (sub-2 nm) nuclei aids the formation of atmospheric cloud condensation nuclei (CCN) by circumventing or reducing vapor pressure barriers that would otherwise limit condensation and new particle growth. Aerosol and cloud formation depend largely on the interaction between a condensing liquid and the nucleating site. A new paper published this year reports the first direct experimental determination of contact angles as well as contact line curvature and other geometric properties of a spherical cap nucleus at nanometer scale using measurements from the Vienna Size Analyzing Nucleus Counter (SANC) (Winkler et al., 2016). For water nucleating heterogeneously on silver oxide nanoparticles we find contact angles around 15 degrees compared to around 90 degrees for the macroscopically measured equilibrium angle for water on bulk silver. The small microscopic contact angles can be attributed via the generalized Young equation to a negative line tension that becomes increasingly dominant with increasing curvature of the contact line. These results enable a consistent theoretical description of heterogeneous nucleation and provide firm insight to the wetting of nanosized objects.

  17. Searchable Encryption in Cloud Storage

    Ren-Junn Hwang; Chung-Chien Lu; Jain-Shing Wu

    2014-01-01

    Cloud outsource storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, it is difficult for the cloud server to retrieve exactly the target files from among the encrypted files. This study proposes a protocol for performing multikeyword searches for encrypted cloud data by applying ...
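
    As a rough illustration of the multikeyword-search idea (not the protocol proposed in this study), the Python sketch below builds an index of keyed keyword hashes so the server can match query trapdoors without seeing plaintext keywords; key handling and the encryption of the files themselves are omitted, and all names are assumptions:

        import hmac, hashlib

        def trapdoor(key, keyword):
            # Deterministic keyed hash of a keyword; the server only ever sees these values.
            return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

        def build_index(key, files):
            """files: {file_id: [keywords]} -> {file_id: set of keyword trapdoors}."""
            return {fid: {trapdoor(key, kw) for kw in kws} for fid, kws in files.items()}

        def search(index, trapdoors):
            # Conjunctive multikeyword search: files containing all queried keywords.
            return [fid for fid, kws in index.items() if set(trapdoors) <= kws]

        key = b"user-held-secret-key"
        index = build_index(key, {"f1": ["cloud", "storage"], "f2": ["cloud", "privacy"]})
        print(search(index, [trapdoor(key, "cloud"), trapdoor(key, "storage")]))   # ['f1']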

  18. TINJAUAN KEAMANAN SISTEM PADA TEKNOLOGI CLOUD COMPUTING

    Yuli Fauziah

    2014-01-01

    Full Text Available From an information technology perspective, cloud computing can be understood as a technology that uses the internet as a resource for computation which can be requested by users, delivered as a service whose central servers are virtual or located in the cloud (the internet) itself. Many companies want to move their applications and storage into cloud computing. This technology has become a trend among researchers and IT practitioners seeking to uncover the potential it can offer to the wider public. However, many security issues still arise, because the technology is still new. One of these security issues is theft of information, i.e. the theft of data stored in the storage of applications that use cloud computing technology. The losses incurred by users of this technology can be very large, because the stolen information may involve confidential company data as well as other important data. Measures to prevent such data theft include avoiding security threats in the form of data loss or leakage and account or service hijacking; in addition, identity management and access control are primary requirements for an enterprise's SaaS cloud computing. One method used for the authentication and authorization aspects of data security in cloud computing applications or services is single sign-on technology. Single sign-on (SSO) is a technology that allows network users to access resources in the network using only a single user account. This technology is in high demand, especially in very large, heterogeneous networks, and also in cloud computing networks. With SSO, a user only needs to perform the authentication process once to obtain access to all services available in the network. Keywords: Storage, Application, Software as a
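
    As a rough illustration of the single sign-on principle described in this record (authenticate once, then access every service in the network), the Python sketch below issues one signed token that any service holding the shared key can verify locally; the key, token layout and expiry handling are illustrative assumptions, not part of the reviewed work:

        import base64, hashlib, hmac, json, time

        SSO_KEY = b"network-wide-secret"     # assumed key shared by the identity provider and services

        def issue_token(user, ttl=3600):
            # The identity provider signs a short-lived token after a single authentication.
            payload = json.dumps({"user": user, "exp": time.time() + ttl}).encode()
            sig = hmac.new(SSO_KEY, payload, hashlib.sha256).hexdigest().encode()
            return base64.b64encode(payload + b"." + sig)

        def verify_token(token):
            # Any participating service can verify the token without re-authenticating the user.
            payload, sig = base64.b64decode(token).rsplit(b".", 1)
            expected = hmac.new(SSO_KEY, payload, hashlib.sha256).hexdigest().encode()
            return hmac.compare_digest(sig, expected) and json.loads(payload)["exp"] > time.time()

        token = issue_token("alice")
        print(verify_token(token))   # True: one login grants access across services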

  19. Enterprise Cloud Adoption - Cloud Maturity Assessment Model

    Conway, Gerry; Doherty, Eileen; Carcary, Marian; Crowley, Catherine

    2017-01-01

    The introduction and use of cloud computing by an organization has the promise of significant benefits that include reduced costs, improved services, and a pay-per-use model. Organizations that successfully harness these benefits will potentially have a distinct competitive edge, due to their increased agility and flexibility to rapidly respond to an ever changing and complex business environment. However, as cloud technology is a relatively new ph...

  20. Sedimentation Efficiency of Condensation Clouds in Substellar Atmospheres

    Gao, Peter; Marley, Mark S.; Ackerman, Andrew S.

    2018-03-01

    Condensation clouds in substellar atmospheres have been widely inferred from spectra and photometric variability. Up until now, their horizontally averaged vertical distribution and mean particle size have been largely characterized using models, one of which is the eddy diffusion–sedimentation model from Ackerman and Marley that relies on a sedimentation efficiency parameter, f_sed, to determine the vertical extent of clouds in the atmosphere. However, the physical processes controlling the vertical structure of clouds in substellar atmospheres are not well understood. In this work, we derive trends in f_sed across a large range of eddy diffusivities (K_zz), gravities, material properties, and cloud formation pathways by fitting cloud distributions calculated by a more detailed cloud microphysics model. We find that f_sed is dependent on K_zz, but not gravity, when K_zz is held constant. f_sed is most sensitive to the nucleation rate of cloud particles, as determined by material properties like surface energy and molecular weight. High surface energy materials form fewer, larger cloud particles, leading to large f_sed (>1), and vice versa for materials with low surface energy. For cloud formation via heterogeneous nucleation, f_sed is sensitive to the condensation nuclei flux and radius, connecting cloud formation in substellar atmospheres to the objects’ formation environments and other atmospheric aerosols. These insights could lead to improved cloud models that help us better understand substellar atmospheres. For example, we demonstrate that f_sed could increase with increasing cloud base depth in an atmosphere, shedding light on the nature of the brown dwarf L/T transition.

  1. A Dynamic Resource Scheduling Method Based on Fuzzy Control Theory in Cloud Environment

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    The resources in cloud environment have features such as large-scale, diversity, and heterogeneity. Moreover, the user requirements for cloud computing resources are commonly characterized by uncertainty and imprecision. Hereby, to improve the quality of cloud computing service, not merely should the traditional standards such as cost and bandwidth be satisfied, but also particular emphasis should be laid on some extended standards such as system friendliness. This paper proposes a dynamic re...

  2. Star clouds of Magellan

    Tucker, W.

    1981-01-01

    The Magellanic Clouds are two irregular galaxies belonging to the Local Group, to which the Milky Way also belongs. By studying the Clouds, astronomers hope to gain insight into the origin and composition of the Milky Way. The overall structure and dynamics of the Clouds are clearest when studied in the radio region of the spectrum. One benefit of directly observing stellar luminosities in the Clouds has been the discovery of the period-luminosity relation. Also, the Clouds are a splendid laboratory for studying stellar evolution. It is believed that both Clouds may be in a very early stage of the development of a regular, symmetric galaxy. This raises a paradox, because some of the stars in the star clusters of the Clouds are as old as the oldest stars in our galaxy. An explanation for this is given. The low velocity of the Clouds with respect to the center of the Milky Way shows they must be bound to it by gravity. Theories are given on how the Magellanic Clouds became associated with the Galaxy. According to current ideas, the Clouds' orbits will decay and they will spiral into the Galaxy.

  3. Cloud Computing Governance Lifecycle

    Soňa Karkošková

    2016-06-01

    Full Text Available Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges such as the need for business process redefinition, the establishment of specialized governance and management, organizational structures and relationships with external providers, and managing new types of risk arising from dependency on external providers. There is a general consensus that cloud computing, in addition to challenges, brings many benefits, but it is unclear how to achieve them. Cloud computing governance helps to create business value by obtaining benefits from the use of cloud computing services while optimizing investment and risk. The challenge organizations face in governing cloud services is how to design and implement cloud computing governance so as to gain the expected benefits. This paper aims to provide guidance on the implementation activities of the proposed Cloud computing governance lifecycle from the cloud consumer perspective. The proposed model is based on the SOA Governance Framework and consists of a lifecycle for implementation and continuous improvement of the cloud computing governance model.

  4. THE CALIFORNIA MOLECULAR CLOUD

    Lada, Charles J.; Lombardi, Marco; Alves, Joao F.

    2009-01-01

    We present an analysis of wide-field infrared extinction maps of a region in Perseus just north of the Taurus-Auriga dark cloud complex. From this analysis we have identified a massive, nearby, but previously unrecognized, giant molecular cloud (GMC). Both a uniform foreground star density and measurements of the cloud's velocity field from CO observations indicate that this cloud is likely a coherent structure at a single distance. From comparison of foreground star counts with Galactic models, we derive a distance of 450 ± 23 pc to the cloud. At this distance the cloud extends over roughly 80 pc and has a mass of ∼10^5 M_sun, rivaling the Orion (A) molecular cloud as the largest and most massive GMC in the solar neighborhood. Although surprisingly similar in mass and size to the more famous Orion molecular cloud (OMC) the newly recognized cloud displays significantly less star formation activity with more than an order of magnitude fewer young stellar objects than found in the OMC, suggesting that both the level of star formation and perhaps the star formation rate in this cloud are an order of magnitude or more lower than in the OMC. Analysis of extinction maps of both clouds shows that the new cloud contains only 10% the amount of high extinction (A_K > 1.0 mag) material as is found in the OMC. This, in turn, suggests that the level of star formation activity and perhaps the star formation rate in these two clouds may be directly proportional to the total amount of high extinction material and presumably high density gas within them and that there might be a density threshold for star formation on the order of n(H2) ∼ a few × 10^4 cm^-3.

  5. Zero Trust Cloud Networks using Transport Access Control and High Availability Optical Bypass Switching

    Casimer DeCusatis

    2017-04-01

    Full Text Available Cyberinfrastructure is undergoing a radical transformation as traditional enterprise and cloud computing environments hosting dynamic, mobile workloads replace telecommunication data centers. Traditional data center security best practices involving network segmentation are not well suited to these new environments. We discuss a novel network architecture, which enables an explicit zero trust approach, based on a steganographic overlay, which embeds authentication tokens in the TCP packet request, and first-packet authentication. Experimental demonstration of this approach is provided in both an enterprise-class server and cloud computing data center environment.
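
    To make the idea of authenticating the very first packet concrete, here is a hedged Python sketch that derives a per-connection HMAC token over the connection 4-tuple and a coarse time window; the field layout, key provisioning and token length are assumptions, and the Transport Access Control scheme referenced above may differ in detail:

        import hmac, hashlib, time

        KEY = b"session-key-provisioned-out-of-band"   # assumed pre-shared key

        def first_packet_token(src_ip, src_port, dst_ip, dst_port, window=30):
            # Bind the token to the connection 4-tuple and a coarse time slot.
            slot = int(time.time() // window)
            msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}|{slot}".encode()
            return hmac.new(KEY, msg, hashlib.sha256).digest()[:16]

        def accept_first_packet(token, src_ip, src_port, dst_ip, dst_port, window=30):
            # Zero trust: drop the connection unless its first packet authenticates.
            expected = first_packet_token(src_ip, src_port, dst_ip, dst_port, window)
            return hmac.compare_digest(token, expected)

        tok = first_packet_token("10.0.0.5", 44321, "10.0.1.9", 443)
        print(accept_first_packet(tok, "10.0.0.5", 44321, "10.0.1.9", 443))   # True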

  6. [Effects of mental workload on work ability in primary and secondary school teachers].

    Xiao, Yuanmei; Li, Weijuan; Ren, Qingfeng; Ren, Xiaohui; Wang, Zhiming; Wang, Mianzhen; Lan, Yajia

    2015-02-01

    To investigate the change pattern of primary and secondary school teachers' work ability with the changes in their mental workload. A total of 901 primary and secondary school teachers were selected by random cluster sampling, and then their mental workload and work ability were assessed by the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) and Work Ability Index (WAI) questionnaires, whose reliability and validity had been tested. The effects of their mental workload on the work ability were analyzed. Primary and secondary school teachers' work ability reached the highest level at a certain level of mental workload (55.73∼64.10). Below this level, their work ability had a positive correlation with the mental workload: their work ability increased or remained stable with increasing mental workload, and the percentage of teachers with good work ability increased while that of teachers with moderate work ability decreased. But when their mental workload was higher than this level, their work ability had a negative correlation with the mental workload: their work ability decreased significantly with increasing mental workload, and the percentage of teachers with good work ability decreased while that of teachers with moderate work ability increased. Mental workload thus has a marked effect on work ability. Moderate mental workload (55.73∼64.10) will benefit the maintenance and stabilization of their work ability.

  7. [Distribution and main influential factors of mental workload of middle school teachers in Nanchang City].

    Xiao, Yuanmei; Li, Weijuan; Ren, Qingfeng; Ren, Xiaohui; Wang, Zhiming; Wang, Mianzhen; Lan, Yajia

    2015-01-01

    To investigate the distribution and main influential factors of mental workload of middle school teachers in Nanchang City. A total of 504 middle school teachers were sampled by random cluster sampling from middle schools in Nanchang City, and the mental workload level was assessed with the National Aeronautics and Space Administration-Task Load Index (NASA-TLX), which was verified in reliability and validity. The mental workload scores of middle school teachers in Nanchang were approximately normally distributed. The mental workload level of middle school teachers aged 31-35 years old was the highest. For those no more than 35 years old, there was a positive correlation between mental workload and age (r = 0.146). Teachers with a lower educational level seemed to have a higher mental workload. The longer a teacher worked per day, the higher the mental workload was. Working hours per day was the most influential factor on mental workload among all influential factors. The mental workload of middle school teachers was closely related to age, educational level and working hours per day. Working hours per day was an important risk factor for mental workload. Reducing working hours per day, especially to no more than 8 hours per day, may be a significant and useful approach to alleviating the mental workload of middle school teachers in Nanchang City.

  8. Mobile cloud computing for computation offloading: Issues and challenges

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered as limited computing devices. Today, users become more demanding and expect to execute computational intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs) such as limited battery lifetime, limited processing capabilities, and limited storage capacity by offloading the execution and workload to other rich systems with better performance and resources. This paper presents the current offloading frameworks, computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that requires further research.
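
    The offloading trade-off described above is often reduced to comparing the cost of local execution with the cost of shipping the task to the cloud; the following Python fragment sketches that comparison under assumed parameter names and is not taken from any particular framework surveyed in the paper:

        def should_offload(cycles, data_bytes, local_speed, cloud_speed, bandwidth,
                           local_power, tx_power):
            """Return True if offloading saves both energy and time on the device.

            cycles      : CPU cycles required by the task
            data_bytes  : data that must cross the network
            local_speed : device CPU speed (cycles/s); cloud_speed: cloud CPU speed
            bandwidth   : uplink bandwidth (bytes/s)
            local_power : device power while computing (W); tx_power: power while sending (W)
            """
            energy_local = local_power * (cycles / local_speed)
            energy_offload = tx_power * (data_bytes / bandwidth)   # device mostly pays for the radio
            time_local = cycles / local_speed
            time_offload = data_bytes / bandwidth + cycles / cloud_speed
            return energy_offload < energy_local and time_offload <= time_local

        print(should_offload(cycles=5e9, data_bytes=2e6, local_speed=1e9, cloud_speed=8e9,
                             bandwidth=1e6, local_power=2.0, tx_power=1.2))   # True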

  9. Expansion of magnetic clouds

    Suess, S.T.

    1987-01-01

    Magnetic clouds are a carefully defined subclass of all interplanetary signatures of coronal mass ejections whose geometry is thought to be that of a cylinder embedded in a plane. It has been found that the total magnetic pressure inside the clouds is higher than the ion pressure outside, and that the clouds are expanding at 1 AU at about half the local Alfven speed. The geometry of the clouds is such that even though the magnetic pressure inside is larger than the total pressure outside, expansion will not occur because the pressure is balanced by magnetic tension - the pinch effect. The evidence for expansion of clouds at 1 AU is nevertheless quite strong so another reason for its existence must be found. It is demonstrated that the observations can be reproduced by taking into account the effects of geometrical distortion of the low plasma beta clouds as they move away from the Sun

  10. Encyclopedia of cloud computing

    Bojanova, Irena

    2016-01-01

    The Encyclopedia of Cloud Computing provides IT professionals, educators, researchers and students with a compendium of cloud computing knowledge. Authored by a spectrum of subject matter experts in industry and academia, this unique publication, in a single volume, covers a wide range of cloud computing topics, including technological trends and developments, research opportunities, best practices, standards, and cloud adoption. Providing multiple perspectives, it also addresses questions that stakeholders might have in the context of development, operation, management, and use of clouds. Furthermore, it examines cloud computing's impact now and in the future. The encyclopedia presents 56 chapters logically organized into 10 sections. Each chapter covers a major topic/area with cross-references to other chapters and contains tables, illustrations, side-bars as appropriate. Furthermore, each chapter presents its summary at the beginning and backend material, references and additional resources for further i...

  11. Integrating Containers in the CERN Private Cloud

    Noel, Bertrand; Michelino, Davide; Velten, Mathieu; Rocha, Ricardo; Trigazis, Spyridon

    2017-10-01

    Containers remain a hot topic in computing, with new use cases and tools appearing every day. Basic functionality such as spawning containers seems to have settled, but topics like volume support or networking are still evolving. Solutions like Docker Swarm, Kubernetes or Mesos provide similar functionality but target different use cases, exposing distinct interfaces and APIs. The CERN private cloud is made of thousands of nodes and users, with many different use cases. A single solution for container deployment would not cover every one of them, and supporting multiple solutions involves repeating the same process multiple times for integration with authentication services, storage services or networking. In this paper we describe OpenStack Magnum as the solution to offer container management in the CERN cloud. We will cover its main functionality and some advanced use cases using Docker Swarm and Kubernetes, highlighting some relevant differences between the two. We will describe the most common use cases in HEP and how we integrated popular services like CVMFS or AFS in the most transparent way possible, along with some limitations found. Finally we will look into ongoing work on advanced scheduling for both Swarm and Kubernetes, support for running batch like workloads and integration of container networking technologies with the CERN infrastructure.

  12. Leveraging Renewable Energies in Distributed Private Clouds

    Pape Christian

    2016-01-01

    Full Text Available The vast and unstoppable rise of virtualization technologies and the related hardware abstraction in the last years established the foundation for new cloud-based infrastructures and new scalable and elastic services. This new paradigm has already found its way in modern data centers and their infrastructures. A positive side effect of these technologies is the transparency of the execution of workloads in a location-independent and hardware-independent manner. For instance, due to higher utilization of underlying hardware thanks to the consolidation of virtual resources or by moving virtual resources to sites with lower energy prices or more available renewable energy resources, data centers can counteract their economic and ecological downsides resulting from their steadily increasing energy demand. This paper introduces a vector-based algorithm for the placement of virtual machines in distributed private cloud environments. After outlining the basic operation of our approach, we provide a formal definition as well as an outlook for further research.
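
    A hedged Python sketch of a vector-based placement decision in the spirit of this record: a virtual machine's demand vector is checked against each site's residual capacity, and ties are broken toward sites with more available renewable energy. The scoring rule and field names are assumptions, not the paper's algorithm:

        def fits(demand, residual):
            return all(d <= r for d, r in zip(demand, residual))

        def place_vm(demand, sites):
            """sites: dicts with 'name', 'residual' (cpu, ram, disk) and 'green_share'."""
            feasible = [s for s in sites if fits(demand, s["residual"])]
            if not feasible:
                return None
            # Prefer the site with the most renewable energy, then the tightest fit
            # (smallest leftover capacity) to keep hosts well utilised.
            def score(site):
                leftover = sum(r - d for d, r in zip(demand, site["residual"]))
                return (site["green_share"], -leftover)
            return max(feasible, key=score)["name"]

        sites = [
            {"name": "dc-north", "residual": (8, 32, 500), "green_share": 0.9},
            {"name": "dc-south", "residual": (16, 64, 900), "green_share": 0.4},
        ]
        print(place_vm((4, 16, 100), sites))   # -> 'dc-north'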

  13. Considerations for Cloud Security Operations

    Cusick, James

    2016-01-01

    Information Security in Cloud Computing environments is explored. Cloud Computing is presented, security needs are discussed, and mitigation approaches are listed. Topics covered include Information Security, Cloud Computing, Private Cloud, Public Cloud, SaaS, PaaS, IaaS, ISO 27001, OWASP, Secure SDLC.

  14. Evaluating statistical cloud schemes

    Grützun, Verena; Quaas, Johannes; Morcrette, Cyril J.; Ament, Felix

    2015-01-01

    Statistical cloud schemes with prognostic probability distribution functions have become more important in atmospheric modeling, especially since they are in principle scale adaptive and capture cloud physics in more detail. While in theory the schemes have a great potential, their accuracy is still questionable. High-resolution three-dimensional observational data of water vapor and cloud water, which could be used for testing them, are missing. We explore the potential of ground-based re...

  15. Cloud Computing Governance Lifecycle

    Soňa Karkošková; George Feuerlicht

    2016-01-01

    Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges such as need of business process redefinition, establishment of specialized governance and management, organizational structures and relationships with external providers and managing new types of risk arising from dependency on external providers. There is a general consensus that cloud computing in addition to challenges brings many benefits but it is uncle...

  16. Security in cloud computing

    Moreno Martín, Oriol

    2016-01-01

    Security in Cloud Computing is becoming a challenge for next generation Data Centers. This project will focus on investigating new security strategies for Cloud Computing systems. Cloud Computing is a recent paradigm to deliver services over the Internet. Businesses grow drastically because of it. Researchers focus their work on it. The rapid access to flexible and low cost IT resources in an on-demand fashion allows users to avoid planning ahead for provisioning, and enterprises to save money ...

  17. Cognitive Privacy for Personal Clouds

    Milena Radenkovic

    2016-01-01

    Full Text Available This paper proposes a novel Cognitive Privacy (CogPriv framework that improves privacy of data sharing between Personal Clouds for different application types and across heterogeneous networks. Depending on the behaviour of neighbouring network nodes, their estimated privacy levels, resource availability, and social network connectivity, each Personal Cloud may decide to use different transmission network for different types of data and privacy requirements. CogPriv is fully distributed, uses complex graph contacts analytics and multiple implicit novel heuristics, and combines these with smart probing to identify presence and behaviour of privacy compromising nodes in the network. Based on sensed local context and through cooperation with remote nodes in the network, CogPriv is able to transparently and on-the-fly change the network in order to avoid transmissions when privacy may be compromised. We show that CogPriv achieves higher end-to-end privacy levels compared to both noncognitive cellular network communication and state-of-the-art strategies based on privacy-aware adaptive social mobile networks routing for a range of experiment scenarios based on real-world user and network traces. CogPriv is able to adapt to varying network connectivity and maintain high quality of service while managing to keep low data exposure for a wide range of privacy leakage levels in the infrastructure.

  18. Image selection as a service for cloud computing environments

    Filepp, Robert

    2010-12-01

    Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately very little information is provided to the users to enable them to make intelligent choices. We believe that as the number of images proliferates it will become increasingly difficult for users to decide effectively. Cloud service providers often allow their customers to instantiate standard system images, to modify their instances, and to store images of these customized instances for public or private future use. Storing modified instances as images enables customers to avoid re-provisioning and re-configuration of required resources thereby reducing their future costs. However, Cloud service providers generally do not expose details regarding the configurations of the images in a rigorous canonical fashion nor offer services that assist clients in the best target image selection to support client transformation objectives. Rather, they allow customers to enter a free-form description of an image based on the client's best effort. This means in order to find a "best fit" image to instantiate, a human user must review potentially thousands of image descriptions, reading each description to evaluate its suitability as a platform to host their source application. Furthermore, the actual content of the selected image may differ greatly from its description. Finally, even images that have been customized and retained for future use may need additional provisioning and customization to accommodate specific needs. In this paper we propose a service that accumulates image configuration details in a canonical fashion and a further service that employs an algorithm to order images per best fit/least cost in conformance to user-specified policies. These services collectively facilitate workload transformation into enterprise cloud environments.
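
    As an illustration of the proposed ordering service (the authors' actual scoring algorithm is not given here), the Python sketch below ranks candidate images by how little post-instantiation provisioning they imply and then by price; all field names are assumptions:

        def customization_cost(required, image):
            # Packages that would still need to be provisioned after instantiation.
            missing = set(required["packages"]) - set(image["packages"])
            return len(missing)

        def rank_images(required, images):
            """Order images by best fit (fewest missing packages), then least cost."""
            candidates = [img for img in images if img["os"] == required["os"]]
            return sorted(candidates,
                          key=lambda img: (customization_cost(required, img),
                                           img["hourly_price"]))

        images = [
            {"id": "img-101", "os": "linux", "packages": ["java", "tomcat"], "hourly_price": 0.12},
            {"id": "img-102", "os": "linux", "packages": ["java"], "hourly_price": 0.08},
            {"id": "img-103", "os": "windows", "packages": ["iis"], "hourly_price": 0.20},
        ]
        required = {"os": "linux", "packages": ["java", "tomcat"]}
        print([img["id"] for img in rank_images(required, images)])   # ['img-101', 'img-102']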

  19. CLOUD TECHNOLOGY IN EDUCATION

    Alexander N. Dukkardt

    2014-01-01

    Full Text Available This article is devoted to a review of the main features of cloud computing that can be used in education. Particular attention is paid to those learning and supporting tasks that can be greatly improved by the use of cloud services. Several ways to implement this approach are proposed, based on widely accepted models of providing cloud services. Nevertheless, the authors have not ignored currently existing problems of cloud technologies, identifying the most dangerous risks and their impact on the core business processes of the university.

  20. Cloud Computing: An Overview

    Qian, Ling; Luo, Zhiguo; Du, Yujian; Guo, Leitao

    In order to support the maximum number of users and elastic services with the minimum resources, Internet service providers invented cloud computing. Within a few years, emerging cloud computing has become the hottest technology. From the publication of core papers by Google since 2003 to the commercialization of Amazon EC2 in 2006, and to the service offering of AT&T Synaptic Hosting, cloud computing has evolved from internal IT systems to public services, from a cost-saving tool to a revenue generator, and from ISP to telecom. This paper introduces the concept, history, pros and cons of cloud computing as well as the value chain and standardization efforts.

  1. Genomics With Cloud Computing

    Sukhamrit Kaur

    2015-04-01

    Full Text Available Genomics is the study of the genome, which produces large amounts of data and therefore requires large storage and computation power. These issues are addressed by cloud computing, which provides various cloud platforms for genomics. These platforms offer many services to users, such as easy access to data, easy sharing and transfer, storage in hundreds of terabytes, and more computational power. Some cloud platforms are Google Genomics, DNAnexus and Globus Genomics. Features that cloud computing brings to genomics include easy access to and sharing of data, data security, and lower cost of resources, but there are still some demerits, such as the long time needed to transfer data and limited network bandwidth.

  2. Workload, mental health and burnout indicators among female physicians.

    Győrffy, Zsuzsa; Dweik, Diana; Girasek, Edmond

    2016-04-01

    Female doctors in Hungary have worse indicators of physical and mental health compared with other professional women. We aimed to cast light on possible indicators of mental health, workload, and burnout of female physicians. Two time-points (T) were compared, in 2003 (T1 n = 408) and 2013 (T2 n = 2414), based on two nationally representative surveys of female doctors, and comparison made with data from other professional control groups. Independent samples t test or chi-squared test was used both for the two time-point comparison and the comparison between the index and the control groups. The background factors of sleep disorders and burnout were assessed by binary logistic regression analysis. No significant differences in the rates of depressive symptoms and suicidal thoughts and attempts were detected between the 2003 and 2013 cohorts, but the prevalence of sleep disorders increased. The workload increased, and there was less job satisfaction in 2013 than in 2003, coupled to more stressful or difficult work-related situations. The personal accomplishment component of burnout significantly decreased in line with the declining work-related satisfaction. Compared to the professional control groups, the prevalence of depressive symptoms, suicide attempts, and sleep disorders was higher among female physicians at both time-points. The number of workplaces, frequency of work-related stressful situations, and intensive role conflict was associated with sleep disorders and decreased personal accomplishment. In comparison with the other professional groups, female doctors had worse mental health indicators with regard to depression, suicidal ideas, and sleep disorders both in 2003 and 2013 while within professional strata the changes seemed to be less. Increasing workload had a clear impact on sleep disorders and the personal accomplishment dimension of burnout.

  3. Physical Workload and Work Capacity across Occupational Groups.

    Stefanie Brighenti-Zogg

    Full Text Available This study aimed to determine physical performance criteria of different occupational groups by investigating physical activity and energy expenditure in healthy Swiss employees in real-life workplaces on workdays and non-working days in relation to their aerobic capacity (VO2max). In this cross-sectional study, 337 healthy and full-time employed adults were recruited. Participants were classified (nine categories) according to the International Standard Classification of Occupations 1988 and merged into three groups with low-, moderate- and high-intensity occupational activity. Daily steps, energy expenditure, metabolic equivalents and activity at different intensities were measured using the SenseWear Mini armband on seven consecutive days (23 hours/day). VO2max was determined by the 20-meter shuttle run test. Data of 303 subjects were considered for analysis (63% male, mean age: 33 yrs, SD 12), 101 from the low-, 102 from the moderate- and 100 from the high-intensity group. At work, the high-intensity group showed higher energy expenditure, metabolic equivalents, steps and activity at all intensities than the other groups (p<0.001). There were no significant differences in physical activity between the occupational groups on non-working days. VO2max did not differ across groups when stratified for gender. The upper workload limit was 21%, 29% and 44% of VO2max in the low-, moderate- and high-intensity group, respectively. Men had a lower limit than women due to their higher VO2max (26% vs. 37%), when all groups were combined. While this study did confirm that the average workload limit is one third of VO2max, it showed that the average is misrepresenting the actual physical work demands of specific occupational groups, and that it does not account for gender-related differences in relative workload. Therefore, clinical practice needs to consider these differences with regard to a safe return to work, particularly for the high-intensity group.

  4. Physical Workload and Work Capacity across Occupational Groups

    Brighenti-Zogg, Stefanie; Mundwiler, Jonas; Schüpbach, Ulla; Dieterle, Thomas; Wolfer, David Paul; Leuppi, Jörg Daniel; Miedinger, David

    2016-01-01

    This study aimed to determine physical performance criteria of different occupational groups by investigating physical activity and energy expenditure in healthy Swiss employees in real-life workplaces on workdays and non-working days in relation to their aerobic capacity (VO2max). In this cross-sectional study, 337 healthy and full-time employed adults were recruited. Participants were classified (nine categories) according to the International Standard Classification of Occupations 1988 and merged into three groups with low-, moderate- and high-intensity occupational activity. Daily steps, energy expenditure, metabolic equivalents and activity at different intensities were measured using the SenseWear Mini armband on seven consecutive days (23 hours/day). VO2max was determined by the 20-meter shuttle run test. Data of 303 subjects were considered for analysis (63% male, mean age: 33 yrs, SD 12), 101 from the low-, 102 from the moderate- and 100 from the high-intensity group. At work, the high-intensity group showed higher energy expenditure, metabolic equivalents, steps and activity at all intensities than the other groups (p<0.001). There were no significant differences in physical activity between the occupational groups on non-working days. VO2max did not differ across groups when stratified for gender. The upper workload limit was 21%, 29% and 44% of VO2max in the low-, moderate- and high-intensity group, respectively. Men had a lower limit than women due to their higher VO2max (26% vs. 37%), when all groups were combined. While this study did confirm that the average workload limit is one third of VO2max, it showed that the average is misrepresenting the actual physical work demands of specific occupational groups, and that it does not account for gender-related differences in relative workload. Therefore, clinical practice needs to consider these differences with regard to a safe return to work, particularly for the high-intensity group. PMID:27136206

  5. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    Thanh Dinh

    2016-06-01

    Full Text Available This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.
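
    One way to picture the aggregation role of the sensor-cloud described above is merging the sampling requests of several applications into a single periodic schedule for the physical node, for example by sampling at the greatest common divisor of the requested periods; this Python sketch is an illustrative assumption, not the interactive model proposed in the paper:

        from functools import reduce
        from math import gcd

        def aggregate_period(requests):
            """requests: {app_id: requested sampling period in seconds}.

            Sampling once per GCD of the periods lets the sensor-cloud serve every
            application from a single stream instead of tasking the node per application.
            """
            return reduce(gcd, requests.values())

        def deliveries(requests, horizon):
            """Which sample timestamps each application receives over the horizon (s)."""
            base = aggregate_period(requests)
            timeline = range(0, horizon + 1, base)
            return {app: [t for t in timeline if t % period == 0]
                    for app, period in requests.items()}

        reqs = {"app_a": 10, "app_b": 15, "app_c": 30}
        print(aggregate_period(reqs))         # 5: one reading every 5 s serves all three apps
        print(deliveries(reqs, 30)["app_b"])  # [0, 15, 30]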

  6. Evaluating and optimizing the NERSC workload on Knights Landing

    Barnes, T; Cook, B; Deslippe, J; Doerfler, D; Friesen, B; He, Y; Kurth, T; Koskela, T; Lobet, M; Malas, T; Oliker, L; Ovsyannikov, A; Sarje, A; Vay, JL; Vincenti, H; Williams, S; Carrier, P; Wichmann, N; Wagner, M; Kent, P; Kerr, C; Dennis, J

    2017-01-30

    NERSC has partnered with 20 representative application teams to evaluate performance on the Xeon-Phi Knights Landing architecture and develop an application-optimization strategy for the greater NERSC workload on the recently installed Cori system. In this article, we present early case studies and summarized results from a subset of the 20 applications highlighting the impact of important architecture differences between the Xeon-Phi and traditional Xeon processors. We summarize the status of the applications and describe the greater optimization strategy that has formed.

  7. Approximate entropy: a new evaluation approach of mental workload under multitask conditions

    Yao, Lei; Li, Xiaoling; Wang, Wei; Dong, Yuanzhe; Jiang, Ying

    2014-04-01

    There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. In order to establish a simplified method to evaluate cognitive workload under multitask conditions, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed with approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion of cognitive workload and has good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
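
    For readers unfamiliar with the metric, a compact Python implementation of approximate entropy in its standard form is sketched below; the parameter choices m and r are common defaults and not necessarily those used in this study:

        import numpy as np

        def approximate_entropy(signal, m=2, r=None):
            """ApEn(m, r) of a 1-D signal; higher values indicate more irregularity."""
            x = np.asarray(signal, dtype=float)
            n = len(x)
            if r is None:
                r = 0.2 * x.std()                      # common choice: 20% of the SD

            def phi(m):
                # Overlapping template vectors of length m.
                emb = np.array([x[i:i + m] for i in range(n - m + 1)])
                # Chebyshev distance between every pair of templates.
                dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                # Fraction of templates within tolerance r (self-matches included).
                c = (dist <= r).mean(axis=1)
                return np.log(c).mean()

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        print(approximate_entropy(np.sin(np.linspace(0, 8 * np.pi, 300))))   # low: regular signal
        print(approximate_entropy(rng.standard_normal(300)))                 # higher: irregular signal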

  8. Effect of time span and task load on pilot mental workload

    Berg, S. L.; Sheridan, T. B.

    1986-01-01

    Two sets of simulations are described that were designed to examine how a pilot's mental workload is affected by continuous manual-control activity versus discrete mental tasks, including the length of time between receiving an assignment and executing it. The first experiment evaluated two types of measures: objective performance indicators and subjective ratings. Subjective ratings for the two missions were different, but the objective performance measures were similar. In the second experiment, workload levels were increased and a second performance measure was taken. Mental workload had no influence on either performance-based workload measure. Subjective ratings discriminated among the scenarios and correlated with performance measures for high-workload flights. The number of mental tasks performed did not influence error rates, although high manual workloads did increase errors.

  9. Workload and Marital Satisfaction over Time: Testing Lagged Spillover and Crossover Effects during the Newlywed Years.

    Lavner, Justin A; Clark, Malissa A

    2017-08-01

    Although many studies have found that higher workloads covary with lower levels of marital satisfaction, the question of whether workloads may also predict changes in marital satisfaction over time has been overlooked. To address this question, we investigated the lagged association between own and partner workload and marital satisfaction using eight waves of data collected every 6 months over the first four years of marriage from 172 heterosexual couples. Significant crossover, but not spillover, effects were found, indicating that partners of individuals with higher workloads at one time point experience greater declines in marital satisfaction by the following time point compared to the partners of individuals with lower workloads. These effects were not moderated by gender or parental status. These findings suggest that higher partner workloads can prove deleterious for relationship functioning over time and call for increased attention to the long-term effects of spillover and crossover from work to marital functioning.

  10. Review of Cloud Computing and existing Frameworks for Cloud adoption

    Chang, Victor; Walters, Robert John; Wills, Gary

    2014-01-01

    This paper presents a selected review for Cloud Computing and explains the benefits and risks of adopting Cloud Computing in a business environment. Although all the risks identified may be associated with two major Cloud adoption challenges, a framework is required to support organisations as they begin to use Cloud and minimise risks of Cloud adoption. Eleven Cloud Computing frameworks are investigated and a comparison of their strengths and limitations is made; the result of the comparison...

  11. +Cloud: An Agent-Based Cloud Computing Platform

    González, Roberto; Hernández de la Iglesia, Daniel; de la Prieta Pintado, Fernando; Gil González, Ana Belén

    2017-01-01

    Cloud computing is revolutionizing the services provided through the Internet, and is continually adapting itself in order to maintain the quality of its services. This study presents the platform +Cloud, which proposes a cloud environment for storing information and files by following the cloud paradigm. This study also presents Warehouse 3.0, a cloud-based application that has been developed to validate the services provided by +Cloud.

  12. ATLAS Global Shares Implementation in the PanDA Workload Management System

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    PanDA (Production and Distributed Analysis) is the workload management system for ATLAS across the Worldwide LHC Computing Grid. While analysis tasks are submitted to PanDA by over a thousand users following personal schedules (e.g. PhD or conference deadlines), production campaigns are scheduled by a central Physics Coordination group based on the organization’s calendar. The Physics Coordination group needs to allocate the amount of Grid resources dedicated to each activity, in order to manage sharing of CPU resources among various parallel campaigns and to make sure that results can be achieved in time for important deadlines. While dynamic and static shares on batch systems have been around for a long time, we are trying to move away from local resource partitioning and manage shares at a global level in the PanDA system. The global solution is not straightforward, given different requirements of the activities (number of cores, memory, I/O and CPU intensity), the heterogeneity of Grid resources (site/H...
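
    The global-share idea can be pictured as flattening a nested share tree into absolute fractions of the total Grid resources; the small Python sketch below does that normalisation, with the tree values invented for illustration rather than taken from ATLAS's actual configuration:

        def sum_tree(tree):
            return sum(v if isinstance(v, (int, float)) else sum_tree(v) for v in tree.values())

        def flatten_shares(tree, parent_fraction=1.0):
            """tree: {activity: share or nested dict} -> {leaf activity: global fraction}.

            Shares are normalised among siblings at each level, so a leaf's global
            fraction is its normalised share multiplied down from the root.
            """
            out = {}
            total = sum_tree(tree)
            for name, value in tree.items():
                if isinstance(value, dict):
                    out.update(flatten_shares(value, parent_fraction * sum_tree(value) / total))
                else:
                    out[name] = parent_fraction * value / total
            return out

        shares = {"production": {"MC simulation": 60, "reprocessing": 20}, "analysis": 20}
        print(flatten_shares(shares))
        # MC simulation ~ 0.6, reprocessing ~ 0.2, analysis ~ 0.2 (up to float rounding)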

  13. Big Data X-Learning Resources Integration and Processing in Cloud Environments

    Kong Xiangsheng

    2014-09-01

    Full Text Available The cloud computing platform has good flexibility characteristics, and more and more learning systems are being migrated to cloud platforms. Firstly, this paper describes different types of educational environments and the data they provide. Then, it proposes an architecture for mining, integrating and processing heterogeneous learning resources. In order to integrate and process the different types of learning resources from different educational environments, the paper proposes a novel solution with a massive storage integration algorithm and a conversion algorithm for heterogeneous learning resource storage and management in cloud environments.

  14. Comparative analysis of methods for workload assessment of the main control room operators of NPP

    Georgiev, V.; Petkov, G.

    2008-01-01

    The paper presents benchmarking workload results obtained with a method for operator workload assessment, the NASA Task Load Index, and a method for human error probability assessment, Performance Evaluation of Teamwork. Based on the archives of FSS-1000 training on the accident “Main Steam Line Tube Rupture at the WWER-1000 Containment”, the capacities of the two methods for direct and indirect workload assessment are evaluated.

  15. Level of Workload and Its Relationship with Job Burnout among Administrative Staff

    MANSOUR ZIAEI; HAMED YARMOHAMMADI; MEISAM MORADI; MOHAMMAD KHANDAN

    2015-01-01

    Burnout syndrome is a response to prolonged occupational stress. Workload is one of the organizational risk factors of burnout. With regard to this topic, there are no data on administrative employees’ burnout and workload in Iran. This study seeks to determine the levels of job burnout and their relationships with workload among administrative members of staff. Two hundred and forty-two administrative staff from Kermanshah University of Medical Sciences [Iran] volunteered to participate in t...

  16. How to reduce workload--augmented reality to ease the work of air traffic controllers.

    Hofmann, Thomas; König, Christina; Bruder, Ralph; Bergner, Jörg

    2012-01-01

    In the future, air traffic will rise, and the workload of the controllers will do the same. In the BMWi research project, one of the tasks is how to ensure safe air traffic and a reasonable workload for the air traffic controllers. The goal of this project was to find ways to reduce the workload (and stress) of the controllers to allow safe air traffic, especially at large hub airports, by implementing augmented reality visualization and interaction.

  17. Lost in Cloud

    Maluf, David A.; Shetye, Sandeep D.; Chilukuri, Sri; Sturken, Ian

    2012-01-01

    Cloud computing can reduce cost significantly because businesses can share computing resources. In recent years Small and Medium Businesses (SMB) have used Cloud effectively for cost saving and for sharing IT expenses. With the success of SMBs, many perceive that the larger enterprises ought to move into Cloud environment as well. Government agencies' stove-piped environments are being considered as candidates for potential use of Cloud either as an enterprise entity or pockets of small communities. Cloud Computing is the delivery of computing as a service rather than as a product, whereby shared resources, software, and information are provided to computers and other devices as a utility over a network. Underneath the offered services, there exists a modern infrastructure, the cost of which is often spread across its services or its investors. As NASA is considered an Enterprise-class organization, like other enterprises, a shift has been occurring in perceiving its IT services as candidates for Cloud services. This paper discusses market trends in cloud computing from an enterprise angle and then addresses the topic of Cloud Computing for NASA in two possible forms. First, in the form of a public Cloud to support it as an enterprise, as well as to share it with the commercial and public at large. Second, as a private Cloud wherein the infrastructure is operated solely for NASA, whether managed internally or by a third-party and hosted internally or externally. The paper addresses the strengths and weaknesses of both paradigms of public and private Clouds, in both internally and externally operated settings. The content of the paper is from a NASA perspective but is applicable to any large enterprise with thousands of employees and contractors.

  18. Identity based Encryption and Biometric Authentication Scheme for Secure Data Access in Cloud Computing

    Cheng, Hongbing; Rong, Chunming; Tan, Zheng-Hua

    2012-01-01

    Cloud computing will be a main information infrastructure in the future; it consists of many large datacenters which are usually geographically distributed and heterogeneous. How to design secure data access for a cloud computing platform is a big challenge. In this paper, we propose a secure data access scheme based on identity-based encryption and biometric authentication for cloud computing. Firstly, we describe the security concerns of cloud computing and then propose an integrated data access scheme for cloud computing; the procedures of the proposed scheme include parameter setup, key distribution, feature template creation, cloud data processing and secure data access control. Finally, we compare the proposed scheme with other schemes through comprehensive analysis and simulation. The results show that the proposed data access scheme is feasible and secure for cloud computing.

  19. Impact of Conflict Avoidance Responsibility Allocation on Pilot Workload in a Distributed Air Traffic Management System

    Ligda, Sarah V.; Dao, Arik-Quang V.; Vu, Kim-Phuong; Strybel, Thomas Z.; Battiste, Vernol; Johnson, Walter W.

    2010-01-01

    Pilot workload was examined during simulated flights requiring flight deck-based merging and spacing while avoiding weather. Pilots used flight deck tools to avoid convective weather and space behind a lead aircraft during an arrival into Louisville International airport. Three conflict avoidance management concepts were studied: pilot, controller or automation primarily responsible. A modified Air Traffic Workload Input Technique (ATWIT) metric showed highest workload during the approach phase of flight and lowest during the en-route phase of flight (before deviating for weather). In general, the modified ATWIT was shown to be a valid and reliable workload measure, providing more detailed information than post-run subjective workload metrics. The trend across multiple workload metrics revealed lowest workload when pilots had both conflict alerting and responsibility of the three concepts, while all objective and subjective measures showed highest workload when pilots had no conflict alerting or responsibility. This suggests that pilot workload was not tied primarily to responsibility for resolving conflicts, but to gaining and/or maintaining situation awareness when conflict alerting is unavailable.

  20. Mental Workload and Its Determinants among Nurses in One Hospital in Kermanshah City, Iran

    Ehsan Bakhshi

    2017-03-01

    Background & Aims: Mental workload is one of the factors influencing the behavior, performance and efficiency of nurses in the workplace, and diverse factors can affect its level. The present study was performed with the aim of surveying mental workload and its determinants among nurses in one hospital in Kermanshah City. Materials and Methods: In this cross-sectional study, 203 nurses from 5 wards (infants, emergency, surgery, internal and ICU) were selected randomly and surveyed. Data collection tools were a demographic questionnaire and the NASA-TLX questionnaire. Statistical analysis was conducted using the independent-samples t-test, ANOVA and the Pearson correlation coefficient in SPSS 19. Results: The mean and standard deviation of overall mental workload was estimated as 69.73±15.26. Among the aspects of mental workload, effort had the highest average score (70.96) and frustration the lowest (44.93). There were significant relationships between the physical aspect of workload and age, type of shift work, number of shifts and type of employment; between the temporal aspect of workload and BMI, type of employment and work experience; and between the effort aspect and BMI (p-value ≤ 0.05). Conclusion: Given the different amounts of mental workload in the studied hospital wards, relocating nurses between wards can improve the situation, and increasing the number of nurses can decrease mental workload.

  1. MEASURING WORKLOAD OF ICU NURSES WITH A QUESTIONNAIRE SURVEY: THE NASA TASK LOAD INDEX (TLX).

    Hoonakker, Peter; Carayon, Pascale; Gurses, Ayse; Brown, Roger; McGuire, Kerry; Khunlertkit, Adjhaporn; Walker, James M

    2011-01-01

    High workload of nurses in Intensive Care Units (ICUs) has been identified as a major patient safety and worker stress problem. However, relatively little attention has been dedicated to the measurement of workload in healthcare. The objectives of this study are to describe and examine several methods to measure the workload of ICU nurses. We then focus on the measurement of ICU nurses' workload using a subjective rating instrument: the NASA TLX. We conducted secondary data analysis on data from two multi-site, cross-sectional questionnaire studies to examine several instruments to measure ICU nurses' workload. The combined database contains the data from 757 ICU nurses in 8 hospitals and 21 ICUs. Results show that the different methods to measure workload of ICU nurses, such as patient-based and operator-based workload, are only moderately correlated, or not correlated at all. Results show further that among the operator-based instruments, the NASA TLX is the most reliable and valid questionnaire to measure workload and that NASA TLX can be used in a healthcare setting. Managers of hospitals and ICUs can benefit from the results of this research as it provides benchmark data on workload experienced by nurses in a variety of ICUs.

  2. Effects of Visual, Auditory, and Tactile Navigation Cues on Navigation Performance, Situation Awareness, and Mental Workload

    Davis, Bradley M

    2007-01-01

    .... Results from both experiments indicate that augmented visual displays reduced time to complete navigation, maintained situation awareness, and drastically reduced mental workload in comparison...

  3. The Impact of Heavy Perceived Nurse Workloads on Patient and Nurse Outcomes

    Maura MacPhee

    2017-03-01

    This study investigated the relationships between seven workload factors and patient and nurse outcomes. (1) Background: Health systems researchers are beginning to address nurses’ workload demands at different unit, job and task levels; and the types of administrative interventions needed for specific workload demands. (2) Methods: This was a cross-sectional correlational study of 472 acute care nurses from British Columbia, Canada. The workload factors included nurse reports of unit-level RN staffing levels and patient acuity and patient dependency; job-level nurse perceptions of heavy workloads, nursing tasks left undone and compromised standards; and task-level interruptions to work flow. Patient outcomes were nurse-reported frequencies of medication errors, patient falls and urinary tract infections; and nurse outcomes were emotional exhaustion and job satisfaction. (3) Results: Job-level perceptions of heavy workloads and task-level interruptions had significant direct effects on patient and nurse outcomes. Tasks left undone mediated the relationships between heavy workloads and nurse and patient outcomes; and between interruptions and nurse and patient outcomes. Compromised professional nursing standards mediated the relationships between heavy workloads and nurse outcomes; and between interruptions and nurse outcomes. (4) Conclusion: Administrators should work collaboratively with nurses to identify work environment strategies that ameliorate workload demands at different levels.

  4. Neutron beam irradiation study of workload dependence of SER in a microprocessor

    Michalak, Sarah E [Los Alamos National Laboratory]; Graves, Todd L [Los Alamos National Laboratory]; Hong, Ted [STANFORD]; Ackaret, Jerry [IBM]; Rao, Sonny [IBM]; Mitra, Subhasish [STANFORD]; Sanda, Pia [IBM]

    2009-01-01

    It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions from idle, performing no operations, to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure, MTFIF defined in Section II, may be different for some of the applications.

  5. Follow up on a workloaded interventional radiologist's occupational radiation doses - a study case

    Ketner, D.; Ofer, A.; Engel, A.

    2004-01-01

    During many interventional procedures, patients' radiation doses are high, which also affects the radiologist's radiation doses. We checked the occupational doses of a workloaded interventional radiologist over seven years.

  6. The Management of Local Government Apparatus Resource Based on Job and Workload Analysis

    Cahyasari, Erlita

    2016-01-01

    This paper focuses on job analysis as the basis of the human resource system. It describes the job and workload, as well as the obstacles that may be observed during the work, and how job analysis supports all activities of human resource management in the organization. Workload analysis is a process to determine the amount of time required to finish a specific job. The result of job and workload analysis aims to determine the number of employees needed to correspond to a specific workload and respon...

  7. Research on cloud computing solutions

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, networking computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  8. VMware vCloud security

    Sarkar, Prasenjit

    2013-01-01

    VMware vCloud Security provides the reader with in-depth knowledge and practical exercises sufficient to implement a secured private cloud using VMware vCloud Director and vCloud Networking and Security. This book is primarily for technical professionals with system administration and security administration skills with significant VMware vCloud experience who want to learn about advanced concepts of vCloud security and compliance.

  9. Security Architecture of Cloud Computing

    V.KRISHNA REDDY; Dr. L.S.S.REDDY

    2011-01-01

    Cloud Computing offers services over the Internet with dynamically scalable resources. Cloud Computing services provide benefits to users in terms of cost and ease of use. Cloud Computing services need to address security during the transmission of sensitive data and critical applications to shared and public cloud environments. Cloud environments are scaling large for data processing and storage needs. The cloud computing environment has various advantages as well as disadvantages o...

  10. Security in hybrid cloud computing

    Koudelka, Ondřej

    2016-01-01

    This bachelor thesis deals with the area of hybrid cloud computing, specifically with its security. The major aim of the thesis is to analyze and compare the chosen hybrid cloud providers. As a minor aim, the thesis compares the security challenges of the hybrid cloud as opposed to other deployment models. In order to accomplish these aims, the thesis defines the terms cloud computing and hybrid cloud computing in its theoretical part. Furthermore the security challenges for cloud computing a...

  11. MPI support in the DIRAC Pilot Job Workload Management System

    Tsaregorodtsev, A; Hamar, V

    2012-01-01

    Parallel job execution in the grid environment using MPI technology presents a number of challenges for the sites providing this support. Multiple flavors of the MPI libraries, shared working directories required by certain applications, and special settings for the batch systems make MPI support difficult for the site managers. On the other hand, workload management systems with Pilot Jobs became ubiquitous, although support for MPI applications in the Pilot frameworks was not available. This support was recently added in the DIRAC Project in the context of the GISELA Latin American Grid Initiative. Special services for dynamic allocation of virtual computer pools on the grid sites were developed in order to deploy MPI rings corresponding to the requirements of the jobs in the central task queue of the DIRAC Workload Management System. Pilot Jobs using user-space file system techniques install the required MPI software automatically. The same technique is used to emulate shared working directories for the parallel MPI processes. This makes it possible to execute MPI jobs even on sites not supporting them officially. Reusing MPI rings constructed in this way for the execution of a series of parallel jobs dramatically increases their efficiency and turnaround. In this contribution we describe the design and implementation of the DIRAC MPI Service as well as its support for various types of MPI libraries. Advantages of coupling the MPI support with the Pilot frameworks are outlined and examples of usage with real applications are presented.

  12. [Impact of chronic illness on hospital nursing workloads].

    Vallés, S; Valdavida, E; Menéndez, C; Natal, C

    To evaluate the short-term impact of chronic illness in hospital units and to establish a method that allows nursing workloads to be adapted according to the care needs of patients. A descriptive study of the evolution of workloads of nursing staff associated with the care needs of patients between 1 July 2014 and 30 June 2016, in a county hospital. The care needs of the patients were assessed daily using an adaptation of the Montesinos scheme. The estimated times of nursing care and auxiliary nursing required by the patients, based on their level of dependence for time distribution, were based on the standards and recommendations of the Ministry of Health, Social Services and Equality. During the study period, there was a change in the patient care needs, with no increase in activity, which resulted in an increase in the nursing staffing needs of 1,396 theoretical hours per year. This increase implies an increase in the workforce of 5 nurses in the second period. In the study period, the needs for direct nursing care increased by 7%; this increase is not related to the increase in activity, but to the level of dependency of the patients with chronic diseases. This increase occurred in both medical and surgical units. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.

  13. A new measurement of workload in Web application reliability assessment

    CUI Xia

    2015-02-01

    Web applications have become popular in various fields of social life, and it is increasingly important to study their reliability. In this paper the definition of Web application failure is first brought out, and then the definition of Web application reliability. By analyzing data in the IIS server logs and selecting corresponding usage and information-delivery failure data, the paper studies the feasibility of Web application reliability assessment from the perspective of the Web software system based on IIS server logs. Because the usage of a Web site often has certain regularity, a new measurement of workload in Web application reliability assessment is proposed. In this method, the unit is removed by a weighted-average technique, and the weights are assessed by setting an objective function and optimizing it. Finally an experiment was conducted for validation. The experimental results show that the assessment of Web application reliability based on the new workload measure is better.

  14. Reasons for adopting technological innovations reducing physical workload in bricklaying.

    de Jong, A M; Vink, P; de Kroon, J C A

    2003-09-15

    In this paper the adoption of technological innovations to improve the work of bricklayers and bricklayers' assistants is evaluated. Two studies were performed among 323 subjects to determine the adoption of the working methods, the perceived workload, experiences with the working methods, and the reasons for adopting the working methods. Furthermore, a comparison of the results of the studies was made with those of two similar studies in the literature. The results show that more than half of the sector adopted the innovations. The perceived workload was reduced. The employees and employers are satisfied with the working methods and important reasons for adoption were cost/benefit advantages, improvement of work and health, and increase in productivity. Problems preventing the adoption were the use of the working methods at specific sites, for instance in renovation work. The adoption of the new working methods could perhaps have been higher or faster if more attention had been paid to the active participation of bricklayers and bricklayers' assistants during the development of the new working methods and to the use of modern media techniques, such as the Internet and CD/DVD.

  15. Workloads, strain processes and sickness absenteeism in nursing

    Vivian Aline Mininel

    2013-12-01

    OBJECTIVE: to analyze the workloads, strain processes and sickness absenteeism among nursing workers from a teaching hospital in the Brazilian Central-West. METHOD: a descriptive and cross-sectional study was developed with a quantitative approach, based on the theoretical framework of the social determination of the health-disease process. Data were collected between January and December 2009, based on records of complaints related to occupational exposure among nursing professionals, filed in the software Monitoring System of Nursing Workers' Health. For the sake of statistical analysis, relative and absolute frequencies of the variables and the risk coefficient were considered. RESULTS: 144 notifications of occupational exposure were registered across the analysis period, which represented 25% of the total nursing population at the hospital. The physiological and psychic workloads were the most representative, corresponding to 37% and 36%, respectively. These notifications culminated in 1567 days of absenteeism for disease treatment. CONCLUSIONS: the findings evidence the impact of occupational illnesses on the absenteeism of nursing workers, and can be used to demonstrate the importance of institutional investments in occupational health surveillance.

  16. Heterogeneous network architectures

    Christiansen, Henrik Lehrmann

    2006-01-01

    Future networks will be heterogeneous! Due to the sheer size of networks (e.g., the Internet) upgrades cannot be instantaneous and thus heterogeneity appears. This means that instead of trying to find the solution, networks should be designed as being heterogeneous. One of the key requirements here is flexibility. This thesis investigates such heterogeneous network architectures and how to make them flexible. A survey of algorithms for network design is presented, and it is described how using heuristics can increase the speed. A hierarchical, MPLS based network architecture is described and it is discussed that it is advantageous to heterogeneous networks, illustrated by a number of examples. Modeling and simulation is a well-known way of doing performance evaluation. An approach to event-driven simulation of communication networks is presented and mixed complexity modeling, which can simplify...

  17. Cloud security in vogelvlucht

    Pieters, Wolter

    2011-01-01

    Cloud computing is the big hype in IT at the moment, and although many of its aspects are not new, the concept does create the need for new forms of security. At the same time, the very idea of cloud computing offers an opportunity to rethink this: what is the role of information security in a...

  18. CLOUD SERVICES IN EDUCATION

    Z.S. Seydametova

    2011-05-01

    We present the online services based on cloud computing that Google provides to educational institutions. We describe our own experience of implementing the Google Apps Education Edition in the educational process. We also analyze and compare the experience of other universities in using cloud technologies.

  19. Cloud MicroAtlas

    We begin by outlining the life cycle of a tall cloud, and then briefly discuss cloud systems. We choose one aspect of this life cycle, namely, the rapid growth of water droplets in ice-free clouds, to then discuss in greater detail. Taking a single vortex to be a building block of turbulence, we demonstrate one mechanism by which ...

  20. Greening the cloud

    van den Hoed, Robert; Hoekstra, Eric; Procaccianti, Giuseppe; Lago, Patricia; Grosso, Paolo; Taal, Arie; Grosskop, Kay; van Bergen, Esther

    The cloud has become an essential part of our daily lives. We use it to store our documents (Dropbox), to stream our music and films (Spotify and Netflix) and without giving it any thought, we use it to work on documents in the cloud (Google Docs).

  1. Learning in the Clouds?

    Butin, Dan W.

    2013-01-01

    Engaged learning--the type that happens outside textbooks and beyond the four walls of the classroom--moves beyond right and wrong answers to grappling with the uncertainties and contradictions of a complex world. iPhones back up to the "cloud." GoogleDocs is all about "cloud computing." Facebook is as ubiquitous as the sky.…

  2. Kernel structures for Clouds

    Spafford, Eugene H.; Mckendry, Martin S.

    1986-01-01

    An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.

  3. Cloud computing basics

    Srinivasan, S

    2014-01-01

    Cloud Computing Basics covers the main aspects of this fast-moving technology so that both practitioners and students will be able to understand cloud computing. The author highlights the key aspects of this technology that a potential user might want to investigate before deciding to adopt this service. This book explains how cloud services can be used to augment existing services such as storage, backup and recovery. It addresses the details of how cloud security works and what users must be prepared for when they move their data to the cloud. The book also discusses how businesses can prepare for compliance with the laws as well as industry standards such as the Payment Card Industry.

  4. Solar variability and clouds

    Kirkby, Jasper

    2000-01-01

    Satellite observations have revealed a surprising imprint of the 11- year solar cycle on global low cloud cover. The cloud data suggest a correlation with the intensity of Galactic cosmic rays. If this apparent connection between cosmic rays and clouds is real, variations of the cosmic ray flux caused by long-term changes in the solar wind could have a significant influence on the global energy radiation budget and the climate. However a direct link between cosmic rays and clouds has not been unambiguously established and, moreover, the microphysical mechanism is poorly understood. New experiments are being planned to find out whether cosmic rays can affect cloud formation, and if so how. (37 refs).

  5. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Aymen Abdullah Alsaffar

    2016-01-01

    Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in their capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in their network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work, we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, consisting of the decision rules of a linearized decision tree based on three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as to optimize big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation, optimize big data distribution, and achieve better performance than other existing methods.
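
    The delegation rules described above can be pictured as a small set of threshold tests over the three named conditions. The Python sketch below is only an illustration of such a linearized decision tree; the threshold values, field names and helper types are assumptions made for illustration, not the algorithm published in the paper.

        from dataclasses import dataclass

        @dataclass
        class Request:
            service_size_mb: float        # size of the requested service/data
            est_completion_time_s: float  # estimated completion time of the request

        @dataclass
        class FogNode:
            free_vm_capacity: float       # fraction of VM capacity currently free (0..1)

        # Illustrative thresholds; these values are not taken from the paper.
        SIZE_LIMIT_MB = 50.0
        TIME_LIMIT_S = 2.0
        CAPACITY_LIMIT = 0.3

        def delegate(req: Request, fog: FogNode) -> str:
            """Decide whether a request stays at the fog edge or is delegated to the
            cloud, using the three conditions named in the abstract: service size,
            completion time, and VM capacity."""
            if (req.service_size_mb <= SIZE_LIMIT_MB
                    and req.est_completion_time_s <= TIME_LIMIT_S
                    and fog.free_vm_capacity >= CAPACITY_LIMIT):
                return "fog"    # small, latency-sensitive work is served at the edge
            return "cloud"      # large or long-running work goes to the data center

        if __name__ == "__main__":
            print(delegate(Request(10, 0.5), FogNode(free_vm_capacity=0.6)))    # fog
            print(delegate(Request(500, 30.0), FogNode(free_vm_capacity=0.6)))  # cloud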

  6. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Wisam Elshareef

    2015-08-01

    Today cloud computing has become a key technology for online allotment of computing resources and online storage of user data at a lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment, in order to satisfy the requirements of users and to balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment, so this paper proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates the incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which proves the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results demonstrate that the proposed algorithm dramatically improves response time and data processing time and achieves better resource utilization compared with the Active Monitor and VM-assign algorithms.
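
    The abstract does not spell out how the dynamic weights are computed, so the Python sketch below only illustrates the general idea of a dynamically weighted matchmaker: each VM's weight is its currently free capacity, and every incoming request goes to the highest-weight VM. The names and the weighting rule are illustrative assumptions, not the published DWAM algorithm.

        import random

        class VM:
            def __init__(self, vm_id, cpu_capacity):
                self.vm_id = vm_id
                self.cpu_capacity = cpu_capacity   # e.g., MIPS the VM can deliver
                self.current_load = 0.0            # MIPS currently in use

            def weight(self):
                # Dynamic weight: fraction of capacity still free right now.
                return max(self.cpu_capacity - self.current_load, 0.0) / self.cpu_capacity

        def assign(request_load, vms):
            """Send the request to the VM with the highest dynamic weight."""
            target = max(vms, key=lambda vm: vm.weight())
            target.current_load += request_load
            return target.vm_id

        if __name__ == "__main__":
            random.seed(1)
            vms = [VM("vm-1", 1000), VM("vm-2", 2000), VM("vm-3", 1500)]
            for _ in range(6):
                load = random.uniform(50, 300)
                print(round(load, 1), "->", assign(load, vms))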

  7. Scheduling Parallel Jobs Using Migration and Consolidation in the Cloud

    Xiaocheng Liu

    2012-01-01

    An increasing number of high performance computing parallel applications leverage the power of the cloud for parallel processing. How to schedule the parallel applications to improve the quality of service is the key to successfully hosting parallel applications in the cloud. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance the most popular EASY scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm takes very good care of the quality of service. For two common parallel job scheduling objectives, our algorithm produces up to a 41.1% and an average of a 23.1% improvement on the average response time, and up to an 82.9% and an average of a 69.3% improvement on the average slowdown. Our algorithm is robust even when CPU usage estimates are inaccurate and migration costs are high. Our approach involves only a trivial modification of EASY and requires no additional technique; it is practical and effective in the cloud environment.
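
    MEASY builds on EASY backfilling, so a baseline EASY pass is the natural reference point. The Python sketch below shows one scheduling pass of plain EASY (FCFS start, a reservation for the blocked head-of-queue job, and harmless backfilling); the migration and consolidation steps that MEASY adds are not shown, and the job representation is an assumption for illustration only.

        def easy_backfill(queue, free_nodes, running, now):
            """One pass of EASY backfilling (a baseline sketch; MEASY adds migration
            and consolidation on top of this).  Waiting jobs are dicts with 'id',
            'nodes' and 'walltime'; 'running' is a list of (finish_time, nodes)
            pairs.  Returns the ids of jobs started now."""
            started = []

            # 1. Start jobs strictly in FCFS order while the free nodes suffice.
            while queue and queue[0]["nodes"] <= free_nodes:
                job = queue.pop(0)
                free_nodes -= job["nodes"]
                running.append((now + job["walltime"], job["nodes"]))
                started.append(job["id"])
            if not queue:
                return started

            # 2. Reserve nodes for the blocked head job: find the earliest time
            #    (shadow time) at which enough running jobs will have finished.
            head = queue[0]
            avail, shadow_time, extra = free_nodes, float("inf"), 0
            for finish_time, nodes in sorted(running):
                avail += nodes
                if avail >= head["nodes"]:
                    shadow_time = finish_time
                    extra = avail - head["nodes"]   # nodes the head job will not need
                    break

            # 3. Backfill: a later job may start now if it fits in the free nodes and
            #    either finishes before the reservation or uses only the extra nodes.
            for job in list(queue[1:]):
                if job["nodes"] > free_nodes:
                    continue
                finishes_in_time = now + job["walltime"] <= shadow_time
                if finishes_in_time or job["nodes"] <= extra:
                    if not finishes_in_time:
                        extra -= job["nodes"]
                    queue.remove(job)
                    free_nodes -= job["nodes"]
                    running.append((now + job["walltime"], job["nodes"]))
                    started.append(job["id"])
            return started

        if __name__ == "__main__":
            waiting = [
                {"id": "A", "nodes": 4, "walltime": 10.0},  # blocked: needs 4, only 2 free
                {"id": "B", "nodes": 2, "walltime": 3.0},   # backfilled: done before A starts
                {"id": "C", "nodes": 2, "walltime": 50.0},  # not started: would delay A
            ]
            print(easy_backfill(waiting, free_nodes=2, running=[(5.0, 4)], now=0.0))  # ['B']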

  8. Stratocumulus Cloud Top Radiative Cooling and Cloud Base Updraft Speeds

    Kazil, J.; Feingold, G.; Balsells, J.; Klinger, C.

    2017-12-01

    Cloud top radiative cooling is a primary driver of turbulence in the stratocumulus-topped marine boundary layer. A functional relationship between cloud top cooling and cloud base updraft speeds may therefore exist. A correlation of cloud top radiative cooling and cloud base updraft speeds has recently been identified empirically, providing a basis for satellite retrieval of cloud base updraft speeds. Such retrievals may enable analysis of aerosol-cloud interactions using satellite observations: updraft speeds at cloud base co-determine supersaturation and therefore the activation of cloud condensation nuclei, which in turn co-determine cloud properties and precipitation formation. We use large eddy simulation and an off-line radiative transfer model to explore the relationship between cloud-top radiative cooling and cloud base updraft speeds in a marine stratocumulus cloud over the course of the diurnal cycle. We find that during daytime, at low cloud water path (CWP), cloud top cooling and cloud base updraft speeds are correlated, in agreement with the reported empirical relationship. During the night, in the absence of short-wave heating, CWP builds up (CWP > 50 g m-2) and long-wave emissions from cloud top saturate, while cloud base heating increases. In combination, cloud top cooling and cloud base updrafts become weakly anti-correlated. A functional relationship between cloud top cooling and cloud base updraft speed can hence be expected for stratocumulus clouds with a sufficiently low CWP and sub-saturated long-wave emissions, in particular during daytime. At higher CWPs, in particular at night, the relationship breaks down due to saturation of long-wave emissions from cloud top.

  9. The implications of dust ice nuclei effect on cloud top temperature in a complex mesoscale convective system.

    Li, Rui; Dong, Xue; Guo, Jingchao; Fu, Yunfei; Zhao, Chun; Wang, Yu; Min, Qilong

    2017-10-23

    Mineral dust is the most important natural source of atmospheric ice nuclei (IN), which may significantly mediate the properties of ice cloud through heterogeneous nucleation and lead to crucial impacts on the hydrological and energy cycles. The potential dust IN effect on cloud top temperature (CTT) in a well-developed mesoscale convective system (MCS) was studied using both satellite observations and cloud resolving model (CRM) simulations. We combined satellite observations from a passive spectrometer, active cloud radar and lidar, and wind field simulations from the CRM to identify places where ice cloud mixed with dust particles. For a given ice water path, the CTT of dust-mixed cloud is warmer than that of relatively pristine cloud. The probability distribution function (PDF) of CTT for dust-mixed clouds shifted to the warmer end and showed two peaks at about -45 °C and -25 °C. The PDF for relatively pristine clouds only shows one peak at -55 °C. Cloud simulations with different microphysical schemes agreed well with each other and showed better agreement with satellite observations in pristine clouds, but they showed large discrepancies in dust-mixed clouds. Some microphysical schemes failed to predict the warm peak of CTT related to heterogeneous ice formation.

  10. Formation of Massive Molecular Cloud Cores by Cloud-cloud Collision

    Inoue, Tsuyoshi; Fukui, Yasuo

    2013-01-01

    Recent observations of molecular clouds around rich massive star clusters including NGC3603, Westerlund 2, and M20 revealed that the formation of massive stars could be triggered by a cloud-cloud collision. By using three-dimensional, isothermal, magnetohydrodynamics simulations with the effect of self-gravity, we demonstrate that massive, gravitationally unstable, molecular cloud cores are formed behind the strong shock waves induced by the cloud-cloud collision. We find that the massive mol...

  11. Towards an Approach of Semantic Access Control for Cloud Computing

    Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai

    With the development of cloud computing, the mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides the solution to semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in the cloud computing environment. An Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches research on applying Semantic Web technology in the security field, and provides a new way of thinking about access control in cloud computing.

  12. CLOUD COMPUTING AND INTERNET OF THINGS FOR SMART CITY DEPLOYMENTS

    GEORGE SUCIU

    2013-05-01

    Cloud Computing represents a new method of delivering hardware and software resources to users, and the Internet of Things (IoT) is currently one of the most popular ICT paradigms. Both concepts can have a major impact on how we build smart and/or smarter cities. Cloud computing represents the delivery of hardware and software resources on demand over the Internet as a Service. At the same time, the IoT concept envisions a new generation of devices (sensors, both virtual and physical) that are connected to the Internet and provide different services for value-added applications. In this paper we present our view on how to deploy Cloud computing and IoT for smart and/or smarter cities. We demonstrate that data gathered from heterogeneous and distributed IoT devices can be automatically managed, handled and reused with decentralized cloud services.

  13. Impact of deforestation in the Amazon basin on cloud climatology.

    Wang, Jingfeng; Chagnon, Frédéric J F; Williams, Earle R; Betts, Alan K; Renno, Nilton O; Machado, Luiz A T; Bisht, Gautam; Knox, Ryan; Bras, Rafael L

    2009-03-10

    Shallow clouds are prone to appear over deforested surfaces whereas deep clouds, much less frequent than shallow clouds, favor forested surfaces. Simultaneous atmospheric soundings at forest and pasture sites during the Rondonian Boundary Layer Experiment (RBLE-3) elucidate the physical mechanisms responsible for the observed correlation between clouds and land cover. We demonstrate that the atmospheric boundary layer over the forested areas is more unstable and characterized by larger values of the convective available potential energy (CAPE) due to greater humidity than that which is found over the deforested area. The shallow convection over the deforested areas is relatively more active than the deep convection over the forested areas. This greater activity results from a stronger lifting mechanism caused by mesoscale circulations driven by deforestation-induced heterogeneities in land cover.

  14. Heterogeneous cellular networks

    Hu, Rose Qingyang

    2013-01-01

    A timely publication providing coverage of radio resource management, mobility management and standardization in heterogeneous cellular networks. The topic of heterogeneous cellular networks has gained momentum in industry and the research community, attracting the attention of standardization bodies such as 3GPP LTE and IEEE 802.16j, whose objectives include increasing the capacity and coverage of cellular networks. This book focuses on recent progress, covering related topics including scenarios of heterogeneous network deployment, interference management i

  15. A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation mode for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost predication, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).

  16. A service brokering and recommendation mechanism for better selecting cloud services.

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation mode for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost predication, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
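
    One way to picture the preference-aware solution evaluation step described above is as a weighted sum over normalized criteria. The Python sketch below is a generic illustration of that idea, not the authors' evaluation model; the criteria, weights and candidate solutions are hypothetical.

        def score_solutions(solutions, preferences):
            """Rank candidate cloud configuration solutions by a preference-weighted
            sum of min-max normalized criteria (higher is better)."""
            criteria = preferences.keys()
            lo = {c: min(s[c] for s in solutions.values()) for c in criteria}
            hi = {c: max(s[c] for s in solutions.values()) for c in criteria}

            def norm(c, v):
                return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])

            scores = {
                name: sum(preferences[c] * norm(c, vals[c]) for c in criteria)
                for name, vals in solutions.items()
            }
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        if __name__ == "__main__":
            # Hypothetical candidates; 'cost' is negated so that higher is better.
            solutions = {
                "provider-A.small":  {"compute": 4,  "sla": 0.995, "cost": -120},
                "provider-B.medium": {"compute": 8,  "sla": 0.990, "cost": -200},
                "provider-C.large":  {"compute": 16, "sla": 0.999, "cost": -410},
            }
            preferences = {"compute": 0.3, "sla": 0.3, "cost": 0.4}
            for name, score in score_solutions(solutions, preferences):
                print(f"{name}: {score:.3f}")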

  17. Work and workload of Dutch primary care midwives in 2010.

    Wiegers, Therese A; Warmelink, J Catja; Spelten, Evelien R; Klomp, T; Hutton, Eileen K

    2014-09-01

    to re-assess the work and workload of primary care midwives in the Netherlands. in the Netherlands most midwives work in primary care as independent practitioners in a midwifery practice with two or more colleagues. Each practice provides 24/7 care coverage through office hours and on-call hours of the midwives. In 2006 the results of a time registration project of primary care midwives were published as part of a 4-year monitor study. This time the registration project was repeated, albeit on a smaller scale, in 2010. as part of a larger study (the Deliver study) all midwives working in 20 midwifery practices kept a time register 24 hours a day, for one week. They also filled out questionnaires about their background, work schedules and experiences of workload. A second component of this study collected data from all midwifery practices in the Netherlands and included questions about practice size (number of midwives and number of clients in the previous year). in 2010, primary care midwives actually worked an average of 32.6 hours per week and approximately 67% of their working time (almost 22 hours per week) was spent on client-related activities. On average a midwife was on-call for 39 hours a week and almost 13 of the 32.6 hours of work took place during on-call hours. This means that the total number of hours that an average midwife was involved in her work (either actually working or on-call) was almost 59 hours a week. Compared to 2004 the number of hours an average midwife was actually working increased by 4 hours (from 29 to 32.6 hours) whereas the total number of hours an average midwife was involved with her work decreased by 6 hours (from 65 to 59 hours). In 2010, compared to 2001-2004, the midwives spent proportionally less time on direct client care (67% versus 73%), although in actual number of hours this did not change much (22 versus 21). In 2009 the average workload of a midwife was 99 clients at booking, 56 at the start of labour, 33 at childbirth, and

  18. Making and Breaking Clouds

    Kohler, Susanna

    2017-10-01

    Molecular clouds, which you're likely familiar with from stunning popular astronomy imagery, lead complicated, tumultuous lives. A recent study has now found that these features must be rapidly built and destroyed. Star-Forming Collapse: [Image caption: A Hubble view of a molecular cloud, roughly two light-years long, that has broken off of the Carina Nebula. NASA/ESA, N. Smith (University of California, Berkeley)/The Hubble Heritage Team (STScI/AURA)] Molecular gas can be found throughout our galaxy in the form of eminently photogenic clouds (as featured throughout this post). Dense, cold molecular gas makes up more than 20% of the Milky Way's total gas mass, and gravitational instabilities within these clouds lead them to collapse under their own weight, resulting in the formation of our galaxy's stars. How does this collapse occur? The simplest explanation is that the clouds simply collapse in free fall, with no source of support to counter their contraction. But if all the molecular gas we observe collapsed on free-fall timescales, star formation in our galaxy would churn at a rate that is at least an order of magnitude higher than the observed 1-2 solar masses per year in the Milky Way. Destruction by Feedback: Astronomers have theorized that there may be some mechanism that supports these clouds against gravity, slowing their collapse. But both theoretical studies and observations of the clouds have ruled out most of these potential mechanisms, and mounting evidence supports the original interpretation that molecular clouds are simply gravitationally collapsing. [Image caption: A sub-mm image from ESO's APEX telescope of part of the Taurus molecular cloud, roughly ten light-years long, superimposed on a visible-light image of the region. ESO/APEX (MPIfR/ESO/OSO)/A. Hacar et al./Digitized Sky Survey 2. Acknowledgment: Davide De Martin] If this is indeed the case, then one explanation for our low observed star formation rate could be that molecular clouds are rapidly destroyed by feedback from the very stars

  19. Cloud Computing: An Overview

    Libor Sarga

    2012-10-01

    As cloud computing is gaining acclaim as a cost-effective alternative to acquiring processing resources for corporations, scientific applications and individuals, various challenges are rapidly coming to the fore. While academia struggles to procure a concise definition, corporations are more interested in the competitive advantages it may generate and individuals view it as a way of speeding up data access times or a convenient backup solution. Properties of the cloud architecture largely preclude usage of existing practices, while achieving end-users’ and companies’ compliance requires considering multiple infrastructural as well as commercial factors, such as sustainability in case of cloud-side interruptions, identity management and off-site corporate data handling policies. The article overviews recent attempts at formal definitions of cloud computing, summarizes and critically evaluates proposed delimitations, and specifies challenges associated with its further proliferation. Based on the conclusions, future directions in the field of cloud computing are also briefly hypothesized to include a deeper focus on community clouds and bolstering innovative cloud-enabled platforms and devices such as tablets and smart phones, as well as entertainment applications.

  20. Cloud Computing Law

    Millard, Christopher

    2013-01-01

    This book is about the legal implications of cloud computing. In essence, ‘the cloud’ is a way of delivering computing resources as a utility service via the internet. It is evolving very rapidly with substantial investments being made in infrastructure, platforms and applications, all delivered ‘as a service’. The demand for cloud resources is enormous, driven by such developments as the deployment on a vast scale of mobile apps and the rapid emergence of ‘Big Data’. Part I of this book explains what cloud computing is and how it works. Part II analyses contractual relationships between cloud service providers and their customers, as well as the complex roles of intermediaries. Drawing on primary research conducted by the Cloud Legal Project at Queen Mary University of London, cloud contracts are analysed in detail, including the appropriateness and enforceability of ‘take it or leave it’ terms of service, as well as the scope for negotiating cloud deals. Specific arrangements for public sect...

  1. MeReg: Managing Energy-SLA Tradeoff for Green Mobile Cloud Computing

    Rahul Yadav

    2017-01-01

    Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms for VM consolidation are required during detected host underload and overload. After allocating resources to all VMs, underloaded hosts are required to assume an energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristics energy-aware algorithm, which creates an upper CPU utilization threshold using recent CPU utilization history to detect overloaded hosts, and dynamic VM selection algorithms to consolidate the VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
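
    The adaptive upper utilization threshold can be illustrated with a simple rule: the more variable a host's recent CPU history, the lower (more conservative) the overload threshold. The Python sketch below uses the median absolute deviation as the spread measure; the constants and the exact spread measure are assumptions for illustration rather than the paper's published heuristic.

        import statistics

        def adaptive_upper_threshold(cpu_history, safety=2.5):
            """Derive an upper CPU-utilization threshold from a host's recent
            utilization history: more variability yields a lower threshold.
            'safety' is an illustrative tuning constant."""
            median = statistics.median(cpu_history)
            mad = statistics.median(abs(u - median) for u in cpu_history)
            return max(0.0, 1.0 - safety * mad)

        def is_overloaded(cpu_history, current_utilization):
            return current_utilization > adaptive_upper_threshold(cpu_history)

        if __name__ == "__main__":
            stable_host = [0.52, 0.55, 0.51, 0.53, 0.54, 0.52]
            bursty_host = [0.30, 0.85, 0.40, 0.90, 0.35, 0.80]
            print(adaptive_upper_threshold(stable_host))   # close to 1.0
            print(adaptive_upper_threshold(bursty_host))   # noticeably lower
            print(is_overloaded(bursty_host, 0.7))         # True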

  2. An Architecture for Cross-Cloud System Management

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
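
    The homogeneous management layer argued for above is essentially an adapter pattern: management code is written once against a common interface, and each provider supplies its own adapter. The Python sketch below illustrates that structure with a stubbed adapter; the class and method names are illustrative and do not reproduce the authors' architecture or any real provider SDK.

        from abc import ABC, abstractmethod
        from typing import List

        class ComputeProvider(ABC):
            """Uniform management interface; each cloud provider gets its own adapter."""

            @abstractmethod
            def start_instance(self, image: str, size: str) -> str: ...

            @abstractmethod
            def stop_instance(self, instance_id: str) -> None: ...

            @abstractmethod
            def list_instances(self) -> List[str]: ...

        class StubEC2Adapter(ComputeProvider):
            """Stand-in adapter: a real one would translate these calls into requests
            to the provider's API (e.g., EC2) instead of keeping an in-memory list."""

            def __init__(self):
                self._instances = []

            def start_instance(self, image: str, size: str) -> str:
                instance_id = "i-{:04d}".format(len(self._instances))
                self._instances.append(instance_id)
                return instance_id

            def stop_instance(self, instance_id: str) -> None:
                self._instances.remove(instance_id)

            def list_instances(self) -> List[str]:
                return list(self._instances)

        def scale_out(provider: ComputeProvider, image: str, size: str, n: int) -> List[str]:
            """Management logic written once against the interface works with any adapter."""
            return [provider.start_instance(image, size) for _ in range(n)]

        if __name__ == "__main__":
            cloud = StubEC2Adapter()
            print(scale_out(cloud, image="worker-image", size="m1.small", n=3))
            print(cloud.list_instances())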

  3. Diffuse interstellar clouds

    Black, J.H.

    1987-01-01

    The author defines and discusses the nature of diffuse interstellar clouds. He discusses how they contribute to the general extinction of starlight. The atomic and molecular species that have been identified in the ultraviolet, visible, and near infrared regions of the spectrum of a diffuse cloud are presented. The author illustrates some of the practical considerations that affect absorption line observations of interstellar atoms and molecules. Various aspects of the theoretical description of diffuse clouds required for a full interpretation of the observations are discussed

  4. Cloud Computing Security

    Ngongang, Guy

    2011-01-01

    This project aimed to show that it is possible to use a network intrusion detection system in the cloud. Security in the cloud is a concern nowadays, and security professionals are still finding means to make cloud computing more secure. First of all, the installation of ESX 4.0, vCenter Server and vCenter Lab Manager on server hardware was successful in building the platform. This allowed the creation and deployment of many virtual servers. Those servers have operating systems and a...

  5. Aerosols, clouds and radiation

    Twomey, S [University of Arizona, Tucson, AZ (USA). Inst. of Atmospheric Physics]

    1991-01-01

    Most of the so-called 'CO2 effect' is, in fact, an 'H2O effect' brought into play by the climate modeler's assumption that planetary average temperature dictates water-vapor concentration (following Clapeyron-Clausius). That assumption ignores the removal process, which cloud physicists know to be influenced by the aerosol, since the latter primarily controls cloud droplet number and size. Droplet number and size are also influential for shortwave (solar) energy. The reflectance of many thin to moderately thick clouds changes when nuclei concentrations change, making shortwave albedo susceptible to aerosol influence.

  6. Trusted cloud computing

    Krcmar, Helmut; Rumpe, Bernhard

    2014-01-01

    This book documents the scientific results of the projects related to the Trusted Cloud Program, covering fundamental aspects of trust, security, and quality of service for cloud-based services and applications. These results aim to allow trustworthy IT applications in the cloud by providing a reliable and secure technical and legal framework. In this domain, business models, legislative circumstances, technical possibilities, and realizable security are closely interwoven and thus are addressed jointly. The book is organized in four parts on "Security and Privacy", "Software Engineering and

  7. Neurobiological heterogeneity in ADHD

    de Zeeuw, P.

    2011-01-01

    Attention-Deficit/Hyperactivity Disorder (ADHD) is a highly heterogeneous disorder clinically. Symptoms take many forms, from subtle but pervasive attention problems or dreaminess up to disruptive and unpredictable behavior. Interestingly, early neuroscientific work on ADHD assumed either a

  8. Heterogeneous Calculation of ε

    Jonsson, Alf

    1961-02-15

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik - Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti - Mercury computer.

  9. Heterogeneous Calculation of ε

    Jonsson, Alf

    1961-02-01

    A heterogeneous method of calculating the fast fission factor given by Naudet has been applied to the Carlvik - Pershagen definition of ε. An exact calculation of the collision probabilities is included in the programme developed for the Ferranti - Mercury computer

  10. Is This Work Sustainable? Teacher Turnover and Perceptions of Workload in Charter Management Organizations

    Torres, A. Chris

    2016-01-01

    An unsustainable workload is considered the primary cause of teacher turnover at Charter Management Organizations (CMOs), yet most reports provide anecdotal evidence to support this claim. This study uses 2010-2011 survey data from one large CMO and finds that teachers' perceptions of workload are significantly associated with decisions to leave…

  11. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single-versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.

  12. The Impacts of Different Types of Workload Allocation Models on Academic Satisfaction and Working Life

    Vardi, Iris

    2009-01-01

    Increasing demands on academic work have resulted in many academics working long hours and expressing dissatisfaction with their working life. These concerns have led to a number of faculties and universities adopting workload allocation models to improve satisfaction and better manage workloads. This paper reports on a study which examined the…

  13. EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes

    Beal, Carole R.; Galan, Federico Cirett

    2012-01-01

    In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…

  14. Understanding the Effect of Workload on Automation Use for Younger and Older Adults

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2018-01-01

    Objective: This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Background: Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Method: Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Results: Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults’ performance was negatively affected by workload, but their compliance and reliance were unaffected. Conclusion: Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Application: Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training. PMID:22235529

  15. Simulation-based computation of the workload correlation function in a Lévy-driven queue

    Glynn, P.W.; Mandjes, M.

    2011-01-01

    In this paper we consider a single-server queue with Lévy input, and, in particular, its workload process (Q_t)_{t≥0}, focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t) / Var(Q_0) (assuming that the workload process is in stationarity at time 0), we first

  16. Simulation-based computation of the workload correlation function in a Lévy-driven queue

    P. Glynn; M.R.H. Mandjes (Michel)

    2009-01-01

    In this paper we consider a single-server queue with Lévy input, and in particular its workload process (Q_t), focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t)/Var(Q_0) (assuming the workload process is in stationarity at time 0), we

  17. Simulation-based computation of the workload correlation function in a Lévy-driven queue

    P. Glynn; M.R.H. Mandjes (Michel)

    2010-01-01

    In this paper we consider a single-server queue with Lévy input, and in particular its workload process (Q_t), focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t)/Var(Q_0) (assuming the workload process is in stationarity at time 0), we
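
    Records 15–17 all concern estimating r(t) by simulation. A minimal sketch of that idea, under the assumption that the input is compound Poisson with exponential jumps (so the workload is that of an M/M/1 queue, a special Lévy-driven queue), simulates one long stationary path and forms the empirical autocovariance; the discretisation step, rates, and horizon below are illustrative choices, not the papers' settings.

```python
# Simulate the reflected workload path and estimate r(t) = Cov(Q_0, Q_t)/Var(Q_0).
import numpy as np

rng = np.random.default_rng(1)
lam, mu = 0.7, 1.0                 # arrival rate and service rate (load rho = 0.7)
dt, horizon, burn_in = 0.01, 5000.0, 500.0

steps = int(horizon / dt)
q = np.empty(steps)
w = 0.0
for i in range(steps):
    # Compound-Poisson input minus unit drift, reflected at zero.
    arrivals = rng.poisson(lam * dt)
    w = max(w + rng.exponential(1.0 / mu, arrivals).sum() - dt, 0.0)
    q[i] = w

q = q[int(burn_in / dt):]          # discard the warm-up period
qc = q - q.mean()
var0 = qc.var()

def r(t):
    """Empirical estimate of Cov(Q_0, Q_t) / Var(Q_0) at lag t."""
    k = int(t / dt)
    return float(np.dot(qc[:-k], qc[k:]) / (len(qc) - k) / var0)

print([round(r(t), 3) for t in (0.5, 1.0, 2.0, 5.0)])
```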

  18. Mental workload measurement in operator control room using NASA-TLX

    Sugarindra, M.; Suryoputro, M. R.; Permana, A. I.

    2017-12-01

    The workload encountered by workers is a combination of physical workload and mental workload and is a consequence of their activities. The central control room is one department in an oil processing company; its employees are tasked with monitoring the processing units for 24 hours nonstop in a combination of 3 shifts of 8 hours. NASA-TLX (NASA Task Load Index) is a subjective mental workload measurement using six factors, namely Mental demand (MD), Physical demand (PD), Temporal demand (TD), Performance (OP), Effort (EF), and Frustration level (FR). Subjective measurement of mental workload is the most widely used approach because it has a high degree of validity. Based on the calculation of the mental workload, 5 units (DTU, NPU, HTU, DIST and OPS) monitored from the control room scored 94, 83.33, 94.67, 81.33 and 94.67 respectively, which is categorized as very high mental workload. The high level of mental workload on operators in the central control room reflects the requirement for high accuracy, alertness, and quick decision making.
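
    Where a weighted NASA-TLX score like those above needs to be reproduced, the standard procedure combines the six subscale ratings (0–100) with weights obtained from 15 pairwise comparisons. A minimal sketch, with hypothetical ratings and weights rather than the study's data:

```python
# NASA-TLX weighted ("overall workload") score from six subscale ratings and
# pairwise-comparison weights. All numbers below are hypothetical.
def nasa_tlx(ratings: dict, tally: dict) -> float:
    """ratings: subscale -> 0-100 score; tally: subscale -> times chosen in the
    15 pairwise comparisons (weights sum to 15)."""
    assert sum(tally.values()) == 15, "pairwise-comparison weights must sum to 15"
    return sum(ratings[k] * tally[k] for k in ratings) / 15.0

ratings = {"MD": 90, "PD": 40, "TD": 85, "OP": 70, "EF": 80, "FR": 60}
tally = {"MD": 5, "PD": 0, "TD": 4, "OP": 2, "EF": 3, "FR": 1}
print(round(nasa_tlx(ratings, tally), 2))  # weighted workload on a 0-100 scale
```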

  19. Understanding the effect of workload on automation use for younger and older adults.

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2011-12-01

    This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults' performance was negatively affected by workload, but their compliance and reliance were unaffected. Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training.

  20. Driving with varying secondary task levels: mental workload, behavioural effects, and task prioritization

    Schaap, Nina; van Arem, Bart; van der Horst, Richard; Brookhuis, Karel; Alkim, T.P.; Arentze, T.

    2010-01-01

    Advanced Driver Assistance (ADA) Systems may provide a solution for safety-critical traffic situations. But these systems are new additions to the vehicle that might increase drivers’ mental workload. How do drivers behave in situations with high mental workload, and do they actively prioritize

  1. An Investigation of the Workload and Job Satisfaction of North Carolina's Special Education Directors

    Cash, Jennifer Brown

    2013-01-01

    Keywords: special education directors, workload, job satisfaction, special education administration. The purpose of this mixed methods research study was to investigate employee characteristics, workload, and job satisfaction of special education directors employed by local education agencies in North Carolina (N = 115). This study illuminates the…

  2. Mental workload measurement for emergency operating procedures in digital nuclear power plants.

    Gao, Qin; Wang, Yang; Song, Fei; Li, Zhizhong; Dong, Xiaolu

    2013-01-01

    Mental workload is a major consideration for the design of emergency operating procedures (EOPs) in nuclear power plants. Continuous and objective measures are desired. This paper compares seven mental workload measurement methods (pupil size, blink rate, blink duration, heart rate variability, parasympathetic/sympathetic ratio, total power, and a GOMS-KLM (Goals, Operators, Methods, and Selection rules - Keystroke Level Model)-based workload index) with regard to sensitivity, validity and intrusiveness. Eighteen participants performed two computerised EOPs of different complexity levels, and mental workload measures were collected during the experiment. The results show that the blink rate is sensitive to both the difference in overall task complexity and changes in peak complexity within EOPs, that the error rate is sensitive to the level of arousal and correlates with the step error rate, and that blink duration increases over the task period in both low- and high-complexity EOPs. Cardiac measures were able to distinguish tasks with different overall complexity. The intrusiveness of the physiological instruments is acceptable. Finally, the six physiological measures were integrated using the group method of data handling to predict perceived overall mental workload. The study compared seven measures for evaluating the mental workload associated with emergency operating procedures in nuclear power plants. An experiment with simulated procedures was carried out, and the results show that eye response measures are useful for assessing temporal changes of workload whereas cardiac measures are useful for evaluating the overall workload.

  3. The Influence of Nursing Faculty Workloads on Faculty Retention: A Case Study

    Wood, Jennifer J.

    2013-01-01

    Nursing faculty workloads have come to the forefront of discussion in nursing education. The National League for Nursing (NLN) has made nursing faculty workloads a high priority in nursing education. Included in the priorities are areas of creating reform through innovations in nursing education, evaluating reform through evaluation research, and…

  4. Nonparametric estimation of the stationary M/G/1 workload distribution function

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1-queue can be obtained by systematic sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ...
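
    As a companion to the abstract above, a minimal sketch of the systematic-sampling idea: simulate the M/G/1 workload (virtual waiting time) process, sample it at equally spaced epochs, and take the empirical distribution function of the samples as the nonparametric estimate. The arrival rate, gamma service times, and sampling interval below are illustrative assumptions, not the paper's setup.

```python
# Nonparametric estimate of the stationary M/G/1 workload CDF via systematic sampling.
import numpy as np

rng = np.random.default_rng(2)
lam = 0.8                                     # Poisson arrival rate
service = lambda n: rng.gamma(2.0, 0.5, n)    # mean service time 1.0 -> rho = 0.8

n_arrivals = 200_000
inter = rng.exponential(1.0 / lam, n_arrivals)
arrival_times = np.cumsum(inter)
svc = service(n_arrivals)

# Workload immediately after each arrival, via the Lindley-type recursion.
w_after = np.empty(n_arrivals)
w = 0.0
for i in range(n_arrivals):
    w = max(w - inter[i], 0.0) + svc[i]
    w_after[i] = w

# Systematic sampling of the workload process at epochs k * delta.
delta = 2.5
sample_times = np.arange(delta, arrival_times[-1], delta)
idx = np.searchsorted(arrival_times, sample_times, side="right") - 1
elapsed = sample_times - np.where(idx >= 0, arrival_times[idx], 0.0)
samples = np.where(idx >= 0, np.maximum(w_after[idx] - elapsed, 0.0), 0.0)

# Empirical (nonparametric) estimate of the stationary workload distribution function.
ecdf = lambda x: (samples <= x).mean()
print([round(ecdf(x), 3) for x in (0.0, 1.0, 2.0, 5.0)])
```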

  5. Driver's mental workload prediction model based on physiological indices.

    Yan, Shengyuan; Tran, Cong Chi; Wei, Yingying; Habiyaremye, Jean Luc

    2017-09-15

    Developing an early warning model to predict the driver's mental workload (MWL) is critical and helpful, especially for new or less experienced drivers. The present study aims to investigate the correlation between new drivers' MWL and their work performance, measured by the number of errors. Additionally, the group method of data handling is used to establish the driver's MWL predictive model based on subjective rating (NASA Task Load Index [NASA-TLX]) and six physiological indices. The results indicate that the NASA-TLX score and the number of errors are positively correlated, and the predictive model demonstrates its validity with an R² value of 0.745. The proposed model is expected to give new drivers a reference value for their MWL from the physiological indices, and driving lesson plans can be designed to sustain an appropriate MWL as well as improve the driver's work performance.
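
    The record above builds its predictive model with the group method of data handling; as a hedged illustration of the same idea, the sketch below substitutes an ordinary least-squares fit and synthetic data to show how an R² of the kind reported would be computed from six physiological indices.

```python
# Illustration only: least-squares stand-in for the paper's GMDH model,
# predicting a NASA-TLX score from six synthetic physiological indices.
import numpy as np

rng = np.random.default_rng(3)
n = 60
X = rng.normal(size=(n, 6))                  # six hypothetical physiological indices
true_w = np.array([4.0, 2.0, 0.0, 1.5, -1.0, 0.5])
y = 50 + X @ true_w + rng.normal(scale=3.0, size=n)   # synthetic NASA-TLX scores

Xd = np.column_stack([np.ones(n), X])        # add intercept
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
pred = Xd @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))                          # analogous to the paper's R² = 0.745
```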

  6. Improving pilot mental workload evaluation with combined measures.

    Wanyan, Xiaoru; Zhuang, Damin; Zhang, Huan

    2014-01-01

    Behavioral performance, subjective assessment based on NASA Task Load Index (NASA-TLX), as well as physiological measures indexed by electrocardiograph (ECG), event-related potential (ERP), and eye tracking data were used to assess the mental workload (MW) related to flight tasks. Flight simulation tasks were carried out by 12 healthy participants under different MW conditions. The MW conditions were manipulated by setting the quantity of flight indicators presented on the head-up display (HUD) in the cruise phase. In this experiment, the behavioral performance and NASA-TLX could reflect the changes of MW ideally. For physiological measures, the indices of heart rate variability (HRV), P3a, pupil diameter and eyelid opening were verified to be sensitive to MW changes. Our findings can be applied to the comprehensive evaluation of MW during flight tasks and the further quantitative classification.

  7. Relating physician's workload with errors during radiation therapy planning.

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B

    2014-01-01

    To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between WL and performance was assessed via ordinal logistic regression. There was an increased rate of severity grade of errors with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores and the majority of the performance errors were among the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.
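
    A minimal sketch of the ordinal logistic regression described above, using synthetic data in place of the study's and assuming statsmodels >= 0.12 for OrderedModel; the severity grades and the 'tlx' predictor name are illustrative.

```python
# Ordinal logistic regression of error-severity grade on NASA-TLX scores (synthetic data).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 27                                   # e.g. 9 physicians x 3 planning cases
tlx = rng.uniform(20, 90, n)             # hypothetical NASA-TLX scores
# Synthetic link: higher workload raises the chance of a more severe error grade.
latent = 0.05 * tlx + rng.logistic(size=n)
grade = pd.Series(pd.cut(latent, bins=[-np.inf, 2.5, 4.0, np.inf],
                         labels=[0, 1, 2], ordered=True))

model = OrderedModel(grade, pd.DataFrame({"tlx": tlx}), distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())                     # the 'tlx' coefficient is the WL effect
```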

  8. Linking the Pilot Structural Model and Pilot Workload

    Bachelder, Edward; Hess, Ronald; Aponso, Bimal; Godfroy-Cooper, Martine

    2018-01-01

    Behavioral models are developed that closely reproduce the pulsive control response of two pilots using markedly different control techniques while conducting a tracking task. An intriguing finding was that the pilots appeared to: 1) produce a continuous, internally-generated stick signal that they integrated in time; 2) integrate the actual stick position; and 3) compare the two integrations to either issue or cease a pulse command. This suggests that the pilots utilized kinesthetic feedback in order to sense and integrate stick position, supporting the hypothesis that pilots can access and employ the proprioceptive inner feedback loop proposed by Hess's pilot Structural Model. A Pilot Cost Index was developed, whose elements include estimated workload, performance, and the degree to which the pilot employs kinesthetic feedback. Preliminary results suggest that a pilot's operating point (parameter values) may be based on control style and index minimization.
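
    The three-step pulse logic in that abstract can be read as a simple comparator; the sketch below is a speculative illustration of that reading, not the paper's model, with the internal command signal, threshold, and time step chosen arbitrarily.

```python
# Speculative sketch: integrate an internal command and the actual stick position,
# and issue (or cease) a pulse when the two integrals diverge past a threshold.
import numpy as np

dt, t = 0.01, np.arange(0.0, 5.0, 0.01)
internal_cmd = np.sin(0.8 * np.pi * t)          # hypothetical internal command
stick = np.zeros_like(t)                        # actual stick position (pulsive)
pulse = np.zeros_like(t, dtype=bool)

int_cmd = int_stick = 0.0
threshold = 0.05
for i in range(1, len(t)):
    int_cmd += internal_cmd[i] * dt             # integral of internal signal
    int_stick += stick[i - 1] * dt              # integral of actual stick position
    pulse[i] = abs(int_cmd - int_stick) > threshold
    stick[i] = np.sign(int_cmd - int_stick) if pulse[i] else 0.0

print(f"fraction of time a pulse is commanded: {pulse.mean():.2f}")
```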

  9. Decision Tree Rating Scales for Workload Estimation: Theme and Variations

    Wierwille, W. W.; Skipper, J. H.; Rieger, C. A.

    1984-01-01

    The modified Cooper-Harper (MCH) scale has been shown to be a sensitive indicator of workload in several different types of aircrew tasks. The MCH scale was examined to determine if certain variations of the scale might provide even greater sensitivity and to determine the reasons for the sensitivity of the scale. The MCH scale and five newly devised scales were studied in two different aircraft simulator experiments in which pilot loading was treated as an independent variable. Results indicate that while one of the new scales may be more sensitive in a given experiment, task dependency is a problem. The MCH scale exhibits consistent sensitivity and remains the scale recommended for general use. The results of the rating scale experiments are presented and the questionnaire results which were directed at obtaining a better understanding of the reasons for the relative sensitivity of the MCH scale and its variations are described.

  10. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

    The present era is one of Information and Communication Technology (ICT), and a great deal of research is under way on Cloud Computing and Mobile Cloud Computing, covering topics such as security issues, data management, and load balancing. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing model are resource sharing and pooling among end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  11. HETEROGENEOUS INTEGRATION TECHNOLOGY

    2017-08-24

    AFRL-RY-WP-TR-2017-0168, Heterogeneous Integration Technology, Dr. Burhan Bayraktaroglu, Devices for Sensing Branch, Aerospace Components & Subsystems… Final report, September 1, 2016 – May 1, 2017. …provide a structure for this review. The history and the current status of integration technologies in each category are examined and product examples are…

  12. Molecular clouds near supernova remnants

    Wootten, H.A.

    1978-01-01

    The physical properties of molecular clouds near supernova remnants were investigated. Various properties of the structure and kinematics of these clouds are used to establish their physical association with well-known remnants. An infrared survey of the most massive clouds revealed embedded objects, probably stars whose formation was induced by the supernova blast wave. In order to understand the relationship between these and other molecular clouds, a control group of clouds was also observed. Excitation models for dense regions of all the clouds are constructed to evaluate molecular abundances in these regions. Those clouds that have embedded stars have lower molecular abundances than the clouds that do not. A cloud near the W28 supernova remnant also has low abundances. Molecular abundances are used to measure an important parameter, the electron density, which is not directly observable. In some clouds extensive deuterium fractionation is observed, which confirms electron density measurements in those clouds. Where large deuterium fractionation is observed, the ionization rate in the cloud interior can also be measured. The electron density and ionization rate in the cloud near W28 are higher than in most clouds. The molecular abundances and electron densities are functions of the chemical and dynamical state of evolution of the cloud. Those clouds with the lowest abundances are probably the youngest clouds. As low-abundance clouds, some clouds near supernova remnants may have been recently swept from the local interstellar material. Supernova remnants provide sites for star formation in ambient clouds by compressing them, and they sweep new clouds from more diffuse local matter

  13. On-call emergency workload of a general surgical team.

    Jawaid, Masood; Raza, Syed Muhammad; Alam, Shams Nadeem; Manzar, S

    2009-01-01

    To examine the on-call emergency workload of a general surgical team at a tertiary care teaching hospital to guide planning and provision of better surgical services. During a six-month period from August to January 2007, all emergency calls attended by the general surgical team of Surgical Unit II in the Accident and Emergency department (A and E) and in other units of Civil Hospital, Karachi, Pakistan were prospectively recorded. Data recorded included timing of call, diagnosis, operation performed and outcome, apart from demography. A total of 456 patients (326 males and 130 females) were attended by the on-call general surgery team during 30 emergency days. Most of the calls, 191 (41.9%), were received from 8 am to 5 pm. 224 (49.1%) calls were for abdominal pain, with acute appendicitis being the most common specific pathology, in 41 (9.0%) patients. A total of 73 (16.0%) calls were received for trauma. A total of 131 (28.7%) patients were admitted to the surgical unit for urgent operation or observation, while 212 (46.5%) patients were discharged from A and E. 92 (20.1%) patients were referred to other units, with medical referrals accounting for 45 (9.8%) patients. A total of 104 (22.8%) emergency surgeries were done, and the most common procedure performed was appendicectomy, in 34 (32.7%) patients. The major workload of the on-call surgical emergency team is dealing with acute conditions of the abdomen. However, a significant proportion of patients are suffering from other conditions, including trauma, that require a holistic approach to care and a wide range of skills and experience. These results have important implications for future healthcare planning and for the better training of general surgical residents.

  14. On-call emergency workload of a general surgical team

    Jawaid Masood

    2009-01-01

    Background: To examine the on-call emergency workload of a general surgical team at a tertiary care teaching hospital to guide planning and provision of better surgical services. Patients and Methods: During a six-month period from August to January 2007, all emergency calls attended by the general surgical team of Surgical Unit II in the Accident and Emergency department (A and E) and in other units of Civil Hospital, Karachi, Pakistan were prospectively recorded. Data recorded included timing of call, diagnosis, operation performed and outcome, apart from demography. Results: A total of 456 patients (326 males and 130 females) were attended by the on-call general surgery team during 30 emergency days. Most of the calls, 191 (41.9%), were received from 8 am to 5 pm. 224 (49.1%) calls were for abdominal pain, with acute appendicitis being the most common specific pathology, in 41 (9.0%) patients. A total of 73 (16.0%) calls were received for trauma. A total of 131 (28.7%) patients were admitted to the surgical unit for urgent operation or observation, while 212 (46.5%) patients were discharged from A and E. 92 (20.1%) patients were referred to other units, with medical referrals accounting for 45 (9.8%) patients. A total of 104 (22.8%) emergency surgeries were done, and the most common procedure performed was appendicectomy, in 34 (32.7%) patients. Conclusion: The major workload of the on-call surgical emergency team is dealing with acute conditions of the abdomen. However, a significant proportion of patients are suffering from other conditions, including trauma, that require a holistic approach to care and a wide range of skills and experience. These results have important implications for future healthcare planning and for the better training of general surgical residents.

  15. Severity and workload related to adverse events in the ICU.

    Serafim, Clarita Terra Rodrigues; Dell'Acqua, Magda Cristina Queiroz; Castro, Meire Cristina Novelli E; Spiri, Wilza Carla; Nunes, Hélio Rubens de Carvalho

    2017-01-01

    To analyze whether an increase in patient severity and nursing workload is correlated with a greater incidence of adverse events (AEs) in critical patients. A prospective single cohort study was performed on a sample of 138 patients hospitalized in an intensive care unit (ICU). A total of 166 AEs occurred, affecting 50.7% of the patients. Increased patient severity presented a direct relationship to the probability of AEs occurring. However, nursing workload did not present a statistically significant relationship with the occurrence of AEs. The results cast light on the importance of the nursing personnel using evaluation tools in order to optimize their daily activities and focus on patient safety.

  16. Taxonomy of cloud computing services

    Hoefer, C.N.; Karagiannis, Georgios

    2010-01-01

    Cloud computing is a highly discussed topic, and many big players of the software industry are entering the development of cloud services. Several companies want to explore the possibilities and benefits of cloud computing, but with the amount of cloud computing services increasing quickly, the need

  17. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC

    Xiangyu Li

    2017-02-01

    This paper proposes a scheduling and power management solution for an energy harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a heterogeneous multi-core system oriented task scheduling algorithm and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for light-weight platforms. Moreover, considering that the power consumption of most WSN applications is data dependent, we also introduce a branch-handling mechanism into the solution. The experimental results show that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it can make the system do more valuable work while using more than 99.9% of the power budget.
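
    As a loose illustration of the joint dynamic voltage-frequency and workload scaling idea in that record (not the paper's algorithm), the sketch below picks the fastest voltage/frequency level whose energy cost fits the harvested-energy budget for an epoch and sheds optional work when even the lowest level cannot; all frequency and power numbers are hypothetical.

```python
# Hypothetical (freq MHz, active power mW) pairs for a WSN node; energy = power * time.
LEVELS = [(16, 6.0), (8, 2.4), (4, 1.0), (1, 0.2)]

def schedule(cycles: float, deadline_s: float, energy_budget_mj: float):
    """Return (freq, fraction of workload kept) for one scheduling epoch."""
    for freq, power in LEVELS:                      # fastest level first
        t = cycles / (freq * 1e6)                   # execution time in seconds
        if t <= deadline_s and power * t <= energy_budget_mj:
            return freq, 1.0
    # No level fits: keep only the fraction that the lowest level can afford.
    freq, power = LEVELS[-1]
    affordable_t = min(deadline_s, energy_budget_mj / power)
    kept = min(1.0, affordable_t * freq * 1e6 / cycles)
    return freq, kept

print(schedule(cycles=8e6, deadline_s=2.0, energy_budget_mj=3.0))
```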

  18. A self-analysis of the NASA-TLX workload measure.

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.

  19. Effects of workload on teachers' functioning: A moderated mediation model including sleeping problems and overcommitment.

    Huyghebaert, Tiphaine; Gillet, Nicolas; Beltou, Nicolas; Tellier, Fanny; Fouquereau, Evelyne

    2018-06-14

    This study investigated the mediating role of sleeping problems in the relationship between workload and outcomes (emotional exhaustion, presenteeism, job satisfaction, and performance), and overcommitment was examined as a moderator in the relationship between workload and sleeping problems. We conducted an empirical study using a sample of 884 teachers. Consistent with our predictions, results revealed that the positive indirect effects of workload on emotional exhaustion and presenteeism, and the negative indirect effects of workload on job satisfaction and performance, through sleeping problems, were only significant among overcommitted teachers. Workload and overcommitment were also directly related to all four outcomes; specifically, both were positively related to emotional exhaustion and presenteeism and negatively related to job satisfaction and performance. Theoretical contributions, perspectives, and implications for practice are discussed. Copyright © 2018 John Wiley & Sons, Ltd.
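
    The abstract above describes a moderated mediation: the workload-to-sleeping-problems path depends on overcommitment, and the indirect effect on an outcome such as exhaustion is conditional on that moderator. A hedged sketch of that logic on synthetic data (variable names and effect sizes are invented):

```python
# Moderated mediation via two regressions: conditional indirect effect (a1 + a3*OC) * b.
import numpy as np

rng = np.random.default_rng(5)
n = 884                                        # matches the study's sample size
workload = rng.normal(size=n)
overcommit = rng.normal(size=n)
sleep = 0.2 * workload + 0.3 * workload * overcommit + rng.normal(size=n)
exhaustion = 0.4 * sleep + 0.3 * workload + rng.normal(size=n)

def ols(y, *cols):
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# a-path with interaction: sleep ~ workload + overcommit + workload*overcommit
_, a1, _, a3 = ols(sleep, workload, overcommit, workload * overcommit)
# b-path: exhaustion ~ sleep + workload
_, b, _ = ols(exhaustion, sleep, workload)

for oc in (-1.0, 0.0, 1.0):                    # low, mean, high overcommitment
    print(oc, round((a1 + a3 * oc) * b, 3))    # conditional indirect effect
print("index of moderated mediation:", round(a3 * b, 3))
```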

  20. Cloud Computing (1/2)

    CERN. Geneva

    2012-01-01

    Cloud computing, the buzzword of recent years for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?" by identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), and cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization, and Utility Computing will be discussed and analyzed.