WorldWideScience

Sample records for heterogeneous cloud workloads

  1. Cloudweaver: Adaptive and Data-Driven Workload Manager for Generic Clouds

    Science.gov (United States)

    Li, Rui; Chen, Lei; Li, Wen-Syan

    Cloud computing denotes the latest trend in application development for parallel computing on massive data volumes. It relies on clouds of servers to handle tasks that used to be managed by an individual server. With cloud computing, software vendors can provide business intelligence and data analytic services for internet-scale data sets. Many open source projects, such as Hadoop, offer various software components that are essential for building a cloud infrastructure. Hadoop (like many other frameworks) currently requires users to configure the cloud infrastructure via programs and APIs, and such a configuration is fixed for the duration of the run. In this chapter, we propose a workload manager (WLM), called CloudWeaver, which provides automated configuration of a cloud infrastructure for runtime execution. The workload management is data-driven and can adapt to the dynamic nature of operator throughput during different execution phases. CloudWeaver works for a single job or a workload consisting of multiple jobs running concurrently, and aims at maximum throughput using a minimum set of processors.
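
    The abstract does not spell out CloudWeaver's scheduling algorithm, so the following Python sketch only illustrates the general idea of data-driven reallocation: measure per-operator throughput during a phase and hand processors to the operators that need them most. The Operator class, the rebalance function, and all numbers are hypothetical.

      # Hypothetical sketch of throughput-driven processor reassignment (not CloudWeaver's actual algorithm).
      from dataclasses import dataclass

      @dataclass
      class Operator:
          name: str
          throughput: float   # records/s observed during the last execution phase
          processors: int     # processors currently assigned

      def rebalance(operators, total_procs):
          """Reassign processors in proportion to observed per-operator load,
          keeping at least one processor each; a real manager would reconcile rounding."""
          total = sum(op.throughput for op in operators) or 1.0
          for op in operators:
              op.processors = max(1, round(op.throughput / total * total_procs))
          return operators

      ops = [Operator("scan", 900.0, 4), Operator("join", 300.0, 4), Operator("aggregate", 100.0, 4)]
      for op in rebalance(ops, total_procs=12):
          print(op.name, op.processors)   # the bottleneck operator gets most of the processors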

  2. Clean Energy Use for Cloud Computing Federation Workloads

    Directory of Open Access Journals (Sweden)

    Yahav Biran

    2017-08-01

    Full Text Available Cloud providers seek to maximize their market share. Traditionally, they deploy datacenters with sufficient capacity to accommodate their entire computing demand while maintaining geographical affinity to their customers. Achieving these goals as a single cloud provider is increasingly unrealistic from a cost-of-ownership perspective. Moreover, the carbon emissions from underutilized datacenters place an increasing demand on electricity and are a growing factor in the cost of cloud provider datacenters. Cloud-based systems may be classified into two categories: serving systems and analytical systems. We studied two primary workload types, on-demand video streaming as a serving system and MapReduce jobs as an analytical system, and suggest a distinct energy mix for processing each workload. The recognition that on-demand video streaming now constitutes the bulk of traffic to Internet consumers provides a path to mitigate rising energy demand. On-demand video is usually served through Content Delivery Networks (CDNs), often scheduled in backend and edge datacenters. This publication describes a CDN deployment solution that utilizes green energy to supply the on-demand streaming workload. A cross-cloud provider collaboration will allow cloud providers to both operate near their customers and reduce operational costs, primarily by lowering the datacenter deployments per provider ratio. Our approach optimizes cross-datacenter deployment. Specifically, we model an optimized CDN-edge instance allocation system that maximizes, under a set of realistic constraints, green energy utilization. The architecture of this cross-cloud coordinator service is based on Ubernetes, an open source container cluster manager that is a federation of Kubernetes clusters. It is shown how, under reasonable constraints, it can reduce the projected datacenter carbon emissions growth by 22% from the currently reported consumption. We also suggest operating

  3. Evolutionary Multiobjective Query Workload Optimization of Cloud Data Warehouses

    Science.gov (United States)

    Dokeroglu, Tansel; Sert, Seyyit Alper; Cinar, Muhammet Serkan

    2014-01-01

    With the advent of Cloud databases, query optimizers need to find Pareto-optimal solutions in terms of response time and monetary cost. Our novel approach minimizes both objectives by deploying alternative virtual resources and query plans, making use of the virtual resource elasticity of the Cloud. We propose an exact multiobjective branch-and-bound algorithm and a robust multiobjective genetic algorithm for the optimization of distributed data warehouse query workloads on the Cloud. In order to investigate the effectiveness of our approach, we incorporate the devised algorithms into a prototype system. Finally, through several experiments conducted with different workloads and virtual resource configurations, we report notable findings on alternative deployments as well as the advantages and disadvantages of the multiobjective algorithms we propose. PMID:24892048
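
    As a concrete illustration of the Pareto-optimality notion used above, the short Python sketch below filters a set of candidate (response time, monetary cost) query plans down to the non-dominated ones. The plan tuples are invented for illustration, and the filter is not the paper's branch-and-bound or genetic algorithm.

      # Hypothetical Pareto-dominance filter over (response_time_s, cost_usd) plan candidates.
      def dominates(a, b):
          """a dominates b if it is no worse in both objectives and strictly better in one."""
          return a[0] <= b[0] and a[1] <= b[1] and a != b

      def pareto_front(plans):
          return [p for p in plans if not any(dominates(q, p) for q in plans)]

      plans = [(120.0, 0.40), (90.0, 0.55), (200.0, 0.25), (95.0, 0.60)]
      print(pareto_front(plans))   # (95.0, 0.60) is dropped: (90.0, 0.55) dominates it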

  4. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aiming to resolve the imbalance of resources and workloads at data centers, together with the overhead and high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud-model time-series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by building a workload time series with the cloud model, and stipulating a general VM migration criterion, workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host to carry out the migration. Experimental results and analyses show, through comparison with peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, improving the utilization of resources in the entire data center.
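
    The record describes the migration trigger only at a high level, so the Python sketch below shows one hedged reading of the workload-aware criterion: migrate only when both the current and the predicted host workload fall outside the configured bounds, so momentary peaks are ignored. The moving-average predictor stands in for the paper's cloud-model time-series prediction, and the thresholds are invented.

      # Hypothetical sketch of a prediction-gated migration trigger (WAM-style), not the paper's exact method.
      def predict_next(history, window=3):
          recent = history[-window:]
          return sum(recent) / len(recent)      # stand-in for the cloud-model predictor

      def needs_migration(history, lower=0.2, upper=0.8):
          current, predicted = history[-1], predict_next(history)
          overloaded = current > upper and predicted > upper
          underloaded = current < lower and predicted < lower
          return overloaded or underloaded

      print(needs_migration([0.55, 0.60, 0.95]))   # False: a single spike, prediction stays ~0.70
      print(needs_migration([0.85, 0.90, 0.95]))   # True: sustained overload on the host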

  5. Workload Classification & Software Energy Measurement for Efficient Scheduling on Private Cloud Platforms

    OpenAIRE

    Smith, James W.; Sommerville, Ian

    2011-01-01

    At present there are a number of barriers to creating an energy efficient workload scheduler for a Private Cloud based data center. Firstly, the relationship between different workloads and power consumption must be investigated. Secondly, current hardware-based solutions to providing energy usage statistics are unsuitable in warehouse scale data centers where low cost and scalability are desirable properties. In this paper we discuss the effect of different workloads on server power consumpt...

  6. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; Buncic, P; De, K; Oleynik, D; Petrosyan, A; Jha, S; Mount, R; Porter, R J; Read, K F; Wells, J C; Vaniachine, A

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, O(10³) users, and the ATLAS data volume is O(10¹⁷) bytes. In 2013 we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center 'Kurchatov Institute' together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the

  7. Cloud-Based Parameter-Driven Statistical Services and Resource Allocation in a Heterogeneous Platform on Enterprise Environment

    Directory of Open Access Journals (Sweden)

    Sungju Lee

    2016-09-01

    Full Text Available A cloud-based parameter-driven statistical service is fundamental for enterprise users and has had a substantial impact on companies worldwide. In this paper, we demonstrate statistical analysis for certain data-related criteria and apply it on the cloud server to compare results. In addition, we present a statistical analysis and cloud-based resource allocation method for a heterogeneous platform environment by performing a data and information analysis that considers the application workload and the server capacity, and subsequently propose a service prediction model using polynomial regression. In particular, our aim is to provide stable service in a given large-scale enterprise cloud computing environment. The virtual machines (VMs) for cloud-based services are assigned to each server with a special methodology to satisfy the uniform utilization distribution model. It is also implemented between users and the platform, which is a main idea of our cloud computing system. Based on the experimental results, we confirm that our prediction model can provide sufficient resources for statistical services to large-scale users while satisfying the uniform utilization distribution.
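
    The abstract names a polynomial-regression service prediction model without giving its form. The Python sketch below shows the generic pattern one could use: fit resource demand as a polynomial of the observed workload, extrapolate to an expected load, and size the VM pool from the prediction. All data values, the polynomial degree, and the per-VM capacity are assumptions, not figures from the paper.

      # Hypothetical polynomial-regression sizing sketch; values are illustrative only.
      import numpy as np

      workload = np.array([100, 200, 400, 800, 1600])    # concurrent statistical requests
      cpu_demand = np.array([12, 22, 45, 95, 210])        # observed CPU demand (arbitrary units)

      coeffs = np.polyfit(workload, cpu_demand, deg=2)    # second-order polynomial fit
      predict = np.poly1d(coeffs)

      expected_load = 2400
      per_vm_capacity = 100.0                             # demand units one VM can absorb
      vms_needed = int(np.ceil(predict(expected_load) / per_vm_capacity))
      print(f"predicted demand {predict(expected_load):.0f}, provision {vms_needed} VMs")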

  8. Online Cloud Offloading Using Heterogeneous Enhanced Remote Radio Heads

    KAUST Repository

    Shnaiwer, Yousef N.

    2018-02-12

    This paper studies the cloud offloading gains of using heterogeneous enhanced remote radio heads (eRRHs) and dual-interface clients in fog radio access networks (F-RANs). First, the cloud offloading problem is formulated as a collection-of-independent-sets selection problem over a network coding graph, and its NP-hardness is shown. Therefore, a computationally simple online heuristic algorithm is proposed that maximizes cloud offloading by finding an efficient schedule of coded file transmissions from the eRRHs and the cloud base station (CBS). Furthermore, a lower bound on the average number of required CBS channels to serve all clients is derived. Simulation results show that our proposed framework, which uses both network coding and a heterogeneous F-RAN setting, enhances cloud offloading as compared to conventional homogeneous F-RANs with network coding.

  9. Online Cloud Offloading Using Heterogeneous Enhanced Remote Radio Heads

    KAUST Repository

    Shnaiwer, Yousef N.; Sorour, Sameh; Sadeghi, Parastoo; Al-Naffouri, Tareq Y.

    2018-01-01

    This paper studies the cloud offloading gains of using heterogeneous enhanced remote radio heads (eRRHs) and dual-interface clients in fog radio access networks (F-RANs). First, the cloud offloading problem is formulated as a collection

  10. Development of a Survivable Cloud Multi-Robot Framework for Heterogeneous Environments

    Directory of Open Access Journals (Sweden)

    Isaac Osunmakinde

    2014-10-01

    Full Text Available Cloud robotics is a paradigm that allows robots to offload computationally intensive and data storage requirements into the cloud by providing a secure and customizable environment. The challenge for cloud robotics is the inherent problem of cloud disconnection. A major assumption made in the development of current cloud robotics frameworks is that the connection between the cloud and the robot is always available. However, for multi-robots working in heterogeneous environments, the connection between the cloud and the robots cannot always be guaranteed. This work addresses the challenge of disconnection in cloud robotics by proposing a survivable cloud multi-robotics (SCMR) framework for heterogeneous environments. The SCMR framework leverages the combination of a virtual ad hoc network formed by robot-to-robot communication and a physical cloud infrastructure formed by robot-to-cloud communication. The quality of service (QoS) of the SCMR framework was tested and validated by determining the optimal energy utilization and time of response (ToR) on drivability analysis with and without cloud connection. The design trade-off, reflected in the results, is between the computation energy for robot execution and the offloading energy for cloud execution.
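
    The trade-off mentioned at the end of the abstract can be made concrete with a small energy comparison. The Python sketch below uses a plain power-times-time model to decide whether a robot should offload a task to the cloud or run it locally; the energy model, the connectivity flag, and every number are assumptions for illustration, not values from the SCMR evaluation.

      # Hypothetical compute-vs-offload energy comparison (power x time model).
      def local_energy(cycles, cpu_rate_hz, cpu_power_w):
          return cpu_power_w * (cycles / cpu_rate_hz)          # joules to run on the robot

      def offload_energy(data_bits, bandwidth_bps, radio_power_w):
          return radio_power_w * (data_bits / bandwidth_bps)   # joules to ship the task to the cloud

      cloud_connected = True    # SCMR falls back to robot-to-robot execution when this is False
      e_local = local_energy(4e9, cpu_rate_hz=1.2e9, cpu_power_w=3.0)        # ~10 J
      e_offload = offload_energy(8e6, bandwidth_bps=2e6, radio_power_w=1.5)  # ~6 J
      print("offload to cloud" if cloud_connected and e_offload < e_local else "run locally")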

  11. StackInsights: Cognitive Learning for Hybrid Cloud Readiness

    OpenAIRE

    Qiao, Mu; Bathen, Luis; Génot, Simon-Pierre; Lee, Sunhwan; Routray, Ramani

    2017-01-01

    Hybrid cloud is an integrated cloud computing environment utilizing a mix of public cloud, private cloud, and on-premise traditional IT infrastructures. Workload awareness, defined as a detailed, full-range understanding of each individual workload, is essential in implementing the hybrid cloud. While it is critical to perform an accurate analysis to determine which workloads are appropriate for on-premise deployment versus which workloads can be migrated to a cloud off-premise, the assessment...

  12. Services Recommendation System based on Heterogeneous Network Analysis in Cloud Computing

    OpenAIRE

    Junping Dong; Qingyu Xiong; Junhao Wen; Peng Li

    2014-01-01

    Resources are provided mainly in the form of services in cloud computing. In the distributed environment of cloud computing, how to find the needed services efficiently and accurately is the most urgent problem. In cloud computing, services are the intermediary of the cloud platform; they are connected by many service providers and requesters and form a complex heterogeneous network. The traditional recommendation systems only consider the functional and non-functi...

  13. The impact of horizontal heterogeneities, cloud fraction, and cloud dynamics on warm cloud effective radii and liquid water path from CERES-like Aqua MODIS retrievals

    OpenAIRE

    D. Painemal; P. Minnis; S. Sun-Mack

    2013-01-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES Edition 4 algorithms are averaged at the CERES footprint resolution (~ 20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean...

  14. Leveraging Cloud Heterogeneity for Cost-Efficient Execution of Parallel Applications

    OpenAIRE

    Roloff, Eduardo; Diener, Matthias; Diaz Carreño, Emmanuell; Gaspary, Luciano Paschoal; Navaux, Philippe O.A.

    2017-01-01

    Public cloud providers offer a wide range of instance types, with different processing and interconnection speeds, as well as varying prices. Furthermore, the tasks of many parallel applications show different computational demands due to load imbalance. These differences can be exploited for improving the cost efficiency of parallel applications in many cloud environments by matching application requirements to instance types. In this paper, we introduce the concept of heterogeneous cloud sy...

  15. KONGMING: Performance Prediction in the Cloud via Multidimensional Interference Surrogates

    Energy Technology Data Exchange (ETDEWEB)

    Bowen, Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Casas-Guix, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bagchi, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-15

    As more and more applications are deployed in the cloud, it is important for both the user and the operator of the cloud that the resources of the cloud are utilized efficiently. Virtualization and workload consolidation techniques are pervasively applied in the cloud to increase resource utilization while providing isolated execution environments for different users. While virtualization hides the architectural details of the underlying hardware, it can also increase the variability in application execution times due to heterogeneity in available hardware and interference from other applications sharing the same hardware resources. This both reduces the productivity of cloud platforms and limits the degree to which software colocation can be used to increase their efficiency.

  16. Resource allocation in heterogeneous cloud radio access networks: advances and challenges

    KAUST Repository

    Dahrouj, Hayssam; Douik, Ahmed S.; Dhifallah, Oussama Najeeb; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    , becomes a necessity. By connecting all the base stations from different tiers to a central processor (referred to as the cloud) through wire/wireline backhaul links, the heterogeneous cloud radio access network, H-CRAN, provides an open, simple

  17. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    Science.gov (United States)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one located in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  18. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    International Nuclear Information System (INIS)

    Llamas, Ramón Medrano; Megino, Fernando Harald Barreiro; Cinquilli, Mattia; Kucharczyk, Katarzyna; Denis, Marek Kamil

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one located in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  19. Real-time video streaming in mobile cloud over heterogeneous wireless networks

    Science.gov (United States)

    Abdallah-Saleh, Saleh; Wang, Qi; Grecos, Christos

    2012-06-01

    Recently, the concept of Mobile Cloud Computing (MCC) has been proposed to offload the resource requirements in computational capabilities, storage and security from mobile devices into the cloud. Internet video applications such as real-time streaming are expected to be ubiquitously deployed and supported over the cloud for mobile users, who typically encounter a range of wireless networks of diverse radio access technologies during their roaming. However, real-time video streaming for mobile cloud users across heterogeneous wireless networks presents multiple challenges. The network-layer quality of service (QoS) provision to support high-quality mobile video delivery in this demanding scenario remains an open research question, and this in turn affects the application-level visual quality and impedes mobile users' perceived quality of experience (QoE). In this paper, we devise a framework to support real-time video streaming in this new mobile video networking paradigm and evaluate the performance of the proposed framework empirically through a lab-based yet realistic testing platform. One particular issue we focus on is the effect of users' mobility on the QoS of video streaming over the cloud. We design and implement a hybrid platform comprising a test-bed and an emulator, on which our concepts of mobile cloud computing, video streaming and heterogeneous wireless networks are implemented and integrated to allow the testing of our framework. As representative heterogeneous wireless networks, the popular WLAN (Wi-Fi) and MAN (WiMAX) networks are incorporated in order to evaluate the effects of handovers between these different radio access technologies. The H.264/AVC (Advanced Video Coding) standard is employed for real-time video streaming from a server to mobile users (client nodes) in the networks. Mobility support is introduced to enable a continuous streaming experience for a mobile user across the heterogeneous wireless network. Real-time video stream packets

  20. Coarse-Grain QoS-Aware Dynamic Instance Provisioning for Interactive Workload in the Cloud

    Directory of Open Access Journals (Sweden)

    Jianxiong Wan

    2014-01-01

    Full Text Available The cloud computing paradigm provides Internet service providers (ISPs) with a new approach to deliver their services at lower cost. ISPs can rent virtual machines from the Infrastructure-as-a-Service (IaaS) offerings provided by the cloud rather than purchasing them. In addition, commercial cloud providers (CPs) offer diverse VM instance rental services at various time granularities, which provides another opportunity for ISPs to reduce cost. We investigate a Coarse-grain QoS-aware Dynamic Instance Provisioning (CDIP) problem for interactive workloads in the cloud from the perspective of ISPs. We formulate the CDIP problem as an optimization problem whose objective is to minimize the VM instance rental cost and whose constraint is a percentile delay bound. Since Internet traffic shows a strong self-similar property, it is hard to obtain an analytical form of the percentile delay constraint. To address this issue, we propose a lookup table structure together with a learning algorithm to estimate the performance of the instance provisioning policy. This approach is further extended with two function approximations to enhance the scalability of the learning algorithm. We also present an efficient dynamic instance provisioning algorithm, which takes full advantage of the rental service diversity, to determine the instance rental policy. Extensive simulations are conducted to validate the effectiveness of the proposed algorithms.
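
    To make the percentile-delay constraint tangible, the Python sketch below picks the smallest instance count whose observed delay percentile meets the bound. The dictionary of measured delays is a placeholder for the paper's lookup-table/learning estimator, and all numbers are invented.

      # Hypothetical coarse-grain provisioning check against a 95th-percentile delay bound.
      import numpy as np

      def provision(delay_by_instances, delay_bound_ms, pct=95):
          """delay_by_instances maps an instance count to measured request delays (ms)."""
          feasible = [n for n, samples in delay_by_instances.items()
                      if np.percentile(samples, pct) <= delay_bound_ms]
          return min(feasible) if feasible else max(delay_by_instances)   # cheapest feasible count

      measurements = {2: [180, 220, 260], 4: [90, 120, 140], 8: [60, 70, 85]}
      print(provision(measurements, delay_bound_ms=150))   # -> 4 instances suffice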

  1. The impact of horizontal heterogeneities, cloud fraction, and cloud dynamics on warm cloud effective radii and liquid water path from CERES-like Aqua MODIS retrievals

    Science.gov (United States)

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-05-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES Edition 4 algorithms are averaged at the CERES footprint resolution (~ 20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8 - re2.1 differences are positive ( 50 g m-2, and negative (up to -4 μm) for larger Hσ. Thus, re3.8 - re2.1 differences are more likely to reflect biases associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.
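
    For reference, the heterogeneity parameter used in this and the related records can be written compactly as below (LaTeX; the symbol R for the 0.64 μm reflectance within the ~20 km CERES footprint is our shorthand, not notation taken from the abstract):

      H_{\sigma} = \frac{\sigma\left(R_{0.64\,\mu\mathrm{m}}\right)}{\overline{R}_{0.64\,\mu\mathrm{m}}}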

  2. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    OpenAIRE

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-01-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. ...

  3. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Science.gov (United States)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of heterogeneous data sets can address the limited scalability of centralized data processing. In order to reduce the generation of intermediate data and the error components of distributed heterogeneous data sets, a principal component analysis algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs the eigenvalue computation using Householder tridiagonalization and QR factorization, and calculates the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
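
    For orientation, the Python sketch below shows the centralized PCA baseline that such a distributed scheme approximates: center the data, form the covariance matrix, and take its leading eigenvectors (NumPy's symmetric eigensolver handles the factorization internally). It does not reproduce the paper's error-component or public-key computation; the data and component count are arbitrary.

      # Hypothetical centralized PCA baseline via covariance eigendecomposition.
      import numpy as np

      def pca(data, n_components=2):
          centered = data - data.mean(axis=0)
          cov = np.cov(centered, rowvar=False)            # symmetric covariance matrix
          eigvals, eigvecs = np.linalg.eigh(cov)          # eigendecomposition (symmetric solver)
          order = np.argsort(eigvals)[::-1][:n_components]
          return centered @ eigvecs[:, order]             # project onto the leading components

      rng = np.random.default_rng(0)
      site_a = rng.normal(size=(100, 5))                  # records held at one data source
      site_b = rng.normal(size=(80, 5)) + 1.0             # records held at a second, shifted source
      print(pca(np.vstack([site_a, site_b])).shape)       # (180, 2)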

  4. Heuristic Data Placement for Data-Intensive Applications in Heterogeneous Cloud

    Directory of Open Access Journals (Sweden)

    Qing Zhao

    2016-01-01

    Full Text Available Data placement is an important issue that aims at reducing the cost of internode data transfers in the cloud, especially for data-intensive applications, in order to improve the performance of the entire cloud system. This paper proposes an improved data placement algorithm for heterogeneous cloud environments. In the initialization phase, a data clustering algorithm based on data dependency clustering and recursive partitioning is presented, incorporating both data size and fixed-position factors. Then a heuristic tree-to-tree data placement strategy is advanced in order to make frequent data movements occur on high-bandwidth channels. Simulation results show that, compared with two classical strategies, this strategy can effectively reduce the amount of data transmission and its time consumption during execution.
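
    The dependency-clustering step can be illustrated with a toy sketch: count how often pairs of datasets are read by the same task and greedily merge the most dependent pairs into co-location groups, so that frequent transfers stay on one node. This Python sketch is only a simplified reading of that idea; the paper's recursive partitioning and tree-to-tree mapping are more elaborate, and the task and dataset names are invented.

      # Hypothetical dependency-based grouping of datasets that are accessed together.
      from collections import defaultdict
      from itertools import combinations

      tasks = {"t1": ["d1", "d2"], "t2": ["d2", "d3"], "t3": ["d4", "d5"]}

      dependency = defaultdict(int)                 # co-access counts per dataset pair
      for datasets in tasks.values():
          for a, b in combinations(sorted(datasets), 2):
              dependency[(a, b)] += 1

      group_of = {}                                 # greedy union of the most dependent pairs
      for (a, b), _ in sorted(dependency.items(), key=lambda kv: -kv[1]):
          ga, gb = group_of.setdefault(a, {a}), group_of.setdefault(b, {b})
          if ga is not gb:
              merged = ga | gb
              for d in merged:
                  group_of[d] = merged

      print({frozenset(g) for g in group_of.values()})   # {d1,d2,d3} together, {d4,d5} apart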

  5. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    Science.gov (United States)

    Painemal, D.; Minnis, P.; Sun-Mack, S.

    2013-10-01

    The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8-re2.1 differences are positive ( 45 gm-2, and negative (up to -4 μm) for larger Hσ. While re3.8-re2.1 differences in homogeneous scenes are qualitatively consistent with in situ microphysical observations over the region of study, negative differences - particularly evinced in mean regional maps - are more likely to reflect the dominant bias associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.

  6. The impact of horizontal heterogeneities, cloud fraction, and liquid water path on warm cloud effective radii from CERES-like Aqua MODIS retrievals

    Directory of Open Access Journals (Sweden)

    D. Painemal

    2013-10-01

    Full Text Available The impact of horizontal heterogeneities, liquid water path (LWP from AMSR-E), and cloud fraction (CF) on MODIS cloud effective radius (re), retrieved from the 2.1 μm (re2.1) and 3.8 μm (re3.8) channels, is investigated for warm clouds over the southeast Pacific. Values of re retrieved using the CERES algorithms are averaged at the CERES footprint resolution (∼20 km), while heterogeneities (Hσ) are calculated as the ratio between the standard deviation and mean 0.64 μm reflectance. The value of re2.1 strongly depends on CF, with magnitudes up to 5 μm larger than those for overcast scenes, whereas re3.8 remains insensitive to CF. For cloudy scenes, both re2.1 and re3.8 increase with Hσ for any given AMSR-E LWP, but re2.1 changes more than for re3.8. Additionally, re3.8–re2.1 differences are positive (Hσ 45 gm−2, and negative (up to −4 μm) for larger Hσ. While re3.8–re2.1 differences in homogeneous scenes are qualitatively consistent with in situ microphysical observations over the region of study, negative differences – particularly evinced in mean regional maps – are more likely to reflect the dominant bias associated with cloud heterogeneities rather than information about the cloud vertical structure. The consequences for MODIS LWP are also discussed.

  7. Parameterizing the competition between homogeneous and heterogeneous freezing in ice cloud formation – polydisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-08-01

    Full Text Available This study presents a comprehensive ice cloud formation parameterization that computes the ice crystal number, size distribution, and maximum supersaturation from precursor aerosol and ice nuclei. The parameterization provides an analytical solution of the cloud parcel model equations and accounts for the competition effects between homogeneous and heterogeneous freezing, and between heterogeneous freezing in different modes. The diversity of heterogeneous nuclei is described through a nucleation spectrum function which is allowed to follow any form (i.e., derived from classical nucleation theory or from observations). The parameterization reproduces the predictions of a detailed numerical parcel model over a wide range of conditions and several expressions for the nucleation spectrum. The average error in ice crystal number concentration was −2.0±8.5% for conditions of pure heterogeneous freezing, and 4.7±21% when both homogeneous and heterogeneous freezing were active. The formulation presented is fast and free from requirements of numerical integration.

  8. Context-aware distributed cloud computing using CloudScheduler

    Science.gov (United States)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.

  9. Technology Trends in Cloud Infrastructure

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Cloud computing is growing at an exponential pace with an increasing number of workloads being hosted in mega-scale public clouds such as Microsoft Azure. Designing and operating such large infrastructures requires not only a significant capital spend for provisioning datacenters, servers, networking and operating systems, but also R&D investments to capitalize on disruptive technology trends and emerging workloads such as AI/ML. This talk will cover the various infrastructure innovations being implemented in large scale public clouds and opportunities/challenges ahead to deliver the next generation of scale computing. About the speaker Kushagra Vaid is the general manager and distinguished engineer for Hardware Infrastructure in the Microsoft Azure division. He is accountable for the architecture and design of compute and storage platforms, which are the foundation for Microsoft’s global cloud-scale services. He and his team have successfully delivered four generations of hyperscale cloud hardwar...

  10. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    CERN Document Server

    Medrano Llamas, Ramón; Kucharczyk, Katarzyna; Denis, Marek Kamil; Cinquilli, Mattia

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. CERN, the Tier 0 of the WLCG, is completely restructuring the resource and configuration management of its computing center under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one located in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain th...

  11. Resource allocation in heterogeneous cloud radio access networks: advances and challenges

    KAUST Repository

    Dahrouj, Hayssam

    2015-06-01

    Base station densification is increasingly used by network operators to provide better throughput and coverage performance to mobile subscribers in dense data traffic areas. Such densification is progressively diffusing the move from traditional macrocell base stations toward heterogeneous networks with diverse cell sizes (e.g., microcell, picocell, femtocell) and diverse radio access technologies (e.g., GSM, CDMA, and LTE). The coexistence of the different network entities brings an additional set of challenges, particularly in terms of the provisioning of high-speed communications and the management of wireless interference. Resource sharing between different entities, largely incompatible in conventional systems due to the lack of interconnections, becomes a necessity. By connecting all the base stations from different tiers to a central processor (referred to as the cloud) through wire/wireline backhaul links, the heterogeneous cloud radio access network, H-CRAN, provides an open, simple, controllable, and flexible paradigm for resource allocation. This article discusses challenges and recent developments in H-CRAN design. It proposes promising resource allocation schemes in H-CRAN: coordinated scheduling, hybrid backhauling, and multicloud association. Simulation results show how the proposed strategies provide appreciable performance improvement compared to methods from recent literature. © 2015 IEEE.

  12. Replicated Computations Results (RCR) report for “A holistic approach for collaborative workload execution in volunteer clouds”

    DEFF Research Database (Denmark)

    Vandin, Andrea

    2018-01-01

    “A Holistic Approach for Collaborative Workload Execution in Volunteer Clouds” [3] proposes a novel approach to task scheduling in volunteer clouds. Volunteer clouds are decentralized cloud systems based on collaborative task execution, where clients voluntarily share their own unused computational...

  13. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access, and loop refactoring to a higher abstraction level. We will present early performance results and lessons learned, as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  14. Software Defined Resource Orchestration System for Multitask Application in Heterogeneous Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Qi Qi

    2016-01-01

    Full Text Available Mobile cloud computing (MCC), which combines mobile computing and the cloud concept, takes the wireless access network as the transmission medium and uses mobile devices as the client. When offloading a complicated multitask application to the MCC environment, each task executes individually in terms of its own computation, storage, and bandwidth requirements. Due to user mobility, the provided resources have different performance metrics that may affect the destination choice. Nevertheless, these heterogeneous MCC resources lack integrated management and can hardly cooperate with each other. Thus, how to choose the appropriate offload destination and orchestrate the resources for multitask applications is a challenging problem. This paper realizes programmable resource provisioning for heterogeneous energy-constrained computing environments, where a software defined controller is responsible for resource orchestration, offloading, and migration. The resource orchestration is formulated as a multiobjective optimization problem that considers the metrics of energy consumption, cost, and availability. Finally, a particle swarm algorithm is used to obtain approximate optimal solutions. Simulation results show that the solutions for all of our studied cases almost reach the Pareto optimum and surpass the comparative algorithm in approximation, coverage, and execution time.

  15. Heterogeneous condensation of ice mantle around silicate core grain in molecular cloud

    International Nuclear Information System (INIS)

    Hasegawa, H.

    1984-01-01

    Interstellar water ice grains are observed in the cold and dense regions such as molecular clouds, HII regions and protostellar objects. The water ice is formed from gas phase during the cooling stage of cosmic gas with solid grain surfaces of high temperature silicate minerals. It is a question whether the ice is formed through the homogeneous condensation process (as the ice alone) or the heterogeneous one (as the ice around the pre-existing high temperature mineral grains). (author)

  16. Characterizing Energy per Job in Cloud Applications

    Directory of Open Access Journals (Sweden)

    Thi Thao Nguyen Ho

    2016-12-01

    Full Text Available Energy efficiency is a major research focus in sustainable development and is becoming even more critical in information technology (IT) with the introduction of new technologies, such as cloud computing and big data, that attract more business users and generate more data to be processed. While many proposals have been presented to optimize power consumption at the system level, the increasing heterogeneity of current workloads requires a finer analysis at the application level to enable adaptive behaviors and reduce global energy usage. In this work, we focus on batch applications running on virtual machines in the context of data centers. We analyze the application characteristics, model their energy consumption, and quantify the energy per job. The analysis focuses on evaluating the efficiency of applications in terms of performance and energy consumed per job, in particular when shared resources are used and the hosts on which the virtual machines are running are heterogeneous in terms of energy profiles, with the aim of identifying the best combinations in the use of resources.
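
    The energy-per-job metric itself is simple to state; the Python sketch below integrates sampled VM power over the run and divides by the jobs completed. The sampling interval and power readings are invented, and real measurements would come from host power meters or models as in the study.

      # Hypothetical energy-per-job computation from periodic power samples.
      def energy_per_job(power_samples_w, interval_s, jobs_completed):
          energy_j = sum(p * interval_s for p in power_samples_w)   # rectangle-rule integration
          return energy_j / jobs_completed

      samples = [110, 130, 125, 140, 120]                           # watts, one sample per minute
      print(energy_per_job(samples, interval_s=60, jobs_completed=25), "J per job")   # 1500.0 J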

  17. Hidden in the Clouds: New Ideas in Cloud Computing

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the fullest degree. Please bring questions and opinions, and be ready to share both!   Bio: S...

  18. Towards the Automatic Detection of Efficient Computing Assets in a Heterogeneous Cloud Environment

    OpenAIRE

    Iglesias, Jesus Omana; Stokes, Nicola; Ventresque, Anthony; Murphy, Liam, B.E.; Thorburn, James

    2013-01-01

    peer-reviewed In a heterogeneous cloud environment, the manual grading of computing assets is the first step in the process of configuring IT infrastructures to ensure optimal utilization of resources. Grading the efficiency of computing assets is, however, a difficult, subjective, and time-consuming manual task. Thus, an automatic efficiency grading algorithm is highly desirable. In this paper, we compare the effectiveness of the different criteria used in the manual gr...

  19. A Heuristic Task Scheduling Algorithm for Heterogeneous Virtual Clusters

    Directory of Open Access Journals (Sweden)

    Weiwei Lin

    2016-01-01

    Full Text Available Cloud computing provides on-demand computing and storage services with high performance and high scalability. However, the rising energy consumption of cloud data centers has become a prominent problem. In this paper, we first introduce an energy-aware framework for task scheduling in virtual clusters. The framework consists of a task resource requirements prediction module, an energy estimation module, and a scheduler with a task buffer. Secondly, based on this framework, we propose a virtual machine power efficiency-aware greedy scheduling algorithm (VPEGS). As a heuristic algorithm, VPEGS estimates task energy by considering factors including task resource demands, VM power efficiency, and server workload before scheduling tasks in a greedy manner. We simulated a heterogeneous VM cluster and conducted experiments to evaluate the effectiveness of VPEGS. Simulation results show that VPEGS effectively reduced total energy consumption by more than 20% without producing large scheduling overheads. With a similar heuristic ideology, it outperformed Min-Min and RASA with respect to energy saving by about 29% and 28%, respectively.
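
    The greedy rule can be sketched as follows: score each candidate VM by an estimated energy that grows with the task demand, shrinks with the VM's power efficiency, and is penalized by the current server workload, then pick the minimum. The energy model in this Python sketch is an assumption used only to show the shape of such a greedy step, not the VPEGS formula.

      # Hypothetical greedy VM selection by estimated task energy.
      def estimated_energy(task_demand, vm):
          # less efficient VMs and busier hosts make the same task more expensive
          return task_demand / vm["power_efficiency"] * (1.0 + vm["host_load"])

      def schedule(task_demand, vms):
          return min(vms, key=lambda vm: estimated_energy(task_demand, vm))

      vms = [
          {"name": "vm-a", "power_efficiency": 2.0, "host_load": 0.7},
          {"name": "vm-b", "power_efficiency": 1.5, "host_load": 0.1},
      ]
      print(schedule(task_demand=100.0, vms=vms)["name"])   # vm-b wins: lightly loaded host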

  20. Heterogeneous ice nucleation activity of bacteria: new laboratory experiments at simulated cloud conditions

    Directory of Open Access Journals (Sweden)

    O. Möhler

    2008-10-01

    Full Text Available The ice nucleation activities of five different Pseudomonas syringae, Pseudomonas viridiflava and Erwinia herbicola bacterial species and of Snomax™ were investigated in the temperature range between −5 and −15°C. Water suspensions of these bacteria were directly sprayed into the cloud chamber of the AIDA facility of Forschungszentrum Karlsruhe at a temperature of −5.7°C. At this temperature, about 1% of the Snomax™ cells induced immersion freezing of the spray droplets before the droplets evaporated in the cloud chamber. The living cells did not induce any detectable immersion freezing in the spray droplets at −5.7°C. After evaporation of the spray droplets, the bacterial cells remained as aerosol particles in the cloud chamber and were exposed to typical cloud formation conditions in experiments with expansion cooling to about −11°C. During these experiments, the bacterial cells first acted as cloud condensation nuclei to form cloud droplets. Then, only a minor fraction of the cells acted as heterogeneous ice nuclei, either in the condensation or the immersion mode. The results indicate that the bacteria investigated in the present study are mainly ice active in the temperature range between −7 and −11°C, with an ice nucleation (IN) active fraction of the order of 10⁻⁴. In agreement with previous literature results, the ice nucleation efficiency of Snomax™ cells was much larger, with an IN active fraction of 0.2 at temperatures around −8°C.

  1. Hipster: hybrid task manager for latency-critical cloud workloads

    OpenAIRE

    Nishtala, Rajiv; Carpenter, Paul M.; Petrucci, Vinicius; Martorell Bofill, Xavier

    2017-01-01

    In 2013, U. S. data centers accounted for 2.2% of the country's total electricity consumption, a figure that is projected to increase rapidly over the next decade. Many important workloads are interactive, and they demand strict levels of quality-of-service (QoS) to meet user expectations, making it challenging to reduce power consumption due to increasing performance demands. This paper introduces Hipster, a technique that combines heuristics and reinforcement learning to manage latency-crit...

  2. Have the 'black clouds' cleared with new residency programme regulations?

    Science.gov (United States)

    Schissler, A J; Einstein, A J

    2016-06-01

    For decades, residents believed to work harder have been referred to as having a 'black cloud'. Residency training programmes recently instituted changes to improve physician wellness and achieve a comparable clinical workload. All Internal Medicine residents in the internship class of 2014 at Columbia were surveyed to assess the ongoing presence of 'black cloud' trainees. While some residents are still thought to have this designation, they did not have a greater workload when compared to their peers. © 2016 Royal Australasian College of Physicians.

  3. Workload management in the EMI project

    International Nuclear Information System (INIS)

    Andreetto, Paolo; Bertocco, Sara; Dorigo, Alvise; Frizziero, Eric; Gianelle, Alessio; Sgaravatto, Massimo; Zangrando, Luigi; Capannini, Fabio; Cecchi, Marco; Mezzadri, Massimo; Prelz, Francesco; Rebatto, David; Monforte, Salvatore; Kretsis, Aristotelis

    2012-01-01

    The EU-funded project EMI, now in its second year, aims at providing a unified, high-quality middleware distribution for e-Science communities. Several aspects of workload management over diverse distributed computing environments are being challenged by the EMI roadmap: enabling seamless access to both HTC and HPC computing services, implementing a commonly agreed framework for the execution of parallel computations, and supporting interoperability models between Grids and Clouds. Besides, a rigorous requirements collection process, involving the WLCG and various NGIs across Europe, assures that the EMI stack is always committed to serving actual needs. With this background, the gLite Workload Management System (WMS), the meta-scheduler service delivered by EMI, is augmenting its functionality and scheduling models according to the aforementioned project roadmap and the numerous requirements collected over the first project year. This paper is about the present and future work of the EMI WMS, reporting on design changes, implementation choices and long-term vision.

  4. gLExec Integration with the ATLAS PanDA Workload Management System

    CERN Document Server

    Edward Karavakis; The ATLAS collaboration; Campana, Simone; De, Kaushik; Di Girolamo, Alessandro; Maarten Litmaath; Maeno, Tadashi; Medrano Llamas, Ramon; Nilsson, Paul; Wenaus, Torre

    2015-01-01

    The ATLAS Experiment at the Large Hadron Collider has collected data during Run 1 and is ready to collect data in Run 2. The ATLAS data are distributed, processed and analysed at more than 130 grid and cloud sites across the world. At any given time, there are more than 150,000 concurrent jobs running and about a million jobs are submitted on a daily basis on behalf of thousands of physicists within the ATLAS collaboration. The Production and Distributed Analysis (PanDA) workload management system has proved to be a key component of ATLAS and plays a crucial role in the success of the large-scale distributed computing as it is the sole system for distributed processing of Grid jobs across the collaboration since October 2007. ATLAS user jobs are executed on worker nodes by pilots sent to the sites by pilot factories. This pilot architecture has greatly improved job reliability and although it has clear advantages, such as making the working environment homogeneous by hiding any potential heterogeneities, the ...

  5. Impacts of Subgrid Heterogeneous Mixing between Cloud Liquid and Ice on the Wegener-Bergeron-Findeisen Process and Mixed-phase Clouds in NCAR CAM5

    Science.gov (United States)

    Liu, X.; Zhang, M.; Zhang, D.; Wang, Z.; Wang, Y.

    2017-12-01

    Mixed-phase clouds are persistently observed over the Arctic, and the phase partitioning between cloud liquid and ice hydrometeors in mixed-phase clouds has important impacts on the surface energy budget and Arctic climate. In this study, we test the NCAR Community Atmosphere Model Version 5 (CAM5) with the single-column and weather forecast configurations and evaluate the model performance against observation data from the DOE Atmospheric Radiation Measurement (ARM) Program's M-PACE field campaign in October 2004 and long-term ground-based multi-sensor remote sensing measurements. Like most global climate models, we find that CAM5 also poorly simulates the phase partitioning in mixed-phase clouds by significantly underestimating the cloud liquid water content. Assuming pocket structures in the distribution of cloud liquid and ice in mixed-phase clouds, as suggested by in situ observations, provides a plausible solution to improve the model performance by reducing the Wegener-Bergeron-Findeisen (WBF) process rate. In this study, the modification of the WBF process in the CAM5 model has been achieved by applying a stochastic perturbation to the time scale of the WBF process relevant to both ice and snow to account for the heterogeneous mixture of cloud liquid and ice. Our results show that this modification of the WBF process improves the modeled phase partitioning in mixed-phase clouds. The seasonal variation of mixed-phase cloud properties is also better reproduced by the model in comparison with the long-term ground-based remote sensing observations. Furthermore, the phase partitioning is insensitive to the reassignment time step of the perturbations.

  6. Heterogeneous Formation of Polar Stratospheric Clouds- Part 1: Nucleation of Nitric Acid Trihydrate (NAT)

    Science.gov (United States)

    Hoyle, C. R.; Engel, I.; Luo, B. P.; Pitts, M. C.; Poole, L. R.; Grooss, J.-U.; Peter, T.

    2013-01-01

    Satellite-based observations during the Arctic winter of 2009/2010 provide firm evidence that, in contrast to the current understanding, the nucleation of nitric acid trihydrate (NAT) in the polar stratosphere does not only occur on preexisting ice particles. In order to explain the NAT clouds observed over the Arctic in mid-December 2009, a heterogeneous nucleation mechanism is required, occurring via immersion freezing on the surface of solid particles, likely of meteoritic origin. For the first time, a detailed microphysical modelling of this NAT formation pathway has been carried out. Heterogeneous NAT formation was calculated along more than sixty thousand trajectories, ending at Cloud Aerosol Lidar with Orthogonal Polarization (CALIOP) observation points. Comparing the optical properties of the modelled NAT with these observations enabled a thorough validation of a newly developed NAT nucleation parameterisation, which has been built into the Zurich Optical and Microphysical box Model (ZOMM). The parameterisation is based on active site theory, is simple to implement in models and provides substantial advantages over previous approaches which involved a constant rate of NAT nucleation in a given volume of air. It is shown that the new method is capable of reproducing observed polar stratospheric clouds (PSCs) very well, despite the varied conditions experienced by air parcels travelling along the different trajectories. In a companion paper, ZOMM is applied to a later period of the winter, when ice PSCs are also present, and it is shown that the observed PSCs are also represented extremely well under these conditions.

  7. PanDA Beyond ATLAS: Workload Management for Data Intensive Science

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    The PanDA Production ANd Distributed Analysis system has been developed by ATLAS to meet the experiment's requirements for a data-driven workload management system for production and distributed analysis processing capable of operating at LHC data processing scale. After 7 years of impressively successful PanDA operation in ATLAS there are also other experiments which can benefit from PanDA in the Big Data challenge, with several at various stages of evaluation and adoption. The new project "Next Generation Workload Management and Analysis System for Big Data" is extending PanDA to meet the needs of other data intensive scientific applications in HEP, astro-particle and astrophysics communities, bio-informatics and other fields as a general solution to large scale workload management. PanDA can utilize dedicated or opportunistic computing resources such as grids, clouds, and High Performance Computing facilities, and is being extended to leverage next generation intelligent networks in automated workflow mana...

  8. Contributions of Heterogeneous Ice Nucleation, Large-Scale Circulation, and Shallow Cumulus Detrainment to Cloud Phase Transition in Mixed-Phase Clouds with NCAR CAM5

    Science.gov (United States)

    Liu, X.; Wang, Y.; Zhang, D.; Wang, Z.

    2016-12-01

    Mixed-phase clouds consisting of both liquid and ice water occur frequently at high-latitudes and in mid-latitude storm track regions. This type of cloud has been shown to play a critical role in the surface energy balance, surface air temperature, and sea ice melting in the Arctic. Cloud phase partitioning between liquid and ice water determines the cloud optical depth of mixed-phase clouds because of distinct optical properties of liquid and ice hydrometeors. The representation and simulation of cloud phase partitioning in state-of-the-art global climate models (GCMs) are associated with large biases. In this study, the cloud phase partitioning in mixed-phase clouds simulated by the NCAR Community Atmosphere Model version 5 (CAM5) is evaluated against satellite observations. Observation-based supercooled liquid fraction (SLF) is calculated from CloudSat, MODIS and CPR radar detected liquid and ice water paths for clouds with cloud-top temperatures between -40 and 0°C. Sensitivity tests with CAM5 are conducted for different heterogeneous ice nucleation parameterizations with respect to aerosol influence (Wang et al., 2014), different phase transition temperatures for detrained cloud water from shallow convection (Kay et al., 2016), and different CAM5 model configurations (free-run versus nudged winds and temperature, Zhang et al., 2015). A classical nucleation theory-based ice nucleation parameterization in mixed-phase clouds increases the SLF especially at temperatures colder than -20°C, and significantly improves the model agreement with observations in the Arctic. The change of transition temperature for detrained cloud water increases the SLF at higher temperatures and improves the SLF mostly over the Southern Ocean. Even with the improved SLF from the ice nucleation and shallow cumulus detrainment, the low SLF biases in some regions can only be improved through the improved circulation with the nudging technique. Our study highlights the challenges of

  9. A Cross-Entropy-Based Admission Control Optimization Approach for Heterogeneous Virtual Machine Placement in Public Clouds

    Directory of Open Access Journals (Sweden)

    Li Pan

    2016-03-01

    Full Text Available Virtualization technologies make it possible for cloud providers to consolidate multiple IaaS provisions into a single server in the form of virtual machines (VMs. Additionally, in order to fulfill the divergent service requirements from multiple users, a cloud provider needs to offer several types of VM instances, which are associated with varying configurations and performance, as well as different prices. In such a heterogeneous virtual machine placement process, one significant problem faced by a cloud provider is how to optimally accept and place multiple VM service requests into its cloud data centers to achieve revenue maximization. To address this issue, in this paper, we first formulate such a revenue maximization problem during VM admission control as a multiple-dimensional knapsack problem, which is known to be NP-hard to solve. Then, we propose to use a cross-entropy-based optimization approach to address this revenue maximization problem, by obtaining a near-optimal eligible set for the provider to accept into its data centers, from the waiting VM service requests in the system. Finally, through extensive experiments and measurements in a simulated environment with the settings of VM instance classes derived from real-world cloud systems, we show that our proposed cross-entropy-based admission control optimization algorithm is efficient and effective in maximizing cloud providers’ revenue in a public cloud computing environment.
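    The record above formulates VM admission control as a multiple-dimensional knapsack problem solved with a cross-entropy method. The sketch below is only a generic illustration of that combination under assumed inputs (per-request revenue, per-request resource demands, data-center capacities); the function name, parameters, and update rule are illustrative and not taken from the paper.

```python
import numpy as np

def ce_knapsack(revenue, demand, capacity, n_samples=1000, elite_frac=0.1,
                n_iters=50, smoothing=0.7, rng=None):
    """Cross-entropy method for a 0/1 multiple-dimensional knapsack.

    revenue  : (n,) revenue of accepting each VM request
    demand   : (n, d) resource demand of each request in d dimensions
    capacity : (d,) data-center capacity per resource dimension
    Returns an approximate acceptance vector and its total revenue.
    """
    rng = np.random.default_rng() if rng is None else rng
    revenue = np.asarray(revenue, float)
    demand = np.asarray(demand, float)
    capacity = np.asarray(capacity, float)
    n = len(revenue)
    p = np.full(n, 0.5)                       # Bernoulli inclusion probabilities
    best_x, best_val = np.zeros(n, bool), -np.inf
    n_elite = max(1, int(elite_frac * n_samples))

    for _ in range(n_iters):
        x = rng.random((n_samples, n)) < p    # sample candidate acceptance sets
        feasible = np.all(x @ demand <= capacity, axis=1)
        value = np.where(feasible, x @ revenue, -np.inf)
        elite = np.argsort(value)[-n_elite:]  # keep the highest-revenue samples
        if value[elite[-1]] > best_val:
            best_val, best_x = value[elite[-1]], x[elite[-1]].copy()
        # shift the sampling distribution toward the elite set, with smoothing
        p = smoothing * x[elite].mean(axis=0) + (1 - smoothing) * p
    return best_x, best_val
```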

  10. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  11. Dynamic Extensions of Batch Systems with Cloud Resources

    International Nuclear Information System (INIS)

    Hauth, T; Quast, G; Büge, V; Scheurer, A; Kunze, M; Baun, C

    2011-01-01

    Compute clusters use Portable Batch Systems (PBS) to distribute workload among individual cluster machines. To extend standard batch systems to Cloud infrastructures, a new service monitors the number of queued jobs and keeps track of the price of available resources. This meta-scheduler dynamically adapts the number of Cloud worker nodes according to the requirement profile. Two different worker node topologies are presented and tested on the Amazon EC2 Cloud service.
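    The meta-scheduler described above monitors queue length and resource price and grows or shrinks the pool of cloud worker nodes accordingly. A minimal, hypothetical control loop of that kind might look as follows; the callbacks (get_queued_jobs, start_node, etc.) and thresholds are placeholders, not the actual service's interfaces.

```python
import time

def scaling_loop(get_queued_jobs, get_cloud_nodes, spot_price,
                 start_node, stop_node, jobs_per_node=4,
                 max_price=0.10, poll_interval=60):
    """Hypothetical meta-scheduler loop: grow or shrink the pool of cloud
    worker nodes according to the batch queue length and the resource price."""
    while True:
        queued = get_queued_jobs()            # e.g. parse the batch system's queue
        nodes = get_cloud_nodes()             # currently running cloud workers
        wanted = -(-queued // jobs_per_node)  # ceiling division

        if wanted > len(nodes) and spot_price() <= max_price:
            for _ in range(wanted - len(nodes)):
                start_node()                  # boot a worker VM and register it with the batch system
        elif wanted < len(nodes):
            for node in nodes[wanted:]:
                stop_node(node)               # drain and terminate surplus workers
        time.sleep(poll_interval)
```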

  12. Adaptation in cloud resource configuration:a survey

    OpenAIRE

    Hummaida, Abdul R.; Paton, Norman W.; Sakellariou, Rizos

    2016-01-01

    With increased demand for computing resources at a lower cost by end-users, cloud infrastructure providers need to find ways to protect their revenue. To achieve this, infrastructure providers aim to increase revenue and lower operational costs. A promising approach to addressing these challenges is to modify the assignment of resources to workloads. This can be used, for example, to consolidate existing workloads; the new capability can be used to serve new requests or alternatively unused r...

  13. TideWatch: Fingerprinting the cyclicality of big data workloads

    KAUST Repository

    Williams, Daniel W.

    2014-04-01

    Intrinsic to 'big data' processing workloads (e.g., iterative MapReduce, Pregel, etc.) are cyclical resource utilization patterns that are highly synchronized across different resource types as well as the workers in a cluster. In Infrastructure as a Service settings, cloud providers do not exploit this characteristic to better manage VMs because they view VMs as 'black boxes.' We present TideWatch, a system that automatically identifies cyclicality and similarity in running VMs. TideWatch predicts period lengths of most VMs in Hadoop workloads within 9% of actual iteration boundaries and successfully classifies up to 95% of running VMs as participating in the appropriate Hadoop cluster. Furthermore, we show how TideWatch can be used to improve the timing of VM migrations, reducing both migration time and network impact by over 50% when compared to a random approach. © 2014 IEEE.
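    TideWatch's own fingerprinting method is not spelled out in the abstract; purely as an illustration of detecting cyclicality in a VM utilization trace, the following sketch estimates a dominant period from the autocorrelation of the signal (function name and parameters are assumptions).

```python
import numpy as np

def estimate_period(signal, min_lag=2):
    """Estimate the dominant period of a (roughly stationary) utilization
    trace via the largest autocorrelation peak beyond very short lags."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0..len(x)-1
    acf /= acf[0] if acf[0] else 1.0
    # ignore the trivial lag-0 peak and very short lags
    lag = min_lag + int(np.argmax(acf[min_lag:len(x) // 2]))
    return lag

# e.g. a noisy 30-sample cycle is recovered to within a few samples
trace = np.sin(2 * np.pi * np.arange(300) / 30) + 0.2 * np.random.randn(300)
print(estimate_period(trace))
```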

  14. Enhanced machine learning scheme for energy efficient resource allocation in 5G heterogeneous cloud radio access networks

    KAUST Repository

    Alqerm, Ismail

    2018-02-15

    Heterogeneous cloud radio access networks (H-CRAN) are a new trend in 5G that aims to leverage the advantages of heterogeneous and cloud radio access networks. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service (QoS) requirements, while high power macro base stations (BSs) are deployed for coverage maintenance and to support low QoS users. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such a scheme with model-free learning, we consider users' priority in resource block (RB) allocation and a compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.

  15. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  16. Parameterizing the competition between homogeneous and heterogeneous freezing in cirrus cloud formation – monodisperse ice nuclei

    Directory of Open Access Journals (Sweden)

    D. Barahona

    2009-01-01

    Full Text Available We present a parameterization of cirrus cloud formation that computes the ice crystal number and size distribution under the presence of homogeneous and heterogeneous freezing. The parameterization is very simple to apply and is derived from the analytical solution of the cloud parcel equations, assuming that the ice nuclei population is monodisperse and chemically homogeneous. In addition to the ice distribution, an analytical expression is provided for the limiting ice nuclei number concentration that suppresses ice formation from homogeneous freezing. The parameterization is evaluated against a detailed numerical parcel model, and reproduces numerical simulations over a wide range of conditions with an average error of 6±33%. The parameterization also compares favorably against other formulations that require some form of numerical integration.

  17. HPC Cloud for Scientific and Business Applications: Taxonomy, Vision, and Research Challenges

    OpenAIRE

    Netto, Marco A. S.; Calheiros, Rodrigo N.; Rodrigues, Eduardo R.; Cunha, Renato L. F.; Buyya, Rajkumar

    2017-01-01

    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics services. Most research efforts in HPC cloud aim to understand the cost-benefit of moving resource-intensive applications from on-premise environments to public cloud platforms. Industry trends show hybrid environments are the natural path to get the best of the on-premise and cloud resources---steady (and sensitive) workloads can run on on-pr...

  18. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  19. Cloud Provider Capacity Augmentation Through Automated Resource Bartering

    OpenAIRE

    Gohera, Syeda ZarAfshan; Bloodsworth, Peter; Rasool, Raihan Ur; McClatchey, Richard

    2018-01-01

    Growing interest in Cloud Computing places a heavy workload on cloud providers which is becoming increasingly difficult for them to manage with their primary datacenter infrastructures. Resource limitations can make providers vulnerable to significant reputational damage and it often forces customers to select services from the larger, more established companies, sometimes at a higher price. Funding limitations, however, commonly prevent emerging and even established providers from making con...

  20. Heterogeneous Data Storage Management with Deduplication in Cloud Computing

    OpenAIRE

    Yan, Zheng; Zhang, Lifang; Ding, Wenxiu; Zheng, Qinghua

    2017-01-01

    Cloud storage as one of the most important services of cloud computing helps cloud users break the bottleneck of restricted resources and expand their storage without upgrading their devices. In order to guarantee the security and privacy of cloud users, data are always outsourced in an encrypted form. However, encrypted data could incur much waste of cloud storage and complicate data sharing among authorized users. We are still facing challenges on encrypted data storage and management with ...

  1. CERN Computing Colloquium | Hidden in the Clouds: New Ideas in Cloud Computing | 30 May

    CERN Multimedia

    2013-01-01

    by Dr. Shevek (NEBULA) Thursday 30 May 2013 from 2 p.m. to 4 p.m. at CERN ( 40-S2-D01 - Salle Dirac ) Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the ful...

  2. Management of Virtual Machine as an Energy Conservation in Private Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Fauzi Akhmad

    2016-01-01

    Full Text Available Cloud computing is a service model in which pooled computing resources, hosted in data centers, are accessed on demand over the Internet. Data center architectures in cloud computing environments are heterogeneous and distributed, composed of clusters of networked servers whose physical machines offer different computing capacities. Fluctuations in the demand for and availability of cloud services can be absorbed by abstracting the data center through virtualization technology. A virtual machine (VM) represents a share of the available computing resources that can be dynamically allocated and reallocated on demand. This study addresses VM consolidation as an energy conservation measure in private cloud computing systems, focusing on optimizing the VM selection policy and VM migration steps of the consolidation procedure. In a cloud data center, each hosted service or application instance requires a different level of computing resources from its VM. Imbalanced resource usage across physical servers can be reduced by live VM migration to achieve workload balancing. The practical approach developed here builds an OpenStack-based cloud environment and integrates VM selection and VM placement using the OpenStack Neat consolidation framework. The CPU time counter is sampled to obtain the average CPU utilization, in MHz, over a given period: the average utilization of a VM is obtained by taking the difference between the current CPU time and the CPU time of the previous sample, multiplying it by the maximum CPU frequency, and dividing the result by the elapsed wall-clock time between the two samples (in milliseconds).
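    As a minimal illustration of the sampling formula described above, the sketch below computes a VM's average CPU utilization in MHz from two successive CPU-time readings; the function name, argument units, and the assumption of nanosecond CPU-time counters are illustrative, not taken from the paper or from OpenStack Neat's code.

```python
def average_cpu_mhz(cpu_time_prev_ns, cpu_time_now_ns,
                    t_prev_s, t_now_s, max_freq_mhz):
    """Average CPU utilization of a VM, in MHz, over one sampling interval.

    cpu_time_*_ns : cumulative CPU time consumed by the VM (assumed nanoseconds),
                    as reported e.g. by the hypervisor
    t_*_s         : wall-clock timestamps of the two samples (seconds)
    max_freq_mhz  : maximum CPU frequency of the host core
    """
    wall_ns = (t_now_s - t_prev_s) * 1e9
    if wall_ns <= 0:
        return 0.0
    busy_fraction = (cpu_time_now_ns - cpu_time_prev_ns) / wall_ns
    return busy_fraction * max_freq_mhz
```

In an OpenStack-based setup of this kind, the cumulative CPU-time counter would typically come from the hypervisor and the maximum frequency from the host CPU specification.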

  3. Model simulations with COSMO-SPECS: impact of heterogeneous freezing modes and ice nucleating particle types on ice formation and precipitation in a deep convective cloud

    Directory of Open Access Journals (Sweden)

    K. Diehl

    2018-03-01

    Full Text Available In deep convective clouds, heavy rain is often formed involving the ice phase. Simulations were performed using the 3-D cloud resolving model COSMO-SPECS with detailed spectral microphysics including parameterizations of homogeneous and three heterogeneous freezing modes. The initial conditions were selected to result in a deep convective cloud reaching 14 km of altitude with strong updrafts up to 40 m s−1. At such altitudes with corresponding temperatures below −40 °C the major fraction of liquid drops freezes homogeneously. The goal of the present model simulations was to investigate how additional heterogeneous freezing will affect ice formation and precipitation although its contribution to total ice formation may be rather low. In such a situation small perturbations that do not show significant effects at first sight may trigger cloud microphysical responses. Effects of the following small perturbations were studied: (1) additional ice formation via immersion, contact, and deposition modes in comparison to solely homogeneous freezing, (2) contact and deposition freezing in comparison to immersion freezing, and (3) small fractions of biological ice nucleating particles (INPs) in comparison to higher fractions of mineral dust INP. The results indicate that the modification of precipitation proceeds via the formation of larger ice particles, which may be supported by direct freezing of larger drops, the growth of pristine ice particles by riming, and by nucleation of larger drops by collisions with pristine ice particles. In comparison to the reference case with homogeneous freezing only, such small perturbations due to additional heterogeneous freezing rather affect the total precipitation amount. It is more likely that the temporal development and the local distribution of precipitation are affected by such perturbations. This results in a gradual increase in precipitation at early cloud stages instead of a strong increase at

  4. Image selection as a service for cloud computing environments

    KAUST Repository

    Filepp, Robert; Shwartz, Larisa; Ward, Christopher; Kearney, Robert D.; Cheng, Karen; Young, Christopher C.; Ghosheh, Yanal

    2010-01-01

    Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately very little information is provided to the users to enable them to make intelligent choices. We believe

  5. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Science.gov (United States)

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of the simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.
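    The abstract mentions a fitness function built from VM utilization, makespan, and the degree of imbalance among VMs. The following sketch shows one common way such metrics are computed for a task-to-VM assignment; the weighting and the exact definitions are assumptions, not the paper's actual fitness function.

```python
def schedule_metrics(assignment, task_lengths, vm_mips):
    """Makespan and degree of imbalance for a task-to-VM assignment.

    assignment   : list, assignment[i] = index of the VM running task i
    task_lengths : list of task lengths (e.g. millions of instructions)
    vm_mips      : list of VM processing capacities (MIPS)
    """
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    makespan = max(finish)
    avg = sum(finish) / len(finish)
    imbalance = (max(finish) - min(finish)) / avg if avg else 0.0
    return makespan, imbalance

def fitness(assignment, task_lengths, vm_mips, w=0.5):
    # hypothetical weighting of the two objectives; the paper's exact
    # fitness function may combine them differently
    makespan, imbalance = schedule_metrics(assignment, task_lengths, vm_mips)
    return w * makespan + (1 - w) * imbalance
```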

  6. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    Directory of Open Access Journals (Sweden)

    Mohammed Abdullahi

    Full Text Available Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in cloud computing environments. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic features of cloud resources make optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions in local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces the makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of the simulation showed that hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  7. Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds

    Science.gov (United States)

    Barreiro Megino, Fernando H.; Jones, Robert; Kucharczyk, Katarzyna; Medrano Llamas, Ramón; van der Ster, Daniel

    2014-06-01

    The recent paradigm shift toward cloud computing in IT, and general interest in "Big Data" in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R&D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula - the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40.000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, is providing a great insight into the cloud computing industry and highlighted several challenges that are being tackled in order to ease the export of the scientific workloads to the cloud environments.

  8. OCCI-Compliant Cloud Configuration Simulation

    OpenAIRE

    Ahmed-Nacer , Mehdi; Gaaloul , Walid; Tata , Samir

    2017-01-01

    In recent years, many organizations such as Amazon, Google, and Microsoft have accelerated the development of their cloud computing ecosystems. This rapid development has created a plethora of cloud resource management interfaces for provisioning, supervising, and managing cloud resources. Thus, there is an obvious need for the standardization of cloud resource management interfaces to cope with the prevalent issues of heterogeneity, integration, and portability. To this end, Open Cloud Com...

  9. Redundant VoD Streaming Service in a Private Cloud: Availability Modeling and Sensitivity Analysis

    OpenAIRE

    Rosangela Maria De Melo; Maria Clara Bezerra; Jamilson Dantas; Rubens Matos; Ivanildo José De Melo Filho; Paulo Maciel

    2014-01-01

    For several years cloud computing has been generating considerable debate and interest within IT corporations. Since cloud computing environments provide storage and processing systems that are adaptable, efficient, and straightforward, thereby enabling rapid infrastructure modifications to be made according to constantly varying workloads, organizations of every size and type are migrating to web-based cloud supported solutions. Due to the advantages of the pay-per-use ...

  10. Enhancing Security by System-Level Virtualization in Cloud Computing Environments

    Science.gov (United States)

    Sun, Dawei; Chang, Guiran; Tan, Chunguang; Wang, Xingwei

    Many trends are opening up the era of cloud computing, which will reshape the IT industry. Virtualization techniques have become an indispensable ingredient of almost all cloud computing systems. Through virtual environments, a cloud provider is able to run a variety of operating systems as needed by each cloud user. Virtualization can improve reliability, security, and availability of applications by using consolidation, isolation, and fault tolerance. In addition, it is possible to balance workloads by using live migration techniques. In this paper, the definition of cloud computing is given; then the service and deployment models are introduced. An analysis of security issues and challenges in the implementation of cloud computing is presented. Moreover, a system-level virtualization case is established to enhance the security of cloud computing environments.

  11. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  12. Towards Media Intercloud Standardization Evaluating Impact of Cloud Storage Heterogeneity

    OpenAIRE

    Aazam, Mohammad; StHilaire, Marc; Huh, EuiNam

    2016-01-01

    Digital media has been growing very rapidly, contributing to the rising popularity of cloud computing. Cloud computing provides ease of management of large amounts of data and resources. With many devices communicating over the Internet and with rapidly increasing user demands, solitary clouds have to communicate with other clouds to fulfill the demands and discover services elsewhere. This scenario is called intercloud computing or cloud federation. Intercloud computing still lacks standard ar...

  13. Helix Nebula and CERN: A Symbiotic approach to exploiting commercial clouds

    International Nuclear Information System (INIS)

    Megino, Fernando H Barreiro; Jones, Robert; Llamas, Ramón Medrano; Ster, Daniel van der; Kucharczyk, Katarzyna

    2014-01-01

    The recent paradigm shift toward cloud computing in IT, and general interest in 'Big Data' in particular, have demonstrated that the computing requirements of HEP are no longer globally unique. Indeed, the CERN IT department and LHC experiments have already made significant R and D investments in delivering and exploiting cloud computing resources. While a number of technical evaluations of interesting commercial offerings from global IT enterprises have been performed by various physics labs, further technical, security, sociological, and legal issues need to be addressed before their large-scale adoption by the research community can be envisaged. Helix Nebula – the Science Cloud is an initiative that explores these questions by joining the forces of three European research institutes (CERN, ESA and EMBL) with leading European commercial IT enterprises. The goals of Helix Nebula are to establish a cloud platform federating multiple commercial cloud providers, along with new business models, which can sustain the cloud marketplace for years to come. This contribution will summarize the participation of CERN in Helix Nebula. We will explain CERN's flagship use-case and the model used to integrate several cloud providers with an LHC experiment's workload management system. During the first proof of concept, this project contributed over 40.000 CPU-days of Monte Carlo production throughput to the ATLAS experiment with marginal manpower required. CERN's experience, together with that of ESA and EMBL, is providing a great insight into the cloud computing industry and highlighted several challenges that are being tackled in order to ease the export of the scientific workloads to the cloud environments.

  14. The workload of fishermen

    DEFF Research Database (Denmark)

    Østergaard, Helle; Jepsen, Jørgen Riis; Berg-Beckhoff, Gabriele

    2016-01-01

    -reported occupational and health data. Questions covering the physical workload were related to seven different work situations and a score summing up the workload was developed for the analysis of the relative impact on different groups of fishermen. Results: Almost all fishermen (96.2%) were familiar to proper...... health. To address the specific areas of fishing with the highest workload, future investments in assistive devices to ease the demanding work and reduce the workload, should particularly address deckhands and less mechanized vessels....

  15. Dynamic workload balancing of parallel applications with user-level scheduling on the Grid

    CERN Document Server

    Korkhov, Vladimir V; Krzhizhanovskaya, Valeria V

    2009-01-01

    This paper suggests a hybrid resource management approach for efficient parallel distributed computing on the Grid. It operates on both application and system levels, combining user-level job scheduling with dynamic workload balancing algorithm that automatically adapts a parallel application to the heterogeneous resources, based on the actual resource parameters and estimated requirements of the application. The hybrid environment and the algorithm for automated load balancing are described, the influence of resource heterogeneity level is measured, and the speedup achieved with this technique is demonstrated for different types of applications and resources.
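    As a simple illustration of balancing work in proportion to heterogeneous resource performance (a static special case of the adaptive scheme described above), the sketch below splits a divisible workload according to measured node speeds; the function name and units are illustrative.

```python
def partition_workload(total_work, node_perf):
    """Split a divisible workload across heterogeneous nodes in proportion
    to their measured performance."""
    total_perf = sum(node_perf)
    shares = [total_work * p / total_perf for p in node_perf]
    return shares

# e.g. 1000 work units over nodes benchmarked at 10, 20 and 70 "speed" units
print(partition_workload(1000, [10, 20, 70]))   # -> [100.0, 200.0, 700.0]
```

An adaptive balancer of the kind described in the record would repeat such a redistribution at run time, using the actual resource parameters and the application's estimated requirements rather than a one-off benchmark.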

  16. DDM Workload Emulation

    CERN Document Server

    Vigne, R; The ATLAS collaboration; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times as it is today by the end of 2014. Further is the availability of more powerful computing resources pushing additional pressure on the DDM system as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from vario...

  17. DDM Workload Emulation

    CERN Document Server

    Vigne, R; The ATLAS collaboration; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times as it is today by the end of 2014. Further is the availability of more powerful computing resources pushing additional pressure on the DDM system as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from vario...

  18. A Holistic Approach for Collaborative Workload Execution in Volunteer Clouds

    DEFF Research Database (Denmark)

    Sebastio, Stefano; Amoretti, Michele; Lluch Lafuente, Alberto

    2018-01-01

    The demand for provisioning, using, and maintaining distributed computational resources is growing hand in hand with the quest for ubiquitous services. Centralized infrastructures such as cloud computing systems provide suitable solutions for many applications, but their scalability could be limi...

  19. The research of the availability at cloud service systems

    Science.gov (United States)

    Demydov, Ivan; Klymash, Mykhailo; Kharkhalis, Zenoviy; Strykhaliuk, Bohdan; Komada, Paweł; Shedreyeva, Indira; Targeusizova, Aliya; Iskakova, Aigul

    2017-08-01

    This paper is devoted to the numerical investigation of availability in cloud service systems. Criteria and constraint calculations were performed, and the results were analyzed for the synthesis of distributed service platforms based on a cloud service-oriented architecture, in particular the variation of availability and system performance indices over a defined set of main parameters. The synthesis method is generalized numerically by describing each type of service workload in statistical form through its Hurst parameter, for every integrated service that must be implemented within the service delivery platform; the platform itself is synthesized by structurally matching virtual machines, combining elementary servicing components into a best-of-breed solution. The restrictions imposed by Amdahl's Law show the necessity of clustering cloud networks, which makes it possible to break a complex dynamic network into separate segments; this simplifies access to virtual machine resources and to the "clouds" in general, reduces the complexity of the topological structure, and enhances overall system performance. Overall, the proposed approaches and results numerically justify and algorithmically describe the structural and functional synthesis of efficient distributed service platforms which, during configuration and operation, can respond to a dynamic environment in terms of a comprehensive range of services and the pulsing workload of nomadic users.
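    The record above characterizes service workload statistically through its Hurst parameter. Purely as an illustration, the following sketch estimates a Hurst exponent from a workload trace using classical rescaled-range (R/S) analysis; it is a generic estimator, not the paper's method, and assumes a reasonably long trace.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent of a workload trace via rescaled-range
    (R/S) analysis; a rough sketch, not a production estimator."""
    x = np.asarray(series, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        chunks = x[:len(x) // n * n].reshape(-1, n)
        dev = chunks - chunks.mean(axis=1, keepdims=True)
        cum = dev.cumsum(axis=1)
        r = cum.max(axis=1) - cum.min(axis=1)   # range of cumulative deviations
        s = chunks.std(axis=1)
        valid = s > 0
        if valid.any():
            sizes.append(n)
            rs.append((r[valid] / s[valid]).mean())
        n *= 2
    # slope of log(R/S) versus log(chunk size) approximates H
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

A Hurst exponent near 0.5 indicates an uncorrelated load, while values approaching 1 indicate long-range dependence, i.e. a bursty, self-similar workload.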

  20. Are Cloud Environments Ready for Scientific Applications?

    Science.gov (United States)

    Mehrotra, P.; Shackleford, K.

    2011-12-01

    Cloud computing environments are becoming widely available both in the commercial and government sectors. They provide flexibility to rapidly provision resources in order to meet dynamic and changing computational needs without the customers incurring capital expenses and/or requiring technical expertise. Clouds also provide reliable access to resources even though the end-user may not have in-house expertise for acquiring or operating such resources. Consolidation and pooling in a cloud environment allow organizations to achieve economies of scale in provisioning or procuring computing resources and services. Because of these and other benefits, many businesses and organizations are migrating their business applications (e.g., websites, social media, and business processes) to cloud environments-evidenced by the commercial success of offerings such as the Amazon EC2. In this paper, we focus on the feasibility of utilizing cloud environments for scientific workloads and workflows particularly of interest to NASA scientists and engineers. There is a wide spectrum of such technical computations. These applications range from small workstation-level computations to mid-range computing requiring small clusters to high-performance simulations requiring supercomputing systems with high bandwidth/low latency interconnects. Data-centric applications manage and manipulate large data sets such as satellite observational data and/or data previously produced by high-fidelity modeling and simulation computations. Most of the applications are run in batch mode with static resource requirements. However, there do exist situations that have dynamic demands, particularly ones with public-facing interfaces providing information to the general public, collaborators and partners, as well as to internal NASA users. In the last few months we have been studying the suitability of cloud environments for NASA's technical and scientific workloads. We have ported several applications to

  1. Cirrus cloud mimic surfaces in the laboratory: organic acids, bases and NOx heterogeneous reactions

    Science.gov (United States)

    Sodeau, J.; Oriordan, B.

    2003-04-01

    There are a variety of biogenic and anthropogenic sources for the simple carboxylic acids to be found in the troposphere giving rise to levels as high as 45 ppb in certain urban areas. In this regard it is of note that ants of genus Formica produce some 10 Tg of formic acid each year; some ten times that produced by industry. The expected sinks are those generally associated with tropospheric chemistry: the major routes studied, to date, being wet and dry deposition. No studies have been carried out hitherto on the role of water-ice surfaces in the atmospheric chemistry of carboxylic acids and the purpose of this paper is to indicate their potential function in the heterogeneous release of atmospheric species such as HONO. The deposition of formic acid on a water-ice surface was studied using FT-RAIR spectroscopy over a range of temperatures between 100 and 165K. In all cases ionization to the formate (and oxonium) ions was observed. The results were confirmed by TPD (Temperature Programmed Desorption) measurements, which indicated that two distinct surface species adsorb to the ice. Potential reactions between the formic acid/formate ion surface and nitrogen dioxide were subsequently investigated by FT-RAIRS. Co-deposition experiments showed that N2O3 and the NO+ ion (associated with water) were formed as products. A mechanism is proposed to explain these results, which involves direct reaction between the organic acid and nitrogen dioxide. Similar experiments involving acetic acid also indicate ionization on a water-ice surface. The results are put into the context of atmospheric chemistry potentially occurring on cirrus cloud surfaces.

  2. Influences of cloud heterogeneity on cirrus optical properties retrieved from the visible and near-infrared channels of MODIS/SEVIRI for flat and optically thick cirrus clouds

    International Nuclear Information System (INIS)

    Zhou, Yongbo; Sun, Xuejin; Zhang, Riwei; Zhang, Chuanliang; Li, Haoran; Zhou, Junhao; Li, Shaohui

    2017-01-01

    The influences of three-dimensional radiative effects and horizontal heterogeneity effects on the retrieval of cloud optical thickness (COT) and effective diameter (De) for cirrus clouds are explored by the SHDOM radiative transfer model. The stochastic cirrus clouds are generated by the Cloudgen model based on the Atmospheric Radiation Measurement program data. Incorporating a new ice cloud spectral model, we evaluate the retrieval errors for two solar zenith angles (SZAs) (30° and 60°), four solar azimuth angles (0°, 45°, 90°, and 180°), and two sensor settings (Moderate Resolution Imaging Spectrometer (MODIS) onboard Aqua and Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard METEOSAT-8). The domain-averaged relative error of COT (μ) ranges from −24.1% to −1.0% (SZA = 30°) and from −11.6% to 3.3% (SZA = 60°), with the uncertainty within 7.5%–12.5% (SZA = 30°) and 20.0%–27.5% (SZA = 60°). For the SZA of 60° only, the relative error and uncertainty are parameterized by the retrieved COT by linear functions, providing bases to correct the retrieved COT and estimate their uncertainties. Besides, De is overestimated by 0.7–15.0 μm on the domain average, with the corresponding uncertainty within 6.7–26.5 μm. The retrieval errors show no discernible dependence on solar azimuth angle due to the flat tops and full coverage of the cirrus samples. The results are valid only for the two samples and for the specific spatial resolution of the radiative transfer simulations. - Highlights: • The retrieved cloud optical properties for 3-D cirrus clouds are evaluated. • The cloud optical thickness and uncertainty could be corrected and estimated. • On the domain average, the effective diameter of ice crystal is overestimated. • The optical properties show non-obvious dependence on the solar azimuth angle.

  3. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  4. Application of physical adsorption thermodynamics to heterogeneous chemistry on polar stratospheric clouds

    Science.gov (United States)

    Elliott, Scott; Turco, Richard P.; Toon, Owen B.; Hamill, Patrick

    1991-01-01

    Laboratory isotherms for the binding of several nonheterogeneously active atmospheric gases and for HCl to water ice are translated into adsorptive equilibrium constants and surface enthalpies. Extrapolation to polar conditions through the Clausius-Clapeyron relation yields coverage estimates below the percent level for N2, Ar, CO2, and CO, suggesting that the crystal faces of type II stratospheric cloud particles may be regarded as clean with respect to these species. For HCl, and perhaps HF and HNO3, estimates rise to several percent, and the adsorbed layer may offer acid or proton sources alternate to the bulk solid for heterogeneous reactions with stratospheric nitrates. Measurements are lacking for many key atmospheric molecules on water ice, and almost entirely for nitric acid trihydrate as substrate. Adsorptive equilibria enter into gas to particle mass flux descriptions, and the binding energy determines rates for desorption of, and encounter between, potential surface reactants.
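    A brief illustration of the extrapolation step mentioned above, written in a standard van 't Hoff / Clausius-Clapeyron form (not necessarily the paper's exact expression): an adsorption equilibrium constant measured at a laboratory temperature T1 can be carried to a stratospheric temperature T2 using the surface enthalpy of adsorption.

```latex
K(T_2) \;=\; K(T_1)\,\exp\!\left[-\frac{\Delta H_{\mathrm{ads}}}{R}\left(\frac{1}{T_2}-\frac{1}{T_1}\right)\right]
```

Since adsorption is exothermic (ΔH_ads < 0), cooling from laboratory to polar stratospheric temperatures increases K and hence the predicted surface coverage.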

  5. Efficient workload management in geographically distributed data centers leveraging autoregressive models

    Science.gov (United States)

    Altomare, Albino; Cesario, Eugenio; Mastroianni, Carlo

    2016-10-01

    The opportunity of using Cloud resources on a pay-as-you-go basis and the availability of powerful data centers and high bandwidth connections are speeding up the success and popularity of Cloud systems, which is making on-demand computing a common practice for enterprises and scientific communities. The reasons for this success include natural business distribution, the need for high availability and disaster tolerance, the sheer size of their computational infrastructure, and/or the desire to provide uniform access times to the infrastructure from widely distributed client sites. Nevertheless, the expansion of large data centers is resulting in a huge rise of electrical power consumed by hardware facilities and cooling systems. The geographical distribution of data centers is becoming an opportunity: the variability of electricity prices, environmental conditions and client requests, both from site to site and with time, makes it possible to intelligently and dynamically (re)distribute the computational workload and achieve as diverse business goals as: the reduction of costs, energy consumption and carbon emissions, the satisfaction of performance constraints, the adherence to Service Level Agreement established with users, etc. This paper proposes an approach that helps to achieve the business goals established by the data center administrators. The workload distribution is driven by a fitness function, evaluated for each data center, which weighs some key parameters related to business objectives, among which, the price of electricity, the carbon emission rate, the balance of load among the data centers etc. For example, the energy costs can be reduced by using a "follow the moon" approach, e.g. by migrating the workload to data centers where the price of electricity is lower at that time. Our approach uses data about historical usage of the data centers and data about environmental conditions to predict, with the help of regressive models, the values of the
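    The abstract above describes a per-data-center fitness function that weighs electricity price, carbon emission rate, and load. The sketch below shows one plausible shape of such a weighted score and a "follow the moon" selection based on it; the attribute names and weights are illustrative assumptions, not the paper's actual formulation.

```python
def datacenter_fitness(dc, weights):
    """Weighted fitness of a data center for hosting new workload; lower is
    better.  The attribute names and weights are illustrative only."""
    return (weights["price"]  * dc["electricity_price"] +
            weights["carbon"] * dc["carbon_rate"] +
            weights["load"]   * dc["current_load"])

def pick_datacenter(datacenters, weights):
    # "follow the moon": the cheapest/greenest/least-loaded site wins
    return min(datacenters, key=lambda dc: datacenter_fitness(dc, weights))

sites = [
    {"name": "eu-night", "electricity_price": 0.08, "carbon_rate": 0.2, "current_load": 0.5},
    {"name": "us-day",   "electricity_price": 0.14, "carbon_rate": 0.4, "current_load": 0.3},
]
print(pick_datacenter(sites, {"price": 0.5, "carbon": 0.3, "load": 0.2})["name"])
```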

  6. DDM Workload Emulation

    Science.gov (United States)

    Vigne, R.; Schikuta, E.; Garonne, V.; Stewart, G.; Barisits, M.; Beermann, T.; Lassnig, M.; Serfon, C.; Goossens, L.; Nairz, A.; Atlas Collaboration

    2014-06-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are on top of the list. Current expectations are that the amount of data will be three to four times as it is today by the end of 2014. Further is the availability of more powerful computing resources pushing additional pressure on the DDM system as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, first the current workload, observed in DQ2, must be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various sources (e.g. analysing the central file catalogue logs). Finally a description of the implemented emulation framework, used for stress-testing Rucio, is given.

  7. DDM workload emulation

    International Nuclear Information System (INIS)

    Vigne, R; Schikuta, E; Garonne, V; Stewart, G; Barisits, M; Beermann, T; Lassnig, M; Serfon, C; Goossens, L; Nairz, A

    2014-01-01

    Rucio is the successor of the current Don Quijote 2 (DQ2) system for the distributed data management (DDM) system of the ATLAS experiment. The reasons for replacing DQ2 are manifold, but besides high maintenance costs and architectural limitations, scalability concerns are at the top of the list. Current expectations are that the amount of data will be three to four times what it is today by the end of 2014. Furthermore, the availability of more powerful computing resources puts additional pressure on the DDM system, as it increases the demands on data provisioning. Although DQ2 is capable of handling the current workload, it is already at its limits. To ensure that Rucio will be up to the expected workload, a way to emulate it is needed. To do so, the current workload, observed in DQ2, must first be understood in order to scale it up to future expectations. The paper discusses how selected core concepts are applied to the workload of the experiment and how knowledge about the current workload is derived from various sources (e.g. analysing the central file catalogue logs). Finally, a description of the implemented emulation framework, used for stress-testing Rucio, is given.

  8. ELASTIC CLOUD COMPUTING ARCHITECTURE AND SYSTEM FOR HETEROGENEOUS SPATIOTEMPORAL COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Shi

    2017-10-01

    Full Text Available Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  9. Elastic Cloud Computing Architecture and System for Heterogeneous Spatiotemporal Computing

    Science.gov (United States)

    Shi, X.

    2017-10-01

    Spatiotemporal computation implements a variety of different algorithms. When big data are involved, a desktop computer or standalone application may not be able to complete the computation task due to limited memory and computing power. Now that a variety of hardware accelerators and computing platforms are available to improve the performance of geocomputation, different algorithms may behave differently on different computing infrastructures and platforms. Some are perfect for implementation on a cluster of graphics processing units (GPUs), while GPUs may not be useful for certain kinds of spatiotemporal computation. The same situation arises in utilizing a cluster of Intel's many-integrated-core (MIC) or Xeon Phi processors, as well as Hadoop or Spark platforms, to handle big spatiotemporal data. Furthermore, considering the energy efficiency requirement in general computation, a Field Programmable Gate Array (FPGA) may be a better solution for energy efficiency when the performance of the computation can be similar to or better than GPUs and MICs. It is expected that an elastic cloud computing architecture and system that integrates GPUs, MICs, and FPGAs could be developed and deployed to support spatiotemporal computing over heterogeneous data types and computational problems.

  10. Defining Inter-Cloud Architecture for Interoperability and Integration

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.J.; Laat, C. de

    2012-01-01

    This paper presents on-going research to develop the Inter-Cloud Architecture that should address problems in multi-provider multi-domain heterogeneous Cloud based applications integration and interoperability, including integration and interoperability with legacy infrastructure services. Cloud

  11. Lxcloud: a prototype for an internal cloud in HEP. Experiences and lessons learned

    International Nuclear Information System (INIS)

    Goasguen, Sebastien; Moreira, Belmiro; Roche, Ewan; Schwickerath, Ulrich

    2012-01-01

    Born out of the desire to virtualize our batch compute farm, CERN has developed an internal cloud known as lxcloud. Since December 2010 it has been used to run a small but sufficient part of our batch workload, thus allowing operational and development experience to be gained. Recently, this service has evolved into a public cloud, allowing selected physics users an alternative way of accessing resources.

  12. School Nurse Workload.

    Science.gov (United States)

    Endsley, Patricia

    2017-02-01

    The purpose of this scoping review was to survey the most recent (5 years) acute care, community health, and mental health nursing workload literature to understand themes and research avenues that may be applicable to school nursing workload research. The search for empirical and nonempirical literature was conducted using search engines such as Google Scholar, PubMed, CINAHL, and Medline. Twenty-nine empirical studies and nine nonempirical articles were selected for inclusion. Themes that emerged consistent with school nurse practice include patient classification systems, environmental factors, assistive personnel, missed nursing care, and nurse satisfaction. School nursing is a public health discipline and population studies are an inherent research priority but may overlook workload variables at the clinical level. School nurses need a consistent method of population assessment, as well as evaluation of appropriate use of assistive personnel and school environment factors. Assessment of tasks not directly related to student care and professional development must also be considered in total workload.

  13. A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System

    OpenAIRE

    Žumer, Viljem; Brest, Janez

    2002-01-01

    A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may be of different sizes depending on the current load of each machine in the heterogeneous computing system.
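
    The load-dependent decomposition can be pictured with a short sketch. This is a generic illustration of the idea, assuming subtask sizes inversely proportional to current machine load; the paper's exact sizing rule may differ.

```python
def decompose(total_work: float, machine_loads: dict) -> dict:
    """Split a workload into subtasks sized inversely to each machine's
    current load, so lightly loaded machines receive larger subtasks."""
    capacity = {m: 1.0 / (1.0 + load) for m, load in machine_loads.items()}
    total_capacity = sum(capacity.values())
    return {m: total_work * c / total_capacity for m, c in capacity.items()}

current_load = {"node-a": 0.2, "node-b": 0.9, "node-c": 0.5}
print(decompose(1000.0, current_load))
# node-a gets the largest share, node-b the smallest
```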

  14. Eleven quick tips for architecting biomedical informatics workflows with cloud computing

    Science.gov (United States)

    Moore, Jason H.

    2018-01-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world’s largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction. PMID:29596416

  15. Eleven quick tips for architecting biomedical informatics workflows with cloud computing.

    Science.gov (United States)

    Cole, Brian S; Moore, Jason H

    2018-03-01

    Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.

  16. Eleven quick tips for architecting biomedical informatics workflows with cloud computing.

    Directory of Open Access Journals (Sweden)

    Brian S Cole

    2018-03-01

    Full Text Available Cloud computing has revolutionized the development and operations of hardware and software across diverse technological arenas, yet academic biomedical research has lagged behind despite the numerous and weighty advantages that cloud computing offers. Biomedical researchers who embrace cloud computing can reap rewards in cost reduction, decreased development and maintenance workload, increased reproducibility, ease of sharing data and software, enhanced security, horizontal and vertical scalability, high availability, a thriving technology partner ecosystem, and much more. Despite these advantages that cloud-based workflows offer, the majority of scientific software developed in academia does not utilize cloud computing and must be migrated to the cloud by the user. In this article, we present 11 quick tips for architecting biomedical informatics workflows on compute clouds, distilling knowledge gained from experience developing, operating, maintaining, and distributing software and virtualized appliances on the world's largest cloud. Researchers who follow these tips stand to benefit immediately by migrating their workflows to cloud computing and embracing the paradigm of abstraction.

  17. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Rinaldi, L; Ciocca, C; K, M; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to the evolution of the ATLAS Computing Model.

  18. The workload analysis in welding workshop

    Science.gov (United States)

    Wahyuni, D.; Budiman, I.; Tryana Sembiring, M.; Sitorus, E.; Nasution, H.

    2018-03-01

    This research was conducted in a welding workshop which produces doors, fences, canopies, etc., according to customers’ orders. The symptoms of excessive workload were evident from employee complaints, requests for additional employees, and late completion times (11 of 28 orders were late, and 7 customers complained). The top management of the workshop assumed that the employees’ workload was still within a tolerable limit. Therefore, a workload analysis was required to determine the number of employees needed. The workload was measured using a physiological method and workload analysis. The result of this research can be utilized by the workshop for better workload management.

  19. A Parameterization for Land-Atmosphere-Cloud Exchange (PLACE): Documentation and Testing of a Detailed Process Model of the Partly Cloudy Boundary Layer over Heterogeneous Land.

    Science.gov (United States)

    Wetzel, Peter J.; Boone, Aaron

    1995-07-01

    This paper presents a general description of, and demonstrates the capabilities of, the Parameterization for Land-Atmosphere-Cloud Exchange (PLACE). The PLACE model is a detailed process model of the partly cloudy atmospheric boundary layer and underlying heterogeneous land surfaces. In its development, particular attention has been given to three of the model's subprocesses: the prediction of boundary layer cloud amount, the treatment of surface and soil subgrid heterogeneity, and the liquid water budget. The model includes a three-parameter nonprecipitating cumulus model that feeds back to the surface and boundary layer through radiative effects. Surface heterogeneity in the PLACE model is treated both statistically and by resolving explicit subgrid patches. The model maintains a vertical column of liquid water that is divided into seven reservoirs, from the surface interception store down to bedrock. Five single-day demonstration cases are presented, in which the PLACE model was initialized, run, and compared to field observations from four diverse sites. The model is shown to predict cloud amount well in these cases, while predicting the surface fluxes with similar accuracy. A slight tendency to underpredict boundary layer depth is noted in all cases. Sensitivity tests were also run using anemometer-level forcing provided by the Project for Inter-comparison of Land-surface Parameterization Schemes (PILPS). The purpose is to demonstrate the relative impact of heterogeneity of surface parameters on the predicted annual mean surface fluxes. Significant sensitivity to subgrid variability of certain parameters is demonstrated, particularly to parameters related to soil moisture. A major result is that the PLACE-computed impact of total (homogeneous) deforestation of a rain forest is comparable in magnitude to the effect of imposing heterogeneity of certain surface variables, and is similarly comparable to the overall variance among the other PILPS participant models. Were

  20. Community Cloud Computing

    Science.gov (United States)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  1. MeReg: Managing Energy-SLA Tradeoff for Green Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rahul Yadav

    2017-01-01

    Full Text Available Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms are required for VM consolidation when host underload or overload is detected. After allocating resources to all VMs, underloaded hosts are required to assume energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristic energy-aware algorithm, which creates an upper CPU utilization threshold using recent CPU utilization history to detect overloaded hosts, and dynamic VM selection algorithms to consolidate the VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
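
    The adaptive upper-threshold idea can be made concrete with a short sketch. One common way to derive such a threshold from recent utilization history is the median absolute deviation of the samples; whether MeReg uses exactly this statistic is not stated in the abstract, so the statistic, the safety factor, and the sample window below are assumptions.

```python
import statistics

def adaptive_upper_threshold(cpu_history, safety=2.5):
    """Upper CPU-utilization threshold computed from recent utilization
    samples (fractions in [0, 1]) via the median absolute deviation (MAD):
    the more volatile the host, the lower the threshold."""
    med = statistics.median(cpu_history)
    mad = statistics.median(abs(u - med) for u in cpu_history)
    return max(0.0, 1.0 - safety * mad)

def is_overloaded(cpu_history, current_utilization):
    return current_utilization > adaptive_upper_threshold(cpu_history)

recent = [0.52, 0.61, 0.58, 0.75, 0.66, 0.70, 0.64]
print(round(adaptive_upper_threshold(recent), 3), is_overloaded(recent, 0.93))
```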

  2. Sophisticated Online Learning Scheme for Green Resource Allocation in 5G Heterogeneous Cloud Radio Access Networks

    KAUST Repository

    Alqerm, Ismail

    2018-01-23

    5G is the upcoming evolution of current cellular networks that aims at satisfying the future demand for data services. Heterogeneous cloud radio access networks (H-CRANs) are envisioned as a new trend of 5G that exploits the advantages of heterogeneous and cloud radio access networks to enhance spectral and energy efficiency. Remote radio heads (RRHs) are small cells utilized to provide high data rates for users with high quality of service (QoS) requirements, while a high-power macro base station (BS) is deployed for coverage maintenance and service of low-QoS users. Inter-tier interference between macro BSs and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRANs. Therefore, we propose an efficient resource allocation scheme using online learning, which mitigates interference and maximizes energy efficiency while maintaining QoS requirements for all users. The resource allocation includes resource blocks (RBs) and power. The proposed scheme is implemented using two approaches: centralized, where the resource allocation is processed at a controller integrated with the baseband processing unit, and decentralized, where macro BSs cooperate to achieve the optimal resource allocation strategy. To foster the performance of such a sophisticated scheme with model-free learning, we consider users' priority in RB allocation and a compact state representation learning methodology to improve the speed of convergence and account for the curse of dimensionality during the learning process. The proposed scheme, including both approaches, is implemented using a software-defined radio testbed. The obtained results and simulation results confirm that the proposed resource allocation solution in H-CRANs increases energy efficiency significantly and maintains users' QoS.

  3. Trust Model to Enhance Security and Interoperability of Cloud Environment

    Science.gov (United States)

    Li, Wenjuan; Ping, Lingdi

    Trust is one of the most important means to improve security and enable interoperability among current heterogeneous, independent cloud platforms. This paper first analyzes several trust models used in large, distributed environments and then introduces a novel cloud trust model to solve security issues in cross-cloud environments, in which cloud customers can choose different providers' services and resources in heterogeneous domains can cooperate. The model is domain-based. It divides one cloud provider's resource nodes into the same domain and sets a trust agent. It distinguishes two different roles, cloud customer and cloud server, and designs different strategies for each. In our model, trust recommendation is treated as one type of cloud service, just like computation or storage. The model achieves both identity authentication and behavior authentication. The results of emulation experiments show that the proposed model can efficiently and safely construct trust relationships in cross-cloud environments.

  4. Evolution of the ATLAS PanDA workload management system for exascale computational science

    International Nuclear Information System (INIS)

    Maeno, T; Klimentov, A; Panitkin, S; Schovancova, J; Wenaus, T; Yu, D; De, K; Nilsson, P; Oleynik, D; Petrosyan, A; Vaniachine, A

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  5. Understanding the Performance of Low Power Raspberry Pi Cloud for Big Data

    Directory of Open Access Journals (Sweden)

    Wajdi Hajji

    2016-06-01

    Full Text Available Nowadays, Internet-of-Things (IoT) devices generate data at high speed and large volume. Often the data require real-time processing to support high system responsiveness, which can be supported by localised Cloud and/or Fog computing paradigms. However, there are considerably large deployments of IoT, such as sensor networks in remote areas where Internet connectivity is sparse, challenging the localised Cloud and/or Fog computing paradigms. With the advent of the Raspberry Pi, a credit card-sized single board computer, there is a great opportunity to construct low-cost, low-power portable clouds to support real-time data processing next to IoT deployments. In this paper, we extend our previous work on constructing a Raspberry Pi Cloud to study its feasibility for real-time big data analytics under realistic application-level workload in both native and virtualised environments. We have extensively tested the performance of a single-node Raspberry Pi 2 Model B with httperf and a cluster of 12 nodes with Apache Spark and HDFS (Hadoop Distributed File System). Our results have demonstrated that our portable cloud is useful for supporting real-time big data analytics. On the other hand, our results have also unveiled that the overhead for CPU-bound workload in the virtualised environment is surprisingly high, at 67.2%. We have found that, for big data applications, the virtualisation overhead is fractional for small jobs but becomes more significant for large jobs, up to 28.6%.

  6. Service workload patterns for QoS-driven cloud resource management

    OpenAIRE

    Zhang, Li; Zhang, Yichuan; Jamshidi, Pooyan; Xu, Lei; Pahl, Claus

    2015-01-01

    Cloud service providers negotiate SLAs for customer services they offer based on the reliability of performance and availability of their lower-level platform infrastructure. While availability management is more mature, performance management is less reliable. In order to support a continuous approach that covers the initial static infrastructure configuration as well as dynamic reconfiguration and auto-scaling, an accurate and efficient solution is required. We propose a prediction techni...

  7. Application-oriented offloading in heterogeneous networks for mobile cloud computing

    Science.gov (United States)

    Tseng, Fan-Hsun; Cho, Hsin-Hung; Chang, Kai-Di; Li, Jheng-Cong; Shih, Timothy K.

    2018-04-01

    Nowadays, Internet applications have become so complicated that a mobile device needs more computing resources for shorter execution time, but it is restricted by limited battery capacity. Mobile cloud computing (MCC) has emerged to tackle the finite resource problem of mobile devices. MCC offloads the tasks and jobs of mobile devices to cloud and fog environments by using an offloading scheme. It is vital to MCC to decide which tasks should be offloaded and how to offload them efficiently. In the paper, we formulate the offloading problem between mobile device and cloud data center and propose two application-oriented algorithms for minimum execution time, i.e. the Minimum Offloading Time for Mobile device (MOTM) algorithm and the Minimum Execution Time for Cloud data center (METC) algorithm. The MOTM algorithm minimizes offloading time by selecting appropriate offloading links based on application categories. The METC algorithm minimizes execution time in the cloud data center by selecting virtual and physical machines with resource requirements corresponding to the applications. Simulation results show that the proposed mechanism not only minimizes total execution time for mobile devices but also decreases their energy consumption.
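
    The MOTM and METC algorithms themselves are not spelled out in the abstract; the sketch below only illustrates the basic offloading trade-off such schemes build on, namely offloading when remote execution plus data transfer beats local execution. All parameter names and the example numbers are assumptions.

```python
def local_time(cycles, f_local_hz):
    """Execution time if the task runs on the mobile device."""
    return cycles / f_local_hz

def offload_time(cycles, f_cloud_hz, data_bytes, bandwidth_bps):
    """Execution time in the cloud plus the time to ship the task state."""
    return cycles / f_cloud_hz + (8 * data_bytes) / bandwidth_bps

def should_offload(cycles, f_local_hz, f_cloud_hz, data_bytes, bandwidth_bps):
    return offload_time(cycles, f_cloud_hz, data_bytes, bandwidth_bps) \
        < local_time(cycles, f_local_hz)

# 5 Gcycle task, 1 GHz handset vs. a much faster VM, 2 MB of state over 20 Mbit/s
print(should_offload(5e9, 1e9, 16e9, 2e6, 20e6))  # -> True
```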

  8. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  9. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  10. Memory and subjective workload assessment

    Science.gov (United States)

    Staveland, L.; Hart, S.; Yeh, Y. Y.

    1986-01-01

    Recent research suggested that subjective introspection of workload is not based upon specific retrieval of information from long-term memory, and only reflects the average workload that is imposed upon the human operator by a particular task. These findings are based upon global ratings of workload for the overall task, suggesting that subjective ratings are limited in their ability to retrieve specific details of a task from long-term memory. To clarify the limits memory imposes on subjective workload assessment, the difficulty of task segments was varied and the workload of specified segments was retrospectively rated. The ratings were retrospectively collected for the manipulations of three levels of segment difficulty. Subjects were assigned to one of two memory groups. In the Before group, subjects knew before performing a block of trials which segment to rate. In the After group, subjects did not know which segment to rate until after performing the block of trials. The subjective ratings, RTs (reaction times) and MTs (movement times) were compared within and between groups. Performance measures and subjective evaluations of workload reflected the experimental manipulations. Subjects were sensitive to different difficulty levels, and recalled the average workload of task components. Cueing did not appear to help recall, and memory group differences possibly reflected variations in the groups of subjects, or an additional memory task.

  11. Evaluating the Influence of the Client Behavior in Cloud Computing.

    Science.gov (United States)

    Souza Pardo, Mário Henrique; Centurion, Adriana Molina; Franco Eustáquio, Paulo Sérgio; Carlucci Santana, Regina Helena; Bruschi, Sarita Mazzini; Santana, Marcos José

    2016-01-01

    This paper proposes a novel approach for the implementation of simulation scenarios, providing a client entity for cloud computing systems. The client entity allows the creation of scenarios in which the client behavior has an influence on the simulation, making the results more realistic. The proposed client entity is based on several characteristics that affect the performance of a cloud computing system, including different modes of submission and their behavior when the waiting time between requests (think time) is considered. The proposed characterization of the client enables the sending of either individual requests or groups of Web services to scenarios where the workload takes the form of bursts. The client entity is included in CloudSim, a framework for modelling and simulation of cloud computing. Experimental results show the influence of the client behavior on the performance of the services executed in a cloud computing system.

  12. Measuring the effects of heterogeneity on distributed systems

    Science.gov (United States)

    El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi

    1991-01-01

    Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis of such systems assumes homogeneity. This assumption of homogeneity has been driven mainly by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real-time distributed systems. In contrast to previous results, which indicate that random scheduling may be as good as a more complex scheduler, the algorithm studied here is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.

  13. Measuring perceived mental workload in children.

    Science.gov (United States)

    Laurie-Rose, Cynthia; Frey, Meredith; Ennis, Aristi; Zamary, Amanda

    2014-01-01

    Little is known about the mental workload, or psychological costs, associated with information processing tasks in children. We adapted the highly regarded NASA Task Load Index (NASA-TLX) multidimensional workload scale (Hart & Staveland, 1988) to test its efficacy for use with elementary school children. We developed 2 types of tasks, each with 2 levels of demand, to draw differentially on resources from the separate subscales of workload. In Experiment 1, our participants were both typical and school-labeled gifted children recruited from 4th and 5th grades. Results revealed that task type elicited different workload profiles, and task demand directly affected the children's experience of workload. In general, gifted children experienced less workload than typical children. Objective response time and accuracy measures provide evidence for the criterion validity of the workload ratings. In Experiment 2, we applied the same method with 1st- and 2nd-grade children. Findings from Experiment 2 paralleled those of Experiment 1 and support the use of NASA-TLX with even the youngest elementary school children. These findings contribute to the fledgling field of educational ergonomics and attest to the innovative application of workload research. Such research may optimize instructional techniques and identify children at risk for experiencing overload.
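
    For reference, the standard NASA-TLX instrument that the study adapts combines six subscale ratings into an overall workload score using weights from 15 pairwise comparisons. The sketch below follows the original Hart and Staveland scoring scheme; the adapted children's instrument described in the abstract may compute scores differently, and the example numbers are invented.

```python
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_weighted_score(ratings, pair_wins):
    """Overall NASA-TLX workload: ratings are 0-100 per subscale, pair_wins
    counts how often each subscale was chosen across the 15 pairwise
    comparisons, so the weights sum to 15."""
    assert sum(pair_wins.values()) == 15
    return sum(ratings[s] * pair_wins[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(tlx_weighted_score(ratings, weights))  # -> 59.0
```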

  14. Cost-aware request routing in multi-geography cloud data centres using software-defined networking

    Science.gov (United States)

    Yuan, Haitao; Bi, Jing; Li, Bo Hu; Tan, Wei

    2017-03-01

    Current geographically distributed cloud data centres (CDCs) incur enormous energy and bandwidth costs to provide multiple cloud applications to users around the world. Previous studies focus only on energy cost minimisation in distributed CDCs. However, a CDC provider needs to deliver massive volumes of data between users and distributed CDCs through internet service providers (ISPs). The geographical diversity of bandwidth and energy costs poses a highly challenging problem: how to minimise the total cost of a CDC provider. With the recently emerging software-defined networking, we study the total cost minimisation problem for a CDC provider by exploiting the geographical diversity of energy and bandwidth costs. We formulate the total cost minimisation problem as a mixed integer non-linear programming (MINLP) problem. Then, we develop heuristic algorithms to solve the problem and to provide cost-aware request routing for joint optimisation of the selection of ISPs and the number of servers in distributed CDCs. Besides, to tackle the dynamic workload in distributed CDCs, this article proposes a regression-based workload prediction method to obtain future incoming workload. Finally, this work evaluates the cost-aware request routing by trace-driven simulation and compares it with existing approaches to demonstrate its effectiveness.
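
    The regression-based workload prediction step can be illustrated with a minimal sketch. The article does not specify the regression model, so the ordinary least-squares trend line below, the one-step-ahead horizon, and the sample request rates are all assumptions.

```python
def predict_next_workload(samples):
    """Fit an ordinary least-squares line to recent workload samples
    (e.g. requests per minute) and extrapolate one step ahead."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * n + intercept

requests_per_min = [1200, 1350, 1280, 1500, 1620, 1580]
print(round(predict_next_workload(requests_per_min)))  # projected next-minute load
```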

  15. Modelling of cirrus clouds – Part 2: Competition of different nucleation mechanisms

    Directory of Open Access Journals (Sweden)

    P. Spichtinger

    2009-04-01

    Full Text Available We study the competition of two different freezing mechanisms (homogeneous and heterogeneous freezing) in the same environment for cold cirrus clouds. To this end we use the recently developed and validated ice microphysics scheme (Spichtinger and Gierens, 2009a), which distinguishes between ice classes according to their formation process. We investigate cases with purely homogeneous ice formation and compare them with cases where background ice nuclei in varying concentrations heterogeneously form ice prior to homogeneous nucleation. We additionally perform a couple of sensitivity studies regarding the threshold humidity for heterogeneous freezing, uplift speed, and ambient temperature, and we study the influence of random motions induced by temperature fluctuations in the clouds. We find three types of cloud evolution: homogeneously dominated, heterogeneously dominated, and a mixed type where neither nucleation process dominates. The latter case is prone to long-lasting in-cloud ice supersaturation of the order of 30% and more.

  16. Cloud Computing Concepts for Academic Collaboration

    Directory of Open Access Journals (Sweden)

    K.K. Jabbour

    2013-05-01

    Full Text Available The aim of this paper is to explain how cloud computing technologies improve academic collaboration. To accomplish that, we have to explore the current trends of the global computer network field. During the past few years, technology has evolved in many ways; many valuable web applications and services have been introduced to internet users. Social networking, synchronous/asynchronous communication, on-line video conferencing, and wikis are just a few examples of the web technologies that have altered the way people interact nowadays. By utilizing some of the latest web tools and services and combining them with the most recent semantic Cloud Computing techniques, a wide and growing array of technology services and applications is provided, which are highly specialized or distinctive to individuals or to educational campuses. Therefore, cloud computing can facilitate a new way of worldwide academic collaboration and introduce students to new and different ways that can help them manage massive workloads.

  17. The CTTC 5G End-to-End Experimental Platform : Integrating Heterogeneous Wireless/Optical Networks, Distributed Cloud, and IoT Devices

    OpenAIRE

    Munoz, Raul; Mangues-Bafalluy, Josep; Vilalta, Ricard; Verikoukis, Christos; Alonso-Zarate, Jesus; Bartzoudis, Nikolaos; Georgiadis, Apostolos; Payaro, Miquel; Perez-Neira, Ana; Casellas, Ramon; Martinez, Ricardo; Nunez-Martinez, Jose; Requena Esteso, Manuel; Pubill, David; Font-Bach, Oriol

    2016-01-01

    The Internet of Things (IoT) will facilitate a wide variety of applications in different domains, such as smart cities, smart grids, industrial automation (Industry 4.0), smart driving, assistance of the elderly, and home automation. Billions of heterogeneous smart devices with different application requirements will be connected to the networks and will generate huge aggregated volumes of data that will be processed in distributed cloud infrastructures. On the other hand, there is also a gen...

  18. Patient Safety Incidents and Nursing Workload

    Directory of Open Access Journals (Sweden)

    Katya Cuadros Carlesi

    Full Text Available ABSTRACT Objective: to identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. Method: quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28); for the other services, we used the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. Results: 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found between the workload variables (r = 0.9611 to r = 0.9919) and the rate of falls (r = 0.8770). The medication error rates, mechanical containment incidents and self-removal of invasive devices were not correlated with the workload. Conclusions: the workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload.

  19. Global model comparison of heterogeneous ice nucleation parameterizations in mixed phase clouds

    Science.gov (United States)

    Yun, Yuxing; Penner, Joyce E.

    2012-04-01

    A new aerosol-dependent mixed phase cloud parameterization for deposition/condensation/immersion (DCI) ice nucleation and one for contact freezing are compared to the original formulations in a coupled general circulation model and aerosol transport model. The present-day cloud liquid and ice water fields and cloud radiative forcing are analyzed and compared to observations. The new DCI freezing parameterization changes the spatial distribution of the cloud water field. Significant changes are found in the cloud ice water fraction and in the middle cloud fractions. The new DCI freezing parameterization predicts less ice water path (IWP) than the original formulation, especially in the Southern Hemisphere. The smaller IWP leads to a less efficient Bergeron-Findeisen process resulting in a larger liquid water path, shortwave cloud forcing, and longwave cloud forcing. It is found that contact freezing parameterizations have a greater impact on the cloud water field and radiative forcing than the two DCI freezing parameterizations that we compared. The net solar flux at top of atmosphere and net longwave flux at the top of the atmosphere change by up to 8.73 and 3.52 W m-2, respectively, due to the use of different DCI and contact freezing parameterizations in mixed phase clouds. The total climate forcing from anthropogenic black carbon/organic matter in mixed phase clouds is estimated to be 0.16-0.93 W m-2 using the aerosol-dependent parameterizations. A sensitivity test with contact ice nuclei concentration in the original parameterization fit to that recommended by Young (1974) gives results that are closer to the new contact freezing parameterization.

  20. Defining inter-cloud architecture for interoperability and integration

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; Makkes, M.X.; Strijkers, R.; de Laat, C.; Zimmermann, W.; Lee, Y.W.; Demchenko, Y.

    2012-01-01

    This paper presents an on-going research to develop the Inter-Cloud Architecture, which addresses the architectural problems in multi-provider multi-domain heterogeneous cloud based applications integration and interoperability, including integration and interoperability with legacy infrastructure

  1. Improved Cloud resource allocation: how INDIGO-DataCloud is overcoming the current limitations in Cloud schedulers

    Science.gov (United States)

    Lopez Garcia, Alvaro; Zangrando, Lisa; Sgaravatto, Massimo; Llorens, Vincent; Vallero, Sara; Zaccolo, Valentina; Bagnasco, Stefano; Taneja, Sonia; Dal Pra, Stefano; Salomoni, Davide; Donvito, Giacinto

    2017-10-01

    Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centers for decades in order to obtain the best usage of the resources, providing fair usage and partitioning for the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued, ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. These scheduling strategies have been identified by the INDIGO-DataCloud project as too simplistic for accommodating scientific workloads in an efficient way, leading to an underutilization of the resources, an undesirable situation in scientific data centers. In this work, we will present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions.
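
    The contrast between first-come, first-served ordering and an LRMS-style fair-share policy can be sketched briefly. This is a generic fair-share illustration under assumed project shares and usage figures, not the scheduler developed by INDIGO-DataCloud.

```python
def fair_share_order(pending, usage_core_hours, nominal_share):
    """Order pending requests so that projects furthest below their fair share
    go first, instead of strict first-come, first-served ordering."""
    def priority(request):
        project = request["project"]
        # Lower normalised usage = more deserving = scheduled earlier.
        return usage_core_hours.get(project, 0.0) / nominal_share[project]
    return sorted(pending, key=priority)

pending = [{"id": 1, "project": "astro"},
           {"id": 2, "project": "bio"},
           {"id": 3, "project": "astro"}]
usage_core_hours = {"astro": 800.0, "bio": 100.0}
nominal_share = {"astro": 0.5, "bio": 0.5}
print([r["id"] for r in fair_share_order(pending, usage_core_hours, nominal_share)])
# -> [2, 1, 3]: bio has consumed less of its share, so it jumps the queue
```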

  2. An Architecture of IoT Service Delegation and Resource Allocation Based on Collaboration between Fog and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Aymen Abdullah Alsaffar

    2016-01-01

    Full Text Available Despite the wide utilization of cloud computing (e.g., services, applications, and resources), some services, applications, and smart devices are not able to fully benefit from this attractive cloud computing paradigm due to the following issues: (1) smart devices might be lacking in capacity (e.g., processing, memory, storage, battery, and resource allocation), (2) they might be lacking in network resources, and (3) the high network latency to a centralized server in the cloud might not be efficient for delay-sensitive applications, services, and resource allocation requests. Fog computing is a promising paradigm that can extend cloud resources to the edge of the network, solving the abovementioned issues. As a result, in this work we propose an architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. We provide a new algorithm, consisting of linearized decision-tree rules based on three conditions (service size, completion time, and VM capacity), for managing and delegating user requests in order to balance the workload. Moreover, we propose an algorithm to allocate resources to meet service level agreement (SLA) and quality of service (QoS) requirements, as well as optimizing big data distribution in fog and cloud computing. Our simulation results show that our proposed approach can efficiently balance the workload, improve resource allocation, optimize big data distribution, and show better performance than other existing methods.
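
    The three-condition delegation rules can be pictured with a small sketch. The thresholds, parameter names, and rule ordering below are illustrative assumptions; the paper's linearized decision tree will differ in its exact cut-points.

```python
def delegate(service_size_mb, completion_deadline_s, fog_vm_capacity_mb):
    """Decide whether a request is served at the fog layer or delegated to the
    cloud, using the three conditions named in the abstract: service size,
    completion time, and VM capacity."""
    if completion_deadline_s < 1.0:                 # delay-sensitive request
        return "fog" if service_size_mb <= fog_vm_capacity_mb else "cloud"
    if service_size_mb > fog_vm_capacity_mb:        # too big for the edge VM
        return "cloud"
    return "fog" if service_size_mb < 50 else "cloud"

print(delegate(20, 0.5, 128))    # -> fog   (small, delay-sensitive)
print(delegate(500, 0.5, 128))   # -> cloud (exceeds fog VM capacity)
print(delegate(80, 10.0, 128))   # -> cloud (fits, but not latency-critical)
```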

  3. Resource Management in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Andrei IONESCU

    2015-01-01

    Full Text Available Mobile cloud computing is a major research topic in Information Technology & Communications. It integrates cloud computing, mobile computing and wireless networks. While mainly built on cloud computing, it has to operate using more heterogeneous resources, with implications for how these resources are managed and used. Managing the resources of a mobile cloud is not a trivial task, as it involves vastly different architectures, and the process is outside the scope of human users. Use of the resources by applications at both the platform and software tiers comes with its own challenges. This paper presents different approaches in use for managing cloud resources at the infrastructure and platform levels.

  4. Perceived Time as a Measure of Mental Workload

    DEFF Research Database (Denmark)

    Hertzum, Morten; Holmegaard, Kristin Due

    2013-01-01

    The mental workload imposed by systems is important to their operation and usability. Consequently, researchers and practitioners need reliable, valid, and easy-to-administer methods for measuring mental workload. The ratio of perceived time to clock time appears to be such a method, yet mental...... is a performance-related rather than task-related dimension of mental workload. We find a higher perceived time ratio for timed than untimed tasks. According to subjective workload ratings and pupil-diameter measurements the timed tasks impose higher mental workload. This finding contradicts the prospective...... paradigm, which asserts that perceived time decreases with increasing mental workload. We also find a higher perceived time ratio for solved than unsolved tasks, while subjective workload ratings indicate lower mental workload for the solved tasks. This finding shows that the relationship between...

  5. Implementation of a Novel Educational Modeling Approach for Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sara Ouahabi

    2014-12-01

    Full Text Available The Cloud model is cost-effective because customers pay for their actual usage without upfront costs, and scalable because it can be used more or less depending on the customers’ needs. Due to its advantages, the Cloud has been increasingly adopted in many areas, such as banking, e-commerce, the retail industry, and academia. In education, the cloud is used to manage the large volume of educational resources produced across many universities. Keeping content interoperable in an inter-university Cloud is not always easy. The diffusion of pedagogical content on the Cloud by different E-Learning institutions leads to heterogeneous content, which influences the quality of teaching offered by universities to teachers and learners. For this reason comes the idea of using IMS-LD coupled with metadata in the cloud. This paper presents the implementation of our previous educational modeling by combining a J2EE application with the Reload editor, which consists of modeling heterogeneous content in the cloud. The new approach that we followed focuses on keeping interoperability between Educational Cloud content for teachers and learners and facilitates the task of identifying, reusing, sharing, and adapting teaching and learning resources in the Cloud.

  6. Psychological workload and body weight

    DEFF Research Database (Denmark)

    Overgaard, Dorthe; Gyntelberg, Finn; Heitmann, Berit L

    2004-01-01

    on the association between obesity and psychological workload. METHOD: We carried out a review of the associations between psychological workload and body weight in men and women. In total, 10 cross-sectional studies were identified. RESULTS: The review showed little evidence of a general association between...... adjustment for education. For women, there was no evidence of a consistent association. CONCLUSION: The reviewed articles were not supportive of any associations between psychological workload and either general or abdominal obesity. Future epidemiological studies in this field should be prospective......BACKGROUND: According to Karasek's Demand/Control Model, workload can be conceptualized as job strain, a combination of psychological job demands and control in the job. High job strain may result from high job demands combined with low job control. Aim To give an overview of the literature...

  7. Early experience on using glideinWMS in the cloud

    International Nuclear Information System (INIS)

    Andrews, W; Dost, J; Martin, T; McCrea, A; Pi, H; Sfiligoi, I; Würthwein, F; Bockelman, B; Weitzel, D; Bradley, D; Frey, J; Livny, M; Tannenbaum, T; Evans, D; Fisk, I; Holzman, B; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Cloud computing is steadily gaining traction in both the commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both the code changes that were needed to make it work in the cloud world, as well as the architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  8. Early experience on using glidein WMS in the cloud

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, W. [UC, San Diego; Bockelman, B. [Nebraska U.; Bradley, D. [Wisconsin U., Madison; Dost, J. [UC, San Diego; Evans, D. [Fermilab; Fisk, I. [Fermilab; Frey, J. [Wisconsin U., Madison; Holzman, B. [Fermilab; Livny, M. [Wisconsin U., Madison; Martin, T. [UC, San Diego; McCrea, A. [UC, San Diego; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Pi, H. [UC, San Diego; Sfiligoi, I. [UC, San Diego; Sheldon, P. [Vanderbilt U.; Tannenbaum, T. [Wisconsin U., Madison; Tiradani, A. [Fermilab; Wurthwein, F. [UC, San Diego; Weitzel, D. [Nebraska U.

    2011-01-01

    Cloud computing is steadily gaining traction in both the commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both the code changes that were needed to make it work in the cloud world, as well as the architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  9. Pilot Workload and Speech Analysis: A Preliminary Investigation

    Science.gov (United States)

    Bittner, Rachel M.; Begault, Durand R.; Christopher, Bonny R.

    2013-01-01

    Prior research has questioned the effectiveness of speech analysis to measure the stress, workload, truthfulness, or emotional state of a talker. The question remains regarding the utility of speech analysis for restricted vocabularies such as those used in aviation communications. A part-task experiment was conducted in which participants performed Air Traffic Control read-backs in different workload environments. Participants' subjective workload and the speech qualities of fundamental frequency (F0) and articulation rate were evaluated. A significant increase in subjective workload rating was found for high workload segments. F0 was found to be significantly higher during high workload, while articulation rates were found to be significantly slower. No correlation was found to exist between subjective workload and F0 or articulation rate.

  10. Big Data X-Learning Resources Integration and Processing in Cloud Environments

    Directory of Open Access Journals (Sweden)

    Kong Xiangsheng

    2014-09-01

    Full Text Available The cloud computing platform has good flexibility characteristics, and more and more learning systems are being migrated to the cloud platform. Firstly, this paper describes different types of educational environments and the data they provide. Then, it proposes an architecture for mining, integrating, and processing heterogeneous learning resources. In order to integrate and process the different types of learning resources from different educational environments, this paper specifically proposes a novel solution, with a massive storage integration algorithm and a conversion algorithm, for the storage and management of heterogeneous learning resources in cloud environments.

  11. Workload modelling for data-intensive systems

    CERN Document Server

    Lassnig, Mario

    This thesis presents a comprehensive study built upon the requirements of a global data-intensive system, built for the ATLAS Experiment at CERN's Large Hadron Collider. First, a scalable method is described to capture distributed data management operations in a non-intrusive way. These operations are collected into a globally synchronised sequence of events, the workload. A comparative analysis of this new data-intensive workload against existing computational workloads is conducted, leading to the discovery of the importance of descriptive attributes in the operations. Existing computational workload models only consider the arrival rates of operations; however, in data-intensive systems the correlations between attributes play a central role. Furthermore, the detrimental effect of rapid correlated arrivals, so-called bursts, is assessed. A model is proposed that can learn burst behaviour from captured workload, and in turn forecast potential future bursts. To help with the creation of a full representative...

  12. Experimental Analysis on Autonomic Strategies for Cloud Elasticity

    OpenAIRE

    Dupont , Simon; Lejeune , Jonathan; Alvares , Frederico; Ledoux , Thomas

    2015-01-01

    International audience; In spite of the indubitable advantages of elasticity in Cloud infrastructures, some technical and conceptual limitations are still to be considered. For instance, resource start-up time is generally too long to react to unexpected workload spikes. Also, the granularity of the billing cycles in existing pricing models may cause consumers to suffer from partial usage waste. We advocate that the software layer can take part in the elasticity process as the overhead of software...

  13. An Efficient Virtual Machine Consolidation Scheme for Multimedia Cloud Computing.

    Science.gov (United States)

    Han, Guangjie; Que, Wenhui; Jia, Gangyong; Shu, Lei

    2016-02-18

    Cloud computing has innovated the IT industry in recent years, as it can deliver subscription-based services to users in the pay-as-you-go model. Meanwhile, multimedia cloud computing is emerging based on cloud computing to provide a variety of media services on the Internet. However, with the growing popularity of multimedia cloud computing, its large energy consumption not only contributes to greenhouse gas emissions, but also drives up cloud users' costs. Therefore, multimedia cloud providers should try to minimize their energy consumption as much as possible while satisfying the consumers' resource requirements and guaranteeing quality of service (QoS). In this paper, we have proposed a remaining utilization-aware (RUA) algorithm for virtual machine (VM) placement, and a power-aware algorithm (PA) is proposed to find proper hosts to shut down for energy saving. These two algorithms have been combined and applied to cloud data centers for completing the process of VM consolidation. Simulation results have shown that there exists a trade-off between the cloud data center's energy consumption and service-level agreement (SLA) violations. Besides, the RUA algorithm is able to deal with variable workload to prevent hosts from overloading after VM placement and to reduce the SLA violations dramatically.
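
    As an illustration of the two steps described above (utilization-aware placement followed by a power-aware shutdown pass), the following minimal Python sketch packs a VM onto the active host that would be left with the least spare capacity and then flags nearly idle hosts as power-down candidates. The host structures, capacities and the 10% idle threshold are assumptions for illustration, not the algorithms published in the paper.

      # Illustrative sketch of remaining-utilization-aware (RUA) placement and a
      # power-aware (PA) shutdown pass; all values and thresholds are assumed.
      def rua_place(vm_demand, hosts):
          """Place a VM on the active host that leaves the least remaining capacity."""
          best, best_remaining = None, None
          for host in hosts:
              remaining = host["capacity"] - host["used"] - vm_demand
              if remaining >= 0 and (best is None or remaining < best_remaining):
                  best, best_remaining = host, remaining
          if best is not None:
              best["used"] += vm_demand
          return best

      def pa_shutdown(hosts, idle_threshold=0.1):
          """Return lightly loaded hosts as candidates to power down (assumed threshold)."""
          return [h["name"] for h in hosts if h["used"] / h["capacity"] <= idle_threshold]

      hosts = [{"name": "h1", "capacity": 1.0, "used": 0.05},
               {"name": "h2", "capacity": 1.0, "used": 0.70}]
      rua_place(0.25, hosts)      # lands on h2, the tighter fit
      print(pa_shutdown(hosts))   # ['h1'] -- nearly idle, a shutdown candidate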

  14. Patient Safety Incidents and Nursing Workload 1

    Science.gov (United States)

    Carlesi, Katya Cuadros; Padilha, Kátia Grillo; Toffoletto, Maria Cecília; Henriquez-Roldán, Carlos; Juan, Monica Andrea Canales

    2017-01-01

    ABSTRACT Objective: to identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. Method: quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28) and for the other services, we used the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. Results: 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found between the workload variables (r = 0.9611 to r = 0.9919) and the rate of falls (r = 0.8770). The medication error rates, mechanical containment incidents and self-removal of invasive devices were not correlated with the workload. Conclusions: the workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload. PMID:28403334

  15. Image selection as a service for cloud computing environments

    KAUST Repository

    Filepp, Robert

    2010-12-01

    Customers of Cloud Services are expected to choose specific machine images to instantiate in order to host their workloads. Unfortunately very little information is provided to the users to enable them to make intelligent choices. We believe that as the number of images proliferates it will become increasingly difficult for users to decide effectively. Cloud service providers often allow their customers to instantiate standard system images, to modify their instances, and to store images of these customized instances for public or private future use. Storing modified instances as images enables customers to avoid re-provisioning and re-configuration of required resources thereby reducing their future costs. However Cloud service providers generally do not expose details regarding the configurations of the images in a rigorous canonical fashion nor offer services that assist clients in the best target image selection to support client transformation objectives. Rather, they allow customers to enter a free-form description of an image based on the client's best effort. This means in order to find a "best fit" image to instantiate, a human user must review potentially thousands of image descriptions, reading each description to evaluate its suitability as a platform to host their source application. Furthermore, the actual content of the selected image may differ greatly from its description. Finally, even images that have been customized and retained for future use may need additional provisioning and customization to accommodate specific needs. In this paper we propose a service that accumulates image configuration details in a canonical fashion and a further service that employs an algorithm to order images per best fit / least cost in conformance to user-specified policies. These services collectively facilitate workload transformation into enterprise cloud environments.
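
    To make the "best fit / least cost" ordering concrete, here is a toy Python ranking of catalogued images against a requested configuration; the attribute names, the cost field and the scoring policy are assumptions invented for illustration, not the canonical schema proposed in the paper.

      # Toy ordering of machine images by best fit / least cost against a user policy.
      def rank_images(required, images, cost_weight=0.5):
          scored = []
          for img in images:
              missing = {k: v for k, v in required.items() if img["config"].get(k) != v}
              fit_penalty = len(missing)               # items still to be provisioned
              score = fit_penalty + cost_weight * img["cost"]
              scored.append((score, img["id"], sorted(missing)))
          return sorted(scored, key=lambda t: t[0])    # lowest score = best candidate

      required = {"os": "rhel6", "db": "mysql", "jdk": "1.6"}
      catalog = [{"id": "img-a", "cost": 1.0, "config": {"os": "rhel6", "db": "mysql"}},
                 {"id": "img-b", "cost": 0.3, "config": {"os": "rhel6"}}]
      for score, img_id, missing in rank_images(required, catalog):
          print(img_id, round(score, 2), "still needs:", missing)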

  16. Workload Balancing on Heterogeneous Systems: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Muraraşu, Alin

    2012-01-01

    Multi-core parallelism and accelerators are becoming common features of today’s computer systems, as they allow for computational power without sacrificing energy efficiency. Due to heterogeneity, tuning for each type of compute unit and adequate load balancing are essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression. Its performance critical part is the interpolation routine used for decompression. Results show that our load balancing scheme allows for an efficient acceleration of interpolation on heterogeneous systems containing multi-core CPUs and GPUs.
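
    A generic way to get dynamic balancing across compute units of different speeds is to let workers drain a shared queue of work chunks, so the faster device naturally takes more of them. The Python sketch below shows that general idea only; the chunk granularity and the per-chunk timings are made-up stand-ins, not the paper's scheme for sparse grid interpolation.

      # Dynamic load balancing between heterogeneous workers via a shared work queue.
      import queue
      import threading
      import time
      from collections import Counter

      def worker(name, seconds_per_chunk, chunks, log):
          while True:
              try:
                  chunk = chunks.get_nowait()
              except queue.Empty:
                  return
              time.sleep(seconds_per_chunk)   # stand-in for interpolating one chunk
              log.append(name)

      chunks = queue.Queue()
      for c in range(32):
          chunks.put(c)

      log = []
      threads = [threading.Thread(target=worker, args=("cpu", 0.004, chunks, log)),
                 threading.Thread(target=worker, args=("gpu", 0.001, chunks, log))]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print(Counter(log))   # the faster "gpu" worker ends up with most of the chunks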

  17. AVOCLOUDY: a simulator of volunteer clouds

    DEFF Research Database (Denmark)

    Sebastio, Stefano; Amoretti, Michele; Lluch Lafuente, Alberto

    2015-01-01

    The increasing demand of computational and storage resources is shifting users toward the adoption of cloud technologies. Cloud computing is based on the vision of computing as utility, where users no more need to buy machines but simply access remote resources made available on-demand by cloud...... application, intelligent agents constitute a feasible technology to add autonomic features to cloud operations. Furthermore, the volunteer computing paradigm—one of the Information and Communications Technology (ICT) trends of the last decade—can be pulled alongside traditional cloud approaches...... management solutions before their deployment in the production environment. However, currently available simulators of cloud platforms are not suitable to model and analyze such heterogeneous, large-scale, and highly dynamic systems. We propose the AVOCLOUDY simulator to fill this gap. This paper presents...

  18. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Directory of Open Access Journals (Sweden)

    Supriya Kinger

    2014-01-01

    Full Text Available Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced up to some extent by efficient scheduling of Cloud resources. Working temperature on which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which does not consider current temperature of nodes before making scheduling decisions. Thus, a reduction in need of cooling systems for a Cloud environment has been obtained and validated.

  19. Prediction based proactive thermal virtual machine scheduling in green clouds.

    Science.gov (United States)

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced up to some extent by efficient scheduling of Cloud resources. Working temperature on which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which does not consider current temperature of nodes before making scheduling decisions. Thus, a reduction in need of cooling systems for a Cloud environment has been obtained and validated.

  20. Prediction Based Proactive Thermal Virtual Machine Scheduling in Green Clouds

    Science.gov (United States)

    Kinger, Supriya; Kumar, Rajesh; Sharma, Anju

    2014-01-01

    Cloud computing has rapidly emerged as a widely accepted computing paradigm, but the research on Cloud computing is still at an early stage. Cloud computing provides many advanced features but it still has some shortcomings such as relatively high operating cost and environmental hazards like increasing carbon footprints. These hazards can be reduced up to some extent by efficient scheduling of Cloud resources. Working temperature on which a machine is currently running can be taken as a criterion for Virtual Machine (VM) scheduling. This paper proposes a new proactive technique that considers current and maximum threshold temperature of Server Machines (SMs) before making scheduling decisions with the help of a temperature predictor, so that maximum temperature is never reached. Different workload scenarios have been taken into consideration. The results obtained show that the proposed system is better than existing systems of VM scheduling, which does not consider current temperature of nodes before making scheduling decisions. Thus, a reduction in need of cooling systems for a Cloud environment has been obtained and validated. PMID:24737962

  1. Workload Control with Continuous Release

    NARCIS (Netherlands)

    Phan, B. S. Nguyen; Land, M. J.; Gaalman, G. J. C.

    2009-01-01

    Workload Control (WLC) is a production planning and control concept which is suitable for the needs of make-to-order job shops. Release decisions based on the workload norms form the core of the concept. This paper develops continuous time WLC release variants and investigates their due date

  2. Combining Quick-Turnaround and Batch Workloads at Scale

    Science.gov (United States)

    Matthews, Gregory A.

    2012-01-01

    NAS uses PBS Professional to schedule and manage the workload on Pleiades, an 11,000+ node InfiniBand cluster. At this scale the user experience for quick-turnaround jobs can degrade, which led NAS initially to set up two separate PBS servers, each dedicated to a particular workload. Recently we have employed PBS hooks and scheduler modifications to merge these workloads together under one PBS server, delivering sub-1-minute start times for the quick-turnaround workload, and enabling dynamic management of the resources set aside for that workload.

  3. Biobjective VoIP Service Management in Cloud Infrastructure

    Directory of Open Access Journals (Sweden)

    Jorge M. Cortés-Mendoza

    2016-01-01

    Full Text Available Voice over Internet Protocol (VoIP) allows communication of voice and/or data over the internet in a less expensive and more reliable manner than traditional ISDN systems. This solution typically allows flexible interconnection between organizations and companies across domains. Cloud VoIP solutions can offer even cheaper and more scalable service when the virtualized telephone infrastructure is used in the most efficient way. Scheduling and load balancing algorithms are fundamental parts of this approach. Unfortunately, VoIP scheduling techniques do not take into account uncertainty in dynamic and unpredictable cloud environments. In this paper, we formulate the problem of scheduling of VoIP services in distributed cloud environments and propose a new model for biobjective optimization. We consider the special case of the on-line nonclairvoyant dynamic bin-packing problem and discuss solutions for provider cost and quality of service optimization. We propose twenty call allocation strategies and evaluate their performance by comprehensive simulation analysis on a real workload covering six months of the MIXvoip company service.
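
    The on-line bin-packing view above can be illustrated with one of the simplest possible allocation strategies, first-fit: each arriving call goes to the first rented server with spare capacity, and emptied servers are released. This Python sketch is only meant to make the problem framing concrete; the capacity value is assumed and this is not one of the twenty strategies evaluated in the paper.

      # Online first-fit allocation of VoIP calls to rented servers (dynamic bin packing).
      SERVER_CAPACITY = 30          # assumed concurrent calls per rented VM

      def allocate(call_id, servers):
          for s in servers:
              if len(s["calls"]) < SERVER_CAPACITY:
                  s["calls"].add(call_id)
                  return s
          new_server = {"id": len(servers), "calls": {call_id}}
          servers.append(new_server)
          return new_server

      def release(call_id, servers):
          for s in servers:
              s["calls"].discard(call_id)
          servers[:] = [s for s in servers if s["calls"]]   # return empty servers

      servers = []
      for call in range(75):
          allocate(call, servers)
      print(len(servers), "servers rented")        # 3 = ceil(75 / 30)
      for call in range(45, 75):
          release(call, servers)
      print(len(servers), "servers still rented")  # 2, one server was emptied and released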

  4. Mental workload during brain-computer interface training.

    Science.gov (United States)

    Felton, Elizabeth A; Williams, Justin C; Vanderheiden, Gregg C; Radwin, Robert G

    2012-01-01

    It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities. Mental workload of brain-computer interfaces (BCI) can be evaluated with the NASA Task Load Index (TLX). The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities. The data can inform the design of BCIs that will have greater usability.
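
    The overall TLX score mentioned above, in its raw (unweighted) form, is simply the mean of the six subscale ratings on a 0-100 scale; the weighted variant additionally applies pairwise-comparison weights, which are omitted here. The ratings in this small Python example are invented, not data from the study.

      # Raw (unweighted) NASA-TLX: the mean of the six subscale ratings (0-100 scale).
      def raw_tlx(ratings):
          subscales = ("mental", "physical", "temporal", "performance", "effort", "frustration")
          return sum(ratings[s] for s in subscales) / len(subscales)

      example = {"mental": 70, "physical": 20, "temporal": 55,
                 "performance": 40, "effort": 60, "frustration": 45}
      print(round(raw_tlx(example), 1))   # 48.3, comparable in magnitude to the scores above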

  5. The hipster approach for improving cloud system efficiency

    OpenAIRE

    Nishtala, Rajiv; Carpenter, Paul Matthew; Petrucci, Vinicius; Martorell Bofill, Xavier

    2017-01-01

    In 2013, U.S. data centers accounted for 2.2% of the country’s total electricity consumption, a figure that is projected to increase rapidly over the next decade. Many important data center workloads in cloud computing are interactive, and they demand strict levels of quality-of-service (QoS) to meet user expectations, making it challenging to optimize power consumption along with increasing performance demands. This article introduces Hipster, a technique that combines heuristics and rein...

  6. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    Science.gov (United States)

    Spiegel, J.K.; Aemisegger, F.; Scholl, M.; Wienhold, F.G.; Collett, J.L.; Lee, T.; van Pinxteren, D.; Mertes, S.; Tilgner, A.; Herrmann, H.; Werner, Roland A.; Buchmann, N.; Eugster, W.

    2012-01-01

    In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to be in isotopic equilibrium immediately with the surrounding water vapor. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets have been collected at different moments in time, delivering isotope ratios from different collection times.

  7. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    Directory of Open Access Journals (Sweden)

    J. K. Spiegel

    2012-10-01

    Full Text Available In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to be in isotopic equilibrium immediately with the surrounding water vapor. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets have been collected at different moments in time, delivering isotope ratios from different collection times.

  8. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    Science.gov (United States)

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time

  9. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    Science.gov (United States)

    Chung, Wei-Chun; Chen, Chien-Chih; Ho, Jan-Ming; Lin, Chung-Yen; Hsu, Wen-Lian; Wang, Yu-Chun; Lee, D T; Lai, Feipei; Huang, Chih-Wei; Chang, Yu-Jung

    2014-01-01

    Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate to improve the

  10. CloudDOE: a user-friendly tool for deploying Hadoop clouds and analyzing high-throughput sequencing data with MapReduce.

    Directory of Open Access Journals (Sweden)

    Wei-Chun Chung

    Full Text Available Explosive growth of next-generation sequencing data has resulted in ultra-large-scale data sets and ensuing computational problems. Cloud computing provides an on-demand and scalable environment for large-scale data analysis. Using a MapReduce framework, data and workload can be distributed via a network to computers in the cloud to substantially reduce computational latency. Hadoop/MapReduce has been successfully adopted in bioinformatics for genome assembly, mapping reads to genomes, and finding single nucleotide polymorphisms. Major cloud providers offer Hadoop cloud services to their users. However, it remains technically challenging to deploy a Hadoop cloud for those who prefer to run MapReduce programs in a cluster without built-in Hadoop/MapReduce. We present CloudDOE, a platform-independent software package implemented in Java. CloudDOE encapsulates technical details behind a user-friendly graphical interface, thus liberating scientists from having to perform complicated operational procedures. Users are guided through the user interface to deploy a Hadoop cloud within in-house computing environments and to run applications specifically targeted for bioinformatics, including CloudBurst, CloudBrush, and CloudRS. One may also use CloudDOE on top of a public cloud. CloudDOE consists of three wizards, i.e., Deploy, Operate, and Extend wizards. Deploy wizard is designed to aid the system administrator to deploy a Hadoop cloud. It installs Java runtime environment version 1.6 and Hadoop version 0.20.203, and initiates the service automatically. Operate wizard allows the user to run a MapReduce application on the dashboard list. To extend the dashboard list, the administrator may install a new MapReduce application using Extend wizard. CloudDOE is a user-friendly tool for deploying a Hadoop cloud. Its smart wizards substantially reduce the complexity and costs of deployment, execution, enhancement, and management. Interested users may collaborate

  11. A Model of Student Workload

    Science.gov (United States)

    Bowyer, Kyle

    2012-01-01

    Student workload is a contributing factor to students deciding to withdraw from their study before completion of the course, at significant cost to students, institutions and society. The aim of this paper is to create a basic workload model for a group of undergraduate students studying business law units at Curtin University in Western…

  12. Evaluating the Efficacy of the Cloud for Cluster Computation

    Science.gov (United States)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Cloud Compute (EC2) that promises improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
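
    As a quick back-of-the-envelope check of the HPL figures quoted above, 2 TFLOPS measured at 70% efficiency implies a theoretical peak of roughly 2.86 TFLOPS for the 240-core cluster, or about 12 GFLOPS per core:

      # Back-of-the-envelope arithmetic for the reported HPL result.
      measured_tflops = 2.0
      efficiency = 0.70
      peak_tflops = measured_tflops / efficiency
      print(round(peak_tflops, 2), "TFLOPS theoretical peak")       # ~2.86
      print(round(peak_tflops * 1000 / 240, 1), "GFLOPS per core")  # ~11.9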

  13. Mental workload in decision and control

    Science.gov (United States)

    Sheridan, T. B.

    1979-01-01

    This paper briefly reviews the problems of defining and measuring the 'mental workload' of aircraft pilots and other human operators of complex dynamic systems. Of the alternative approaches the author indicates a clear preference for the use of subjective scaling. Some recent experiments from MIT and elsewhere are described which utilize subjective mental workload scales in conjunction with human decision and control tasks in the laboratory. Finally a new three-dimensional mental workload rating scale, under current development for use by IFR aircraft pilots, is presented.

  14. Zero Trust Cloud Networks using Transport Access Control and High Availability Optical Bypass Switching

    Directory of Open Access Journals (Sweden)

    Casimer DeCusatis

    2017-04-01

    Full Text Available Cyberinfrastructure is undergoing a radical transformation as traditional enterprise and cloud computing environments hosting dynamic, mobile workloads replace telecommunication data centers. Traditional data center security best practices involving network segmentation are not well suited to these new environments. We discuss a novel network architecture, which enables an explicit zero trust approach, based on a steganographic overlay, which embeds authentication tokens in the TCP packet request, and first-packet authentication. Experimental demonstration of this approach is provided in both an enterprise-class server and cloud computing data center environment.

  15. Coordinated Energy Management in Heterogeneous Processors

    Directory of Open Access Journals (Sweden)

    Indrani Paul

    2014-01-01

    Full Text Available This paper examines energy management in a heterogeneous processor consisting of an integrated CPU–GPU for high-performance computing (HPC) applications. Energy management for HPC applications is challenged by their uncompromising performance requirements and complicated by the need for coordinating energy management across distinct core types – a new and less understood problem. We examine the intra-node CPU–GPU frequency sensitivity of HPC applications on tightly coupled CPU–GPU architectures as the first step in understanding power and performance optimization for a heterogeneous multi-node HPC system. The insights from this analysis form the basis of a coordinated energy management scheme, called DynaCo, for integrated CPU–GPU architectures. We implement DynaCo on a modern heterogeneous processor and compare its performance to a state-of-the-art power- and performance-management algorithm. DynaCo improves measured average energy-delay squared (ED2) product by up to 30% with less than 2% average performance loss across several exascale and other HPC workloads.
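
    The ED2 metric quoted above is simply energy multiplied by the square of execution time, which weights performance more heavily than the plain energy-delay product. The numbers in this small Python example are invented purely to show how a roughly 30% ED2 improvement can coexist with a ~2% slowdown:

      # Energy-delay-squared (ED^2) product; baseline and managed values are invented.
      def ed2(energy_joules, delay_seconds):
          return energy_joules * delay_seconds ** 2

      baseline = ed2(energy_joules=1000.0, delay_seconds=10.0)
      managed  = ed2(energy_joules=680.0,  delay_seconds=10.2)   # 2% slower, far less energy
      print("ED^2 improvement: %.0f%%" % (100 * (baseline - managed) / baseline))   # ~29%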

  16. Relationship between workload and mind-wandering in simulated driving.

    Directory of Open Access Journals (Sweden)

    Yuyu Zhang

    Full Text Available Mental workload and mind-wandering are highly related to driving safety. This study investigated the relationship between mental workload and mind-wandering while driving. Participants (N = 40) were asked to perform a car following task in a driving simulator, and to report whether they had experienced mind-wandering upon hearing a tone. After driving, participants reported their workload using the NASA Task Load Index (TLX). Results revealed an interaction between workload and mind-wandering from two different perspectives. First, there was a negative correlation between workload and mind-wandering (r = -0.459, p < 0.01) for different individuals. Second, from a temporal perspective, workload and mind-wandering frequency increased significantly over task time and were positively correlated. Together, these findings contribute to understanding the roles of workload and mind-wandering in driving.

  17. Measuring workload in collaborative contexts: trait versus state perspectives.

    Science.gov (United States)

    Helton, William S; Funke, Gregory J; Knott, Benjamin A

    2014-03-01

    In the present study, we explored the state versus trait aspects of measures of task and team workload in a disaster simulation. There is often a need to assess workload in both individual and collaborative settings. Researchers in this field often use the NASA Task Load Index (NASA-TLX) as a global measure of workload by aggregating the NASA-TLX's component items. Using this practice, one may overlook the distinction between traits and states. Fifteen dyadic teams (11 inexperienced, 4 experienced) completed five sessions of a tsunami disaster simulator. After every session, individuals completed a modified version of the NASA-TLX that included team workload measures. We then examined the workload items by using a between-subjects and within-subjects perspective. Between-subjects and within-subjects correlations among the items indicated the workload items are more independent within subjects (as states) than between subjects (as traits). Correlations between the workload items and simulation performance were also different at the trait and state levels. Workload may behave differently at trait (between-subjects) and state (within-subjects) levels. Researchers interested in workload measurement as a state should take a within-subjects perspective in their analyses.

  18. Workload-Aware Indexing of Continuously Moving Objects

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Yiu, Man Lung; Jensen, Christian Søndergaard

    2009-01-01

    structures can easily become performance bottlenecks. We address the need for indexing that is adaptive to the workload characteristics, called workload-aware, in order to cover the space in between maintaining an accurate index, and having no index at all. Our proposal, QU-Trade, extends R-tree type...... indexing and achieves workload-awareness by controlling the underlying index’s filtering quality. QU-Trade safely drops index updates, increasing the overlap in the index when the workload is update-intensive, and it restores the filtering capabilities of the index when the workload becomes query......-intensive. This is done in a non-uniform way in space so that the quality of the index remains high in frequently queried regions, while it deteriorates in frequently updated regions. The adaptation occurs online, without the need for a learning phase. We apply QU-Trade to the R-tree and the TPR-tree, and we offer...

  19. An Efficient Interactive Model for On-Demand Sensing-As-A-Services of Sensor-Cloud

    Directory of Open Access Journals (Sweden)

    Thanh Dinh

    2016-06-01

    Full Text Available This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.
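
    The request aggregation role described above can be pictured with a toy Python step in which the sensor-cloud merges the sampling requests of several applications into one schedule per physical sensor that satisfies the strictest requirement. The field names and the min-period merging rule are assumptions for illustration, not the paper's protocol.

      # Toy aggregation: one sensing schedule per sensor satisfying all applications.
      def aggregate(requests):
          schedule = {}
          for req in requests:
              for sensor in req["sensors"]:
                  current = schedule.get(sensor)
                  period = req["period_s"]
                  schedule[sensor] = period if current is None else min(current, period)
          return schedule

      requests = [{"app": "air-quality", "sensors": ["s1", "s2"], "period_s": 60},
                  {"app": "fire-watch",  "sensors": ["s2", "s3"], "period_s": 10}]
      print(aggregate(requests))   # s2 samples every 10 s and serves both applications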

  20. Workload based order acceptance in job shop environments

    NARCIS (Netherlands)

    Ebben, Mark; Hans, Elias W.; Olde Weghuis, F.M.; Olde Weghuis, F.M.

    2005-01-01

    In practice, order acceptance and production planning are often functionally separated. As a result, order acceptance decisions are made without considering the actual workload in the production system, or by only regarding the aggregate workload. We investigate the importance of a good workload

  1. Characterization and Architectural Implications of Big Data Workloads

    OpenAIRE

    Wang, Lei; Zhan, Jianfeng; Jia, Zhen; Han, Rui

    2015-01-01

    Big data areas are expanding in a fast way in terms of increasing workloads and runtime systems, and this situation imposes a serious challenge to workload characterization, which is the foundation of innovative system and architecture design. The previous major efforts on big data benchmarking either propose a comprehensive but very large set of workloads, or only select a few workloads according to so-called popularity, which may lead to partial or even biased observations. In this paper, o...

  2. The Magellan Final Report on Cloud Computing

    Energy Technology Data Exchange (ETDEWEB)

    Coghlan, Susan; Yelick, Katherine

    2011-12-21

    The goal of Magellan, a project funded through the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR), was to investigate the potential role of cloud computing in addressing the computing needs for the DOE Office of Science (SC), particularly related to serving the needs of mid- range computing and future data-intensive computing workloads. A set of research questions was formed to probe various aspects of cloud computing from performance, usability, and cost. To address these questions, a distributed testbed infrastructure was deployed at the Argonne Leadership Computing Facility (ALCF) and the National Energy Research Scientific Computing Center (NERSC). The testbed was designed to be flexible and capable enough to explore a variety of computing models and hardware design points in order to understand the impact for various scientific applications. During the project, the testbed also served as a valuable resource to application scientists. Applications from a diverse set of projects such as MG-RAST (a metagenomics analysis server), the Joint Genome Institute, the STAR experiment at the Relativistic Heavy Ion Collider, and the Laser Interferometer Gravitational Wave Observatory (LIGO), were used by the Magellan project for benchmarking within the cloud, but the project teams were also able to accomplish important production science utilizing the Magellan cloud resources.

  3. Modelling heterogeneous ice nucleation on mineral dust and soot with parameterizations based on laboratory experiments

    Science.gov (United States)

    Hoose, C.; Hande, L. B.; Mohler, O.; Niemand, M.; Paukert, M.; Reichardt, I.; Ullrich, R.

    2016-12-01

    Between 0 and -37°C, ice formation in clouds is triggered by aerosol particles acting as heterogeneous ice nuclei. At lower temperatures, heterogeneous ice nucleation on aerosols can occur at lower supersaturations than homogeneous freezing of solutes. In laboratory experiments, the ice nucleation ability of different aerosol species (e.g. desert dusts, soot, biological particles) has been studied in detail and quantified via various theoretical or empirical parameterization approaches. For experiments in the AIDA cloud chamber, we have quantified the ice nucleation efficiency via a temperature- and supersaturation-dependent ice nucleation active site density. Here we present a new empirical parameterization scheme for immersion and deposition ice nucleation on desert dust and soot based on these experimental data. The application of this parameterization to the simulation of cirrus clouds, deep convective clouds and orographic clouds will be shown, including the extension of the scheme to the treatment of freezing of rain drops. The results are compared to other heterogeneous ice nucleation schemes. Furthermore, an aerosol-dependent parameterization of contact ice nucleation is presented.

  4. Managing Teacher Workload: Work-Life Balance and Wellbeing

    Science.gov (United States)

    Bubb, Sara; Earley, Peter

    2004-01-01

    This book is divided into three sections. In the First Section, entitled "Wellbeing and Workload", the authors examine teacher workload and how teachers spend their time. Chapter 1 focuses on what the causes and effects of excessive workload are, especially in relation to wellbeing, stress and, crucially, recruitment and retention?…

  5. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, no matter whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with a workload analysis of a largely manual assembling technology for a roller bearing assembling process, carried out in a big company with integrated bearings manufacturing processes. In this analysis the delay sampling technique has been used to identify and divide all bearing assemblers' activities, and to obtain information on how the 480 minutes of the working day are allotted to each activity. The study shows some ways to increase process productivity without supplementary investments and also indicates that process automation could be the solution to gain maximum productivity.

  6. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Hillman, Benjamin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marchand, Roger T. [Univ. of Washington, Seattle, WA (United States); Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2017-08-01

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
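
    For readers unfamiliar with the overlap assumption discussed above, the total cloud cover of a model column under maximum-random overlap (adjacent cloudy layers overlap maximally, layers separated by clear sky overlap randomly) can be computed with the standard recursion sketched below in Python; the layer fractions are arbitrary example values and this is not the simulator's code.

      # Total cloud cover of a column under the maximum-random overlap assumption.
      def total_cloud_cover_max_random(fractions):
          clear = 1.0 - fractions[0]
          for prev, cur in zip(fractions, fractions[1:]):
              if prev >= 1.0:
                  return 1.0
              clear *= (1.0 - max(prev, cur)) / (1.0 - prev)
          return 1.0 - clear

      column = [0.2, 0.5, 0.0, 0.3]   # cloud fraction per layer, top to bottom
      print(round(total_cloud_cover_max_random(column), 2))   # 0.65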

  7. Monday Morning Workload Reports (FY15 - 17)

    Data.gov (United States)

    Department of Veterans Affairs — The Monday Morning Workload Report (MMWR) displays a snapshot of the Veterans Benefits Administration’s (VBA) workload as of a specified date, typically the previous...

  8. [Nursing workloads and working conditions: integrative review].

    Science.gov (United States)

    Schmoeller, Roseli; Trindade, Letícia de Lima; Neis, Márcia Binder; Gelbcke, Francine Lima; de Pires, Denise Elvira Pires

    2011-06-01

    This study reviews the theoretical production concerning workloads and working conditions for nurses. For that, an integrative review was carried out using scientific articles, theses and dissertations indexed in two Brazilian databases, the Virtual Health Care Library (Biblioteca Virtual de Saúde) and the Digital Database of Dissertations (Banco Digital de Teses), over the last ten years. From 132 identified studies, 27 were selected. Results indicate that workloads are responsible for professional weariness and affect the occurrence of work accidents and health problems. In order to make workloads adequate, the studies indicate strategies such as having an adequate number of employees, continuing education, and better working conditions. The challenge is to continue research that reveals more precisely the relationships between workloads, working conditions, and the health of the nursing team.

  9. DIRAC pilot framework and the DIRAC Workload Management System

    International Nuclear Information System (INIS)

    Casajus, Adrian; Graciani, Ricardo; Paterson, Stuart; Tsaregorodtsev, Andrei

    2010-01-01

    DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot Jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, Pilot Jobs allow the scheduling decision to be delayed until the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC Workload Management System provides one single scheduling mechanism for jobs with very different profiles. To achieve an overall optimisation, it organizes pending jobs in task queues, both for individual users and production activities. Task queues are created with jobs having similar requirements. Following the VO policy a priority is assigned to each task queue. Pilot submission and subsequent job matching are based on these priorities following a statistical approach.

  10. DIRAC pilot framework and the DIRAC Workload Management System

    Energy Technology Data Exchange (ETDEWEB)

    Casajus, Adrian; Graciani, Ricardo [Universitat de Barcelona (Spain); Paterson, Stuart [CERN (Switzerland); Tsaregorodtsev, Andrei, E-mail: adria@ecm.ub.e, E-mail: graciani@ecm.ub.e, E-mail: stuart.paterson@cern.c, E-mail: atsareg@in2p3.f [CPPM Marseille (France)

    2010-04-01

    DIRAC, the LHCb community Grid solution, has pioneered the use of pilot jobs in the Grid. Pilot Jobs provide a homogeneous interface to a heterogeneous set of computing resources. At the same time, Pilot Jobs allow the scheduling decision to be delayed until the last moment, thus taking into account the precise running conditions at the resource and last-moment requests to the system. The DIRAC Workload Management System provides one single scheduling mechanism for jobs with very different profiles. To achieve an overall optimisation, it organizes pending jobs in task queues, both for individual users and production activities. Task queues are created with jobs having similar requirements. Following the VO policy a priority is assigned to each task queue. Pilot submission and subsequent job matching are based on these priorities following a statistical approach.
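
    The task-queue and priority mechanism described in the two records above can be pictured with a short Python schematic: jobs with identical requirements share a task queue, and an arriving pilot selects among the queues it can serve with probability proportional to queue priority. The data layout, the requirement key and the site names are simplified assumptions, not DIRAC's actual code or schema.

      # Schematic of priority-weighted task-queue matching for a pilot job.
      import random
      from collections import defaultdict

      def build_task_queues(jobs):
          queues = defaultdict(list)
          for job in jobs:
              key = (job["site"], job["memory_gb"])      # the job's "requirements"
              queues[key].append(job["id"])
          return queues

      def match_pilot(pilot, queues, priorities):
          eligible = [k for k in queues
                      if k[0] in pilot["sites"] and k[1] <= pilot["memory_gb"]]
          if not eligible:
              return None
          weights = [priorities.get(k, 1.0) for k in eligible]
          chosen = random.choices(eligible, weights=weights)[0]
          return queues[chosen].pop(0)                   # job id handed to the pilot

      jobs = [{"id": 1, "site": "SITE.A", "memory_gb": 2},
              {"id": 2, "site": "SITE.A", "memory_gb": 4},
              {"id": 3, "site": "SITE.B", "memory_gb": 2}]
      queues = build_task_queues(jobs)
      priorities = {("SITE.A", 2): 5.0, ("SITE.A", 4): 1.0}
      pilot = {"sites": ["SITE.A"], "memory_gb": 4}
      print(match_pilot(pilot, queues, priorities))      # usually job 1 (higher-priority queue)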

  11. Design and Development of ChemInfoCloud: An Integrated Cloud Enabled Platform for Virtual Screening.

    Science.gov (United States)

    Karthikeyan, Muthukumarasamy; Pandit, Deepak; Bhavasar, Arvind; Vyas, Renu

    2015-01-01

    The power of cloud computing and distributed computing has been harnessed to handle the vast and heterogeneous data required to be processed in any virtual screening protocol. A cloud computing platform, ChemInfoCloud, was built and integrated with several chemoinformatics and bioinformatics tools. The robust engine performs the core chemoinformatics tasks of lead generation, lead optimisation and property prediction in a fast and efficient manner. It has also been provided with some of the bioinformatics functionalities including sequence alignment, active site pose prediction and protein ligand docking. Text mining, NMR chemical shift (1H, 13C) prediction and reaction fingerprint generation modules for efficient lead discovery are also implemented in this platform. We have developed an integrated problem solving cloud environment for virtual screening studies that also provides workflow management, better usability and interaction with end users using container based virtualization, OpenVz.

  12. Federated Access Control in Heterogeneous Intercloud Environment: Basic Models and Architecture Patterns

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Lee, C.

    2014-01-01

    This paper presents on-going research to define the basic models and architecture patterns for federated access control in heterogeneous (multi-provider) multi-cloud and inter-cloud environment. The proposed research contributes to the further definition of Intercloud Federation Framework (ICFF)

  13. Harvester : An edge service harvesting heterogeneous resources for ATLAS

    CERN Document Server

    Maeno, Tadashi; The ATLAS collaboration

    2018-01-01

    The Production and Distributed Analysis (PanDA) system has been successfully used in the ATLAS experiment as a data-driven workload management system. The PanDA system has proven to be capable of operating at the Large Hadron Collider data processing scale over the last decade, including the Run 1 and Run 2 data taking periods. PanDA was originally designed to be weakly coupled with the WLCG processing resources. Lately the system has revealed difficulties in optimally integrating and exploiting new resource types, such as HPC and preemptable cloud resources with instant spin-up, and new workflows, such as the event service, because their intrinsic nature and requirements are quite different from those of traditional grid resources. Therefore, a new component, Harvester, has been developed to mediate the control and information flow between PanDA and the resources, in order to enable more intelligent workload management and dynamic resource provisioning based on detailed knowledge of resource capabilities and thei...

  14. Temperature Dependence in Homogeneous and Heterogeneous Nucleation

    Energy Technology Data Exchange (ETDEWEB)

    McGraw R. L.; Winkler, P. M.; Wagner, P. E.

    2017-08-01

    Heterogeneous nucleation on stable (sub-2 nm) nuclei aids the formation of atmospheric cloud condensation nuclei (CCN) by circumventing or reducing vapor pressure barriers that would otherwise limit condensation and new particle growth. Aerosol and cloud formation depend largely on the interaction between a condensing liquid and the nucleating site. A new paper published this year reports the first direct experimental determination of contact angles as well as contact line curvature and other geometric properties of a spherical cap nucleus at nanometer scale using measurements from the Vienna Size Analyzing Nucleus Counter (SANC) (Winkler et al., 2016). For water nucleating heterogeneously on silver oxide nanoparticles we find contact angles around 15 degrees compared to around 90 degrees for the macroscopically measured equilibrium angle for water on bulk silver. The small microscopic contact angles can be attributed via the generalized Young equation to a negative line tension that becomes increasingly dominant with increasing curvature of the contact line. These results enable a consistent theoretical description of heterogeneous nucleation and provide firm insight into the wetting of nanosized objects.
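
    As background to the line-tension argument above, a frequently quoted form of the modified (generalized) Young equation for a spherical-cap embryo on a flat substrate is, in LaTeX notation (the exact form used for curved seed particles carries additional geometric terms):

      \cos\theta \;=\; \cos\theta_\infty \;-\; \frac{\tau}{\gamma_{lv}\, r}

    where \theta is the microscopic contact angle, \theta_\infty the macroscopic (large-droplet) contact angle, \gamma_{lv} the liquid-vapor surface tension, \tau the line tension, and r the radius of the three-phase contact line. With \tau < 0 the correction term is positive, so \cos\theta > \cos\theta_\infty and the microscopic angle falls below the macroscopic one, consistent with the roughly 15 degree versus 90 degree values reported above.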

  15. Strategic workload management and decision biases in aviation

    Science.gov (United States)

    Raby, Mireille; Wickens, Christopher D.

    1994-01-01

    Thirty pilots flew three simulated landing approaches under conditions of low, medium, and high workload. Workload conditions were created by varying time pressure and external communications requirements. Our interest was in how the pilots strategically managed or adapted to the increasing workload. We independently assessed the pilot's ranking of the priority of different discrete tasks during the approach and landing. Pilots were found to sacrifice some aspects of primary flight control as workload increased. For discrete tasks, increasing workload increased the amount of time in performing the high priority tasks, decreased the time in performing those of lowest priority, and did not affect duration of performance episodes or optimality of scheduling of tasks of any priority level. Individual differences analysis revealed that high-performing subjects scheduled discrete tasks earlier in the flight and shifted more often between different activities.

  16. impact of workload induced stress on the professional effectiveness

    African Journals Online (AJOL)

    PROF EKWUEME

    … aids, evaluation of students, learning motivation, classroom management, supervision of co-curricular activities and … of workload. KEYWORDS: Stress, Workload, Professional effectiveness, Teachers, Cross River State. … determining the relationship between workload … adapted to cope with the stress that could have …

  17. Longwave indirect effect of mineral dusts on ice clouds

    Directory of Open Access Journals (Sweden)

    Q. Min

    2010-08-01

    Full Text Available In addition to microphysical changes in clouds, changes in nucleation processes of ice cloud due to aerosols would result in substantial changes in cloud top temperature as mildly supercooled clouds are glaciated through heterogeneous nucleation processes. Measurements from multiple sensors on multiple observing platforms over the Atlantic Ocean show that the cloud effective temperature increases with mineral dust loading with a slope of +3.06 °C per unit aerosol optical depth. The macrophysical changes in ice cloud top distributions as a consequence of mineral dust-cloud interaction exert a strong cooling effect (up to 16 Wm−2) of thermal infrared radiation on cloud systems. Induced changes of ice particle size by mineral dusts influence cloud emissivity and play a minor role in modulating the outgoing longwave radiation for optically thin ice clouds. Such a strong cooling forcing of thermal infrared radiation would have significant impacts on cloud systems and subsequently on climate.

  18. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    KAUST Repository

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among increasing users of HPC systems creates bursty behavior in underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining the user bursty behavior is important for HPC workloads prediction and scheduling, which has direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in real-world system. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission time, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally-efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied on several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of identified user groups, we show the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.

  19. SIMPLE HEURISTIC ALGORITHM FOR DYNAMIC VM REALLOCATION IN IAAS CLOUDS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available The rapid development of cloud technologies and their high prevalence in both commercial and academic areas have stimulated active research in the domain of optimal cloud resource management. One of the most active research directions is dynamic virtual machine (VM) placement optimization in clouds built on the Infrastructure-as-a-Service model. This kind of research may pursue different goals, with energy-aware optimization being the most common, as it aims at an urgent problem of green cloud computing: reducing energy consumption by data centers. In this paper we present a new heuristic algorithm for dynamic reallocation of VMs based on an approach presented in one of our previous works. In the algorithm we apply a 2-rank strategy to classify VMs and servers into highly and lowly active classes and solve four tasks: VM classification, host classification, forming a VM migration map and VM migration. By dividing all of the VMs and servers into two classes we attempt to reduce the risk of hardware overloads under overcommitment conditions and to limit the influence of the overloads that do occur on the performance of the cloud VMs. The presented algorithm was developed based on the workload profile of the JINR cloud (a scientific private cloud) with the goal of maximizing its usage, but it can also be applied in both public and private commercial clouds to organize the simultaneous use of different SLA and QoS levels in the same cloud environment by giving each VM rank its own level of overcommitment.
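
    A minimal Python sketch of the 2-rank idea is shown below: VMs and hosts are split into high- and low-activity classes and a migration map moves highly active VMs off overloaded hosts. The thresholds, data shapes and the least-loaded receiver choice are assumptions made for illustration, not the JINR implementation.

      HIGH_VM_CPU = 0.5      # hypothetical activity threshold separating the two VM ranks
      HOST_OVERLOAD = 0.85   # hypothetical host utilisation threshold

      def classify_vms(vms):
          """vms: {vm_name: cpu_utilisation in [0, 1]} -> {vm_name: 'high' | 'low'}."""
          return {name: ("high" if cpu >= HIGH_VM_CPU else "low") for name, cpu in vms.items()}

      def migration_map(hosts, placement, vms):
          """hosts: {host: utilisation}, placement: {vm: host}, vms: {vm: cpu}."""
          ranks = classify_vms(vms)
          donors = {h for h, load in hosts.items() if load > HOST_OVERLOAD}
          receivers = sorted((h for h in hosts if h not in donors), key=hosts.get)
          moves = {}
          for vm, host in placement.items():
              if host in donors and ranks[vm] == "high" and receivers:
                  moves[vm] = receivers[0]   # naive choice: least-loaded receiver
          return moves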

  20. School Nurse Workload: A Scoping Review of Acute Care, Community Health, and Mental Health Nursing Workload Literature

    Science.gov (United States)

    Endsley, Patricia

    2017-01-01

    The purpose of this scoping review was to survey the most recent (5 years) acute care, community health, and mental health nursing workload literature to understand themes and research avenues that may be applicable to school nursing workload research. The search for empirical and nonempirical literature was conducted using search engines such as…

  1. A self-analysis of the NASA-TLX workload measure.

    Science.gov (United States)

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.

  2. Workload Measurement in Human Autonomy Teaming: How and Why?

    Science.gov (United States)

    Shively, Jay

    2016-01-01

    This is an invited talk on autonomy and workload for an AFRL Blue Sky workshop sponsored by the Florida Institute for Human Machine Studies. The presentation reviews various metrics of workload and how to move forward with measuring workload in a human-autonomy teaming environment.

  3. Mental workload measurement in operator control room using NASA-TLX

    Science.gov (United States)

    Sugarindra, M.; Suryoputro, M. R.; Permana, A. I.

    2017-12-01

    Workload, encountered as a combination of physical and mental workload, is a consequence of the activities performed by workers. The central control room is one department in the oil processing company, where employees are tasked with monitoring the processing unit 24 hours nonstop in a combination of three 8-hour shifts. NASA-TLX (NASA Task Load Index) is a subjective mental workload measurement that uses six factors, namely Mental demand (MD), Physical demand (PD), Temporal demand (TD), Performance (OP), Effort (EF), and Frustration level (FR). It is the most widely used subjective mental workload measurement because it has a high degree of validity. Based on the calculation of the mental workload, five units (DTU, NPU, HTU, DIST and OPS) in the control room (scores of 94, 83.33, 94.67, 81.33 and 94.67, respectively) are categorized as having a very high mental workload. The high mental workload on operators in the central control room reflects the requirement for high accuracy and alertness and the ability to make decisions quickly.
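
    For reference, the standard weighted NASA-TLX score is the weight-averaged sum of the six subscale ratings; a minimal Python sketch follows. The ratings and pairwise-comparison weights below are hypothetical placeholders, not the control-room values reported in the study.

      RATINGS = {"MD": 90, "PD": 70, "TD": 85, "OP": 60, "EF": 80, "FR": 55}   # 0-100 each
      WEIGHTS = {"MD": 5, "PD": 2, "TD": 4, "OP": 1, "EF": 2, "FR": 1}         # 15 pairwise wins in total

      def nasa_tlx(ratings, weights):
          """Weighted NASA-TLX score: sum(rating * weight) / total weight."""
          total_weight = sum(weights.values())   # 15 when the full pairwise procedure is used
          return sum(ratings[k] * weights[k] for k in ratings) / total_weight

      print(f"Weighted NASA-TLX score: {nasa_tlx(RATINGS, WEIGHTS):.1f}")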

  4. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    Directory of Open Access Journals (Sweden)

    Kazi Masudul Alam

    2015-09-01

    Full Text Available Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor-hub to capture surrounding information using the in-vehicle and Smartphone sensors and later publish them for the consumers. A cloud centric cyber-physical system better describes the SIoV model where physical sensing-actuation process affects the cloud based service sharing or computation in a feedback loop or vice versa. The cyber based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems.

  5. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    Science.gov (United States)

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor-hub to capture surrounding information using the in-vehicle and Smartphone sensors and later publish them for the consumers. A cloud centric cyber-physical system better describes the SIoV model where physical sensing-actuation process affects the cloud based service sharing or computation in a feedback loop or vice versa. The cyber based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems. PMID:26389905

  6. Role of adenosine in regulating the heterogeneity of skeletal muscle blood flow during exercise in humans

    DEFF Research Database (Denmark)

    Heinonen, Ilkka; Nesterov, Sergey V; Kemppainen, Jukka

    2007-01-01

    Evidence from both animal and human studies suggests that adenosine plays a role in the regulation of exercise hyperemia in skeletal muscle. We tested whether adenosine also plays a role in the regulation of blood flow (BF) distribution and heterogeneity among and within quadriceps femoris (QF) muscles ... without and with theophylline-induced adenosine receptor blockade. BF heterogeneity within muscles was calculated from 16-mm(3) voxels in BF images and heterogeneity among the muscles from the mean values of the four QF compartments. Mean BF in the whole QF and its four parts increased, and heterogeneity decreased with workload both without and with theophylline (P ... heterogeneity among the QF muscles, yet blockade increased within-muscle BF heterogeneity in all four QF muscles (P = 0.03). Taken together, these results show that BF becomes less heterogeneous with increasing workload ...

  7. Front-line ordering clinicians: matching workforce to workload.

    Science.gov (United States)

    Fieldston, Evan S; Zaoutis, Lisa B; Hicks, Patricia J; Kolb, Susan; Sladek, Erin; Geiger, Debra; Agosto, Paula M; Boswinkel, Jan P; Bell, Louis M

    2014-07-01

    Matching workforce to workload is particularly important in healthcare delivery, where an excess of workload for the available workforce may negatively impact processes and outcomes of patient care and resident learning. Hospitals currently lack a means to measure and match dynamic workload and workforce factors. This article describes our work to develop and obtain consensus for use of an objective tool to dynamically match the front-line ordering clinician (FLOC) workforce to clinical workload in a variety of inpatient settings. We undertook development of a tool to represent hospital workload and workforce based on literature reviews, discussions with clinical leadership, and repeated validation sessions. We met with physicians and nurses from every clinical care area of our large, urban children's hospital at least twice. We successfully created a tool in a matrix format that is objective and flexible and can be applied to a variety of settings. We presented the tool in 14 hospital divisions and received widespread acceptance among physician, nursing, and administrative leadership. The hospital uses the tool to identify gaps in FLOC coverage and guide staffing decisions. Hospitals can better match workload to workforce if they can define and measure these elements. The Care Model Matrix is a flexible, objective tool that quantifies the multidimensional aspects of workload and workforce. The tool, which uses multiple variables that are easily modifiable, can be adapted to a variety of settings. © 2014 Society of Hospital Medicine.

  8. A comparison of policies on nurse faculty workload in the United States.

    Science.gov (United States)

    Ellis, Peggy A

    2013-01-01

    This article describes nurse faculty workload policies from across the nation in order to assess current practice. There is a well-documented shortage of nursing faculty, leading to an increase in workload demands. Increases in faculty workload result in difficulties with work-life balance and dissatisfaction, threatening to make nursing education less attractive to young faculty. In order to begin an examination of faculty workload in nursing, existing workloads must be known. Faculty workload data were solicited from nursing programs nationwide and analyzed to determine the current workloads. The most common faculty teaching workload reported overall for nursing is 12 credit hours per semester; however, some variations exist. Consideration should be given to the multiple components of the faculty workload. Research is needed to address the most effective and efficient workload allocation for nursing faculty.

  9. Mobile Cloud Computing for Telemedicine Solutions

    Directory of Open Access Journals (Sweden)

    Mihaela GHEORGHE

    2014-01-01

    Full Text Available Mobile Cloud Computing is a significant technology which combines emerging domains such as mobile computing and cloud computing and which has led to the development of one of the IT industry's most challenging and innovative trends. It is still at an early stage of development, but its main characteristics, advantages and range of services, which are provided by an internet-based cluster system, have a strong impact on the process of developing telemedicine solutions for overcoming the wide challenges the medical system is confronting. Mobile Cloud integrates cloud computing into the mobile environment and has the advantage of overcoming obstacles related to performance (e.g. battery life, storage, and bandwidth), environment (e.g. heterogeneity, scalability, availability) and security (e.g. reliability and privacy) which are commonly present at the mobile computing level. In this paper, I present a comprehensive overview of mobile cloud computing, including definitions, services and the use of this technology for developing telemedicine applications.

  10. Assessing Clinical Trial-Associated Workload in Community-Based Research Programs Using the ASCO Clinical Trial Workload Assessment Tool.

    Science.gov (United States)

    Good, Marjorie J; Hurley, Patricia; Woo, Kaitlin M; Szczepanek, Connie; Stewart, Teresa; Robert, Nicholas; Lyss, Alan; Gönen, Mithat; Lilenbaum, Rogerio

    2016-05-01

    Clinical research program managers are regularly faced with the quandary of determining how much of a workload research staff members can manage while they balance clinical practice and still achieve clinical trial accrual goals, maintain data quality and protocol compliance, and stay within budget. A tool was developed to measure clinical trial-associated workload, to apply objective metrics toward documentation of work, and to provide clearer insight to better meet clinical research program challenges and aid in balancing staff workloads. A project was conducted to assess the feasibility and utility of using this tool in diverse research settings. Community-based research programs were recruited to collect and enter clinical trial-associated monthly workload data into a web-based tool for 6 consecutive months. Descriptive statistics were computed for self-reported program characteristics and workload data, including staff acuity scores and number of patient encounters. Fifty-one research programs that represented 30 states participated. Median staff acuity scores were highest for staff with patients enrolled in studies and receiving treatment, relative to staff with patients in follow-up status. Treatment trials typically resulted in higher median staff acuity, relative to cancer control, observational/registry, and prevention trials. Industry trials exhibited higher median staff acuity scores than trials sponsored by the National Institutes of Health/National Cancer Institute, academic institutions, or others. The results from this project demonstrate that trial-specific acuity measurement is a better measure of workload than simply counting the number of patients. The tool was shown to be feasible and useable in diverse community-based research settings. Copyright © 2016 by American Society of Clinical Oncology.

  11. Leveraging Renewable Energies in Distributed Private Clouds

    Directory of Open Access Journals (Sweden)

    Pape Christian

    2016-01-01

    Full Text Available The vast and unstoppable rise of virtualization technologies and the related hardware abstraction in recent years has established the foundation for new cloud-based infrastructures and new scalable and elastic services. This new paradigm has already found its way into modern data centers and their infrastructures. A positive side effect of these technologies is the transparency of the execution of workloads in a location-independent and hardware-independent manner. For instance, due to higher utilization of the underlying hardware thanks to the consolidation of virtual resources, or by moving virtual resources to sites with lower energy prices or more available renewable energy resources, data centers can counteract the economic and ecological downsides resulting from their steadily increasing energy demand. This paper introduces a vector-based algorithm for the placement of virtual machines in distributed private cloud environments. After outlining the basic operation of our approach, we provide a formal definition as well as an outlook for further research.
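
    As a rough illustration of a vector-based placement rule (the paper's formal definition is not reproduced here), the sketch below treats a VM's demand and each site's residual capacity as vectors and favours the site whose residual capacity is best aligned with the demand, weighted by the share of renewable energy currently available there. The scoring formula and green-energy weighting are assumptions for illustration only.

      import math

      def cosine(a, b):
          """Cosine similarity between two resource vectors, e.g. (cpu, ram)."""
          dot = sum(x * y for x, y in zip(a, b))
          na = math.sqrt(sum(x * x for x in a))
          nb = math.sqrt(sum(x * x for x in b))
          return dot / (na * nb) if na and nb else 0.0

      def place_vm(demand, sites):
          """sites: {name: {'residual': (cpu, ram), 'green_share': 0..1}} -> chosen site."""
          feasible = {n: s for n, s in sites.items()
                      if all(r >= d for r, d in zip(s["residual"], demand))}
          if not feasible:
              return None
          return max(feasible,
                     key=lambda n: cosine(demand, feasible[n]["residual"])
                                   * (0.5 + 0.5 * feasible[n]["green_share"]))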

  12. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    Science.gov (United States)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

    In view of the fact that current point cloud registration software has high hardware requirements, involves a heavy workload and requires multiple interactive definitions, and that the source code of software with better processing results is not open, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. This method combines the fast point feature histogram (FPFH) algorithm, defines the adjacency region of the point cloud and the calculation model of the distribution of normal vectors, sets up a local coordinate system for each key point, and obtains the transformation matrix to finish coarse registration; the coarse registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
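
    The fine-registration stage rests on the classic point-to-point ICP loop: find nearest-neighbour correspondences, solve the least-squares rigid transform by SVD, apply it, and repeat. The NumPy sketch below shows only that loop (with brute-force nearest neighbours for brevity); the FPFH-based coarse registration described above is omitted.

      import numpy as np

      def best_fit_transform(src, dst):
          """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
          c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
          H = (src - c_src).T @ (dst - c_dst)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:          # guard against reflections
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = c_dst - R @ c_src
          return R, t

      def icp(src, dst, iterations=20):
          """Very small point-to-point ICP with brute-force nearest neighbours."""
          cur = src.copy()
          for _ in range(iterations):
              idx = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
              R, t = best_fit_transform(cur, dst[idx])
              cur = cur @ R.T + t
          return cur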

  13. Evaluation of Mental Workload among ICU Ward's Nurses.

    Science.gov (United States)

    Mohammadi, Mohsen; Mazloumi, Adel; Kazemi, Zeinab; Zeraati, Hojat

    2015-01-01

    High levels of workload have been identified among the stressors of nurses in intensive care units (ICUs). The present study investigated nursing workload and identified the performance obstacles influencing it in ICUs. This cross-sectional study was conducted in 2013 on 81 nurses working in ICUs in Imam Khomeini Hospital in Tehran, Iran. NASA-TLX was applied for the assessment of workload. Moreover, the ICU Performance Obstacles Questionnaire was used to identify performance obstacles associated with ICU nursing. Physical demand (mean=84.17) was perceived as the most important dimension of workload by nurses. The most critical performance obstacles affecting workload included: difficulty in finding a place to sit down, hectic workplace, disorganized workplace, poor-conditioned equipment, waiting to use a piece of equipment, spending much time seeking supplies in the central stock, poor quality of medical materials, delay in getting medications, unpredicted problems, disorganized central stock, outpatient surgery, spending much time dealing with family needs, late, inadequate, and useless help from nurse assistants, and ineffective morning rounds (P-value<0.05). Various performance obstacles are correlated with nurses' workload, which affirms the significance of nursing work system characteristics. Interventions in the work settings of nurses in ICUs are recommended based on the results of this study.

  14. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.

  15. Assessing physician job satisfaction and mental workload.

    Science.gov (United States)

    Boultinghouse, Oscar W; Hammack, Glenn G; Vo, Alexander H; Dittmar, Mary Lynne

    2007-12-01

    Physician job satisfaction and mental workload were evaluated in a pilot study of five physicians engaged in a telemedicine practice at The University of Texas Medical Branch at Galveston Electronic Health Network. Several previous studies have examined physician satisfaction with specific telemedicine applications; however, few have attempted to identify the underlying factors that contribute to physician satisfaction or lack thereof. One factor that has been found to affect well-being and functionality in the workplace, particularly with regard to human interaction with complex systems and tasks as seen in telemedicine, is mental workload. Workload is generally defined as the "cost" to a person for performing a complex task or tasks; however, prior to this study, it was unexplored as a variable that influences physician satisfaction. Two measures of job satisfaction were used: the Job Descriptive Index and the Job In General scales. Mental workload was evaluated by means of the National Aeronautics and Space Administration Task Load Index. The measures were administered by means of Web-based surveys and were given twice over a 6-month period. Nonparametric statistical analyses revealed that physician job satisfaction was generally high relative to that of the general population and other professionals. Mental workload scores associated with the practice of telemedicine in this environment are also high, and appeared stable over time. In addition, they are commensurate with scores found in individuals practicing tasks with elevated information-processing demands, such as quality control engineers and air traffic controllers. No relationship was found between the measures of job satisfaction and mental workload.

  16. Scheduling Parallel Jobs Using Migration and Consolidation in the Cloud

    Directory of Open Access Journals (Sweden)

    Xiaocheng Liu

    2012-01-01

    Full Text Available An increasing number of high performance computing parallel applications leverage the power of the cloud for parallel processing. How to schedule the parallel applications to improve the quality of service is the key to successfully hosting parallel applications in the cloud. The large scale of the cloud makes parallel job scheduling more complicated, as even the simple parallel job scheduling problem is NP-complete. In this paper, we propose a parallel job scheduling algorithm named MEASY. MEASY adopts migration and consolidation to enhance the most popular EASY scheduling algorithm. Our extensive experiments on well-known workloads show that our algorithm takes very good care of the quality of service. For two common parallel job scheduling objectives, our algorithm produces an up to 41.1% and an average of 23.1% improvement in the average response time, and an up to 82.9% and an average of 69.3% improvement in the average slowdown. Our algorithm is also robust in that it tolerates inaccurate CPU usage estimation and high migration cost. Our approach involves only trivial modification of EASY and requires no additional techniques; it is practical and effective in the cloud environment.
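
    For context, the sketch below shows plain EASY backfilling, the baseline that MEASY extends with migration and consolidation (which are not modelled here): jobs start FCFS while they fit, a reservation is computed for the queue head, and later jobs are backfilled only if they do not delay that reservation. The job and cluster representations are simplifying assumptions.

      def easy_schedule(queue, running, total_cores, now):
          """queue: [(name, cores, est_runtime)], running: [(cores, finish_time)]."""
          started = []
          free = total_cores - sum(c for c, _ in running)
          while queue and queue[0][1] <= free:              # plain FCFS while the head fits
              job = queue.pop(0)
              started.append(job)
              free -= job[1]
          if not queue:
              return started
          head_cores = queue[0][1]
          avail, shadow = free, now                         # find when the head job can start
          for cores, finish in sorted(running, key=lambda rc: rc[1]):
              if avail >= head_cores:
                  break
              avail += cores
              shadow = finish
          extra = avail - head_cores                        # cores spare even at the shadow time
          for job in list(queue[1:]):                       # backfill without delaying the head
              name, cores, est = job
              fits_before_shadow = now + est <= shadow
              if cores <= free and (fits_before_shadow or cores <= extra):
                  started.append(job)
                  queue.remove(job)
                  free -= cores
                  if not fits_before_shadow:
                      extra -= cores
          return started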

  17. Activity-based differentiation of pathologists' workload in surgical pathology.

    Science.gov (United States)

    Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M

    2009-06-01

    Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in the types and numbers of specimens handled or the protocols used will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring the pathologists' workload that can take into account the changes mentioned above. The diagnostic process was analyzed and broken up into separate activities. The time needed to perform these activities was measured. Based on linear regression analysis, for each activity the time needed was calculated as a function of the number of slides or blocks involved. The total pathologists' time required for a range of specimens was calculated based on standard protocols and validated by comparison with the actually measured workload. Cutting up, microscopic procedures and dictating turned out to be highly correlated with the number of blocks and/or slides per specimen. The calculated workload per type of specimen was significantly correlated with the actually measured workload. Modeling pathologists' workload with formulas that calculate the workload per type of specimen as a function of the number of blocks and slides provides a basis for a comprehensive, yet flexible, activity-based costing system for pathology.
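
    A toy version of such an activity-based model is sketched below: each activity's time is a linear function of the number of blocks and slides, and a specimen's workload is the sum over the activities in its protocol. All coefficients are hypothetical placeholders, not the regression values measured in the study.

      ACTIVITY_MINUTES = {
          # activity: (fixed_minutes, minutes_per_block, minutes_per_slide) -- illustrative only
          "cut_up":     (3.0, 1.5, 0.0),
          "microscopy": (2.0, 0.0, 2.5),
          "dictation":  (1.5, 0.0, 0.8),
      }

      def specimen_workload(n_blocks, n_slides):
          """Pathologist-minutes for one specimen under the toy per-activity model."""
          return sum(a + b * n_blocks + c * n_slides
                     for a, b, c in ACTIVITY_MINUTES.values())

      # e.g. a hypothetical protocol with 8 blocks and 10 slides
      print(f"{specimen_workload(8, 10):.1f} pathologist-minutes")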

  18. Crew workload-management strategies - A critical factor in system performance

    Science.gov (United States)

    Hart, Sandra G.

    1989-01-01

    This paper reviews the philosophy and goals of the NASA/USAF Strategic Behavior/Workload Management Program. The philosophical foundation of the program is based on the assumption that an improved understanding of pilot strategies will clarify the complex and inconsistent relationships observed among objective task demands and measures of system performance and pilot workload. The goals are to: (1) develop operationally relevant figures of merit for performance, (2) quantify the effects of strategic behaviors on system performance and pilot workload, (3) identify evaluation criteria for workload measures, and (4) develop methods of improving pilots' abilities to manage workload extremes.

  19. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    Science.gov (United States)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for the modeling and evaluation of computing systems at different levels of detail. We present a workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which results in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  20. State of science: mental workload in ergonomics.

    Science.gov (United States)

    Young, Mark S; Brookhuis, Karel A; Wickens, Christopher D; Hancock, Peter A

    2015-01-01

    Mental workload (MWL) is one of the most widely used concepts in ergonomics and human factors and represents a topic of increasing importance. Since modern technology in many working environments imposes ever more cognitive demands upon operators while physical demands diminish, understanding how MWL impinges on performance is increasingly critical. Yet, MWL is also one of the most nebulous concepts, with numerous definitions and dimensions associated with it. Moreover, MWL research has had a tendency to focus on complex, often safety-critical systems (e.g. transport, process control). Here we provide a general overview of the current state of affairs regarding the understanding, measurement and application of MWL in the design of complex systems over the last three decades. We conclude by discussing contemporary challenges for applied research, such as the interaction between cognitive workload and physical workload, and the quantification of workload 'redlines' which specify when operators are approaching or exceeding their performance tolerances.

  1. Competition for water vapour results in suppression of ice formation in mixed-phase clouds

    Directory of Open Access Journals (Sweden)

    E. L. Simpson

    2018-05-01

    Full Text Available The formation of ice in clouds can initiate precipitation and influence a cloud's reflectivity and lifetime, affecting climate to a highly uncertain degree. Nucleation of ice at elevated temperatures requires an ice nucleating particle (INP), which results in so-called heterogeneous freezing. Previously reported measurements for the ability of a particle to nucleate ice have been made in the absence of other aerosol which will act as cloud condensation nuclei (CCN) and are ubiquitous in the atmosphere. Here we show that CCN can outcompete INPs for available water vapour, thus suppressing ice formation, which has the potential to significantly affect the Earth's radiation budget. The magnitude of this suppression is shown to be dependent on the mass of condensed water required for freezing. Here we show that ice formation in a state-of-the-art cloud parcel model is strongly dependent on the criteria for heterogeneous freezing selected from those previously hypothesised. We have developed an alternative criterion which agrees well with observations from cloud chamber experiments. This study demonstrates the dominant role that competition for water vapour can play in ice formation, highlighting both a need for clarity in the requirements for heterogeneous freezing and for measurements under atmospherically appropriate conditions.

  2. Competition for water vapour results in suppression of ice formation in mixed-phase clouds

    Science.gov (United States)

    Simpson, Emma L.; Connolly, Paul J.; McFiggans, Gordon

    2018-05-01

    The formation of ice in clouds can initiate precipitation and influence a cloud's reflectivity and lifetime, affecting climate to a highly uncertain degree. Nucleation of ice at elevated temperatures requires an ice nucleating particle (INP), which results in so-called heterogeneous freezing. Previously reported measurements for the ability of a particle to nucleate ice have been made in the absence of other aerosol which will act as cloud condensation nuclei (CCN) and are ubiquitous in the atmosphere. Here we show that CCN can outcompete INPs for available water vapour thus suppressing ice formation, which has the potential to significantly affect the Earth's radiation budget. The magnitude of this suppression is shown to be dependent on the mass of condensed water required for freezing. Here we show that ice formation in a state-of-the-art cloud parcel model is strongly dependent on the criteria for heterogeneous freezing selected from those previously hypothesised. We have developed an alternative criterion which agrees well with observations from cloud chamber experiments. This study demonstrates the dominant role that competition for water vapour can play in ice formation, highlighting both a need for clarity in the requirements for heterogeneous freezing and for measurements under atmospherically appropriate conditions.

  3. Online EEG-Based Workload Adaptation of an Arithmetic Learning Environment.

    Science.gov (United States)

    Walter, Carina; Rosenstiel, Wolfgang; Bogdan, Martin; Gerjets, Peter; Spüler, Martin

    2017-01-01

    In this paper, we demonstrate a closed-loop EEG-based learning environment that adapts instructional learning material online to improve learning success in students during arithmetic learning. The amount of cognitive workload during learning is crucial for successful learning and should be held in the optimal range for each learner. Based on EEG data from 10 subjects, we created a prediction model that estimates the learner's workload to obtain an unobtrusive workload measure. Furthermore, we developed an interactive learning environment that uses the prediction model to estimate the learner's workload online based on the EEG data and adapt the difficulty of the learning material to keep the learner's workload in an optimal range. The EEG-based learning environment was used by 13 subjects to learn arithmetic addition in the octal number system, leading to a significant learning effect. The results suggest that it is feasible to use EEG as an unobtrusive measure of cognitive workload to adapt the learning content. Furthermore, they demonstrate that prompt workload prediction is possible using a generalized prediction model without the need for user-specific calibration.
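
    The closed-loop adaptation itself can be summarised in a few lines: a workload estimate from the EEG-based prediction model is compared against a target band and the difficulty is nudged up or down. The band, step size and difficulty scale below are illustrative assumptions, and predict_workload stands in for the (pre-trained, generalized) model described in the paper.

      TARGET_BAND = (0.4, 0.6)   # assumed "optimal" workload range, normalised to [0, 1]
      STEP = 1                   # difficulty levels changed per adaptation cycle

      def adapt_difficulty(difficulty, workload, levels=(1, 10)):
          low, high = TARGET_BAND
          if workload > high:
              difficulty -= STEP        # learner overloaded: present easier material
          elif workload < low:
              difficulty += STEP        # learner underloaded: present harder material
          return max(levels[0], min(levels[1], difficulty))

      def run_session(eeg_epochs, predict_workload, difficulty=5):
          """One adaptation step per EEG epoch; returns (workload, difficulty) history."""
          history = []
          for epoch in eeg_epochs:
              w = predict_workload(epoch)   # hypothetical regression model returning [0, 1]
              difficulty = adapt_difficulty(difficulty, w)
              history.append((w, difficulty))
          return history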

  4. Academic context and perceived mental workload of psychology students.

    Science.gov (United States)

    Rubio-Valdehita, Susana; López-Higes, Ramón; Díaz-Ramiro, Eva

    2014-01-01

    The excessive workload of university students is an academic stressor. Consequently, it is necessary to evaluate and control the workload in education. This research applies the NASA-TLX scale as a measure of the workload. The objectives of this study were: (a) to measure the workload levels of a sample of 367 psychology students, (b) to group students according to their positive or negative perception of academic context (AC), and (c) to analyze the effects of AC on workload. To assess the perceived AC, we used an ad hoc questionnaire designed according to the Demand-Control-Social Support and Effort-Reward Imbalance models. Using cluster analysis, participants were classified into two groups (positive versus negative context). The differences between groups show that a positive AC improves performance (p student autonomy and result satisfaction were relevant dimensions of the AC (p < .001 in all cases).

  5. Rework the workload.

    Science.gov (United States)

    O'Bryan, Linda; Krueger, Janelle; Lusk, Ruth

    2002-03-01

    Kindred Healthcare, Inc., the nation's largest full-service network of long-term acute care hospitals, initiated a 3-year strategic plan to re-evaluate its workload management system. Here, we follow the project's most important and difficult phase: designing and implementing the patient classification system.

  6. Mental workload and its relation with fatigue among urban bus drivers

    Directory of Open Access Journals (Sweden)

    Narmin Hassanzadeh-Rangi

    2017-06-01

    Full Text Available Introduction: Driving crashes are one of the major concerns in all countries. Mental workload reflects the level of attention resources required to meet both objective and subjective performance criteria, which may be affected by task demand, external support, and past experience. Mental workload has been commonly cited as a major cause of workplace and transportation accidents. The objective of this study was the assessment of mental workload and its relation with fatigue among urban bus drivers in Tehran, Iran. Methods: In this descriptive and analytical study, the NASA-TLX workload scale and the Samn-Perelli fatigue scale were completed by 194 professional bus drivers. Descriptive statistics as well as correlation and regression analyses were performed for data processing. Results: The total mental workload had the highest correlation with the physical demand (r=0.73, p<0.001), the mental demand (r=0.68, p<0.001) and the time pressure (r=0.58, p<0.001). The total fatigue perceived by bus drivers had the highest correlation with the frustration level (r=0.42, p<0.001), the time pressure (r=0.24, p<0.001) and the mental workload (r=0.21, p<0.001). Conclusion: Mental workload, physical workload and time pressure are important determinants of the total mental workload and fatigue perceived by urban bus drivers. A comprehensive intervention program, including work turnover, trip and work-rest scheduling as well as smoking cessation, is recommended to improve mental workload and fatigue.

  7. Intelligent Continuous Double Auction method For Service Allocation in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Nima Farajian

    2013-10-01

    Full Text Available A market-oriented approach is an effective method for resource management because of its regulation of supply and demand, and it is suitable for the cloud environment, where the computing resources, either software or hardware, are virtualized and allocated as services from providers to users. In this paper a continuous double auction method for efficient cloud service allocation is presented in which (i) consumers can order various resources (services) for workflows and co-allocation, (ii) consumers and providers make bid and request prices based on deadline and workload time, and providers can additionally trade off between utilization time and price of bids, and (iii) auctioneers can intelligently find optimal matchings by sharing and merging resources, which results in more trades. Experimental results show that the proposed method is efficient in terms of successful allocation rate and resource utilization.
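
    A minimal sketch of the continuous double auction matching step is given below: providers post asks, consumers post bids, and a trade clears whenever the best bid meets or exceeds the best ask, priced here at the midpoint. The workflow ordering, co-allocation and time/price trade-offs described above are deliberately left out.

      import heapq

      class DoubleAuction:
          def __init__(self):
              self.bids = []   # max-heap via negated price: (-price, consumer)
              self.asks = []   # min-heap: (price, provider)

          def submit_bid(self, consumer, price):
              heapq.heappush(self.bids, (-price, consumer))
              return self._match()

          def submit_ask(self, provider, price):
              heapq.heappush(self.asks, (price, provider))
              return self._match()

          def _match(self):
              """Clear trades while the best bid covers the best ask."""
              trades = []
              while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
                  bid_price, consumer = heapq.heappop(self.bids)
                  ask_price, provider = heapq.heappop(self.asks)
                  trades.append((consumer, provider, (-bid_price + ask_price) / 2))
              return trades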

  8. Effect of time span and task load on pilot mental workload

    Science.gov (United States)

    Berg, S. L.; Sheridan, T. B.

    1986-01-01

    Two sets of simulations are described that were designed to examine how a pilot's mental workload is affected by continuous manual-control activity versus discrete mental tasks, including the length of time between receiving an assignment and executing it. The first experiment evaluated two types of measures: objective performance indicators and subjective ratings. Subjective ratings for the two missions were different, but the objective performance measures were similar. In the second experiment, workload levels were increased and a second performance measure was taken. Mental workload had no influence on either performance-based workload measure. Subjective ratings discriminated among the scenarios and correlated with performance measures for high-workload flights. The number of mental tasks performed did not influence error rates, although high manual workloads did increase errors.

  9. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    Directory of Open Access Journals (Sweden)

    M. Schnaiter

    2016-04-01

    Full Text Available This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the −40 to −60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds and the degree of this complexity is dependent on the available water vapor during the crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4 / H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of the Laboratoire de Météorologie Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  10. An Architecture for Cross-Cloud System Management

    Science.gov (United States)

    Dodda, Ravi Teja; Smith, Chris; van Moorsel, Aad

    The emergence of the cloud computing paradigm promises flexibility and adaptability through on-demand provisioning of compute resources. As the utilization of cloud resources extends beyond a single provider, for business as well as technical reasons, the issue of effectively managing such resources comes to the fore. Different providers expose different interfaces to their compute resources utilizing varied architectures and implementation technologies. This heterogeneity poses a significant system management problem, and can limit the extent to which the benefits of cross-cloud resource utilization can be realized. We address this problem through the definition of an architecture to facilitate the management of compute resources from different cloud providers in a homogeneous manner. This preserves the flexibility and adaptability promised by the cloud computing paradigm, whilst enabling the benefits of cross-cloud resource utilization to be realized. The practical efficacy of the architecture is demonstrated through an implementation utilizing compute resources managed through different interfaces on the Amazon Elastic Compute Cloud (EC2) service. Additionally, we provide empirical results highlighting the performance differential of these different interfaces, and discuss the impact of this performance differential on efficiency and profitability.
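
    The homogenising layer such an architecture implies can be pictured as a single abstract interface with one adapter per provider; the Python sketch below illustrates the shape of that layer. Class and method names are hypothetical and are not taken from the paper or from any real cloud SDK.

      from abc import ABC, abstractmethod

      class ComputeProvider(ABC):
          """Uniform management interface that each provider adapter implements."""
          @abstractmethod
          def start_instance(self, image, size): ...
          @abstractmethod
          def stop_instance(self, instance_id): ...
          @abstractmethod
          def list_instances(self): ...

      class CrossCloudManager:
          """Dispatches homogeneous management calls to heterogeneous providers."""
          def __init__(self, providers):
              self.providers = providers    # {provider_name: ComputeProvider}

          def start(self, provider, image, size):
              return self.providers[provider].start_instance(image, size)

          def inventory(self):
              return {name: p.list_instances() for name, p in self.providers.items()}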

  11. Security of Heterogeneous Content in Cloud Based Library Information Systems Using an Ontology Based Approach

    Directory of Open Access Journals (Sweden)

    Mihai DOINEA

    2014-01-01

    Full Text Available As in any domain that involves the use of software, library information systems take advantage of cloud computing. The paper highlights the main aspects of cloud-based systems, describing some public solutions provided by the most important players on the market. Topics related to content security in cloud-based services are tackled in order to emphasize the requirements that must be met by these types of systems. A cloud-based implementation of a Library Information System is presented, and some adjacent tools that are used together with it to provide digital content and metadata links are described. In a cloud-based Library Information System, security is approached by means of ontologies. Aspects such as content security in terms of digital rights are presented and a methodology for security optimization is proposed.

  12. The implications of dust ice nuclei effect on cloud top temperature in a complex mesoscale convective system.

    Science.gov (United States)

    Li, Rui; Dong, Xue; Guo, Jingchao; Fu, Yunfei; Zhao, Chun; Wang, Yu; Min, Qilong

    2017-10-23

    Mineral dust is the most important natural source of atmospheric ice nuclei (IN), which may significantly mediate the properties of ice cloud through heterogeneous nucleation and lead to crucial impacts on the hydrological and energy cycles. The potential dust IN effect on cloud top temperature (CTT) in a well-developed mesoscale convective system (MCS) was studied using both satellite observations and cloud resolving model (CRM) simulations. We combined satellite observations from a passive spectrometer, active cloud radar, lidar, and wind field simulations from the CRM to identify the places where ice cloud mixed with dust particles. For a given ice water path, the CTT of dust-mixed cloud is warmer than that of relatively pristine cloud. The probability distribution function (PDF) of CTT for dust-mixed clouds shifted to the warmer end and showed two peaks, at about -45 °C and -25 °C. The PDF for relatively pristine cloud showed only one peak, at -55 °C. Cloud simulations with different microphysical schemes agreed well with each other and showed better agreement with satellite observations in pristine clouds, but they showed large discrepancies in dust-mixed clouds. Some microphysical schemes failed to predict the warm peak of CTT related to heterogeneous ice formation.

  13. File-System Workload on a Scientific Multiprocessor

    Science.gov (United States)

    Kotz, David; Nieuwejaar, Nils

    1995-01-01

    Many scientific applications have intense computational and I/O requirements. Although multiprocessors have permitted astounding increases in computational performance, the formidable I/O needs of these applications cannot be met by current multiprocessors and their I/O subsystems. To prevent I/O subsystems from forever bottlenecking multiprocessors and limiting the range of feasible applications, new I/O subsystems must be designed. The successful design of computer systems (both hardware and software) depends on a thorough understanding of their intended use. A system designer optimizes the policies and mechanisms for the cases expected to be most common in the user's workload. In the case of multiprocessor file systems, however, designers have been forced to build file systems based only on speculation about how they would be used, extrapolating from file-system characterizations of general-purpose workloads on uniprocessor and distributed systems or scientific workloads on vector supercomputers (see sidebar on related work). To help these system designers, in June 1993 we began the Charisma Project, so named because the project sought to characterize I/O in scientific multiprocessor applications from a variety of production parallel computing platforms and sites. The Charisma project is unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or nonparallel applications). In this article, we present the first results from the project: a characterization of the file-system workload on an iPSC/860 multiprocessor running production, parallel scientific applications at NASA's Ames Research Center.

  14. Individual differences and subjective workload assessment - Comparing pilots to nonpilots

    Science.gov (United States)

    Vidulich, Michael A.; Pandit, Parimal

    1987-01-01

    Results by two groups of subjects, pilots and nonpilots, for two subjective workload assessment techniques (the SWAT and NASA-TLX tests) intended to evaluate individual differences in the perception and reporting of subjective workload are compared with results obtained for several traditional personality tests. The personality tests were found to discriminate between the groups while the workload tests did not. It is concluded that although the workload tests may provide useful information with respect to the interaction between tasks and personality, they are not effective as pure tests of individual differences.

  15. A Workload-Adaptive and Reconfigurable Bus Architecture for Multicore Processors

    Directory of Open Access Journals (Sweden)

    Shoaib Akram

    2010-01-01

    Full Text Available Interconnection networks for multicore processors are traditionally designed to serve a diversity of workloads. However, different workloads or even different execution phases of the same workload may benefit from different interconnect configurations. In this paper, we first motivate the need for workload-adaptive interconnection networks. Subsequently, we describe an interconnection network framework based on reconfigurable switches for use in medium-scale (up to 32 cores) shared memory multicore processors. Our cost-effective reconfigurable interconnection network is implemented on a traditional shared bus interconnect with snoopy-based coherence, and it enables improved multicore performance. The proposed interconnect architecture distributes the cores of the processor into clusters with reconfigurable logic between clusters to support workload-adaptive policies for inter-cluster communication. Our interconnection scheme is complemented by interconnect-aware scheduling and additional interconnect optimizations which help boost the performance of multiprogramming and multithreaded workloads. We provide experimental results that show that the overall throughput of multiprogramming workloads (consisting of two and four programs) can be improved by up to 60% with our configurable bus architecture. Similar gains can be achieved also for multithreaded applications, as shown by further experiments. Finally, we present the performance sensitivity of the proposed interconnect architecture on shared memory bandwidth availability.

  16. Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors

    Directory of Open Access Journals (Sweden)

    Claire E. Bulgin

    2018-04-01

    Full Text Available Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km² (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of a magnitude of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.
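
    The ensemble idea can be expressed compactly: apply each independent cloud mask to the same LST scene, compute the clear-sky area average under each mask, and take the spread across masks as the cloud-detection component of the uncertainty. The NumPy sketch below uses a standard deviation as the spread measure, which is an illustrative choice rather than the paper's exact estimator.

      import numpy as np

      def cloud_mask_uncertainty(lst, masks):
          """lst: 2-D array of LST [K]; masks: list of boolean arrays (True = clear sky)."""
          means = [np.nanmean(np.where(m, lst, np.nan)) for m in masks]
          return float(np.mean(means)), float(np.std(means))

      # toy example: a synthetic 50 x 50 scene and four random masks
      scene = 290.0 + 5.0 * np.random.rand(50, 50)
      ensemble = [np.random.rand(50, 50) > 0.3 for _ in range(4)]
      mean_lst, spread = cloud_mask_uncertainty(scene, ensemble)
      print(f"area-average LST {mean_lst:.2f} K, cloud-detection spread {spread:.2f} K")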

  17. Quantitative assessment of workload and stressors in clinical radiation oncology.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Jackson, Marianne; Chang, Sha X; Burkhardt, Katharin Deschesne; Adams, Robert D; Jones, Ellen L; Hoyle, Lesley; Xu, Jing; Rockwell, John; Marks, Lawrence B

    2012-08-01

    Workload level and sources of stressors have been implicated as sources of error in multiple settings. We assessed workload levels and sources of stressors among radiation oncology professionals. Furthermore, we explored the potential association between workload and the frequency of reported radiotherapy incidents by the World Health Organization (WHO). Data collection was aimed at various tasks performed by 21 study participants from different radiation oncology professional subgroups (simulation therapists, radiation therapists, physicists, dosimetrists, and physicians). Workload was assessed using National Aeronautics and Space Administration Task-Load Index (NASA TLX). Sources of stressors were quantified using observational methods and segregated using a standard taxonomy. Comparisons between professional subgroups and tasks were made using analysis of variance ANOVA, multivariate ANOVA, and Duncan test. An association between workload levels (NASA TLX) and the frequency of radiotherapy incidents (WHO incidents) was explored (Pearson correlation test). A total of 173 workload assessments were obtained. Overall, simulation therapists had relatively low workloads (NASA TLX range, 30-36), and physicists had relatively high workloads (NASA TLX range, 51-63). NASA TLX scores for physicians, radiation therapists, and dosimetrists ranged from 40-52. There was marked intertask/professional subgroup variation (P<.0001). Mental demand (P<.001), physical demand (P=.001), and effort (P=.006) significantly differed among professional subgroups. Typically, there were 3-5 stressors per cycle of analyzed tasks with the following distribution: interruptions (41.4%), time factors (17%), technical factors (13.6%), teamwork issues (11.6%), patient factors (9.0%), and environmental factors (7.4%). A positive association between workload and frequency of reported radiotherapy incidents by the WHO was found (r = 0.87, P value=.045). Workload level and sources of stressors vary

  18. Quantitative Assessment of Workload and Stressors in Clinical Radiation Oncology

    International Nuclear Information System (INIS)

    Mazur, Lukasz M.; Mosaly, Prithima R.; Jackson, Marianne; Chang, Sha X.; Burkhardt, Katharin Deschesne; Adams, Robert D.; Jones, Ellen L.; Hoyle, Lesley; Xu, Jing; Rockwell, John; Marks, Lawrence B.

    2012-01-01

    Purpose: Workload level and sources of stressors have been implicated as sources of error in multiple settings. We assessed workload levels and sources of stressors among radiation oncology professionals. Furthermore, we explored the potential association between workload and the frequency of reported radiotherapy incidents by the World Health Organization (WHO). Methods and Materials: Data collection was aimed at various tasks performed by 21 study participants from different radiation oncology professional subgroups (simulation therapists, radiation therapists, physicists, dosimetrists, and physicians). Workload was assessed using National Aeronautics and Space Administration Task-Load Index (NASA TLX). Sources of stressors were quantified using observational methods and segregated using a standard taxonomy. Comparisons between professional subgroups and tasks were made using analysis of variance ANOVA, multivariate ANOVA, and Duncan test. An association between workload levels (NASA TLX) and the frequency of radiotherapy incidents (WHO incidents) was explored (Pearson correlation test). Results: A total of 173 workload assessments were obtained. Overall, simulation therapists had relatively low workloads (NASA TLX range, 30-36), and physicists had relatively high workloads (NASA TLX range, 51-63). NASA TLX scores for physicians, radiation therapists, and dosimetrists ranged from 40-52. There was marked intertask/professional subgroup variation (P<.0001). Mental demand (P<.001), physical demand (P=.001), and effort (P=.006) significantly differed among professional subgroups. Typically, there were 3-5 stressors per cycle of analyzed tasks with the following distribution: interruptions (41.4%), time factors (17%), technical factors (13.6%), teamwork issues (11.6%), patient factors (9.0%), and environmental factors (7.4%). A positive association between workload and frequency of reported radiotherapy incidents by the WHO was found (r = 0.87, P value=.045

  19. Quantitative Assessment of Workload and Stressors in Clinical Radiation Oncology

    Energy Technology Data Exchange (ETDEWEB)

    Mazur, Lukasz M., E-mail: lukasz_mazur@ncsu.edu [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina (United States); Industrial Extension Service, North Carolina State University, Raleigh, North Carolina (United States); Biomedical Engineering, North Carolina State University, Raleigh, North Carolina (United States); Mosaly, Prithima R. [Industrial Extension Service, North Carolina State University, Raleigh, North Carolina (United States); Jackson, Marianne; Chang, Sha X.; Burkhardt, Katharin Deschesne; Adams, Robert D.; Jones, Ellen L.; Hoyle, Lesley [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina (United States); Xu, Jing [Industrial Extension Service, North Carolina State University, Raleigh, North Carolina (United States); Rockwell, John; Marks, Lawrence B. [Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina (United States)

    2012-08-01

    Purpose: Workload level and sources of stressors have been implicated as sources of error in multiple settings. We assessed workload levels and sources of stressors among radiation oncology professionals. Furthermore, we explored the potential association between workload and the frequency of radiotherapy incidents reported by the World Health Organization (WHO). Methods and Materials: Data collection was aimed at various tasks performed by 21 study participants from different radiation oncology professional subgroups (simulation therapists, radiation therapists, physicists, dosimetrists, and physicians). Workload was assessed using the National Aeronautics and Space Administration Task-Load Index (NASA TLX). Sources of stressors were quantified using observational methods and segregated using a standard taxonomy. Comparisons between professional subgroups and tasks were made using analysis of variance (ANOVA), multivariate ANOVA, and the Duncan test. An association between workload levels (NASA TLX) and the frequency of radiotherapy incidents (WHO incidents) was explored (Pearson correlation test). Results: A total of 173 workload assessments were obtained. Overall, simulation therapists had relatively low workloads (NASA TLX range, 30-36), and physicists had relatively high workloads (NASA TLX range, 51-63). NASA TLX scores for physicians, radiation therapists, and dosimetrists ranged from 40-52. There was marked intertask/professional subgroup variation (P<.0001). Mental demand (P<.001), physical demand (P=.001), and effort (P=.006) significantly differed among professional subgroups. Typically, there were 3-5 stressors per cycle of analyzed tasks with the following distribution: interruptions (41.4%), time factors (17%), technical factors (13.6%), teamwork issues (11.6%), patient factors (9.0%), and environmental factors (7.4%). A positive association between workload and the frequency of radiotherapy incidents reported by the WHO was found (r = 0.87, P=.045).
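
    As a rough illustration of the two computations this abstract relies on, the sketch below (Python, with invented numbers) averages the six NASA TLX subscale ratings into a raw composite and correlates mean workload against incident counts with Pearson's r; the function names and the subgroup values are placeholders, not study data.

      # Hypothetical illustration only: raw (unweighted) NASA-TLX composite and a
      # Pearson correlation between mean workload and reported incident frequency.
      from statistics import mean
      from math import sqrt

      def nasa_tlx_composite(ratings):
          """Average the six 0-100 subscale ratings (raw, unweighted TLX)."""
          assert len(ratings) == 6
          return mean(ratings)

      def pearson_r(xs, ys):
          mx, my = mean(xs), mean(ys)
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = sqrt(sum((x - mx) ** 2 for x in xs))
          sy = sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy)

      # invented per-subgroup mean workloads and incident counts, for illustration
      workload = [33.0, 46.0, 50.0, 57.0, 62.0]
      incidents = [2, 5, 6, 9, 11]
      print(round(pearson_r(workload, incidents), 2))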

  20. The impact on UT/LS cirrus clouds in the CAM/CARMA model using a new interactive aerosol parameterization.

    Science.gov (United States)

    Maloney, C.; Toon, B.; Bardeen, C.

    2017-12-01

    Recent studies indicate that heterogeneous nucleation may play a large role in cirrus cloud formation in the UT/LS, a region previously thought to be primarily dominated by homogeneous nucleation. As a result, it is beneficial to ensure that general circulation models properly represent heterogeneous nucleation in ice cloud simulations. Our work strives towards addressing this issue in the NSF/DOE Community Earth System Model's atmospheric model, CAM. More specifically, we are addressing the role of heterogeneous nucleation in the coupled sectional microphysics cloud model, CARMA. Currently, our CAM/CARMA cirrus model only performs homogeneous ice nucleation while ignoring heterogeneous nucleation. In our work, we couple the CAM/CARMA cirrus model with the Modal Aerosol Model (MAM). By combining the aerosol model with CAM/CARMA, we can both account for heterogeneous nucleation and directly link the sulfates used for homogeneous nucleation to computed fields instead of the static field currently being utilized. Here we present our initial results and compare our findings to observations from the long-running CALIPSO and MODIS satellite missions.

  1. Psychophysical workload in the operating room: primary surgeon versus assistant.

    Science.gov (United States)

    Rieger, Annika; Fenger, Sebastian; Neubert, Sebastian; Weippert, Matthias; Kreuzfeld, Steffi; Stoll, Regina

    2015-07-01

    Working in the operating room is characterized by high demands and overall workload of the surgical team. Surgeons often report that they feel more stressed when operating as a primary surgeon than as an assistant, which has been confirmed in recent studies. In this study, intra-individual workload was assessed in both intraoperative functions using a multidimensional approach that combined objective and subjective measures in a realistic work setting. Surgeons' intraoperative psychophysiologic workload was assessed through a mobile health system. 25 surgeons agreed to take part in the 24-hour monitoring by giving their written informed consent. The mobile health system contained a sensor electronic module integrated in a chest belt, measuring physiological parameters such as heart rate (HR), breathing rate (BR), and skin temperature. Subjective workload was assessed pre- and postoperatively using an electronic version of the NASA-TLX on a smartphone. The smartphone served as a communication unit and transferred objective and subjective measures to a communication server where data were stored and analyzed. Working as a primary surgeon did not result in higher workload. Neither NASA-TLX ratings nor physiological workload indicators were related to intraoperative function. In contrast, length of surgeries had a significant impact on intraoperative physical demands and on the NASA-TLX sum score (p < 0.01; η² = 0.287). Intra-individual workload differences do not relate to the intraoperative role of surgeons when length of surgery is considered as a covariate. An intelligent operating management that considers the length of surgeries by implementing short breaks could contribute to the optimization of intraoperative workload and the preservation of surgeons' health. The value of mobile health systems for continuous psychophysiologic workload assessment was shown.

  2. Dynamic Voltage-Frequency and Workload Joint Scaling Power Management for Energy Harvesting Multi-Core WSN Node SoC

    Directory of Open Access Journals (Sweden)

    Xiangyu Li

    2017-02-01

    Full Text Available This paper proposes a scheduling and power management solution for an energy-harvesting heterogeneous multi-core WSN node SoC such that the system continues to operate perennially and uses the harvested energy efficiently. The solution consists of a task scheduling algorithm oriented to heterogeneous multi-core systems and a low-complexity dynamic workload scaling and configuration optimization algorithm suitable for lightweight platforms. Moreover, considering that the power consumption of most WSN applications is data-dependent, we also introduce a branch-handling mechanism into the solution. The experimental results show that the proposed algorithm can operate in real time on a lightweight embedded processor (MSP430), and that it can make a system do more valuable work while using more than 99.9% of the power budget.
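
    A minimal sketch of the joint voltage-frequency and workload scaling idea, assuming a cubic dynamic-power model and an invented set of frequency levels; it only illustrates picking the slowest feasible level under a deadline and a harvested-energy budget, and is not the paper's algorithm.

      # Assumed frequency levels (GHz) and a simple P ~ c*f^3 dynamic-power model.
      LEVELS_GHZ = [0.2, 0.4, 0.8, 1.0]

      def choose_level(cycles, deadline_s, energy_budget_j, c=0.9):
          """Return (freq, time, energy) for the slowest feasible level, else None."""
          for f in LEVELS_GHZ:                  # scan from low to high frequency
              t = cycles / (f * 1e9)            # execution time at frequency f
              e = c * (f ** 3) * t              # crude dynamic-energy estimate
              if t <= deadline_s and e <= energy_budget_j:
                  return f, t, e
          return None                           # infeasible: shed optional workload

      print(choose_level(cycles=3e8, deadline_s=0.5, energy_budget_j=0.2))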

  3. Evaluation of mental workload on digital maintenance systems in nuclear power plants

    International Nuclear Information System (INIS)

    Hwang, S. L.; Huang, F. H.; Lin, J. C.; Liang, G. F.; Yenn, T. C.; Hsu, C. C.

    2006-01-01

    The purpose of this study is to evaluate operators' mental workload when dealing with digital maintenance systems in nuclear power plants. First, a questionnaire was designed, according to the factors affecting mental workload, to evaluate the mental workload of maintenance operators at the Second Nuclear Power Plant (NPP) in Taiwan. Sixteen maintenance engineers of the Second NPP then participated in the questionnaire survey. The results indicated that mental workload was lower with digital systems than with analog systems. Finally, a mental workload model based on a neural network technique was developed to predict the workload of maintenance operators in digital maintenance systems. (authors)

  4. CHROMagar Orientation Medium Reduces Urine Culture Workload

    Science.gov (United States)

    Manickam, Kanchana; Karlowsky, James A.; Adam, Heather; Lagacé-Wiens, Philippe R. S.; Rendina, Assunta; Pang, Paulette; Murray, Brenda-Lee

    2013-01-01

    Microbiology laboratories continually strive to streamline and improve their urine culture algorithms because of the high volumes of urine specimens they receive and the modest numbers of those specimens that are ultimately considered clinically significant. In the current study, we quantitatively measured the impact of the introduction of CHROMagar Orientation (CO) medium into routine use in two hospital laboratories and compared it to conventional culture on blood and MacConkey agars. Based on data extracted from our Laboratory Information System from 2006 to 2011, the use of CO medium resulted in a 28% reduction in workload for additional procedures such as Gram stains, subcultures, identification panels, agglutination tests, and biochemical tests. The average number of workload units (one workload unit equals 1 min of hands-on labor) per urine specimen was significantly reduced (P < 0.0001; 95% confidence interval [CI], 0.5326 to 1.047) from 2.67 in 2006 (preimplementation of CO medium) to 1.88 in 2011 (postimplementation of CO medium). We conclude that the use of CO medium streamlined the urine culture process and increased bench throughput by reducing both workload and turnaround time in our laboratories. PMID:23363839

  5. Integration Of PanDA Workload Management System With Supercomputers

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  6. Workload Characterization of a Leadership Class Storage Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Youngjae [ORNL; Gunasekaran, Raghul [ORNL; Shipman, Galen M [ORNL; Dillow, David A [ORNL; Zhang, Zhe [ORNL; Settlemyer, Bradley W [ORNL

    2010-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
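
    The closing observation, that request inter-arrival times and I/O bandwidth usage can be modeled as Pareto distributions, can be illustrated with a short sketch; the samples below are synthetic rather than the Spider traces, and the shape parameter is fitted with the standard maximum-likelihood estimator.

      # Fit a Pareto shape parameter to synthetic inter-arrival times.
      import math
      import random

      def pareto_mle(samples, xm=None):
          """MLE of the Pareto shape alpha, given (or taking) the scale xm."""
          xm = xm or min(samples)
          alpha = len(samples) / sum(math.log(s / xm) for s in samples)
          return xm, alpha

      random.seed(1)
      interarrival = [random.paretovariate(1.8) for _ in range(10_000)]  # scale 1.0
      print(pareto_mle(interarrival))   # recovers roughly (1.0, 1.8)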

  7. Using Psychophysiological Sensors to Assess Mental Workload During Web Browsing.

    Science.gov (United States)

    Jimenez-Molina, Angel; Retamal, Cristian; Lira, Hernan

    2018-02-03

    Knowledge of the mental workload induced by a Web page is essential for improving users' browsing experience. However, continuously assessing the mental workload during a browsing task is challenging. To address this issue, this paper leverages the correlation between stimuli and physiological responses, which are measured with high-frequency, non-invasive psychophysiological sensors during very short span windows. An experiment was conducted to identify levels of mental workload through the analysis of pupil dilation measured by an eye-tracking sensor. In addition, a method was developed to classify mental workload by appropriately combining different signals (electrodermal activity (EDA), electrocardiogram, photoplethysmography (PPG), electroencephalogram (EEG), temperature and pupil dilation) obtained with non-invasive psychophysiological sensors. The results show that the Web browsing task involves four levels of mental workload. Also, by combining all the sensors, the efficiency of the classification reaches 93.7%.
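
    A hedged sketch of the general approach of combining per-window features from several sensors into a workload classifier; the feature matrix, the four labels and the random-forest choice are placeholders for illustration, not the authors' pipeline or data.

      # Concatenated multi-sensor features -> 4-level workload classifier (toy data).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_windows, n_features = 400, 12          # e.g. EDA, ECG/PPG, EEG, temperature, pupil
      X = rng.normal(size=(n_windows, n_features))
      y = rng.integers(0, 4, size=n_windows)   # four assumed workload levels

      clf = RandomForestClassifier(n_estimators=200, random_state=0)
      print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on random data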

  8. A computerized multidimensional measurement of mental workload via handwriting analysis.

    Science.gov (United States)

    Luria, Gil; Rosenblum, Sara

    2012-06-01

    The goal of this study was to test the effect of mental workload on handwriting behavior and to identify characteristics of low versus high mental workload in handwriting. We hypothesized differences between handwriting under three different load conditions and tried to establish a profile that integrated these indicators. Fifty-six participants wrote three numerical progressions of varying difficulty on a digitizer attached to a computer so that we could evaluate their handwriting behavior. Differences were found in temporal, spatial, and angular velocity handwriting measures, but no significant differences were found for pressure measures. Using data reduction, we identified three clusters of handwriting, two of which differentiated well according to the three mental workload conditions. We concluded that handwriting behavior is affected by mental workload and that each measure provides distinct information, so that they present a comprehensive indicator of mental workload.

  9. Situation awareness and workload in complex tactical environments

    NARCIS (Netherlands)

    Veltman, J.A.

    1999-01-01

    The paper provides an example of a method to get insight into workload changes over time, executed tasks and situation awareness (SA) in complex task environments. The method is applied to measure the workload of a helicopter crew. The method has three components: 1) task analysis, 2) video

  10. Development of an EEG-based workload measurement method in nuclear power plants

    International Nuclear Information System (INIS)

    Choi, Moon Kyoung; Lee, Seung Min; Ha, Jun Su; Seong, Poong Hyun

    2018-01-01

    Highlights: •A human operator’s workload in nuclear power plants(NPPs) usually has been evaluated by using subjective ratings. •Subjective rating techniques have several weaknesses such as dependence on the operator’s memory as well as bias. •We suggested an electroencephalogram (EEG)-based workload index for measuring the workload of human operators. •The suggested index was applied to evaluate the effects of operating support systems. -- Abstract: The environment of main control rooms of large scale process control systems such as nuclear power plants (NPPs) has been changed from the conventional analog type to the digital type. In digitalized advanced main control rooms, human operators conduct highly cognitive work rather than physical work compared to the case of the original control rooms in NPPs. Various operating support systems (OSSs) have been developed to reduce an operator’s workload. Most representative techniques to evaluate the workload are based on subjective ratings. However, there are some limitations including the possibility of skewed results due to self-assessment of the workload and the impossibility of continuously measuring the workload due to freezing simulation for workload assessment. As opposed to subjective ratings techniques, physiological techniques can be used for objective and continuous measurements of a human operator’s mental status by sensing the physiological changes of the autonomic or central nervous system. In this study, electroencephalogram (EEG) was used to measure the operator’s mental workload because it had been proven to be sensitive to variations of mental workload in other studies, and it allows various types of analysis. Based on various research reviews on the characteristics of brainwaves, EEG-based Workload Index (EWI) was suggested and validated through experiments. As a result, EWI is concluded to be valid for measuring an operator’s mental workload and preferable to subjective techniques
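
    One conventional way to turn EEG into a workload index is a band-power ratio over a spectral estimate; the sketch below, using a toy signal and a theta/alpha ratio, only illustrates that style of computation and is not the EWI definition proposed in the paper.

      # Welch power spectral density of one channel, then a theta/alpha ratio.
      import numpy as np
      from scipy.signal import welch

      def band_power(f, pxx, lo, hi):
          mask = (f >= lo) & (f < hi)
          return np.trapz(pxx[mask], f[mask])

      fs = 256.0
      t = np.arange(0, 30, 1 / fs)
      eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)  # toy signal

      f, pxx = welch(eeg, fs=fs, nperseg=1024)
      theta = band_power(f, pxx, 4, 8)
      alpha = band_power(f, pxx, 8, 13)
      print(theta / alpha)   # rising ratios are often read as rising mental workload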

  11. Academic workload management towards learning, components of academic work

    OpenAIRE

    Ocvirk, Aleksandra; Trunk Širca, Nada

    2013-01-01

    This paper deals with attributing time value to academic workload from the point of view of an HEI, management of teaching and an individual. We have conducted a qualitative study aimed at analysing documents on academic workload in terms of its definition, and at analysing the attribution of time value to components of academic work in relation to the proportion of workload devoted to teaching in the sense of ensuring quality and effectiveness of learning, and in relation to financial implic...

  12. ATLAS WORLD-cloud and networking in PanDA

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Di Girolamo, A.; Maeno, T.; Walker, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centres, which confined tasks and most of the data traffic. Since those early days, the sites’ network bandwidth has increased at O(1000) and the difference in functionalities between Tier 1s and Tier 2s has reduced. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. PanDA dynamically pairs nuclei and satellite sites for each task based on the input data availability, capability matching, site load and network connectivity. This contribution will introduce the conceptual changes for World-cloud, the development necessary in PanDA, an insight into the network model and the first half-year of operational experience.
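
    The pairing criteria listed above (input data availability, capability matching, site load and network connectivity) can be pictured as a weighted score over candidate satellites; the weights and site records in the sketch below are invented for illustration and are far simpler than PanDA's actual brokerage.

      # Score nucleus/satellite pairs on data locality, capability, load and network.
      def pair_score(nucleus, satellite, weights=(0.4, 0.2, 0.2, 0.2)):
          w_data, w_cap, w_load, w_net = weights
          data = nucleus["input_fraction_at"][satellite["name"]]   # 0..1
          cap = 1.0 if satellite["cores_free"] >= nucleus["min_cores"] else 0.0
          load = 1.0 - satellite["load"]                           # prefer idle sites
          net = min(satellite["mbps_to_nucleus"] / 1000.0, 1.0)    # saturate at 1 Gb/s
          return w_data * data + w_cap * cap + w_load * load + w_net * net

      nucleus = {"min_cores": 500, "input_fraction_at": {"SAT-A": 0.8, "SAT-B": 0.1}}
      satellites = [
          {"name": "SAT-A", "cores_free": 800, "load": 0.6, "mbps_to_nucleus": 400},
          {"name": "SAT-B", "cores_free": 2000, "load": 0.2, "mbps_to_nucleus": 900},
      ]
      print(max(satellites, key=lambda s: pair_score(nucleus, s))["name"])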

  13. A Cloud-Based Internet of Things Platform for Ambient Assisted Living

    Science.gov (United States)

    Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto

    2014-01-01

    A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application

  14. A cloud-based Internet of Things platform for ambient assisted living.

    Science.gov (United States)

    Cubo, Javier; Nieto, Adrián; Pimentel, Ernesto

    2014-08-04

    A common feature of ambient intelligence is that many objects are inter-connected and act in unison, which is also a challenge in the Internet of Things. There has been a shift in research towards integrating both concepts, considering the Internet of Things as representing the future of computing and communications. However, the efficient combination and management of heterogeneous things or devices in the ambient intelligence domain is still a tedious task, and it presents crucial challenges. Therefore, to appropriately manage the inter-connection of diverse devices in these systems requires: (1) specifying and efficiently implementing the devices (e.g., as services); (2) handling and verifying their heterogeneity and composition; and (3) standardizing and managing their data, so as to tackle large numbers of systems together, avoiding standalone applications on local servers. To overcome these challenges, this paper proposes a platform to manage the integration and behavior-aware orchestration of heterogeneous devices as services, stored and accessed via the cloud, with the following contributions: (i) we describe a lightweight model to specify the behavior of devices, to determine the order of the sequence of exchanged messages during the composition of devices; (ii) we define a common architecture using a service-oriented standard environment, to integrate heterogeneous devices by means of their interfaces, via a gateway, and to orchestrate them according to their behavior; (iii) we design a framework based on cloud computing technology, connecting the gateway in charge of acquiring the data from the devices with a cloud platform, to remotely access and monitor the data at run-time and react to emergency situations; and (iv) we implement and generate a novel cloud-based IoT platform of behavior-aware devices as services for ambient intelligence systems, validating the whole approach in real scenarios related to a specific ambient assisted living application.
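
    A minimal sketch of the behavior-aware idea in contribution (i): each device service exposes an allowed message sequence and the gateway advances, or rejects, incoming messages against it. The device name and sequence here are hypothetical.

      # Toy behavior model: messages must follow the declared order.
      class DeviceBehavior:
          def __init__(self, name, allowed_sequence):
              self.name = name
              self.allowed = allowed_sequence
              self.step = 0

          def accept(self, message):
              """Advance the behavior model only if the message is the expected next one."""
              if self.step < len(self.allowed) and message == self.allowed[self.step]:
                  self.step += 1
                  return True
              return False

      fall_sensor = DeviceBehavior("fall_sensor", ["subscribe", "read", "alarm"])
      for msg in ["subscribe", "read", "alarm"]:
          assert fall_sensor.accept(msg)
      print(fall_sensor.accept("read"))   # False: the declared sequence is finished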

  15. The Impact of Heavy Perceived Nurse Workloads on Patient and Nurse Outcomes

    Directory of Open Access Journals (Sweden)

    Maura MacPhee

    2017-03-01

    Full Text Available This study investigated the relationships between seven workload factors and patient and nurse outcomes. (1) Background: Health systems researchers are beginning to address nurses’ workload demands at different unit, job and task levels; and the types of administrative interventions needed for specific workload demands. (2) Methods: This was a cross-sectional correlational study of 472 acute care nurses from British Columbia, Canada. The workload factors included nurse reports of unit-level RN staffing levels and patient acuity and patient dependency; job-level nurse perceptions of heavy workloads, nursing tasks left undone and compromised standards; and task-level interruptions to work flow. Patient outcomes were nurse-reported frequencies of medication errors, patient falls and urinary tract infections; and nurse outcomes were emotional exhaustion and job satisfaction. (3) Results: Job-level perceptions of heavy workloads and task-level interruptions had significant direct effects on patient and nurse outcomes. Tasks left undone mediated the relationships between heavy workloads and nurse and patient outcomes; and between interruptions and nurse and patient outcomes. Compromised professional nursing standards mediated the relationships between heavy workloads and nurse outcomes; and between interruptions and nurse outcomes. (4) Conclusion: Administrators should work collaboratively with nurses to identify work environment strategies that ameliorate workload demands at different levels.

  16. Dynamic electronic institutions in agent oriented cloud robotic systems.

    Science.gov (United States)

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions, the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  17. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users are becoming more demanding and expect to execute computationally intensive applications on their smartphones. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend the capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs), such as limited battery lifetime, limited processing capabilities, and limited storage capacity, by offloading the execution and workload to richer systems with better performance and resources. This paper presents the current offloading frameworks and computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores important parameters on which the frameworks are based, such as the offloading method and the level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
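
    A common back-of-the-envelope offloading rule, given here as an assumed illustration rather than any surveyed framework's logic, compares local execution time and energy against cloud execution plus transfer over the wireless link.

      # Offload when the cloud is both faster and cheaper in device energy.
      def should_offload(cycles, data_bits, s_local, s_cloud, bandwidth,
                         p_compute, p_radio):
          t_local = cycles / s_local
          t_remote = cycles / s_cloud + data_bits / bandwidth
          e_local = p_compute * t_local
          e_remote = p_radio * (data_bits / bandwidth)   # radio cost dominates remotely
          return t_remote < t_local and e_remote < e_local

      # 2e9 cycles, 2 MB of state, 1 GHz device vs an 8x faster cloud, 20 Mb/s link
      print(should_offload(2e9, 16e6, 1e9, 8e9, 20e6, p_compute=0.9, p_radio=1.3))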

  18. GPs' perceptions of workload in England: a qualitative interview study.

    Science.gov (United States)

    Croxson, Caroline Hd; Ashdown, Helen F; Hobbs, Fd Richard

    2017-02-01

    GPs report the lowest levels of morale among doctors, job satisfaction is low, and the GP workforce is diminishing. Workload is frequently cited as negatively impacting on commitment to a career in general practice, and many GPs report that their workload is unmanageable. To gather an in-depth understanding of GPs' perceptions and attitudes towards workload. All GPs working within NHS England were eligible. Advertisements were circulated via regional GP e-mail lists and national social media networks in June 2015. Of those GPs who responded, a maximum-variation sample was selected until data saturation was reached. Semi-structured, qualitative interviews were conducted. Data were analysed thematically. In total, 171 GPs responded, and 34 were included in this study. GPs described an increase in workload over recent years, with current working days being long and intense, raising concerns over the wellbeing of GPs and patients. Full-time partnership was generally not considered to be possible, and many participants felt workload was unsustainable, particularly given the diminishing workforce. Four major themes emerged to explain increased workload: increased patient needs and expectations; a changing relationship between primary and secondary care; bureaucracy and resources; and the balance of workload within a practice. Continuity of care was perceived as being eroded by changes in contracts and working patterns to deal with workload. This study highlights the urgent need to address perceived lack of investment and clinical capacity in general practice, and suggests that managing patient expectations around what primary care can deliver, and reducing bureaucracy, have become key issues, at least until capacity issues are resolved. © British Journal of General Practice 2017.

  19. A survey of IoT cloud platforms

    Directory of Open Access Journals (Sweden)

    Partha Pratim Ray

    2016-12-01

    Full Text Available The Internet of Things (IoT) envisages an overall merging of several “things” while utilizing the internet as the backbone of the communication system to establish smart interaction between people and surrounding objects. The cloud, being a crucial component of IoT, provides valuable application-specific services in many application domains. A number of IoT cloud providers are currently emerging into the market to offer suitable and specific IoT-based services. In spite of the huge possible involvement of these IoT clouds, no standard comparative analytical study has been found across the literature databases. This article surveys popular IoT cloud platforms in light of several service domains such as application development, device management, system management, heterogeneity management, data management, tools for analysis, deployment, monitoring, visualization, and research. A comparison is presented of the overall dissemination of IoT clouds according to their applicability. Further, a few challenges are also described that researchers should take on in the near future. Ultimately, the goal of this article is to provide detailed knowledge about the existing IoT cloud service providers and their pros and cons in concrete form.

  20. How the workload impacts on cognitive cooperation: A pilot study.

    Science.gov (United States)

    Sciaraffa, Nicolina; Borghini, Gianluca; Arico, Pietro; Di Flumeri, Gianluca; Toppi, Jlenia; Colosimo, Alfredo; Bezerianos, Anastatios; Thakor, Nitish V; Babiloni, Fabio

    2017-07-01

    Cooperation degradation can be seen as one of the main causes of human errors. Poor cooperation could arise from aberrant mental processes, such as mental overload, that negatively affect the user's performance. Using different levels of difficulty in a cooperative task, we combined behavioural, subjective and neurophysiological data with the aim to i) quantify the mental workload under which the crew was operating, ii) evaluate the degree of their cooperation, and iii) assess the impact of the workload demands on the cooperation levels. The combination of such data showed that high workload demand impacted significantly on the performance, workload perception, and degree of cooperation.

  1. SMART POINT CLOUD: DEFINITION AND REMAINING CHALLENGES

    Directory of Open Access Journals (Sweden)

    F. Poux

    2016-10-01

    Full Text Available Dealing with coloured point cloud acquired from terrestrial laser scanner, this paper identifies remaining challenges for a new data structure: the smart point cloud. This concept arises with the statement that massive and discretized spatial information from active remote sensing technology is often underused due to data mining limitations. The generalisation of point cloud data associated with the heterogeneity and temporality of such datasets is the main issue regarding structure, segmentation, classification, and interaction for an immediate understanding. We propose to use both point cloud properties and human knowledge through machine learning to rapidly extract pertinent information, using user-centered information (smart data) rather than raw data. A review of feature detection, machine learning frameworks and database systems indexed both for mining queries and data visualisation is studied. Based on existing approaches, we propose a new 3-block flexible framework around device expertise, analytic expertise and domain base reflexion. This contribution serves as the first step for the realisation of a comprehensive smart point cloud data structure.

  2. A Dynamic Resource Scheduling Method Based on Fuzzy Control Theory in Cloud Environment

    OpenAIRE

    Chen, Zhijia; Zhu, Yuanchang; Di, Yanqiang; Feng, Shaochong

    2015-01-01

    Resources in the cloud environment are characterized by large scale, diversity, and heterogeneity. Moreover, user requirements for cloud computing resources are commonly characterized by uncertainty and imprecision. Hence, to improve the quality of cloud computing services, not only should traditional standards such as cost and bandwidth be satisfied, but particular emphasis should also be laid on extended standards such as system friendliness. This paper proposes a dynamic re...
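
    To make the fuzzy-control idea concrete, the toy sketch below maps CPU utilisation through triangular membership functions to a scale-in/hold/scale-out decision by centroid defuzzification; the membership shapes and rules are invented and are not the controller proposed in the paper.

      # Triangular memberships over utilisation drive a fuzzy scaling decision.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def scaling_decision(util):
          low = tri(util, -0.1, 0.0, 0.5)    # rule: low utilisation  -> scale in (-1)
          ok = tri(util, 0.3, 0.5, 0.7)      # rule: comfortable zone -> hold (0)
          high = tri(util, 0.5, 1.0, 1.1)    # rule: high utilisation -> scale out (+1)
          num = -1 * low + 0 * ok + 1 * high
          den = low + ok + high
          return num / den if den else 0.0

      print(scaling_decision(0.82))   # positive value -> add capacity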

  3. Training and testing ERP-BCIs under different mental workload conditions

    Science.gov (United States)

    Ke, Yufeng; Wang, Peiyuan; Chen, Yuqian; Gu, Bin; Qi, Hongzhi; Zhou, Peng; Ming, Dong

    2016-02-01

    Objective. As one of the most popular and extensively studied paradigms of brain-computer interfaces (BCIs), event-related potential-based BCIs (ERP-BCIs) are usually built and tested in ideal laboratory settings in most existing studies, with subjects concentrating on stimuli and intentionally avoiding possible distractors. This study is aimed at examining the effect of simultaneous mental activities on ERP-BCIs by manipulating various levels of mental workload during the training and/or testing of an ERP-BCI. Approach. Mental workload was manipulated during the training or testing of a row-column P300-speller to investigate how and to what extent the spelling performance and the ERPs evoked by the oddball stimuli are affected by simultaneous mental workload. Main results. Responses of certain ERP components, temporal-occipital N200 and the late reorienting negativity evoked by the oddball stimuli and the classifiability of ERP features between targets and non-targets decreased with the increase of mental workload encountered by the subject. However, the effect of mental workload on the performance of ERP-BCI was not always negative but depended on the conditions where the ERP-BCI was built and applied. The performance of ERP-BCI built under an ideal lab setting without any irrelevant mental activities declined with the increasing mental workload of the testing data. However, the performance was significantly improved when an ERP-BCI was built under an appropriate mental workload level, compared to that built under speller-only conditions. Significance. The adverse effect of concurrent mental activities may present a challenge for ERP-BCIs trained in ideal lab settings but which are to be used in daily work, especially when users are performing demanding mental processing. On the other hand, the positive effects of the mental workload of the training data suggest that introducing appropriate mental workload during training ERP-BCIs is of potential benefit to the

  4. WBDOC Weekly Workload Status Report

    Data.gov (United States)

    Social Security Administration — Weekly reports of workloads processed in the Wilkes Barre Data Operation Center. Reports on quantities of work received, processed, pending and average processing...

  5. Nursing workloads in family health: implications for universal access.

    Science.gov (United States)

    de Pires, Denise Elvira Pires; Machado, Rosani Ramos; Soratto, Jacks; Scherer, Magda dos Anjos; Gonçalves, Ana Sofia Resque; Trindade, Letícia Lima

    2016-01-01

    To identify the workloads of nursing professionals of the Family Health Strategy, considering their implications for the effectiveness of universal access. Qualitative study with nursing professionals of the Family Health Strategy of the South, Central West and North regions of Brazil, using methodological triangulation. For the analysis, resources of the Atlas.ti software and Thematic Content Analysis were combined, and the data were interpreted based on the labor process and workloads as theoretical approaches. The way of working in the Family Health Strategy has predominantly resulted in an increase in the workloads of the nursing professionals, with emphasis on work overload, excess demand, problems in the physical infrastructure of the units and failures in the care network, which hinders its effectiveness as a preferred strategy to achieve universal access to health. On the other hand, teamwork, affinity for the work performed, bond with the user, and effectiveness of the assistance contributed to reducing their workloads. Investments in elements that reduce nursing workloads, such as changes in working conditions and management, can contribute to the effectiveness of the Family Health Strategy and to achieving the goal of universal access to health.

  6. [Effects of mental workload on work ability in primary and secondary school teachers].

    Science.gov (United States)

    Xiao, Yuanmei; Li, Weijuan; Ren, Qingfeng; Ren, Xiaohui; Wang, Zhiming; Wang, Mianzhen; Lan, Yajia

    2015-02-01

    To investigate the change pattern of primary and secondary school teachers' work ability with changes in their mental workload. A total of 901 primary and secondary school teachers were selected by random cluster sampling, and their mental workload and work ability were assessed by the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) and Work Ability Index (WAI) questionnaires, whose reliability and validity had been tested. The effects of their mental workload on work ability were analyzed. Primary and secondary school teachers' work ability reached the highest level at a certain level of mental workload (55.73∼64.10). When mental workload was below this level, their work ability had a positive correlation with the mental workload: their work ability increased or remained stable with increasing mental workload, and the percentage of teachers with good work ability increased, while that of teachers with moderate work ability decreased. But when their mental workload was higher than this level, their work ability had a negative correlation with the mental workload: their work ability significantly decreased with increasing mental workload, and the percentage of teachers with good work ability decreased, while that of teachers with moderate work ability increased. Moderate mental workload (55.73∼64.10) will benefit the maintenance and stabilization of their work ability.

  7. Operator Workload: Comprehensive Review and Evaluation of Operator Workload Methodologies

    Science.gov (United States)

    1989-06-01

  8. EFFECTIVE INDICES FOR MONITORING MENTAL WORKLOAD WHILE PERFORMING MULTIPLE TASKS.

    Science.gov (United States)

    Hsu, Bin-Wei; Wang, Mao-Jiun J; Chen, Chi-Yuan; Chen, Fang

    2015-08-01

    This study identified several physiological indices that can accurately monitor mental workload while participants performed multiple tasks with the strategy of maintaining stable performance and maximizing accuracy. Thirty male participants completed three 10-min. simulated multitasks: MATB (Multi-Attribute Task Battery) with three workload levels. Twenty-five commonly used mental workload measures were collected, including heart rate, 12 HRV (heart rate variability), 10 EEG (electroencephalography) indices (α, β, θ, α/θ, θ/β from O1-O2 and F4-C4), and two subjective measures. Analyses of index sensitivity showed that two EEG indices, θ and α/θ (F4-C4), one time-domain HRV-SDNN (standard deviation of inter-beat intervals), and four frequency-domain HRV: VLF (very low frequency), LF (low frequency), %HF (percentage of high frequency), and LF/HF were sensitive to differentiate high workload. EEG α/θ (F4-C4) and LF/HF were most effective for monitoring high mental workload. LF/HF showed the highest correlations with other physiological indices. EEG α/θ (F4-C4) showed strong correlations with subjective measures across different mental workload levels. Operation strategy would affect the sensitivity of EEG α (F4-C4) and HF.
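
    As an illustration of how one sensitive index, the HRV LF/HF ratio, is commonly computed, the sketch below interpolates a synthetic RR-interval series, takes a Welch spectrum and integrates the standard LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands; it is not the study's processing chain.

      # LF/HF from RR intervals: resample the tachogram, then integrate PSD bands.
      import numpy as np
      from scipy.interpolate import interp1d
      from scipy.signal import welch

      rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300))   # seconds, toy data
      t = np.cumsum(rr)
      fs = 4.0                                                     # resample at 4 Hz
      tt = np.arange(t[0], t[-1], 1 / fs)
      rr_even = interp1d(t, rr)(tt)

      f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)
      lf_band = (f >= 0.04) & (f < 0.15)
      hf_band = (f >= 0.15) & (f < 0.40)
      lf = np.trapz(pxx[lf_band], f[lf_band])
      hf = np.trapz(pxx[hf_band], f[hf_band])
      print(lf / hf)   # ratios rising under load are read as higher mental workload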

  9. Using the NASA Task Load Index to Assess Workload in Electronic Medical Records.

    Science.gov (United States)

    Hudson, Darren; Kushniruk, Andre W; Borycki, Elizabeth M

    2015-01-01

    Electronic medical records (EMRs) have been expected to decrease health professional workload. The NASA Task Load Index has become an important tool for assessing workload in many domains. However, its application in assessing the impact of an EMR on nurses' workload has remained to be explored. In this paper we report the results of a study of workload and explore the utility of applying the NASA Task Load Index to assess the impact of an EMR, at the end of its lifecycle, on nurses' workload. It was found that mental and temporal demands contributed most to the workload. Further work along these lines is recommended.

  10. ATLAS World-cloud and networking in PanDA

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration; De, Kaushik; Di Girolamo, Alessandro; Walker, Rodney

    2016-01-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centers, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased at O(1000) and the difference in functionalities between Tier 1s and Tier 2s has reduced. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. Nuclei and satellite sites are dynamically paired by PanDA for each task based on the input data availability, capability matching, site load and...

  11. ATLAS WORLD-cloud and networking in PanDA

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Di Girolamo, Alessandro; Maeno, Tadashi; Walker, Rodney

    2017-01-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centres, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased at O(1000) and the difference in functionalities between Tier 1s and Tier 2s has reduced. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. PanDA dynamically pairs nuclei and satellite sites for each task based on the input data availability, capability matching, site load and network...

  12. Towards an Approach of Semantic Access Control for Cloud Computing

    Science.gov (United States)

    Hu, Luokai; Ying, Shi; Jia, Xiangyang; Zhao, Kai

    With the development of cloud computing, the mutual understandability among distributed Access Control Policies (ACPs) has become an important issue in the security field of cloud computing. Semantic Web technology provides the solution to semantic interoperability of heterogeneous applications. In this paper, we analyze existing access control methods and present a new Semantic Access Control Policy Language (SACPL) for describing ACPs in the cloud computing environment. The Access Control Oriented Ontology System (ACOOS) is designed as the semantic basis of SACPL. The ontology-based SACPL language can effectively solve the interoperability issue of distributed ACPs. This study enriches research on applying Semantic Web technology in the field of security, and provides a new way of thinking about access control in cloud computing.

  13. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    Science.gov (United States)

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single-versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.

  14. DIRAC optimized workload management

    CERN Document Server

    Paterson, S K

    2008-01-01

    The LHCb DIRAC Workload and Data Management System employs advanced optimization techniques in order to dynamically allocate resources. The paradigms realized by DIRAC, such as late binding through the Pilot Agent approach, have proven to be highly successful. For example, this has allowed the principles of workload management to be applied not only at the time of user job submission to the Grid but also to optimize the use of computing resources once jobs have been acquired. Along with the central application of job priorities, DIRAC minimizes the system response time for high priority tasks. This paper will describe the recent developments to support Monte Carlo simulation, data processing and distributed user analysis in a consistent way across disparate compute resources including individual PCs, local batch systems, and the Worldwide LHC Computing Grid. The Grid environment is inherently unpredictable and whilst short-term studies have proven to deliver high job efficiencies, the system performance over ...
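
    The late-binding idea, in which a pilot already running on a resource pulls the highest-priority job its resources can satisfy, can be sketched with a simple priority queue; the job fields and matching rule below are invented for illustration and are far simpler than DIRAC's matcher.

      # Pilots pull work: highest priority first, skipping jobs they cannot run.
      import heapq
      import itertools

      class TaskQueue:
          def __init__(self):
              self._heap = []                     # entries: (-priority, seq, job)
              self._seq = itertools.count()

          def submit(self, job, priority):
              heapq.heappush(self._heap, (-priority, next(self._seq), job))

          def match(self, pilot_resources):
              """Hand the pilot the highest-priority job its resources can run."""
              skipped, picked = [], None
              while self._heap and picked is None:
                  entry = heapq.heappop(self._heap)
                  if entry[2]["cores"] <= pilot_resources["cores"]:
                      picked = entry[2]
                  else:
                      skipped.append(entry)
              for entry in skipped:               # put unmatched jobs back
                  heapq.heappush(self._heap, entry)
              return picked

      queue = TaskQueue()
      queue.submit({"name": "user-analysis", "cores": 1}, priority=10)
      queue.submit({"name": "mc-simulation", "cores": 8}, priority=5)
      print(queue.match({"cores": 4})["name"])    # -> user-analysis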

  15. Sedimentation Efficiency of Condensation Clouds in Substellar Atmospheres

    Science.gov (United States)

    Gao, Peter; Marley, Mark S.; Ackerman, Andrew S.

    2018-03-01

    Condensation clouds in substellar atmospheres have been widely inferred from spectra and photometric variability. Up until now, their horizontally averaged vertical distribution and mean particle size have been largely characterized using models, one of which is the eddy diffusion–sedimentation model from Ackerman and Marley that relies on a sedimentation efficiency parameter, f_sed, to determine the vertical extent of clouds in the atmosphere. However, the physical processes controlling the vertical structure of clouds in substellar atmospheres are not well understood. In this work, we derive trends in f_sed across a large range of eddy diffusivities (K_zz), gravities, material properties, and cloud formation pathways by fitting cloud distributions calculated by a more detailed cloud microphysics model. We find that f_sed is dependent on K_zz, but not gravity, when K_zz is held constant. f_sed is most sensitive to the nucleation rate of cloud particles, as determined by material properties like surface energy and molecular weight. High surface energy materials form fewer, larger cloud particles, leading to large f_sed (>1), and vice versa for materials with low surface energy. For cloud formation via heterogeneous nucleation, f_sed is sensitive to the condensation nuclei flux and radius, connecting cloud formation in substellar atmospheres to the objects’ formation environments and other atmospheric aerosols. These insights could lead to improved cloud models that help us better understand substellar atmospheres. For example, we demonstrate that f_sed could increase with increasing cloud base depth in an atmosphere, shedding light on the nature of the brown dwarf L/T transition.
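
    As a rough sketch (not the authors' derivation), the Ackerman and Marley eddy diffusion-sedimentation balance that defines f_sed equates upward turbulent mixing of the total (vapor plus condensate) mixing ratio q_t with downward sedimentation of the condensate mixing ratio q_c; in LaTeX notation,

      % hedged sketch of the balance equation behind f_sed
      \[
        -K_{zz}\,\frac{\partial q_t}{\partial z} \;-\; f_{\mathrm{sed}}\, w_{*}\, q_c \;=\; 0,
        \qquad w_{*} = \frac{K_{zz}}{L},
      \]

    where w_* is the convective velocity scale and L is a turbulent mixing length, usually taken to be of order the atmospheric scale height. Larger f_sed thus describes thin clouds of large, rapidly settling particles, while smaller f_sed describes vertically extended clouds of small particles.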

  16. VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds.

    Directory of Open Access Journals (Sweden)

    Vasileios Thanasias

    Full Text Available Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, provisioning resources in a way that efficiently utilizes a given budget while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction of up to 70% in the overall task slowdown rate by the proposed algorithms.

  17. VM Capacity-Aware Scheduling within Budget Constraints in IaaS Clouds.

    Science.gov (United States)

    Thanasias, Vasileios; Lee, Choonhwa; Hanif, Muhammad; Kim, Eunsam; Helal, Sumi

    2016-01-01

    Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new abilities such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources dynamically changes over time, provisioning resources in a way that efficiently utilizes a given budget while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget, while minimizing the slowdown due to the budget constraint. Our simulation study demonstrates a substantial reduction of up to 70% in the overall task slowdown rate by the proposed algorithms.
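
    As a rough sketch of the planning problem (illustrative only, not the paper's provisioning or scheduling algorithms), one can greedily buy the VM type with the best throughput per unit cost until the hourly budget is exhausted; the VM types, rates and prices below are invented.

      # Greedy budget-constrained provisioning over invented VM types.
      VM_TYPES = [               # (name, tasks/hour, cents/hour)
          ("small", 10, 5),
          ("medium", 22, 10),
          ("large", 48, 20),
      ]

      def provision(budget_cents_per_hour):
          best = max(VM_TYPES, key=lambda v: v[1] / v[2])   # throughput per cent
          name, rate, price = best
          plan, capacity, spend = [], 0, 0
          while spend + price <= budget_cents_per_hour:
              plan.append(name)
              capacity += rate
              spend += price
          return plan, capacity, spend

      print(provision(100))   # five 'large' VMs, 240 tasks/hour, 100 cents spent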

  18. Toward ubiquitous healthcare services with a novel efficient cloud platform.

    Science.gov (United States)

    He, Chenguang; Fan, Xiaomao; Li, Ye

    2013-01-01

    Ubiquitous healthcare services are becoming more and more popular, especially under the urgent demands of global aging. Cloud computing has a pervasive, on-demand, service-oriented nature, which fits the characteristics of healthcare services very well. However, dealing with multimodal, heterogeneous, and nonstationary physiological signals to provide persistent personalized services, while supporting highly concurrent online analysis for the public, is a challenge for the general cloud. In this paper, we propose a private cloud platform architecture which includes six layers according to the specific requirements. The platform uses a message queue as the cloud engine, and each layer thereby achieves relative independence through this loosely coupled means of communication with a publish/subscribe mechanism. Furthermore, a plug-in algorithm framework is also presented, and massive semistructured or unstructured medical data are accessed adaptively by this cloud architecture. As the testing results show, the proposed cloud platform, with its robust, stable, and efficient features, can satisfy highly concurrent requests from ubiquitous healthcare services.
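
    A minimal in-process sketch of the publish/subscribe decoupling the platform relies on between layers; a real deployment would sit on a message broker rather than this toy dispatcher, and the topic name and handlers are hypothetical.

      # Layers communicate only through topics, never by direct calls.
      from collections import defaultdict

      class MessageBus:
          def __init__(self):
              self._subscribers = defaultdict(list)

          def subscribe(self, topic, handler):
              self._subscribers[topic].append(handler)

          def publish(self, topic, payload):
              for handler in self._subscribers[topic]:
                  handler(payload)

      bus = MessageBus()
      # the analysis and storage layers independently subscribe to raw ECG windows
      bus.subscribe("ecg.window", lambda w: print("analyze", len(w), "samples"))
      bus.subscribe("ecg.window", lambda w: print("persist", len(w), "samples"))
      bus.publish("ecg.window", [0.1] * 2500)   # the acquisition layer publishes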

  19. Classification Systems for Individual Differences in Multiple-task Performance and Subjective Estimates of Workload

    Science.gov (United States)

    Damos, D. L.

    1984-01-01

    Human factors practitioners often are concerned with mental workload in multiple-task situations. Investigations of these situations have demonstrated repeatedly that individuals differ in their subjective estimates of workload. These differences may be attributed in part to individual differences in definitions of workload. However, after allowing for differences in the definition of workload, there are still unexplained individual differences in workload ratings. The relation between individual differences in multiple-task performance, subjective estimates of workload, information processing abilities, and the Type A personality trait were examined.

  20. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    Science.gov (United States)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronizing and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance based on communications between ECITE and the participant projects; the scientists or IT technicians in those projects then launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their codes, documents or data without worrying about the heterogeneity in structure and operations among different cloud platforms.

  1. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.
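
    The module evaluated above changes the allocated resource on-the-fly with a corresponding change in price; the snippet below is a minimal sketch of such a threshold-based scaling rule, where the utilization thresholds, step size and unit price are assumptions rather than values from the paper.

        # Minimal sketch of a threshold-based scaling rule with a matching price change.
        # Thresholds, step size and unit price are illustrative assumptions.
        def rescale(allocated_cpus, utilization, unit_price=0.04,
                    upper=0.80, lower=0.30, step=1):
            """Return (new allocation, new hourly price) for one monitoring interval."""
            if utilization > upper:                              # SLA at risk: scale up
                allocated_cpus += step
            elif utilization < lower and allocated_cpus > 1:     # over-provisioned: scale down
                allocated_cpus -= step
            return allocated_cpus, round(allocated_cpus * unit_price, 2)

        print(rescale(4, 0.92))   # -> (5, 0.2)
        print(rescale(4, 0.15))   # -> (3, 0.12)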

  2. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  3. Relationship between workload and low back pain in assembly line workers

    Directory of Open Access Journals (Sweden)

    Reza Kalantari

    2016-06-01

    Full Text Available Introduction: Work pressure and excessive workload can jeopardize and impair people's health. One such impairment is musculoskeletal disorders, among which low back pain is the most common and most costly problem. The purpose of this study was to investigate the relationship between workload and the prevalence of low back pain in assembly line workers of a car manufacturing factory. Methods: This cross-sectional study was conducted on 69 workers working on the assembly line of a factory. Data collection tools included three questionnaires: a demographic questionnaire, the NASA Task Load Index (NASA-TLX) and the Cornell Musculoskeletal Discomfort Questionnaire (CMDQ). Data were analyzed by descriptive and inferential (t-test and one-way ANOVA) statistics. Results: Of the workers, 72.5% were female. The average total workload was 71.42% and the prevalence of musculoskeletal disorders in the low back was 43.37%. The analysis of the relationship between workload and the prevalence of low back pain showed a significant relationship between physical/mental workload and the incidence of low back pain (P<0.05). Conclusion: The greater the workload on a person, the greater the risk of low back pain. Measures such as increasing the number of workers to distribute the workload, slowing the work pace, scheduling work-rest periods for workers, improving the psychological conditions of work, etc. can be useful in this regard.

  4. Quantifying the Workload of Subject Bibliographers in Collection Development.

    Science.gov (United States)

    Metz, Paul

    1991-01-01

    Discussion of the role of subject bibliographers in collection development activities focuses on an approach developed at Virginia Polytechnic Institute and State University to provide a formula for estimating the collection development workload of subject bibliographers. Workload standards and matrix models of organizational structures are discussed, and…

  5. Respiratory sinus arrhythmia as a measure of cognitive workload.

    Science.gov (United States)

    Muth, Eric R; Moss, Jason D; Rosopa, Patrick J; Salley, James N; Walker, Alexander D

    2012-01-01

    The current standard for measuring cognitive workload is the NASA Task-load Index (TLX) questionnaire. Although this measure has a high degree of reliability, diagnosticity, and sensitivity, a reliable physiological measure of cognitive workload could provide a non-invasive, objective measure of workload that could be tracked in real or near real-time without interrupting the task. This study investigated changes in respiratory sinus arrhythmia (RSA) during seven different sub-sections of a proposed selection test for Navy aviation and compared them to changes reported on the NASA-TLX. 201 healthy participants performed the seven tasks of the Navy's Performance Based Measure. RSA was measured during each task and the NASA-TLX was administered after each task. Multi-level modeling revealed that RSA significantly predicted NASA-TLX scores. A moderate within-subject correlation was also found between RSA and NASA TLX scores. The findings support the potential development of RSA as a real-time measure of cognitive workload. Copyright © 2011. Published by Elsevier B.V.

  6. Workload and job satisfaction among general practitioners: a review of the literature.

    NARCIS (Netherlands)

    Groenewegen, P.P.; Hutten, J.B.F.

    1991-01-01

    The workload of general practitioners (GPs) is an important issue in health care systems with capitation payment for GPs' services. This article reviews the literature on determinants and consequences of workload and job satisfaction of GPs. Determinants of workload are located on the demand side

  7. Continuous measures of situation awareness and workload

    International Nuclear Information System (INIS)

    Droeivoldsmo, Asgeir; Skraaning, Gyrd jr.; Sverrbo, Mona; Dalen, Joergen; Grimstad, Tone; Andresen, Gisle

    1998-03-01

    This report presents methods for continuous measures for Situation Awareness and Workload. The objective has been to identify, develop and test the new measures, and compare them to instruments that require interruptions of scenarios. The new measures are: (1) the Visual Indicator of Situation Awareness (VISA); where Situation Awareness is scored from predefined areas of visual interest critical for solving scenarios. Visual monitoring of areas was recorded by eye-movement tracking. (2) Workload scores reflected by Extended Dwell Time (EDT) and the operator Activity Level. EDT was calculated from eye-movement data files, and the activity level was estimated from simulator logs. Using experimental data from the 1996 CASH NRC Alarm study and the 1997 Human Error Analysis Project/ Human-Centred Automation study, the new measurement techniques have been tested and evaluated on a preliminary basis. The results showed promising relationships between the new continuous measures of situation awareness and workload, and established instruments based upon scenario interruptions. (author)

  8. Remuneration, workload, and allocation of time in general practice.

    NARCIS (Netherlands)

    Berg, M.J. van den; Westert, G.P.; Groenewegen, P.P.; Bakker, D.H. de; Zee, J. van der

    2006-01-01

    Background: General Practitioners (GPs) can cope with workload by, among other things, spending more hours in patient care or by spending less time per patient. The way GPs are paid might affect the way they cope with workload. From an economic point of view, capitation payment is an incentive to

  9. Workload demand in police officers during mountain bike patrols

    NARCIS (Netherlands)

    Takken, T.; Ribbink, A.; Heneweer, H.; Moolenaar, H.; Wittink, H.

    2009-01-01

    To the authors' knowledge this is the first paper that has used the training impulse (TRIMP) 'methodology' to calculate workload demand. It is believed that this is a promising method to calculate workload in a range of professions in order to understand the relationship between work demands and
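
    The record above only names the training impulse (TRIMP) methodology, so the snippet below sketches one common variant, Banister's TRIMP, as an assumption about what such a workload calculation looks like; the heart-rate figures in the example are invented.

        # Sketch of Banister's TRIMP, one common training-impulse variant (an assumption,
        # since the abstract does not state which TRIMP formulation was used).
        import math

        def banister_trimp(duration_min, hr_work, hr_rest, hr_max, male=True):
            delta = (hr_work - hr_rest) / (hr_max - hr_rest)   # fractional heart-rate reserve
            b, k = (0.64, 1.92) if male else (0.86, 1.67)
            return duration_min * delta * b * math.exp(k * delta)

        # e.g. a 120-minute bike patrol at an average heart rate of 120 bpm
        print(round(banister_trimp(120, hr_work=120, hr_rest=60, hr_max=190), 1))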

  10. All Things Being Equal: Observing Australian Individual Academic Workloads

    Science.gov (United States)

    Dobele, Angela; Rundle-Thiele, Sharyn; Kopanidis, Foula; Steel, Marion

    2010-01-01

    The achievement of greater gender equity within Australian universities is a significant issue for both the quality and the strength of Australian higher education. This paper contributes to our knowledge of academic workloads, observing individual workloads in business faculties. A multiple case study method was employed to observe individual…

  11. Modest associations between self-reported physical workload and neck trouble

    DEFF Research Database (Denmark)

    Holm, Jonas Winkel; Hartvigsen, Jan; Lings, Svend

    2013-01-01

    OBJECTIVES: To investigate the relationship between self-reported physical workload and neck trouble (NT) in twins. Additionally, to explore whether the relationship between physical workload and NT is influenced by genetic factors. METHODS: A twin control study was performed within a population-based, cross-sectional questionnaire study using 3,208 monozygotic (MZ) and same-sexed dizygotic (DZ) twins aged 19-70. Twin pairs discordant for self-reported NT during the past year ("Any NT") were included. Self-reported physical workload in four categories was used as exposure ("sitting," "sitting and walking," "light physical," and "heavy physical" work). Paired analyses including conditional logistic regression were made for all participants and for each sex, and MZ and DZ pairs separately. RESULTS: No marked associations between physical workload and NT were seen. A moderate risk elevation in "heavy...

  12. Training improves laparoscopic tasks performance and decreases operator workload.

    Science.gov (United States)

    Hu, Jesse S L; Lu, Jirong; Tan, Wee Boon; Lomanto, Davide

    2016-05-01

    It has been postulated that increased operator workload during task performance may increase fatigue and surgical errors. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is a validated tool for self-assessment of workload. Our study aims to assess the relationship of workload and performance of novices in simulated laparoscopic tasks of different complexity levels before and after training. Forty-seven novices without prior laparoscopic experience were recruited in a trial to investigate whether training improves task performance as well as mental workload. The participants were tested on three standard tasks (ring transfer, precision cutting and intracorporeal suturing) of increasing complexity based on the Fundamentals of Laparoscopic Surgery (FLS) curriculum. Following a period of training and rest, participants were tested again. Test scores were computed from time taken and time penalties for precision errors. Test scores and NASA-TLX scores were recorded pre- and post-training and analysed using paired t tests. One-way repeated measures ANOVA was used to analyse differences in NASA-TLX scores between the three tasks. The NASA-TLX score was lowest with ring transfer and highest with intracorporeal suturing; this difference was statistically significant both before and after training. NASA-TLX scores mirror the changes in test scores for the three tasks. Workload scores decreased significantly after training for all three tasks. The NASA-TLX score is an accurate reflection of the complexity of simulated laparoscopic tasks in the FLS curriculum, and this correlates with the relationship of test scores between the three tasks. Simulation training improves both performance score and workload score across the tasks.

  13. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    Science.gov (United States)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing as it promises high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  14. Now and next-generation sequencing techniques: future of sequence analysis using cloud computing.

    Science.gov (United States)

    Thakur, Radhe Shyam; Bandopadhyay, Rajib; Chaudhary, Bratati; Chatterjee, Sourav

    2012-01-01

    Advances in the field of sequencing techniques have resulted in the greatly accelerated production of huge sequence datasets. This presents immediate challenges in database maintenance at datacenters. It provides additional computational challenges in data mining and sequence analysis. Together these represent a significant overburden on traditional stand-alone computer resources, and to reach effective conclusions quickly and efficiently, the virtualization of the resources and computation on a pay-as-you-go concept (together termed "cloud computing") has recently appeared. The collective resources of the datacenter, including both hardware and software, can be available publicly, being then termed a public cloud, the resources being provided in a virtual mode to the clients who pay according to the resources they employ. Examples of public companies providing these resources include Amazon, Google, and Joyent. The computational workload is shifted to the provider, which also implements required hardware and software upgrades over time. A virtual environment is created in the cloud corresponding to the computational and data storage needs of the user via the internet. The task is then performed, the results transmitted to the user, and the environment finally deleted after all tasks are completed. In this discussion, we focus on the basics of cloud computing, and go on to analyze the prerequisites and overall working of clouds. Finally, the applications of cloud computing in biological systems, particularly in comparative genomics, genome informatics, and SNP detection are discussed with reference to traditional workflows.

  15. Approximate entropy: a new evaluation approach of mental workload under multitask conditions

    Science.gov (United States)

    Yao, Lei; Li, Xiaoling; Wang, Wei; Dong, Yuanzhe; Jiang, Ying

    2014-04-01

    There are numerous instruments and an abundance of complex information in the traditional cockpit display-control system, and pilots require a long time to familiarize themselves with the cockpit interface. This can cause accidents when they cope with emergency events, suggesting that it is necessary to evaluate pilot cognitive workload. In order to establish a simplified method to evaluate cognitive workload under multitask conditions, we designed a series of experiments involving different instrument panels and collected electroencephalograms (EEG) from 10 healthy volunteers. The data were classified and analyzed with approximate entropy (ApEn) signal processing. ApEn increased with increasing experiment difficulty, suggesting that it can be used to evaluate cognitive workload. Our results demonstrate that ApEn can be used as an evaluation criterion for cognitive workload and has good specificity and sensitivity. Moreover, we determined an empirical formula to assess the cognitive workload interval, which can simplify cognitive workload evaluation under multitask conditions.
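
    Approximate entropy itself is a standard algorithm; the following is a minimal NumPy sketch of it, where the embedding dimension m, the tolerance r and the synthetic test signal are illustrative choices rather than the parameters used in the study.

        # Minimal NumPy sketch of approximate entropy (ApEn).
        import numpy as np

        def apen(signal, m=2, r=None):
            x = np.asarray(signal, dtype=float)
            if r is None:
                r = 0.2 * x.std()                        # a common default tolerance

            def phi(m):
                n = len(x) - m + 1
                emb = np.array([x[i:i + m] for i in range(n)])           # embedded vectors
                # Chebyshev distance between every pair of embedded vectors
                dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                c = (dist <= r).sum(axis=1) / n
                return np.log(c).mean()

            return phi(m) - phi(m + 1)

        rng = np.random.default_rng(0)
        test_signal = np.sin(np.linspace(0, 8 * np.pi, 300)) + 0.1 * rng.standard_normal(300)
        print(round(apen(test_signal), 3))               # higher values = more irregular signal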

  16. Balancing nurses' workload in hospital wards: study protocol of developing a method to manage workload.

    Science.gov (United States)

    van den Oetelaar, W F J M; van Stel, H F; van Rhenen, W; Stellato, R K; Grolman, W

    2016-11-10

    Hospitals pursue different goals at the same time: excellent service to their patients, good quality care, operational excellence, retaining employees. This requires a good balance between patient needs and nursing staff. One way to ensure a proper fit between patient needs and nursing staff is to work with a workload management method. In our view, a nursing workload management method needs to have the following characteristics: easy to interpret; limited additional registration; applicable to different types of hospital wards; supported by nurses; covering all activities of nurses; and suitable for prospective planning of nursing staff. At present, no such method is available. The research follows several steps to arrive at a workload management method for staff nurses. First, a list of patient characteristics relevant to care time will be composed by performing a Delphi study among staff nurses. Next, a time study of nurses' activities will be carried out. The two can be combined to estimate care time per patient group and the time nurses spend on non-patient-related activities. These two estimates can then be combined and compared with the available nursing resources: this gives an estimate of nurses' workload. The research will take place in an academic hospital in the Netherlands. Six surgical wards will be included, with a capacity of 15-30 beds each. The study protocol was submitted to the Medical Ethical Review Board of the University Medical Center (UMC) Utrecht and received a positive advice, protocol number 14-165/C. This method will be developed in close cooperation with staff nurses and ward management. The strong involvement of the end users will contribute to broader support of the results. The method we will develop may also be useful for planning purposes; this is a strong advantage compared with existing methods, which tend to focus on retrospective analysis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence
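
    A worked example of the comparison the protocol describes (care time per patient group plus non-patient-related time, set against the rostered nursing hours) might look like the following; all figures are invented for illustration and are not study data.

        # Illustrative arithmetic only (the figures are assumptions, not study data).
        care_time_per_patient = {"low": 1.5, "medium": 3.0, "high": 5.5}   # hours per shift
        census = {"low": 10, "medium": 8, "high": 3}                       # patients per group
        non_patient_hours = 12                                             # meetings, admin, handovers
        rostered_hours = 9 * 8                                             # 9 nurses x 8-hour shift

        demand = sum(care_time_per_patient[g] * n for g, n in census.items()) + non_patient_hours
        print(f"demand {demand:.1f} h vs capacity {rostered_hours} h "
              f"-> workload {demand / rostered_hours:.0%}")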

  17. Subjective workload and individual differences in information processing abilities

    Science.gov (United States)

    Damos, D. L.

    1984-01-01

    This paper describes several experiments examining the source of individual differences in the experience of mental workload. Three sources of such differences were examined: information processing abilities, timesharing abilities, and personality traits/behavior patterns. On the whole, there was little evidence that individual differences in information processing abilities or timesharing abilities are related to perceived differences in mental workload. However, individuals with strong Type A coronary prone behavior patterns differed in both single- and multiple-task performance from individuals who showed little evidence of such a pattern. Additionally, individuals with a strong Type A pattern showed some dissociation between objective performance and the experience of mental workload.

  18. The impact of cloud inhomogeneities on the Earth radiation budget: the 14 October 1989 I.C.E. convective cloud case study

    Directory of Open Access Journals (Sweden)

    F. Parol

    1994-01-01

    Full Text Available Through their multiple interactions with radiation, clouds have an important impact on the climate. Nonetheless, the simulation of clouds in climate models is still coarse. The present evolution of modeling tends to a more realistic representation of the liquid water content; thus the problem of its subgrid scale distribution is crucial. For a convective cloud field observed during ICE 89, Landsat TM data (resolution: 30 m) have been analyzed in order to quantify the respective influences of both the horizontal distribution of liquid water content and cloud shape on the Earth radiation budget. The cloud field was found to be rather well-represented by a stochastic distribution of hemi-ellipsoidal clouds whose horizontal aspect ratio is close to 2 and whose vertical aspect ratio decreases as the cloud cell area increases. For that particular cloud field, neglecting the influence of the cloud shape leads to an over-estimate of the outgoing longwave flux; in the shortwave, it leads to an over-estimate of the reflected flux for high solar elevations but strongly depends on cloud cell orientations for low elevations. On the other hand, neglecting the influence of cloud size distribution leads to systematic over-estimate of their impact on the shortwave radiation whereas the effect is close to zero in the thermal range. The overall effect of the heterogeneities is estimated to be of the order of 10 W m-2 for the conditions of that Landsat picture (solar zenith angle 65°, cloud cover 70%); it might reach 40 W m-2 for an overhead sun and overcast cloud conditions.

  19. The impact of cloud inhomogeneities on the Earth radiation budget: the 14 October 1989 I.C.E. convective cloud case study

    Directory of Open Access Journals (Sweden)

    F. Parol

    Full Text Available Through their multiple interactions with radiation, clouds have an important impact on the climate. Nonetheless, the simulation of clouds in climate models is still coarse. The present evolution of modeling tends to a more realistic representation of the liquid water content; thus the problem of its subgrid scale distribution is crucial. For a convective cloud field observed during ICE 89, Landsat TM data (resolution: 30 m) have been analyzed in order to quantify the respective influences of both the horizontal distribution of liquid water content and cloud shape on the Earth radiation budget. The cloud field was found to be rather well-represented by a stochastic distribution of hemi-ellipsoidal clouds whose horizontal aspect ratio is close to 2 and whose vertical aspect ratio decreases as the cloud cell area increases. For that particular cloud field, neglecting the influence of the cloud shape leads to an over-estimate of the outgoing longwave flux; in the shortwave, it leads to an over-estimate of the reflected flux for high solar elevations but strongly depends on cloud cell orientations for low elevations. On the other hand, neglecting the influence of cloud size distribution leads to systematic over-estimate of their impact on the shortwave radiation whereas the effect is close to zero in the thermal range. The overall effect of the heterogeneities is estimated to be of the order of 10 W m-2 for the conditions of that Landsat picture (solar zenith angle 65°, cloud cover 70%); it might reach 40 W m-2 for an overhead sun and overcast cloud conditions.

  20. Investigation on the relationship between mental workload and musculoskeletal disorders among nursing staff

    Directory of Open Access Journals (Sweden)

    Yousef Mahmoudifar

    2018-01-01

    Full Text Available Aims: A high prevalence of work-related musculoskeletal disorders is one of the common complaints among nursing staff. A high level of workload is considered a serious problem and has been identified as a stressor in nursing. This study intends to determine the relationship between musculoskeletal disorders and mental workload in nursing personnel working in the southern part of West Azerbaijan province, Iran, in 2017. Materials and Methods: In this descriptive-analytical study, 100 nurses working in West Azerbaijan hospitals were randomly selected. The Nordic and National Aeronautics and Space Administration-Task Load Index (NASA-TLX) workload questionnaires were used simultaneously as data collection tools. Data analysis was carried out using SPSS, analysis of variance, multiple linear regression, and Pearson's correlation coefficient. Results: The results suggest that the most frequent musculoskeletal complaints are associated with the back area. Investigation of the six subscales of mental workload indicates that each of the six workload scales was at the high-risk level, and the average total workload was 72.45 ± 19.45, which confirms a high-risk level. Pearson's correlation coefficient also indicates that the mental workload elements have a significant relationship with musculoskeletal disorders (P < 0.05). Conclusion: The results suggest there is a relationship between musculoskeletal disorders and mental workload, and that the majority of personnel had a mental workload at the high-risk level. The best management approach to mitigate the risk of musculoskeletal disorders arising from mental workload is therefore managerial control measures such as staff training, job rotation, and time management.

  1. Neutron beam irradiation study of workload dependence of SER in a microprocessor

    Energy Technology Data Exchange (ETDEWEB)

    Michalak, Sarah E [Los Alamos National Laboratory]; Graves, Todd L [Los Alamos National Laboratory]; Hong, Ted [STANFORD]; Ackaret, Jerry [IBM]; Rao, Sonny [IBM]; Mitra, Subhasish [STANFORD]; Sanda, Pia [IBM]

    2009-01-01

    It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions, from idle (performing no operations) to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure (MTFIF, defined in Section II) may differ for some of the applications.

  2. Impact of deforestation in the Amazon basin on cloud climatology.

    Science.gov (United States)

    Wang, Jingfeng; Chagnon, Frédéric J F; Williams, Earle R; Betts, Alan K; Renno, Nilton O; Machado, Luiz A T; Bisht, Gautam; Knox, Ryan; Bras, Rafael L

    2009-03-10

    Shallow clouds are prone to appear over deforested surfaces whereas deep clouds, much less frequent than shallow clouds, favor forested surfaces. Simultaneous atmospheric soundings at forest and pasture sites during the Rondonian Boundary Layer Experiment (RBLE-3) elucidate the physical mechanisms responsible for the observed correlation between clouds and land cover. We demonstrate that the atmospheric boundary layer over the forested areas is more unstable and characterized by larger values of the convective available potential energy (CAPE) due to greater humidity than that which is found over the deforested area. The shallow convection over the deforested areas is relatively more active than the deep convection over the forested areas. This greater activity results from a stronger lifting mechanism caused by mesoscale circulations driven by deforestation-induced heterogeneities in land cover.

  3. Nursing workload for cancer patients under palliative care

    OpenAIRE

    Fuly, Patrícia dos Santos Claro; Pires, Livia Márcia Vidal; Souza, Claudia Quinto Santos de; Oliveira, Beatriz Guitton Renaud Baptista de; Padilha, Katia Grillo

    2016-01-01

    Abstract OBJECTIVE To verify the nursing workload required by cancer patients undergoing palliative care and possible associations between the demographic and clinical characteristics of the patients and the nursing workload. METHOD This is a quantitative, cross-sectional, prospective study developed in the Connective Bone Tissue (TOC) clinics of Unit II of the Brazilian National Cancer Institute José Alencar Gomes da Silva with patients undergoing palliative care. RESULTS Analysis of 197 ...

  4. Mental workload measurement for emergency operating procedures in digital nuclear power plants.

    Science.gov (United States)

    Gao, Qin; Wang, Yang; Song, Fei; Li, Zhizhong; Dong, Xiaolu

    2013-01-01

    Mental workload is a major consideration for the design of emergency operating procedures (EOPs) in nuclear power plants. Continuous and objective measures are desired. This paper compares seven mental workload measurement methods (pupil size, blink rate, blink duration, heart rate variability, parasympathetic/sympathetic ratio, total power, and a GOMS-KLM-based workload index, built on the Goals, Operators, Methods, and Selection rules model combined with the Keystroke Level Model) with regard to sensitivity, validity and intrusiveness. Eighteen participants performed two computerised EOPs of different complexity levels, and mental workload measures were collected during the experiment. The results show that the blink rate is sensitive to both the difference in the overall task complexity and changes in peak complexity within EOPs, that the error rate is sensitive to the level of arousal and correlates with the step error rate, and that blink duration increases over the task period in both low- and high-complexity EOPs. Cardiac measures were able to distinguish tasks with different overall complexity. The intrusiveness of the physiological instruments is acceptable. Finally, the six physiological measures were integrated using the group method of data handling to predict perceived overall mental workload. The study compared seven measures for evaluating mental workload during emergency operating procedures in nuclear power plants. An experiment with simulated procedures was carried out, and the results show that eye response measures are useful for assessing temporal changes in workload whereas cardiac measures are useful for evaluating the overall workload.

  5. Identity based Encryption and Biometric Authentication Scheme for Secure Data Access in Cloud Computing

    DEFF Research Database (Denmark)

    Cheng, Hongbing; Rong, Chunming; Tan, Zheng-Hua

    2012-01-01

    Cloud computing will be a main information infrastructure in the future; it consists of many large datacenters which are usually geographically distributed and heterogeneous. How to design secure data access for a cloud computing platform is a big challenge. In this paper, we propose a secure data access scheme based on identity-based encryption and biometric authentication for cloud computing. Firstly, we describe the security concerns of cloud computing and then propose an integrated data access scheme for cloud computing; the procedure of the proposed scheme includes parameter setup, key distribution, feature template creation, cloud data processing and secure data access control. Finally, we compare the proposed scheme with other schemes through comprehensive analysis and simulation. The results show that the proposed data access scheme is feasible and secure for cloud computing.

  6. The effect of inclement weather on trauma orthopaedic workload.

    LENUS (Irish Health Repository)

    Cashman, J P

    2012-01-31

    BACKGROUND: Climate change models predict increasing frequency of extreme weather. One of the challenges hospitals face is how to make sure they have adequate staffing at various times of the year. AIMS: The aim of this study was to examine the effect of this severe inclement weather on hospital admissions, operative workload and cost in the Irish setting. We hypothesised that there is a direct relationship between cold weather and workload in a regional orthopaedic trauma unit. METHODS: Trauma orthopaedic workload in a regional trauma unit was examined over 2 months between December 2009 and January 2010. This corresponded with a period of severe inclement weather. RESULTS: We identified a direct correlation between the drop in temperature and increase in workload, with a corresponding increase in demand on resources. CONCLUSIONS: Significant cost savings could be made if these injuries were prevented. While the information contained in this study is important in the context of resource planning and staffing of hospital trauma units, it also highlights the vulnerability of the Irish population to wintery weather.

  7. Hybrid resource provisioning for clouds

    International Nuclear Information System (INIS)

    Rahman, Mahfuzur; Graham, Peter

    2012-01-01

    Flexible resource provisioning, the assignment of virtual machines (VMs) to physical machines, is a key requirement for cloud computing. To achieve 'provisioning elasticity', the cloud needs to manage its available resources on demand. A priori, static VM provisioning introduces no runtime overhead but fails to deal with unanticipated changes in resource demands. Dynamic provisioning addresses this problem but introduces runtime overhead. To reduce VM management overhead so more useful work can be done, and to also avoid sub-optimal provisioning, we propose a hybrid approach that combines static and dynamic provisioning. The idea is to adapt a good initial static placement of VMs in response to evolving load characteristics, using live migration, as long as the overhead of doing so is low and the effectiveness is high. When this is no longer so, we trigger a revised static placement. (Thus, we are essentially applying local multi-objective optimization to tune a global optimization with reduced overhead.) This approach requires a complicated migration decision algorithm based on current and predicted future workloads, power consumption and memory usage in the host machines, as well as network burst characteristics for the various possible VM multiplexings (combinations of VMs on a host). A further challenge is to identify those characteristics of the dynamic provisioning that should trigger static re-provisioning.
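
    As a rough illustration of the kind of migration decision described above (weighing predicted workload, memory and network pressure against migration overhead, and falling back to a revised static placement when live migration no longer pays off), the following sketch uses invented weights and thresholds; it is not the authors' decision algorithm.

        # Hedged sketch of a per-host migration decision; weights and thresholds are assumptions.
        def decide(predicted_load, mem_used, net_burst, migration_cost,
                   migrate_gain=0.15, overload_limit=0.30):
            """All inputs are 0-1 fractions; migration_cost is a 0-1 overhead estimate."""
            # crude pressure estimate from predicted CPU load, memory usage and network bursts
            pressure = (max(0.0, predicted_load - 0.75)
                        + max(0.0, mem_used - 0.85)
                        + 0.2 * net_burst)
            if pressure - migration_cost > migrate_gain:
                return "live-migrate a VM off this host"
            if pressure > overload_limit:                # migration overhead no longer pays off
                return "trigger revised static placement"
            return "keep current placement"

        print(decide(0.95, 0.90, 0.3, migration_cost=0.10))   # cheap migration: live-migrate
        print(decide(0.95, 0.90, 0.3, migration_cost=0.25))   # costly migration: re-plan statically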

  8. Cognitive Workload and Psychophysiological Parameters During Multitask Activity in Helicopter Pilots.

    Science.gov (United States)

    Gaetan, Sophie; Dousset, Erick; Marqueste, Tanguy; Bringoux, Lionel; Bourdin, Christophe; Vercher, Jean-Louis; Besson, Patricia

    2015-12-01

    Helicopter pilots are involved in a complex multitask activity, implying overuse of cognitive resources, which may result in piloting task impairment or in decision-making failure. Studies usually investigate this phenomenon in well-controlled, poorly ecological situations by focusing on the correlation between physiological values and either cognitive workload or emotional state. This study aimed at jointly exploring the workload induced by a realistic simulated helicopter flight mission and the emotional state, as well as physiological markers. The experiment took place in a helicopter full-flight dynamic simulator. Six participants had to fly two missions. Workload level, skin conductance, RMS-EMG, and emotional state were assessed. Joint analysis of the psychological and physiological parameters associated with workload estimation revealed particular dynamics in each of three profiles. 1) Expert pilots showed a slight increase of measured physiological parameters associated with the increase in difficulty level. Workload estimates never reached the highest level, and the emotional state for this profile only referred to positive emotions with low emotional intensity. 2) Non-Expert pilots showed increasing physiological values as the perceived workload increased. However, their emotional state referred to either positive or negative emotions, with a greater variability in emotional intensity. 3) Intermediate pilots were similar to Expert pilots regarding emotional states and similar to Non-Expert pilots regarding physiological patterns. Overall, the high interindividual variability of these results highlights the complex link between physiological and psychological parameters and workload, and questions whether physiology alone could predict a pilot's inability to make the right decision at the right time.

  9. A Matchmaking Strategy Of Mixed Resource On Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Wisam Elshareef

    2015-08-01

    Full Text Available Today cloud computing has become a key technology for the online allotment of computing resources and online storage of user data at a lower cost, where computing resources are available all the time over the Internet on a pay-per-use basis. Recently there is a growing need for resource management strategies in a cloud computing environment that encompass both end-user satisfaction and a high job submission throughput with appropriate scheduling. One of the major and essential issues in resource management is allocating incoming tasks to suitable virtual machines (matchmaking). The main objective of this paper is to propose a matchmaking strategy between the incoming requests and the various resources in the cloud environment to satisfy the requirements of users and to load balance the workload on resources. Load balancing is an important aspect of resource management in a cloud computing environment. This paper therefore proposes a dynamic weight active monitor (DWAM) load balancing algorithm, which allocates incoming requests on the fly to all available virtual machines in an efficient manner in order to achieve better performance parameters such as response time, processing time and resource utilization. The feasibility of the proposed algorithm is analyzed using the CloudSim simulator, which demonstrates the superiority of the proposed DWAM algorithm over its counterparts in the literature. Simulation results demonstrate that the proposed algorithm dramatically improves response time, data processing time and resource utilization compared with the Active Monitor and VM-assign algorithms.
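
    The DWAM algorithm itself is not spelled out in the abstract, so the snippet below only sketches the general idea of a dynamically weighted "send the request to the VM with the most spare weighted capacity" step; the VM names, capacities and loads are invented, and this is an assumption about the approach rather than the published algorithm.

        # Generic dynamically weighted VM selection (an assumption about the general idea,
        # not the published DWAM algorithm).
        vms = [   # name, capacity (MIPS), currently assigned load (MIPS)
            {"name": "vm-1", "capacity": 1000, "load": 700},
            {"name": "vm-2", "capacity": 2000, "load": 900},
            {"name": "vm-3", "capacity": 1500, "load": 1400},
        ]

        def assign(request_mips):
            # weight = fraction of spare capacity, recomputed on the fly by the monitor
            vm = max(vms, key=lambda v: (v["capacity"] - v["load"]) / v["capacity"])
            vm["load"] += request_mips
            return vm["name"]

        for req in (100, 300, 250):
            print(req, "->", assign(req))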

  10. Design and Implementation of Cloud Platform for Intelligent Logistics in the Trend of Intellectualization

    Institute of Scientific and Technical Information of China (English)

    Mengke Yang; Movahedipour Mahmood; Xiaoguang Zhou; Salam Shafaq; Latif Zahid

    2017-01-01

    Intellectualization has become a new trend for the telecom industry, driven by intelligent technology including cloud computing, big data, and the Internet of things. In order to satisfy the service demand of intelligent logistics, this paper designed an intelligent logistics platform containing the main applications such as e-commerce, self-service transceivers, big data analysis, path location and distribution optimization. The intelligent logistics service platform has been built on cloud computing to collect, store and handle multi-source heterogeneous mass data from sensors, RFID electronic tags, vehicle terminals and apps, so that open-access cloud services including distribution, positioning, navigation, scheduling and other data services can be provided for logistics distribution applications. The architecture of the intelligent logistics cloud platform, containing a software layer (SaaS), platform layer (PaaS) and infrastructure (IaaS), has then been constructed in accordance with the core technologies involved, such as high-concurrency processing, heterogeneous terminal data access, encapsulation and data mining. Therefore, the intelligent logistics cloud platform can be implemented as a service to accelerate the construction of a symbiotic win-win logistics ecological system and the benign development of the ICT industry in the trend of intellectualization in China.

  11. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    Science.gov (United States)

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements in contrast to the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior, and so ostensibly of visual attention distribution in the cockpit, for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.

  12. The associations between psychosocial workload and mental health complaints in different age groups.

    NARCIS (Netherlands)

    Zoer, I.; Ruitenburg, M.M.; Botje, D.; Frings-Dresen, M.H.W.; Sluiter, J.K.

    2011-01-01

    The objective of the present study was to explore associations between psychosocial workload and mental health complaints in different age groups. A questionnaire was sent to 2021 employees of a Dutch railway company. Six aspects of psychosocial workload (work pressure, mental workload, emotional

  13. The associations between psychosocial workload and mental health complaints in different age groups

    NARCIS (Netherlands)

    Zoer, I.; Ruitenburg, M. M.; Botje, D.; Frings-Dresen, M. H. W.; Sluiter, J. K.

    2011-01-01

    The objective of the present study was to explore associations between psychosocial workload and mental health complaints in different age groups. A questionnaire was sent to 2021 employees of a Dutch railway company. Six aspects of psychosocial workload (work pressure, mental workload, emotional

  14. The Use of the Dynamic Solution Space to Assess Air Traffic Controller Workload

    NARCIS (Netherlands)

    D'Engelbronner, J.G.; Mulder, M.; Van Paassen, M.M.; De Stigter, S.; Huisman, H.

    2010-01-01

    Air traffic capacity is mainly bound by air traffic controller workload. In order to effectively find solutions for this problem, off-line pre-experimental workload assessment methods are desirable. In order to better understand the workload associated with air traffic control, previous research

  15. Cloud Manufacturing Service Paradigm for Group Manufacturing Companies

    Directory of Open Access Journals (Sweden)

    Jingtao Zhou

    2014-07-01

    Full Text Available The continuous refinement of specialization requires that a group manufacturing company constantly focus on concentrating its core resources in its special sphere in order to form its core competitive advantage. However, the resources in an enterprise group are usually distributed across different subsidiary companies, which means they cannot be fully used, constraining the competitiveness and development of the enterprise. In response to the need for cloud manufacturing studies, systematic and detailed studies on a cloud manufacturing schema for group companies are carried out in this paper. A new hybrid private cloud paradigm is proposed to meet the requirements of aggregation and centralized use of heterogeneous resources and business units distributed across different subsidiary companies. After introducing the cloud manufacturing paradigm for the enterprise group and its architecture, this paper presents a derivation from the abstraction of the paradigm and framework to the application of a practical evaluative working mechanism. In short, the paradigm establishes an effective working mechanism to translate a collaborative business process composed of activities into a cloud manufacturing process composed of services, so that mature traditional project monitoring and scheduling technologies can be used in cloud manufacturing project management.

  16. ATLAS Global Shares Implementation in the PanDA Workload Management System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2018-01-01

    PanDA (Production and Distributed Analysis) is the workload management system for ATLAS across the Worldwide LHC Computing Grid. While analysis tasks are submitted to PanDA by over a thousand users following personal schedules (e.g. PhD or conference deadlines), production campaigns are scheduled by a central Physics Coordination group based on the organization’s calendar. The Physics Coordination group needs to allocate the amount of Grid resources dedicated to each activity, in order to manage sharing of CPU resources among various parallel campaigns and to make sure that results can be achieved in time for important deadlines. While dynamic and static shares on batch systems have been around for a long time, we are trying to move away from local resource partitioning and manage shares at a global level in the PanDA system. The global solution is not straightforward, given different requirements of the activities (number of cores, memory, I/O and CPU intensity), the heterogeneity of Grid resources (site/H...

  17. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  18. Workload assessment on foundry SME to enhance productivity using full time equivalent

    Directory of Open Access Journals (Sweden)

    Sari Amarria Dila

    2018-01-01

    Full Text Available The aluminium SME aims to increase its production volume to 300 wok units. The problem addressed is the workload analysis of operators on the wok production line of the foundry SME, together with measuring the cycle time of wok production and analysing the workload received by the operators when producing 300 woks, using the full time equivalent (FTE) method. This study measures the workload of workers in each division of the production process, with a total of 13 workers observed, and provides work-division recommendations based on the workloads examined. The analysis takes into account the percentage of workload effectiveness and workers' wages. The lathe division has an overload workload, while the printing, melting inspection, and packaging and transportation divisions fall into the normal workload category with good work-effectiveness percentages. The result recommends adding 2 workers to the overloaded lathe division, increasing the workforce from 13 to 15 workers. In the last stage, a simulation compares the existing and proposed work systems: the initial work system yields an average of 223 woks/day, whereas the proposed work system yields an average output of 291 woks.
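
    The full time equivalent idea used above boils down to dividing total workload time by the effective working time available to one worker; the figures below (cycle time, allowance factor, productive minutes per day) are assumptions for illustration, not the study's data.

        # Illustrative FTE (full time equivalent) calculation with assumed figures.
        cycle_time_min = 12          # minutes to produce one wok (assumed)
        target_units = 300           # daily production target
        allowance = 1.15             # personal/fatigue/delay allowance factor (assumed)
        effective_minutes = 420      # 7 productive hours per worker per day (assumed)

        workload_minutes = target_units * cycle_time_min * allowance
        fte = workload_minutes / effective_minutes
        print(f"required workers (FTE): {fte:.2f}")   # ~9.86, so round up to 10 workers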

  19. Nursing workload in a trauma intensive care unit

    Directory of Open Access Journals (Sweden)

    Luana Loppi Goulart

    2014-06-01

    Full Text Available Severely injured patients with multiple and conflicting injuries present nursing professionals in critical care units with care management challenges. The goal of the present study is to evaluate nursing workload and verify the correlation between workload and the APACHE II severity index. It is a descriptive study conducted in the Trauma Intensive Care Unit of a teaching hospital. We used the Nursing Activities Score and APACHE II as instruments. The sample comprised 32 patients, most of whom were male, young adults, presenting polytrauma, coming from the Reference Emergency Unit, in surgical treatment, and discharged from the ICU. The average obtained on the Nursing Activities Score instrument was 72% during hospitalization periods. The data displayed a moderate correlation between workload and patient severity. In other words, the higher the score, the higher the patient's mortality risk. doi: 10.5216/ree.v16i2.22922.

  20. Unsupervised classification of operator workload from brain signals

    Science.gov (United States)

    Schultze-Kraft, Matthias; Dähne, Sven; Gugler, Manfred; Curio, Gabriel; Blankertz, Benjamin

    2016-06-01

    Objective. In this study we aimed for the classification of operator workload as it is expected in many real-life workplace environments. We explored brain-signal based workload predictors that differ with respect to the level of label information required for training, including entirely unsupervised approaches. Approach. Subjects executed a task on a touch screen that required continuous effort of visual and motor processing with alternating difficulty. We first employed classical approaches for workload state classification that operate on the sensor space of EEG and compared those to the performance of three state-of-the-art spatial filtering methods: common spatial patterns (CSPs) analysis, which requires binary label information; source power co-modulation (SPoC) analysis, which uses the subjects’ error rate as a target function; and canonical SPoC (cSPoC) analysis, which solely makes use of cross-frequency power correlations induced by different states of workload and thus represents an unsupervised approach. Finally, we investigated the effects of fusing brain signals and peripheral physiological measures (PPMs) and examined the added value for improving classification performance. Main results. Mean classification accuracies of 94%, 92% and 82% were achieved with CSP, SPoC, cSPoC, respectively. These methods outperformed the approaches that did not use spatial filtering and they extracted physiologically plausible components. The performance of the unsupervised cSPoC is significantly increased by augmenting it with PPM features. Significance. Our analyses ensured that the signal sources used for classification were of cortical origin and not contaminated with artifacts. Our findings show that workload states can be successfully differentiated from brain signals, even when less and less information from the experimental paradigm is used, thus paving the way for real-world applications in which label information may be noisy or entirely unavailable.
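
    Of the spatial-filtering methods compared above, common spatial patterns (CSP) analysis is the supervised baseline; the sketch below shows a minimal CSP computation via a generalized eigendecomposition, applied to random surrogate data rather than EEG, and omitting the band-pass filtering and regularisation a real pipeline would need.

        # Minimal CSP sketch on surrogate data (not EEG); preprocessing is omitted.
        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_filters=2):
            """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
            def mean_cov(trials):
                return np.mean([np.cov(t) for t in trials], axis=0)
            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            # generalized eigenvalue problem: ca w = lambda (ca + cb) w
            vals, vecs = eigh(ca, ca + cb)
            order = np.argsort(vals)
            picks = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
            return vecs[:, picks].T                     # (n_filters, n_channels) spatial filters

        rng = np.random.default_rng(1)
        high = rng.standard_normal((20, 8, 250))         # surrogate "high workload" trials
        low = 1.5 * rng.standard_normal((20, 8, 250))    # surrogate "low workload" trials
        print(csp_filters(high, low).shape)              # -> (2, 8)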

  1. Is aerobic workload positively related to ambulatory blood pressure?

    DEFF Research Database (Denmark)

    Korshøj, Mette; Clays, Els; Lidegaard, Mark

    2016-01-01

    Studies of the relationship between objectively measured relative aerobic workload and ambulatory blood pressure (ABP) are lacking. The aim was to explore the relationship between objectively measured relative aerobic workload and ABP. METHODS: A total of 116 cleaners aged 18-65 years were included after informed consent was obtained. A portable device (Spacelabs 90217) was mounted for 24-h measurements of ABP, and an Actiheart was mounted for 24-h heart rate measurements to calculate relative aerobic workload as a percentage of the relative heart rate reserve. A repeated-measure multi-adjusted mixed model was applied for analysis. RESULTS: A fully adjusted mixed model of measurements throughout the day showed significant positive relations between relative aerobic workload and ABP, including an increase of 0.30 ± 0.04 mmHg (95 % CI 0.22-0.38 mmHg) in diastolic ABP. Correlations between...

  2. Survey of Workload and Job Satisfaction Relationship in a Productive Company

    Directory of Open Access Journals (Sweden)

    M. Maghsoudipour

    2012-05-01

    Full Text Available Background and aims: Promotion of workers' health and safety is one of the main tasks of managers and planners. One of the important sciences that can assist managers in achieving this goal is ergonomics. This article presents the results of a workload and job satisfaction survey in a heavy metal components manufacturing company in Tehran, in 2010. Methods: This cross-sectional study was conducted by surveying all operational workers. Workload was surveyed with the NASA-TLX questionnaire, which contains six dimensions, and job satisfaction was evaluated with the short version of the Minnesota questionnaire. Results: The reliability of the job satisfaction questionnaire, assessed by Cronbach's alpha, was 0.91. Data analysis showed that the average job satisfaction score was 65, at a medium level, while the average workload score was 85.11, at a high level. Effort and physical load were the two dimensions with the highest workload scores. In addition, no statistically significant relation was observed between the total job satisfaction score and the workload score (p < 0.05). While the performance dimension showed a positive relationship with job satisfaction, frustration demonstrated a negative relationship with job satisfaction. Conclusion: In order to improve work conditions, administrative and technological controls should be implemented, and actions need to be taken to modify the workload dimensions, especially the two dimensions with the highest scores and the dimensions related to job satisfaction.
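
    For context, the overall NASA-TLX score is conventionally a weighted average of the six subscale ratings, with weights taken from 15 pairwise comparisons; the ratings and weights below are made up for illustration and are not data from this survey.

        # Sketch of the standard NASA-TLX weighted score with invented ratings and weights.
        ratings = {   # each subscale rated 0-100
            "mental": 70, "physical": 85, "temporal": 60,
            "performance": 40, "effort": 90, "frustration": 55,
        }
        weights = {   # times each subscale was chosen in the 15 pairwise comparisons
            "mental": 3, "physical": 4, "temporal": 2,
            "performance": 1, "effort": 4, "frustration": 1,
        }
        assert sum(weights.values()) == 15
        overall = sum(ratings[k] * weights[k] for k in ratings) / 15
        print(round(overall, 1))   # weighted workload score on a 0-100 scale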

  3. [Distribution and main influential factors of mental workload of middle school teachers in Nanchang City].

    Science.gov (United States)

    Xiao, Yuanmei; Li, Weijuan; Ren, Qingfeng; Ren, Xiaohui; Wang, Zhiming; Wang, Mianzhen; Lan, Yajia

    2015-01-01

    To investigate the distribution and main influential factors of mental workload of middle school teachers in Nanchang City. A total of 504 middle school teachers were sampled by random cluster sampling from middle schools in Nanchang City, and the mental workload level was assessed with the National Aeronautics and Space Administration-Task Load Index (NASA-TLX), which was verified in reliability and validity. The mental workload scores of middle school teachers in Nanchang were approximately normally distributed. The mental workload level of middle school teachers aged 31-35 years was the highest. For those no more than 35 years old, there was a positive correlation between mental workload and age (r = 0.146, P teachers with lower educational level seemed to have a higher mental workload (P teacher worked per day, the higher the mental workload was. Working hours per day was the most influential factor on mental workload among all influential factors (P teachers was closely related to age, educational level and work hours per day. Working hours per day was an important risk factor for mental workload. Reducing working hours per day, especially to no more than 8 hours per day, may be a significant and useful approach to alleviating the mental workload of middle school teachers in Nanchang City.

  4. Aerosol-cloud interactions in Arctic mixed-phase stratocumulus

    Science.gov (United States)

    Solomon, A.

    2017-12-01

    Reliable climate projections require realistic simulations of Arctic cloud feedbacks. Of particular importance is accurately simulating Arctic mixed-phase stratocumuli (AMPS), which are ubiquitous and play an important role in regional climate due to their impact on the surface energy budget and atmospheric boundary layer structure through cloud-driven turbulence, radiative forcing, and precipitation. AMPS are challenging to model due to uncertainties in ice microphysical processes that determine phase partitioning between ice and radiatively important cloud liquid water. Since temperatures in AMPS are too warm for homogeneous ice nucleation, ice must form through heterogeneous nucleation. In this presentation we discuss a relatively unexplored source of ice production: the recycling of ice nuclei (IN) in regions of ice subsaturation. AMPS frequently have ice-subsaturated air near the cloud-driven mixed-layer base where falling ice crystals can sublimate, leaving behind IN. This study provides an idealized framework to understand feedbacks between dynamics and microphysics that maintain phase partitioning in AMPS. In addition, the results of this study provide insight into the mechanisms and feedbacks that may maintain cloud ice in AMPS even when entrainment of IN at the mixed-layer boundaries is weak.

  5. The performance of workload control concepts in job shops : Improving the release method

    NARCIS (Netherlands)

    Land, MJ; Gaalman, GJC

    1998-01-01

    A specific class of production control concepts for job shops is based on the principles of workload control. Practitioners emphasise the importance of workload control. However, order release methods that reduce the workload on the shop floor show poor due date performance in job shop simulations.

  6. Empirical investigation of workloads of operators in advanced control rooms

    International Nuclear Information System (INIS)

    Kim, Yochan; Jung, Wondea; Kim, Seunghwan

    2014-01-01

    This paper compares the workloads of operators in a computer-based control room of an advanced power reactor (APR 1400) nuclear power plant to investigate the effects from the changes in the interfaces in the control room. The cognitive-communicative-operative activity framework was employed to evaluate the workloads of the operator's roles during emergency operations. The related data were obtained by analyzing the tasks written in the procedures and observing the speech and behaviors of the reserved operators in a full-scope dynamic simulator for an APR 1400. The data were analyzed using an F-test and a Duncan test. It was found that the workloads of the shift supervisors (SSs) were larger than other operators and the operative activities of the SSs increased owing to the computer-based procedure. From these findings, methods to reduce the workloads of the SSs that arise from the computer-based procedure are discussed. (author)

  7. Beating the tyranny of scale with a private cloud configured for Big Data

    Science.gov (United States)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks - and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, ranging from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment - ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high performance disk system, allows massively parallel processing without the necessity to shuffle data excessively - even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end

  8. Workloads in Australian emergency departments a descriptive study.

    Science.gov (United States)

    Lyneham, Joy; Cloughessy, Liz; Martin, Valmai

    2008-07-01

    This study aimed to identify the current workload of clinical nurses, managers and educators in Australian Emergency Departments according to the classification of the department. Additionally, the relationship of experienced to inexperienced clinical staff was examined. A descriptive research method was used, utilising a survey distributed to 394 Australian Emergency Departments with a 21% response rate. Nursing workloads were calculated and a ratio of nurses to patients was established. The ratios included nurse to patient, and management and educators to clinical staff. Additionally, the percentage of junior to senior clinical staff was calculated. Across all categories of emergency departments the mean nurse:patient ratios were 1:15 (am shift), 1:7 (pm shift) and 1:4 (night shift). During this period an average of 17.1% of attendances were admitted to hospital. There were 27 staff members for each manager and 23.3 clinical staff for each educator. The percentage of junior staff rostered ranged from 10% to 38%. Emergency nurses cannot work under such pressure, as it may compromise the care given to patients and consequently have a negative effect on the nurse personally. However, emergency nurses are dynamically adjusting to the workload. Such conditions as described in this study could give rise to burnout and attrition of experienced emergency nurses as they cannot resolve the conflict between workload and providing quality nursing care.

  9. Now And Next Generation Sequencing Techniques: Future of Sequence Analysis using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Radhe Shyam Thakur

    2012-12-01

    Full Text Available Advancements in sequencing techniques have resulted in huge volumes of sequence data being produced at a much faster rate. It is becoming cumbersome for datacenters to maintain these databases. Data mining and sequence analysis approaches need to analyze the databases several times to reach any efficient conclusion. To cope with this burden on computer resources and to reach effective conclusions quickly, virtualization of resources and computation on a pay-as-you-go basis was introduced and termed cloud computing. The datacenter’s hardware and software are collectively known as the cloud, which, when available publicly, is termed a public cloud. The datacenter’s resources are provided in a virtual mode to clients via a service provider such as Amazon, Google or Joyent, which charges in a pay-as-you-go manner. The workload is shifted to the provider, which maintains the required hardware and software upgrades and manages them in the virtual mode. Essentially, a virtual environment is created according to the user's needs by obtaining access to the datacenter via the internet; the task is performed and the environment is deleted after the task is over. In this discussion, we focus on the basics of cloud computing, the prerequisites and the overall working of clouds. Furthermore, the applications of cloud computing in biological systems are briefly discussed, especially in comparative genomics, genome informatics and SNP detection, with reference to traditional workflows.

  10. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Full Text Available Cloud computing provides a framework for seamless access to resources through the network. Access to resources is quantified through SLAs between service providers and users. Service providers try to best exploit their resources and reduce their idle times. Growing energy concerns make matters even more difficult for service providers. Users’ requests are served by allocating user tasks to resources in Cloud and Grid environments through scheduling algorithms and planning algorithms. With only a few planning algorithms in existence, planning and scheduling algorithms are rarely differentiated. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters such as utilization ratio, makespan, speed-up and energy consumption. RHEFT’s consistent performance against HEFT and DHEFT has established the robustness of the hybrid planning algorithm through rigorous simulations.
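    RHEFT's Interior Scheduling matching step is not detailed in this abstract; the sketch below shows only the upward-rank prioritization that HEFT-family list schedulers (the baselines compared against) are built on, using a hypothetical four-task DAG with assumed cost values.

```python
def upward_ranks(comp_cost, succ, comm_cost):
    """HEFT-style upward rank: a task's average computation cost plus the maximum,
    over its successors, of (communication cost + successor rank). Tasks are then
    scheduled in decreasing rank order."""
    ranks = {}

    def rank(t):
        if t not in ranks:
            ranks[t] = comp_cost[t] + max(
                (comm_cost.get((t, s), 0) + rank(s) for s in succ.get(t, [])),
                default=0.0,
            )
        return ranks[t]

    for t in comp_cost:
        rank(t)
    return ranks

# Hypothetical four-task DAG: t1 -> {t2, t3} -> t4 (costs are averages across VMs).
comp = {"t1": 10, "t2": 8, "t3": 12, "t4": 6}
succ = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"]}
comm = {("t1", "t2"): 4, ("t1", "t3"): 3, ("t2", "t4"): 2, ("t3", "t4"): 5}
ranks = upward_ranks(comp, succ, comm)
print(sorted(comp, key=ranks.get, reverse=True))  # ['t1', 't3', 't2', 't4']
```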

  11. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai; Shae, Zon Yin; Zhang, Xiangliang; Jamjoom, Hani T.; Fong, Liana

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies

  12. Single-Pilot Workload Management

    Science.gov (United States)

    Rogers, Jason; Williams, Kevin; Hackworth, Carla; Burian, Barbara; Pruchnicki, Shawn; Christopher, Bonny; Drechsler, Gena; Silverman, Evan; Runnels, Barry; Mead, Andy

    2013-01-01

    Integrated glass cockpit systems place a heavy cognitive load on pilots (Burian & Dismukes, 2007). Researchers from the NASA Ames Flight Cognition Lab and the FAA Flight Deck Human Factors Lab examined task and workload management by single pilots. This poster describes pilot performance regarding programming a reroute while at cruise and meeting a waypoint crossing restriction on the initial descent.

  13. The Management of Local Government Apparatus Resource Based on Job and Workload Analysis

    OpenAIRE

    Cahyasari, Erlita

    2016-01-01

    This paper focuses on job analysis as the basis of the human resource system. It describes the job and workload, along with the obstacles likely to be observed during the work, and supports all human resource management activities in the organization. Workload analysis is a process for determining the amount of time required to finish a specific job. The result of job and workload analysis aims to determine the number of employees needed to correspond to some specific workload and respon...

  14. Designing workload analysis questionnaire to evaluate needs of employees

    Science.gov (United States)

    Astuti, Rahmaniyah Dwi; Navi, Muhammad Abdu Haq

    2018-02-01

    Incompatibility between workload and work capacity is one of the main obstacles to optimal results. In an office setting, workload is difficult to determine because the work is non-repetitive. Employees work based on targets set for a working period, and at the end of the period an evaluation of employee performance is usually carried out to assess staffing needs. The aim of this study is to design a workload questionnaire tool that evaluates the efficiency level of a position as an indicator for determining staffing needs, based on the Indonesian State Employment Agency Regulation on workload analysis. The research was applied to State-Owned Enterprise PT. X, with 3 positions selected as a pilot project. Position A is held by 2 employees, position B by 7 employees, and position C by 6 employees. From the calculation results, position A has an efficiency level of 1.33 or "very good", position B has an efficiency level of 1.71 or "enough", and position C has an efficiency level of 1.03 or "very good". The application of this tool suggests that the number of employees needed for position A is 3 people, for position B 5 people, and for position C 6 people. The difference between the current number of employees and the calculated need was then analyzed by interviewing the employees to gather more data on personal perceptions. It can be concluded that this workload evaluation tool can be used as an alternative solution for evaluating staffing needs in offices.
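    A minimal sketch of the kind of staffing arithmetic such a workload-analysis tool performs, assuming the common formulation in which required headcount is total task time divided by one employee's effective working time; the regulation's exact formulas, rating thresholds and the figures below are illustrative assumptions, not the study's data.

```python
import math

def staffing_from_workload(task_minutes, effective_minutes_per_employee, headcount):
    """Illustrative workload-analysis arithmetic (an assumed formulation, not the
    regulation's exact formulas): employees needed = total task time / one
    employee's effective time; efficiency = workload per rostered employee."""
    workload = sum(task_minutes)
    needed = math.ceil(workload / effective_minutes_per_employee)
    efficiency = workload / (headcount * effective_minutes_per_employee)
    return needed, round(efficiency, 2)

# Hypothetical figures: 360,000 task-minutes/year, ~72,000 effective
# minutes per employee per year, 6 current incumbents.
print(staffing_from_workload([200_000, 160_000], 72_000, 6))  # (5, 0.83)
```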

  15. Nursing Workload and the Changing Health Care Environment: A Review of the Literature

    Science.gov (United States)

    Neill, Denise

    2011-01-01

    Changes in the health care environment have impacted nursing workload, quality of care, and patient safety. Traditional nursing workload measures do not guarantee efficiency, nor do they adequately capture the complexity of nursing workload. Review of the literature indicates nurses perceive the quality of their work has diminished. Research has…

  16. Understanding the effect of workload on automation use for younger and older adults.

    Science.gov (United States)

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2011-12-01

    This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults' performance was negatively affected by workload, but their compliance and reliance were unaffected. Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training.

  17. NASA TLX: software for assessing subjective mental workload.

    Science.gov (United States)

    Cao, Alex; Chintamani, Keshav K; Pandya, Abhilash K; Ellis, R Darin

    2009-02-01

    The NASA Task Load Index (TLX) is a popular technique for measuring subjective mental workload. It relies on a multidimensional construct to derive an overall workload score based on a weighted average of ratings on six subscales: mental demand, physical demand, temporal demand, performance, effort, and frustration level. A program for implementing a computerized version of the NASA TLX is described. The software version assists in simplifying collection, postprocessing, and storage of raw data. The program collects raw data from the subject and calculates the weighted (or unweighted) workload score, which is output to a text file. The program can also be tailored to a specific experiment using a simple input text file, if desired. The program was designed in Visual Studio 2005 and is capable of running on a Pocket PC with Windows CE or on a PC with Windows 2000 or higher. The NASA TLX program is available for free download.
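    A minimal sketch of the weighted-average scoring the NASA TLX uses, assuming the standard procedure of six 0-100 subscale ratings and 15 pairwise comparisons; this is the published scoring arithmetic, not the program's actual source code (which was written in Visual Studio 2005), and the example ratings are made up.

```python
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings, pairwise_choices=None):
    """Overall NASA-TLX workload. `ratings` maps each subscale to a 0-100 rating;
    `pairwise_choices` lists the subscale picked in each of the 15 pairwise
    comparisons. Without weights, the unweighted (raw TLX) mean is returned."""
    if pairwise_choices is None:
        return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)
    weights = {s: pairwise_choices.count(s) for s in SUBSCALES}   # each 0..5, sum 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

ratings = {"mental": 70, "physical": 30, "temporal": 60,
           "performance": 40, "effort": 65, "frustration": 25}
picks = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3 +
         ["performance"] * 2 + ["frustration"] * 1)
print(round(tlx_score(ratings), 1), round(tlx_score(ratings, picks), 1))  # 48.3 59.7
```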

  18. EEG BASED COGNITIVE WORKLOAD CLASSIFICATION DURING NASA MATB-II MULTITASKING

    Directory of Open Access Journals (Sweden)

    Sushil Chandra

    2015-06-01

    Full Text Available The objective of this experiment was to determine the best possible input EEG feature for classification of workload while designing load-balancing logic for an automated operator. The input features compared in this study consisted of spectral features of the electroencephalogram, objective scoring and subjective scoring. The method was used to identify the best EEG feature as an input to neural network classifiers for workload classification, to identify the channels that could provide classification with the highest accuracy, and to identify the EEG feature that could discriminate among workload levels without adding any classifiers. The results showed that the Engagement Index is the best feature for neural network classification.

  19. Computational Hydrodynamics: How Portable and Scalable Are Heterogeneous Programming Paradigms?

    DEFF Research Database (Denmark)

    Pawlak, Wojciech; Glimberg, Stefan Lemvig; Engsig-Karup, Allan Peter

    New many-core era applications at the interface of mathematics and computer science adopt modern parallel programming paradigms and expose parallelism through proper algorithms. We present new performance results for a novel massively parallel free surface wave model suitable for advanced......-device system sizes from desktops to large HPC systems such as superclusters and in the cloud utilizing heterogeneous devices like multi-core CPUs, GPUs, and Xeon Phi coprocessors. The numerical efficiency is evaluated on heterogeneous devices like multi-core CPUs, GPUs and Xeon Phi coprocessors to test...

  20. Mental Workload and Its Determinants among Nurses in One Hospital in Kermanshah City, Iran

    Directory of Open Access Journals (Sweden)

    Ehsan Bakhshi

    2017-03-01

    Full Text Available Background & Aims: Mental workload is one of the factors influencing the behavior, performance and efficiency of nurses in the workplace. Diverse factors can affect the mental workload level. The present study was performed with the aim of surveying mental workload and its determinants among nurses in one hospital in Kermanshah City. Materials and methods: In this cross-sectional study, 203 nurses from 5 wards (infants, emergency, surgery, internal medicine and ICU) were selected randomly and surveyed. Data collection tools were a demographic questionnaire and the NASA-TLX. Statistical analysis was conducted using independent-sample t-tests, ANOVA and Pearson correlation coefficients in SPSS 19. Results: The mean and standard deviation of overall mental workload were estimated as 69.73±15.26. Among the aspects of mental workload, effort, with an average score of 70.96, was the highest, and frustration and disappointment, with an average of 44.93, was the lowest. There were significant relationships between the physical aspect of workload and age, type of shift work, number of shifts and type of employment; between the temporal aspect of workload and BMI, type of employment and work experience; and between the effort aspect and BMI (p-value ≤ 0.05). Conclusion: Given the different amounts of mental workload in the studied hospital wards, relocating nurses between wards could improve the situation, and increasing the number of nurses could decrease mental workload.

  1. Energy Dependent Divisible Load Theory for Wireless Sensor Network Workload Allocation

    Directory of Open Access Journals (Sweden)

    Haiyan Shi

    2012-01-01

    Full Text Available The wireless sensor network (WSN), consisting of a large number of microsensors with wireless communication abilities, has become an indispensable tool for use in monitoring and surveillance applications. Despite its advantages in deployment flexibility and fault tolerance, the WSN is vulnerable to failures due to the depletion of limited onboard battery energy. A major portion of energy consumption is caused by the transmission of sensed results to the master processor. The amount of energy used, in fact, is related to both the duration of sensing and data transmission. Hence, in order to extend the operation lifespan of the WSN, a proper allocation of the sensing workload among the sensors is necessary. An assignment scheme is here formulated on the basis of the divisible load theory, namely, the energy dependent divisible load theory (EDDLT), for sensing workload allocations. In particular, the amounts of residual energy onboard the sensors are considered while deciding the workload assigned to each sensor. Sensors with smaller amounts of residual energy are assigned lighter workloads, thus allowing for reduced energy consumption and an extended sensor lifespan. Simulation studies are conducted and the results illustrate the effectiveness of the proposed workload allocation method.
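    The paper derives its allocation from divisible-load timing and energy equations, which are not reproduced here; as a simplified illustration of the stated idea (nodes with less residual energy receive lighter assignments), one could bias a proportional split by residual energy, as sketched below with made-up numbers.

```python
def allocate_workload(total_load, residual_energy):
    """Split a divisible sensing workload across sensors in proportion to their
    residual battery energy, so nearly depleted nodes receive lighter assignments.
    (Illustrative proportional rule only; EDDLT derives its shares from timing
    and energy-consumption equations.)"""
    total_energy = sum(residual_energy)
    return [total_load * e / total_energy for e in residual_energy]

# Three sensors with 5 J, 3 J and 2 J of residual energy share 100 units of sensing.
print(allocate_workload(100.0, [5.0, 3.0, 2.0]))  # [50.0, 30.0, 20.0]
```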

  2. Power Optimization of Multimode Mobile Embedded Systems with Workload-Delay Dependency

    Directory of Open Access Journals (Sweden)

    Hoeseok Yang

    2016-01-01

    Full Text Available This paper proposes to take the relationship between delay and workload into account in the power optimization of microprocessors in mobile embedded systems. Since the components outside a device continuously change their values or properties, the workload to be handled by the systems becomes dynamic and variable. This variable workload is formulated as a staircase function of the delay taken at the previous iteration and applied to the power optimization of DVFS (dynamic voltage-frequency scaling). In doing so, a graph representation of all possible workload/mode changes during the lifetime of a device, the Workload Transition Graph (WTG), is proposed. Then, the power optimization problem is transformed into finding a cycle (closed walk) in the WTG which minimizes the average power consumption over it. Out of the obtained optimal cycle of the WTG, one can derive the optimal power management policy of the target device. It is shown that the proposed policy is valid for both continuous and discrete DVFS models. The effectiveness of the proposed power optimization policy is demonstrated with the simulation results of synthetic and real-life examples.
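    Minimizing average power over a closed walk in the WTG can be read as a minimum cost-to-time ratio cycle problem (total energy divided by total time around the cycle). The sketch below is a generic binary-search formulation with a Bellman-Ford negative-cycle test, offered as one plausible way to compute such a cycle rather than as the paper's algorithm; the edge attributes and the two-mode example graph are assumptions.

```python
def has_negative_cycle(nodes, edges, lam):
    """Bellman-Ford with a virtual super-source: is there a cycle whose total
    (energy - lam * time) is negative, i.e. whose average power is below lam?"""
    dist = {n: 0.0 for n in nodes}
    for _ in range(len(nodes) - 1):
        for u, v, energy, time in edges:
            w = energy - lam * time
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + (energy - lam * time) < dist[v] - 1e-12
               for u, v, energy, time in edges)

def min_average_power_cycle(nodes, edges, iterations=60):
    """Binary search on the achievable average power (total energy / total time
    around a cycle); edges are (from_mode, to_mode, energy, time) tuples."""
    lo, hi = 0.0, max(energy / time for _, _, energy, time in edges)
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if has_negative_cycle(nodes, edges, mid):
            hi = mid            # some cycle beats this average power
        else:
            lo = mid
    return hi

# Hypothetical two-mode WTG: stay in a mode or switch, with (energy, time) per edge.
modes = ["low", "high"]
transitions = [("low", "low", 2.0, 1.0), ("low", "high", 5.0, 1.0),
               ("high", "high", 8.0, 1.0), ("high", "low", 3.0, 1.0)]
print(round(min_average_power_cycle(modes, transitions), 3))  # ~2.0 (stay in "low")
```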

  3. Porting AMG2013 to Heterogeneous CPU+GPU Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Samfass, Philipp [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-26

    LLNL's future advanced technology system SIERRA will feature heterogeneous compute nodes that consist of IBM POWER9 CPUs and NVIDIA Volta GPUs. Conceptually, the motivation for such an architecture is quite straightforward: while GPUs are optimized for throughput on massively parallel workloads, CPUs strive to minimize latency for rather sequential operations. Yet, making optimal use of heterogeneous architectures raises new challenges for the development of scalable parallel software, e.g., with respect to work distribution. Porting LLNL's parallel numerical libraries to upcoming heterogeneous CPU+GPU architectures is therefore a critical factor for ensuring LLNL's future success in fulfilling its national mission. One of these libraries, called HYPRE, provides parallel solvers and preconditioners for large, sparse linear systems of equations. In the context of this internship project, I consider AMG2013, which is a proxy application for major parts of HYPRE that implements a benchmark for setting up and solving different systems of linear equations. In the following, I describe in detail how I ported multiple parts of AMG2013 to the GPU (Section 2) and present results for different experiments that demonstrate a successful parallel implementation on the heterogeneous machines surface and ray (Section 3). In Section 4, I give guidelines on how my code should be used. Finally, I conclude and give an outlook for future work (Section 5).

  4. Absolute magnitude estimation and relative judgement approaches to subjective workload assessment

    Science.gov (United States)

    Vidulich, Michael A.; Tsang, Pamela S.

    1987-01-01

    Two rating scale techniques employing an absolute magnitude estimation method were compared to a relative judgment method for assessing subjective workload. One of the absolute estimation techniques used was a unidimensional overall workload scale and the other was the multidimensional NASA-Task Load Index technique. Thomas Saaty's Analytic Hierarchy Process was the unidimensional relative judgment method used. These techniques were used to assess the subjective workload of various single- and dual-tracking conditions. The validity of the techniques was defined as their ability to detect the same phenomena observed in the tracking performance. Reliability was assessed by calculating test-retest correlations. Within the context of the experiment, the Saaty Analytic Hierarchy Process was found to be superior in validity and reliability. These findings suggest that the relative judgment method would be an effective addition to the currently available subjective workload assessment techniques.

  5. Integrating Containers in the CERN Private Cloud

    Science.gov (United States)

    Noel, Bertrand; Michelino, Davide; Velten, Mathieu; Rocha, Ricardo; Trigazis, Spyridon

    2017-10-01

    Containers remain a hot topic in computing, with new use cases and tools appearing every day. Basic functionality such as spawning containers seems to have settled, but topics like volume support or networking are still evolving. Solutions like Docker Swarm, Kubernetes or Mesos provide similar functionality but target different use cases, exposing distinct interfaces and APIs. The CERN private cloud is made of thousands of nodes and users, with many different use cases. A single solution for container deployment would not cover every one of them, and supporting multiple solutions involves repeating the same process multiple times for integration with authentication services, storage services or networking. In this paper we describe OpenStack Magnum as the solution to offer container management in the CERN cloud. We will cover its main functionality and some advanced use cases using Docker Swarm and Kubernetes, highlighting some relevant differences between the two. We will describe the most common use cases in HEP and how we integrated popular services like CVMFS or AFS in the most transparent way possible, along with some limitations found. Finally we will look into ongoing work on advanced scheduling for both Swarm and Kubernetes, support for running batch like workloads and integration of container networking technologies with the CERN infrastructure.

  6. CLOUD COMPUTING AND INTERNET OF THINGS FOR SMART CITY DEPLOYMENTS

    Directory of Open Access Journals (Sweden)

    GEORGE SUCIU

    2013-05-01

    Full Text Available Cloud Computing represents the new method of delivering hardware and software resources to the users, while the Internet of Things (IoT) is currently one of the most popular ICT paradigms. Both concepts can have a major impact on how we build smart and/or smarter cities. Cloud computing represents the delivery of hardware and software resources on-demand over the Internet as a Service. At the same time, the IoT concept envisions a new generation of devices (sensors, both virtual and physical) that are connected to the Internet and provide different services for value-added applications. In this paper we present our view on how to deploy Cloud computing and IoT for smart and/or smarter cities. We demonstrate that data gathered from heterogeneous and distributed IoT devices can be automatically managed, handled and reused with decentralized cloud services.

  7. The acute:chronic workload ratio in relation to injury risk in professional soccer.

    Science.gov (United States)

    Malone, Shane; Owen, Adam; Newton, Matt; Mendes, Bruno; Collins, Kieran D; Gabbett, Tim J

    2017-06-01

    To examine the association between combined sRPE measures and injury risk in elite professional soccer. Observational cohort study. Forty-eight professional soccer players (mean±SD age of 25.3±3.1 yr) from two elite European teams were involved within a one season study. Players completed a test of intermittent-aerobic capacity (Yo-YoIR1) to assess player's injury risk in relation to intermittent aerobic capacity. Weekly workload measures and time loss injuries were recorded during the entire period. Rolling weekly sums and week-to-week changes in workload were measured, allowing for the calculation of the acute:chronic workload ratio, which was calculated by dividing the acute (1-weekly) and chronic (4-weekly) workloads. All derived workload measures were modelled against injury data using logistic regression. Odds ratios (OR) were reported against a reference group. Players who exerted pre-season 1-weekly loads of ≥1500 to ≤2120AU were at significantly higher risk of injury compared to the reference group of ≤1500AU (OR=1.95, p=0.006). Players with increased intermittent-aerobic capacity were better able to tolerate increased 1-weekly absolute changes in training load than players with lower fitness levels (OR=4.52, p=0.011). Players who exerted in-season acute:chronic workload ratios of >1.00 to soccer players. A higher intermittent-aerobic capacity appears to offer greater injury protection when players are exposed to rapid changes in workload in elite soccer players. Moderate workloads, coupled with moderate-low to moderate-high acute:chronic workload ratios, appear to be protective for professional soccer players. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
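    The acute:chronic workload ratio itself is simple arithmetic on rolling weekly loads. Below is a minimal sketch, assuming the coupled convention in which the acute week is included in the 4-week chronic average (conventions vary between studies); the example loads are illustrative, not the study's data.

```python
def acute_chronic_ratio(weekly_loads):
    """Acute:chronic workload ratio for the most recent week: the acute (1-weekly)
    load divided by the chronic load, here the mean of the last four weekly loads
    (the coupled convention, which includes the acute week in the average)."""
    chronic = sum(weekly_loads[-4:]) / len(weekly_loads[-4:])
    return weekly_loads[-1] / chronic

# Four weeks of sRPE load in arbitrary units (AU): a sharp spike in the last week.
print(round(acute_chronic_ratio([1800, 1700, 1600, 2100]), 2))  # 1.17
```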

  8. Workload and cortisol levels in helicopter combat pilots during simulated flights

    Directory of Open Access Journals (Sweden)

    A. García-Mas

    2016-03-01

    Conclusions: Cortisol levels in saliva and workload behave as is usual in stress situations, and change inversely: workload increases at the end of the task, whereas the cortisol levels decrease after the simulated flight. Somatic anxiety decreases as the task is performed. In contrast, when the pilots are faced with new and demanding tasks, even if they fly this type of helicopter in different conditions, the workload increases toward the end of the task. From an applied point of view, these findings should impact the tactical, physical and mental training of such pilots.

  9. An Energy-Efficient Approach to Enhance Virtual Sensors Provisioning in Sensor Clouds Environments.

    Science.gov (United States)

    Lemos, Marcus Vinícius de S; Filho, Raimir Holanda; Rabêlo, Ricardo de Andrade L; de Carvalho, Carlos Giovanni N; Mendes, Douglas Lopes de S; Costa, Valney da Gama

    2018-02-26

    Virtual sensor provisioning is a central issue for sensor cloud middleware since it is responsible for selecting physical nodes, usually from Wireless Sensor Networks (WSNs) of different owners, to handle users' queries or applications. Recent works perform provisioning by clustering sensor nodes based on the correlation of measurements and then selecting as few nodes as possible to preserve WSN energy. However, such works consider only homogeneous nodes (same set of sensors). Therefore, those works are not entirely appropriate for sensor clouds, which in most cases comprise heterogeneous sensor nodes. In this paper, we propose ACxSIMv2, an approach to enhance the provisioning task by considering heterogeneous environments. Two main algorithms form ACxSIMv2. The first one, ACASIMv1, creates multi-dimensional clusters of sensor nodes, taking into account the correlations of measurements instead of the physical distance between nodes, as most works in the literature do. Then, the second algorithm, ACOSIMv2, based on an Ant Colony Optimization system, selects an optimal set of sensor nodes to respond to users' queries while attending to all parameters and preserving the overall energy consumption. Results from initial experiments show that the approach significantly reduces sensor cloud energy consumption compared to traditional works, providing a solution to be considered in sensor cloud scenarios.

  10. MEASURING WORKLOAD OF ICU NURSES WITH A QUESTIONNAIRE SURVEY: THE NASA TASK LOAD INDEX (TLX).

    Science.gov (United States)

    Hoonakker, Peter; Carayon, Pascale; Gurses, Ayse; Brown, Roger; McGuire, Kerry; Khunlertkit, Adjhaporn; Walker, James M

    2011-01-01

    High workload of nurses in Intensive Care Units (ICUs) has been identified as a major patient safety and worker stress problem. However, relatively little attention has been dedicated to the measurement of workload in healthcare. The objectives of this study are to describe and examine several methods to measure the workload of ICU nurses. We then focus on the measurement of ICU nurses' workload using a subjective rating instrument: the NASA TLX. We conducted secondary data analysis on data from two multi-site, cross-sectional questionnaire studies to examine several instruments to measure ICU nurses' workload. The combined database contains the data from 757 ICU nurses in 8 hospitals and 21 ICUs. Results show that the different methods to measure the workload of ICU nurses, such as patient-based and operator-based workload, are only moderately correlated, or not correlated at all. Results show further that among the operator-based instruments, the NASA TLX is the most reliable and valid questionnaire to measure workload and that the NASA TLX can be used in a healthcare setting. Managers of hospitals and ICUs can benefit from the results of this research as it provides benchmark data on workload experienced by nurses in a variety of ICUs.

  11. EDITORIAL: Aerosol cloud interactions—a challenge for measurements and modeling at the cutting edge of cloud climate interactions

    Science.gov (United States)

    Spichtinger, Peter; Cziczo, Daniel J.

    2008-04-01

    of water which have not yet been fully defined, for example cubic ice, are considered. The impact of natural aerosols on clouds, for example mineral dust, is also discussed, as well as other natural but highly sensitive effects such as the Wegener Bergeron Findeisen process. It is our belief that this focus issue represents a leap forward not only in reducing the uncertainty associated with the interaction of aerosols and clouds but also a new link between groups that must work together to continue progress in this important area of climate science. Focus on Aerosol Cloud Interactions. Contents (the articles below represent the first accepted contributions and further additions will appear in the near future):
    The global influence of dust mineralogical composition on heterogeneous ice nucleation in mixed-phase clouds (C Hoose, U Lohmann, R Erdin and I Tegen)
    Ice formation via deposition nucleation on mineral dust and organics: dependence of onset relative humidity on total particulate surface area (Zamin A Kanji, Octavian Florea and Jonathan P D Abbatt)
    The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol cloud interactions in multiscale modeling framework models: tracer transport results (William I Gustafson Jr, Larry K Berg, Richard C Easter and Steven J Ghan)
    Cloud effects from boreal forest fire smoke: evidence for ice nucleation from polarization lidar data and cloud model simulations (Kenneth Sassen and Vitaly I Khvorostyanov)
    The effect of organic coating on the heterogeneous ice nucleation efficiency of mineral dust aerosols (O Möhler, S Benz, H Saathoff, M Schnaiter, R Wagner, J Schneider, S Walter, V Ebert and S Wagner)
    Enhanced formation of cubic ice in aqueous organic acid droplets (Benjamin J Murray)
    Quantification of water uptake by soot particles (O B Popovicheva, N M Persiantseva, V Tishkova, N K Shonija and N A Zubareva)
    Meridional gradients of light absorbing carbon over northern Europe (D Baumgardner, G Kok, M Krämer and F Weidle)

  12. Level of Workload and Its Relationship with Job Burnout among Administrative Staff

    OpenAIRE

    MANSOUR ZIAEI; HAMED YARMOHAMMADI; MEISAM MORADI; MOHAMMAD KHANDAN

    2015-01-01

    Burnout syndrome is a response to prolonged occupational stress. Workload is one of the organizational risk factors of burnout. With regards to the topic, there are no data on administrative employees’ burnout and workload in Iran. This study seeks to determine the levels of job burnout and their relationships with workload among administrative members of staff. Two hundred and forty two administrative staff from Kermanshah University of Medical Sciences [Iran] volunteered to participate in t...

  13. ANALYSIS OF INPATIENT HOSPITAL STAFF MENTAL WORKLOAD BY MEANS OF DISCRETE-EVENT SIMULATION

    Science.gov (United States)

    2016-03-24

    AFIT-ENV-MS-16-M-166: Analysis of Inpatient Hospital Staff Mental Workload by Means of Discrete-Event Simulation, Erich W

  14. Understanding the Effect of Workload on Automation Use for Younger and Older Adults

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2018-01-01

    Objective This study examined how individuals, younger and older, interacted with an imperfect automated system. The impact of workload on performance and automation use was also investigated. Background Automation is used in situations characterized by varying levels of workload. As automated systems spread to domains such as transportation and the home, a diverse population of users will interact with automation. Research is needed to understand how different segments of the population use automation. Method Workload was systematically manipulated to create three levels (low, moderate, high) in a dual-task scenario in which participants interacted with a 70% reliable automated aid. Two experiments were conducted to assess automation use for younger and older adults. Results Both younger and older adults relied on the automation more than they complied with it. Among younger adults, high workload led to poorer performance and higher compliance, even when that compliance was detrimental. Older adults’ performance was negatively affected by workload, but their compliance and reliance were unaffected. Conclusion Younger and older adults were both able to use and double-check an imperfect automated system. Workload affected how younger adults complied with automation, particularly with regard to detecting automation false alarms. Older adults tended to comply and rely at fairly high rates overall, and this did not change with increased workload. Application Training programs for imperfect automated systems should vary workload and provide feedback about error types, and strategies for identifying errors. The ability to identify automation errors varies across individuals, thereby necessitating training. PMID:22235529

  15. Impact of Conflict Avoidance Responsibility Allocation on Pilot Workload in a Distributed Air Traffic Management System

    Science.gov (United States)

    Ligda, Sarah V.; Dao, Arik-Quang V.; Vu, Kim-Phuong; Strybel, Thomas Z.; Battiste, Vernol; Johnson, Walter W.

    2010-01-01

    Pilot workload was examined during simulated flights requiring flight deck-based merging and spacing while avoiding weather. Pilots used flight deck tools to avoid convective weather and space behind a lead aircraft during an arrival into Louisville International airport. Three conflict avoidance management concepts were studied: pilot, controller or automation primarily responsible. A modified Air Traffic Workload Input Technique (ATWIT) metric showed highest workload during the approach phase of flight and lowest during the en-route phase of flight (before deviating for weather). In general, the modified ATWIT was shown to be a valid and reliable workload measure, providing more detailed information than post-run subjective workload metrics. The trend across multiple workload metrics revealed lowest workload when pilots had both conflict alerting and responsibility of the three concepts, while all objective and subjective measures showed highest workload when pilots had no conflict alerting or responsibility. This suggests that pilot workload was not tied primarily to responsibility for resolving conflicts, but to gaining and/or maintaining situation awareness when conflict alerting is unavailable.

  16. Subjective and objective quantification of physician's workload and performance during radiation therapy planning tasks.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Marks, Lawrence B

    2013-01-01

    To quantify, and compare, workload for several common physician-based treatment planning tasks using objective and subjective measures of workload. To assess the relationship between workload and performance to define workload levels where performance could be expected to decline. Nine physicians performed the same 3 tasks on each of 2 cases ("easy" vs "hard"). Workload was assessed objectively throughout the tasks (via monitoring of pupil size and blink rate), and subjectively at the end of each case (via National Aeronautics and Space Administration Task Load Index; NASA-TLX). NASA-TLX assesses the 6 dimensions (mental, physical, and temporal demands, frustration, effort, and performance); scores > or ≈ 50 are associated with reduced performance in other industries. Performance was measured using participants' stated willingness to approve the treatment plan. Differences in subjective and objective workload between cases, tasks, and experience were assessed using analysis of variance (ANOVA). The correlation between subjective and objective workload measures were assessed via the Pearson correlation test. The relationships between workload and performance measures were assessed using the t test. Eighteen case-wise and 54 task-wise assessments were obtained. Subjective NASA-TLX scores (P .1), were significantly lower for the easy vs hard case. Most correlations between the subjective and objective measures were not significant, except between average blink rate and NASA-TLX scores (r = -0.34, P = .02), for task-wise assessments. Performance appeared to decline at NASA-TLX scores of ≥55. The NASA-TLX may provide a reasonable method to quantify subjective workload for broad activities, and objective physiologic eye-based measures may be useful to monitor workload for more granular tasks within activities. The subjective and objective measures, as herein quantified, do not necessarily track each other, and more work is needed to assess their utilities. From a

  17. Mental workload during n-back task-quantified in the prefrontal cortex using fNIRS.

    Science.gov (United States)

    Herff, Christian; Heger, Dominic; Fortmann, Ole; Hennrich, Johannes; Putze, Felix; Schultz, Tanja

    2013-01-01

    When interacting with technical systems, users experience mental workload. Particularly in multitasking scenarios (e.g., interacting with the car navigation system while driving) it is desired to not distract the users from their primary task. For such purposes, human-machine interfaces (HCIs) are desirable which continuously monitor the users' workload and dynamically adapt the behavior of the interface to the measured workload. While memory tasks have been shown to elicit hemodynamic responses in the brain when averaging over multiple trials, a robust single trial classification is a crucial prerequisite for the purpose of dynamically adapting HCIs to the workload of its user. The prefrontal cortex (PFC) plays an important role in the processing of memory and the associated workload. In this study of 10 subjects, we used functional Near-Infrared Spectroscopy (fNIRS), a non-invasive imaging modality, to sample workload activity in the PFC. The results show up to 78% accuracy for single-trial discrimination of three levels of workload from each other. We use an n-back task (n ∈ {1, 2, 3}) to induce different levels of workload, forcing subjects to continuously remember the last one, two, or three of rapidly changing items. Our experimental results show that measuring hemodynamic responses in the PFC with fNIRS, can be used to robustly quantify and classify mental workload. Single trial analysis is still a young field that suffers from a general lack of standards. To increase comparability of fNIRS methods and results, the data corpus for this study is made available online.

  18. Mental workload during n-back task - quantified in the prefrontal cortex using fNIRS

    Directory of Open Access Journals (Sweden)

    Christian eHerff

    2014-01-01

    Full Text Available When interacting with technical systems, users experience mental workload. Particularly in multitasking scenarios (e.g., interacting with the car navigation system while driving) it is desired to not distract the users from their primary task. For such purposes, human-machine interfaces (HCIs) are desirable which continuously monitor the users' workload and dynamically adapt the behavior of the interface to the measured workload. While memory tasks have been shown to elicit hemodynamic responses in the brain when averaging over multiple trials, a robust single trial classification is a crucial prerequisite for the purpose of dynamically adapting HCIs to the workload of its user. The prefrontal cortex (PFC) plays an important role in the processing of memory and the associated workload. In this study of 10 subjects, we used functional Near-Infrared Spectroscopy (fNIRS), a non-invasive imaging modality, to sample workload activity in the PFC. The results show up to 78% accuracy for single-trial discrimination of three levels of workload from each other. We use an n-back task (n ∈ {1, 2, 3}) to induce different levels of workload, forcing subjects to continuously remember the last one, two or three of rapidly changing items. Our experimental results show that measuring hemodynamic responses in the PFC with fNIRS can be used to robustly quantify and classify mental workload. Single trial analysis is still a young field that suffers from a general lack of standards. To increase comparability of fNIRS methods and results, the data corpus for this study is made available online.

  19. Physiological Indicators of Workload in a Remotely Piloted Aircraft Simulation

    Science.gov (United States)

    2015-10-01

    cognitive workload. That is, both cognitive underload and overload can negatively impact performance (Young & Stanton, 2002). One solution to... Toward preventing performance decrements associated with mental overload in remotely piloted aircraft (RPA) operations, the current research investigated the feasibility of using physiological measures to assess cognitive workload. Two RPA operators were

  20. Elastic Scheduling of Scientific Workflows under Deadline Constraints in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Nazia Anwar

    2018-01-01

    Full Text Available Scientific workflow applications are collections of several structured activities and fine-grained computational tasks. Scientific workflow scheduling in cloud computing is a challenging research topic due to its distinctive features. In cloud environments, it has become critical to perform efficient task scheduling resulting in reduced scheduling overhead, minimized cost and maximized resource utilization while still meeting the user-specified overall deadline. This paper proposes a strategy, Dynamic Scheduling of Bag of Tasks based workflows (DSB), for scheduling scientific workflows with the aim of minimizing the financial cost of leasing Virtual Machines (VMs) under a user-defined deadline constraint. The proposed model groups the workflow into Bags of Tasks (BoTs) based on data dependency and priority constraints and thereafter optimizes the allocation and scheduling of BoTs on elastic, heterogeneous and dynamically provisioned cloud resources called VMs in order to attain the proposed method’s objectives. The proposed approach considers pay-as-you-go Infrastructure as a Service (IaaS) clouds having inherent features such as elasticity, abundance, heterogeneity and VM provisioning delays. A trace-based simulation using benchmark scientific workflows representing real-world applications demonstrates a significant reduction in workflow computation cost while the workflow deadline is met. The results validate that the proposed model produces better success rates to meet deadlines and cost efficiencies in comparison to adapted state-of-the-art algorithms for similar problems.
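    As a simplified illustration of the grouping idea only (the DSB strategy also applies priority constraints and deadline-aware VM provisioning, which are not reproduced here), workflow tasks can be bagged by dependency depth so that tasks within a bag are mutually independent; the example DAG below is hypothetical.

```python
from collections import defaultdict

def group_into_bots(tasks, preds):
    """Group workflow tasks into Bags of Tasks by dependency depth: tasks at the
    same depth share no data dependencies and can be provisioned concurrently.
    (A simplification; DSB also applies priority and deadline constraints.)"""
    depth = {}

    def d(t):
        if t not in depth:
            depth[t] = 1 + max((d(p) for p in preds.get(t, [])), default=-1)
        return depth[t]

    bots = defaultdict(list)
    for t in tasks:
        bots[d(t)].append(t)
    return [bots[level] for level in sorted(bots)]

# Hypothetical diamond-shaped workflow: t1 -> {t2, t3} -> t4.
preds = {"t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
print(group_into_bots(["t1", "t2", "t3", "t4"], preds))  # [['t1'], ['t2', 't3'], ['t4']]
```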

  1. The Effects of Workload Transitions in a Multitasking Environment

    Science.gov (United States)

    2016-09-13

    The Effects of Workload Transitions in a Multitasking Environment, Margaret A. Bowers, James C... as well as performance in a complex multitasking environment. The results of the NASA TLX and shortened DSSQ did not provide support for the position

  2. A service brokering and recommendation mechanism for better selecting cloud services.

    Science.gov (United States)

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI).
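    The abstract does not specify the internals of the preference-aware evaluation model; a generic preference-weighted scoring of candidate configurations, with made-up criteria names, weights and values, might look like the sketch below.

```python
def rank_solutions(solutions, preferences):
    """Rank candidate cloud configurations by a preference-weighted sum of
    normalized criteria (all criteria scaled to [0, 1], higher is better,
    e.g. cost expressed as cost-efficiency). Generic illustration only."""
    score = lambda sol: sum(weight * sol[criterion]
                            for criterion, weight in preferences.items())
    return sorted(solutions, key=score, reverse=True)

# Hypothetical candidates and user preference weights.
candidates = [
    {"name": "A", "compute": 0.9, "cost_efficiency": 0.4, "sla": 0.8},
    {"name": "B", "compute": 0.6, "cost_efficiency": 0.9, "sla": 0.7},
]
prefs = {"compute": 0.5, "cost_efficiency": 0.3, "sla": 0.2}
print([s["name"] for s in rank_solutions(candidates, prefs)])  # ['A', 'B']
```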

  3. A Service Brokering and Recommendation Mechanism for Better Selecting Cloud Services

    Science.gov (United States)

    Gui, Zhipeng; Yang, Chaowei; Xia, Jizhe; Huang, Qunying; Liu, Kai; Li, Zhenlong; Yu, Manzhu; Sun, Min; Zhou, Nanyin; Jin, Baoxuan

    2014-01-01

    Cloud computing is becoming the new generation computing infrastructure, and many cloud vendors provide different types of cloud services. How to choose the best cloud services for specific applications is very challenging. Addressing this challenge requires balancing multiple factors, such as business demands, technologies, policies and preferences in addition to the computing requirements. This paper recommends a mechanism for selecting the best public cloud service at the levels of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). A systematic framework and associated workflow include cloud service filtration, solution generation, evaluation, and selection of public cloud services. Specifically, we propose the following: a hierarchical information model for integrating heterogeneous cloud information from different providers and a corresponding cloud information collecting mechanism; a cloud service classification model for categorizing and filtering cloud services and an application requirement schema for providing rules for creating application-specific configuration solutions; and a preference-aware solution evaluation model for evaluating and recommending solutions according to the preferences of application providers. To test the proposed framework and methodologies, a cloud service advisory tool prototype was developed after which relevant experiments were conducted. The results show that the proposed system collects/updates/records the cloud information from multiple mainstream public cloud services in real-time, generates feasible cloud configuration solutions according to user specifications and acceptable cost prediction, assesses solutions from multiple aspects (e.g., computing capability, potential cost and Service Level Agreement, SLA) and offers rational recommendations based on user preferences and practical cloud provisioning; and visually presents and compares solutions through an interactive web Graphical User Interface (GUI

  4. Semantic-less Breach Detection of Polymorphic Malware in Federated Cloud

    Directory of Open Access Journals (Sweden)

    Yahav Biran

    2017-06-01

    Full Text Available Cloud computing is one of the largest emerging utility services and is expected to grow enormously over the next decade. Many organizations are moving into hybrid cloud/hosted computing models. A single cloud service provider introduces cost and environmental challenges, while a multi-cloud solution implemented by the cloud tenant is suboptimal as it requires expensive adaptation. Cloud Federation is a useful structure for aggregating cloud-based services under a single umbrella to share resources and responsibilities for the benefit of the member cloud service providers. An efficient security model is crucial for a successful cloud business. However, with the advent of large-scale and multi-tenant environments, the traditional perimeter boundaries, along with traditional security practices, are changing. Defining and securing asset and enclave boundaries is more challenging, and system perimeter boundaries are more susceptible to breach. This paper describes security best practices for Cloud Federation. The paper also describes a tool and technique for detecting anomalous behavior in resource usage across the federation participants. This is a particularly serious issue because of the possibility of an attacker potentially gaining access to more than one CSP federation member. Specifically, this technique is developed for Cloud Federations since they have to deal with heterogeneous multi-platform environments with a diverse mixture of data and security log schemas, and it has to do this in real time. A Semantic-less Breach detection system that implements a self-learning system was prototyped and achieved up to an 87% True-Positive rate with a 93% True-Negative rate.

  5. Energy Aware Pricing in a Three-Tiered Cloud Service Market

    Directory of Open Access Journals (Sweden)

    Debdeep Paul

    2016-09-01

    Full Text Available We consider a three-tiered cloud service market and propose an energy efficient pricing strategy in this market. Here, the end customers are served by the Software-as-a-Service (SaaS) providers, who implement customized services for their customers. To host these services, these SaaS providers, in turn, lease the infrastructure-related resources from the Infrastructure-as-a-Service (IaaS) or Platform-as-a-Service (PaaS) providers. In this paper, we propose and evaluate a mechanism for pricing between SaaS providers and IaaS/PaaS providers and between SaaS providers and the end customers. The pricing scheme is designed in a way such that the integration of renewable energy is promoted, which is a very crucial aspect of energy efficiency. Thereafter, we propose a technique to strategically provide an improved Quality of Service (QoS) by deploying more resources than what is computed by the optimization procedure. This technique is based on the square root staffing law in queueing theory. We carry out numerical evaluations with real data traces on electricity price, renewable energy generation, workload, etc., in order to emulate the real dynamics of the cloud service market. We demonstrate that, under practical assumptions, the proposed technique can generate more profit for the service providers operating in the cloud service market.
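
    The square-root staffing law invoked above has a well-known general form; as a hedged sketch (the abstract does not give the paper's exact parameterization), the number of deployed servers N is chosen as

        N \approx R + \beta \sqrt{R}, \qquad R = \lambda / \mu,

    where \lambda is the request arrival rate, \mu the per-server service rate, R the offered load, and \beta > 0 a quality-of-service parameter; the \beta \sqrt{R} term is the extra capacity deployed beyond the nominal optimum to improve QoS.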

  6. Shallow to Deep Convection Transition over a Heterogeneous Land Surface Using the Land Model Coupled Large-Eddy Simulation

    Science.gov (United States)

    Lee, J.; Zhang, Y.; Klein, S. A.

    2017-12-01

    The triggering of the land breeze, and hence the development of deep convection over heterogeneous land, should be understood as a consequence of complex processes involving various factors from the land surface and the atmosphere simultaneously. This is a sub-grid-scale process that many large-scale models have difficulty incorporating into their parameterization schemes, partly due to our limited understanding. Thus, it is imperative to approach the problem using a high-resolution modeling framework. In this study, we use SAM-SLM (Lee and Khairoutdinov, 2015), a large-eddy simulation model coupled to a land model, to explore cloud effects such as the cold pool, cloud shading and soil moisture memory on the land breeze structure and the further development of cloud and precipitation over a heterogeneous land surface. The atmospheric large-scale forcing and the initial sounding are taken from the new composite case study of fair-weather, non-precipitating shallow cumuli at ARM SGP (Zhang et al., 2017). We model the land surface as a chessboard pattern with alternating leaf area index (LAI). The patch contrast of the LAI is adjusted to span weak to strong heterogeneity amplitudes. The surface sensible and latent heat fluxes are computed according to the given LAI, representing the differential surface heating over a heterogeneous land surface. Separate from the surface forcing imposed by the originally modeled surface, the cases that transition into moist convection can induce another layer of surface heterogeneity from 1) radiation shading by clouds, 2) the soil moisture pattern adjusted by the rain, and 3) the spreading cold pool. First, we assess and quantify the individual cloud effects on the land breeze and the moist convection under weak wind to simplify the feedback processes. Then, the same set of experiments is repeated under sheared background wind with a low-level jet, a typical summertime wind pattern at the ARM SGP site, to
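
    The chessboard surface pattern described above can be sketched in a few lines; the grid size, patch size and LAI values below are illustrative assumptions, not the study's configuration (Python):

        import numpy as np

        def checkerboard_lai(nx=256, ny=256, patch=32, lai_low=1.0, lai_high=4.0):
            """Alternating-LAI chessboard; the LAI contrast sets the heterogeneity amplitude."""
            ix, iy = np.meshgrid(np.arange(nx) // patch, np.arange(ny) // patch, indexing="ij")
            return np.where((ix + iy) % 2 == 0, lai_high, lai_low)

        lai = checkerboard_lai()
        print(lai.shape, lai.min(), lai.max())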

  7. Catastrophe models for cognitive workload and fatigue in N-back tasks.

    Science.gov (United States)

    Guastello, Stephen J; Reiter, Katherine; Malon, Matthew; Timm, Paul; Shircel, Anton; Shaline, James

    2015-04-01

    N-back tasks place a heavy load on working memory, and thus make good candidates for studying cognitive workload and fatigue (CWLF). This study extended previous work on CWLF which separated the two phenomena with two cusp catastrophe models. Participants were 113 undergraduates who completed 2-back and 3-back tasks with both auditory and visual stimuli simultaneously. Task data were complemented by several measures hypothesized to be related to cognitive elasticity and compensatory abilities and the NASA TLX ratings of subjective workload. The adjusted R2 was .980 for the workload model, which indicated a highly accurate prediction with six bifurcation (elasticity versus rigidity) effects: algebra flexibility, TLX performance, effort, and frustration; and psychosocial measures of inflexibility and monitoring. There were also two cognitive load effects (asymmetry): 2 vs. 3-back and TLX temporal demands. The adjusted R2 was .454 for the fatigue model, which contained two bifurcation variables indicating the amount of work done, and algebra flexibility as the compensatory ability variable. Both cusp models were stronger than the next best linear alternative model. The study makes an important step forward by uncovering an apparently complete model for workload, finding the role of subjective workload in the context of performance dynamics, and finding CWLF dynamics in yet another type of memory-intensive task. The results were also consistent with the developing notion that performance deficits induced by workload and deficits induced by fatigue result from the impact of the task on the workspace and executive functions of working memory respectively.
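
    The cusp catastrophe models referenced above are conventionally written as the equilibrium surface of the cusp potential; a hedged sketch of the standard formulation (the study's fitted regression equation is not reproduced in the abstract):

        V(y; a, b) = a\,y + \tfrac{1}{2} b\,y^{2} - \tfrac{1}{4} y^{4},
        \qquad \frac{\partial V}{\partial y} = a + b\,y - y^{3} = 0,

    where y is the behavioral order parameter (performance or fatigue), a the asymmetry parameter (here, cognitive load or amount of work done) and b the bifurcation parameter (here, elasticity versus rigidity).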

  8. Use of EEG workload indices for diagnostic monitoring of vigilance decrement.

    Science.gov (United States)

    Kamzanova, Altyngul T; Kustubayeva, Almira M; Matthews, Gerald

    2014-09-01

    A study was run to test which of five electroencephalographic (EEG) indices was most diagnostic of loss of vigilance at two levels of workload. EEG indices of alertness include conventional spectral power measures as well as indices combining measures from multiple frequency bands, such as the Task Load Index (TLI) and the Engagement Index (EI). However, it is unclear which indices are optimal for early detection of loss of vigilance. Ninety-two participants were assigned to one of two experimental conditions, cued (lower workload) and uncued (higher workload), and then performed a 40-min visual vigilance task. Performance on this task is believed to be limited by attentional resource availability. EEG was recorded continuously. Performance, subjective state, and workload were also assessed. The task showed a vigilance decrement in performance; cuing improved performance and reduced subjective workload. Lower-frequency alpha (8 to 10.9 Hz) and TLI were most sensitive to the task parameters. The magnitude of temporal change was larger for lower-frequency alpha. Surprisingly, higher TLI was associated with superior performance. Frontal theta and EI were influenced by task workload only in the final period of work. Correlational data also suggested that the indices are distinct from one another. Lower-frequency alpha appears to be the optimal index for monitoring vigilance on the task used here, but further work is needed to test how diagnosticity of EEG indices varies with task demands. Lower-frequency alpha may be used to diagnose loss of operator alertness on tasks requiring vigilance.
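
    The combined-band indices compared above are usually defined as band-power ratios; a hedged sketch of the conventional definitions (electrode sites and exact frequency bands vary between studies and are not specified in the abstract):

        \mathrm{TLI} = \frac{P_{\theta}^{\mathrm{frontal}}}{P_{\alpha}^{\mathrm{parietal}}},
        \qquad \mathrm{EI} = \frac{P_{\beta}}{P_{\alpha} + P_{\theta}},

    where P denotes spectral power in the named frequency band.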

  9. Heterogeneous access and processing of EO-Data on a Cloud based Infrastructure delivering operational Products

    Science.gov (United States)

    Niggemann, F.; Appel, F.; Bach, H.; de la Mar, J.; Schirpke, B.; Dutting, K.; Rucker, G.; Leimbach, D.

    2015-04-01

    To address the challenges of effective data handling faced by Small and Medium-sized Enterprises (SMEs), a cloud-based infrastructure for accessing and processing Earth Observation (EO) data has been developed within the project APPS4GMES (www.apps4gmes.de). To gain homogeneous multi-mission data access, an Input Data Portal (IDP) has been implemented on this infrastructure. The IDP consists of an Open Geospatial Consortium (OGC) conformant catalogue, a consolidation module for format conversion and an OGC-conformant ordering framework. Metadata of various EO sources and with different standards is harvested, transferred to an OGC-conformant Earth Observation Product standard and inserted into the catalogue by a Metadata Harvester. The IDP can be accessed for search and ordering of the harvested datasets by the services implemented on the cloud infrastructure. Different land-surface services have been realised by the project partners, using the implemented IDP and cloud infrastructure. Results of these are customer-ready products, as well as pre-products (e.g., atmospherically corrected EO data), serving as a basis for other services. Within the IDP an automated access to ESA's Sentinel-1 Scientific Data Hub has been implemented. Searching and downloading of the SAR data can be performed in an automated way. With the implementation of the Sentinel-1 Toolbox and our own software for processing the datasets for further use, for example for Vista's snow monitoring, which delivers input for the flood forecast services, processing can also be performed in an automated way. For performance tests of the cloud environment a sophisticated model-based atmospheric correction and pre-classification service has been implemented. Tests comprised an automated, synchronised processing of one entire Landsat 8 (LS-8) coverage of Germany and performance comparisons to standard desktop systems. Results of these tests, showing a performance improvement by a factor of six, proved the high flexibility and

  10. Scaling Deep Learning Workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing

    Energy Technology Data Exchange (ETDEWEB)

    Gawande, Nitin A.; Landwehr, Joshua B.; Daily, Jeffrey A.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.

    2017-07-03

    Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors --- including NVIDIA, Intel, AMD and IBM --- have architectural road-maps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path. Our evaluation consists of a cross-section of convolutional neural net workloads: CifarNet, CaffeNet, AlexNet and GoogleNet topologies using the Cifar10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall raw performance, the gap can close for some convolutional networks, and KNL can be competitive when considering performance/watt. Furthermore, NVLink is critical to GPU scaling.

  11. Heterogeneous Ice Nucleation Ability of NaCl and Sea Salt Aerosol Particles at Cirrus Temperatures

    Science.gov (United States)

    Wagner, Robert; Kaufmann, Julia; Möhler, Ottmar; Saathoff, Harald; Schnaiter, Martin; Ullrich, Romy; Leisner, Thomas

    2018-03-01

    In situ measurements of the composition of heterogeneous cirrus ice cloud residuals have indicated a substantial contribution of sea salt in sampling regions above the ocean. We have investigated the heterogeneous ice nucleation ability of sodium chloride (NaCl) and sea salt aerosol (SSA) particles at cirrus cloud temperatures between 235 and 200 K in the Aerosol Interaction and Dynamics in the Atmosphere aerosol and cloud chamber. Effloresced NaCl particles were found to act as ice nucleating particles in the deposition nucleation mode at temperatures below about 225 K, with freezing onsets in terms of the ice saturation ratio, S_ice, between 1.28 and 1.40. Above 225 K, the crystalline NaCl particles deliquesced and nucleated ice homogeneously. The heterogeneous ice nucleation efficiency was rather similar for the two crystalline forms of NaCl (anhydrous NaCl and NaCl dihydrate). Mixed-phase (solid/liquid) SSA particles were found to act as ice nucleating particles in the immersion freezing mode at temperatures below about 220 K, with freezing onsets in terms of S_ice between 1.24 and 1.42. Above 220 K, the SSA particles fully deliquesced and nucleated ice homogeneously. Ice nucleation active surface site densities of the SSA particles were found to be in the range between 1.0 · 10^10 and 1.0 · 10^11 m^-2 at T < 220 K. These values are of the same order of magnitude as ice nucleation active surface site densities recently determined for desert dust, suggesting a potential contribution of SSA particles to low-temperature heterogeneous ice nucleation in the atmosphere.
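
    The ice nucleation active surface site densities quoted above are conventionally derived from the activated particle fraction and the particle surface area; a hedged sketch of the standard relation (the study's exact retrieval is not given in the abstract):

        n_{s}(T, S_{\mathrm{ice}}) = \frac{-\ln\left(1 - f_{\mathrm{ice}}\right)}{A_{p}},

    where f_ice is the fraction of aerosol particles that nucleate ice and A_p is the surface area per particle.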

  12. Nonparametric estimation of the stationary M/G/1 workload distribution function

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted

    2005-01-01

    In this paper it is demonstrated how a nonparametric estimator of the stationary workload distribution function of the M/G/1 queue can be obtained by systematically sampling the workload process. Weak convergence results and bootstrap methods for empirical distribution functions for stationary associ
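
    A minimal sketch of the idea, assuming equally spaced (systematic) samples of a crude simulated workload process and a naive pointwise bootstrap band for the empirical distribution function; this is an illustration under those assumptions, not the paper's estimator or its dependent-data bootstrap (Python):

        import numpy as np

        rng = np.random.default_rng(0)

        def sampled_workload(T=20000.0, lam=0.8, mean_service=1.0, dt=1.0):
            """Crude M/G/1-style workload process observed by systematic sampling every dt."""
            t, v, obs = 0.0, 0.0, []
            next_arrival = rng.exponential(1.0 / lam)
            while t < T:
                t += dt
                v = max(v - dt, 0.0)                    # server drains work at unit rate
                while next_arrival <= t:                # add service times of arrivals in (t-dt, t]
                    v += rng.exponential(mean_service)  # generic ("G") service time, here exponential
                    next_arrival += rng.exponential(1.0 / lam)
                obs.append(v)
            return np.asarray(obs)

        obs = sampled_workload()
        grid = np.linspace(0.0, np.quantile(obs, 0.99), 100)
        ecdf = (obs[None, :] <= grid[:, None]).mean(axis=1)      # empirical workload distribution function
        boot = np.array([(rng.choice(obs, obs.size)[None, :] <= grid[:, None]).mean(axis=1)
                         for _ in range(200)])
        lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)  # naive pointwise bootstrap band
        print(float(ecdf[50]), float(lower[50]), float(upper[50]))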

  13. Performance of different radiotherapy workload models

    International Nuclear Information System (INIS)

    Barbera, Lisa; Jackson, Lynda D.; Schulze, Karleen; Groome, Patti A.; Foroudi, Farshad; Delaney, Geoff P.; Mackillop, William J.

    2003-01-01

    Purpose: The purpose of this study was to evaluate the performance of different radiotherapy workload models using a prospectively collected dataset of patient and treatment information from a single center. Methods and Materials: Information about all individual radiotherapy treatments was collected for 2 weeks from the three linear accelerators (linacs) in our department. This information included diagnosis code, treatment site, treatment unit, treatment time, fields per fraction, technique, beam type, blocks, wedges, junctions, port films, and Eastern Cooperative Oncology Group (ECOG) performance status. We evaluated the accuracy and precision of the original and revised basic treatment equivalent (BTE) model, the simple and complex Addenbrooke models, the equivalent simple treatment visit (ESTV) model, fields per hour, and two local standards of workload measurement. Results: Data were collected for 2 weeks in June 2001. During this time, 151 patients were treated with 857 fractions. The revised BTE model performed better than the other models with a mean vertical bar observed - predicted vertical bar of 2.62 (2.44-2.80). It estimated 88.0% of treatment times within 5 min, which is similar to the previously reported accuracy of the model. Conclusion: The revised BTE model had similar accuracy and precision for data collected in our center as it did for the original dataset and performed the best of the models assessed. This model would have uses for patient scheduling, and describing workloads and case complexity

  14. Does daily nurse staffing match ward workload variability? Three hospitals' experiences.

    Science.gov (United States)

    Gabbay, Uri; Bukchin, Michael

    2009-01-01

    Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nursing staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with a predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures by dividing observed/expected. Two charts, acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study did not differentiate crude nurse attendance, and it did not take patient severity into account since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility. Wards that did not meet appropriate staffing indicators prove the importance of and the need for process evaluations and monitoring. Methods presented for monitoring daily staffing appropriateness are simple to implement, either for intra-ward day-to-day variation by using nurse workload capacity statistical process control charts or for inter-ward evaluation using a standardized measure of nurse workload intensity.
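
    A minimal sketch of the standardisation and control-chart logic described above; the observed/expected ratio and the 50 percent/80 percent indicator thresholds follow the abstract, while the daily values and the two- and three-sigma limits are assumptions for illustration (Python):

        import numpy as np

        observed = np.array([9.5, 11.0, 8.2, 12.4, 10.1, 7.9, 13.0])  # daily workload capacity (made up)
        expected = 10.0                                               # ward-specific expected value (made up)

        ratio = observed / expected                   # standardised, ward-comparable measure
        center, sigma = ratio.mean(), ratio.std(ddof=1)
        acceptable = (center - 2 * sigma, center + 2 * sigma)   # "acceptable" limits (assumed two-sigma)
        tolerable = (center - 3 * sigma, center + 3 * sigma)    # "tolerable" limits (assumed three-sigma)

        within_acceptable = np.mean((ratio >= acceptable[0]) & (ratio <= acceptable[1]))
        within_tolerable = np.mean((ratio >= tolerable[0]) & (ratio <= tolerable[1]))
        # Staffing indicator from the abstract: >= 50% of days within acceptable and >= 80% within tolerable limits.
        print(within_acceptable >= 0.5, within_tolerable >= 0.8)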

  15. Improved lower extremity pedaling mechanics in individuals with stroke under maximal workloads.

    Science.gov (United States)

    Linder, Susan M; Rosenfeldt, Anson B; Bazyk, Andrew S; Koop, Mandy Miller; Ozinga, Sarah; Alberts, Jay L

    2018-05-01

    Background: Individuals with stroke present with motor control deficits resulting in the abnormal activation and timing of agonist and antagonist muscles and inefficient movement patterns. The analysis of pedaling biomechanics provides a window into understanding motor control deficits, which vary as a function of workload. Understanding the relationship between workload and motor control is critical when considering exercise prescription during stroke rehabilitation. Objectives: To characterize pedaling kinematics and motor control processes under conditions in which workload was systematically increased to an eventual patient-specific maximum. Methods: A cohort study was conducted in which 18 individuals with chronic stroke underwent a maximal exertion cardiopulmonary exercise test on a stationary cycle ergometer, during which pedaling torque was continuously recorded. Measures of force production, pedaling symmetry, and pedaling smoothness were obtained. Results: Mean Torque increased significantly (p pedaling action, improved from 0.37(0.29) to 0.29(0.35) during downstroke (p = 0.007), and worsened during the upstroke: -0.37(0.38) to -0.62(0.46) (p pedaling improved significantly from initial to terminal workloads (p pedaling kinematics at terminal workloads indicate that individuals with stroke demonstrate improved motor control with respect to the timing, sequencing, and activation of hemiparetic lower extremity musculature compared to lower workloads. Therapeutic prescription involving higher resistance may be necessary to sufficiently engage and activate the paretic lower extremity.

  16. The impact of draught related to air velocity, air temperature and workload.

    Science.gov (United States)

    Griefahn, B; Künemund, C; Gehring, U

    2001-08-01

    This experimental study was designed to test the hypotheses that the effects of draught increase with higher air velocity, with lower air temperature, and with lower workload. Thirty healthy young males were exposed to horizontal draught for 55 min while they operated an arm ergometer in a standing posture. Air velocity, air temperature, and workload were varied in 3 steps each, between 0.1 and 0.3 m/s, 11 and 23 degrees C, and 104 to 156 W/m2, respectively. The 27 combinations were distributed over subjects in a fractional factorial 3^3 design. The participants were clothed for thermal neutrality. Workload was measured at the end of the sessions by respirometry. Draught-induced annoyance was determined every 5 min, separately for 10 body sites. Corresponding skin temperature was also recorded. The hypotheses were verified for the influence of air velocity and air temperature. Regarding workload, local heat production is probably decisive, meaning that draught-induced local annoyance is inversely related to workload in active but independent of workload in non-active body areas. To improve the situation for the workers concerned, it is suggested to apply protective gloves that cover as great an area of the forearms as possible and to limit airflows to mean velocities of less than 0.2 m/s (with turbulence intensities of 50%).

  17. The multitasking framework: the effects of increasing workload on acute psychobiological stress reactivity.

    Science.gov (United States)

    Wetherell, Mark A; Carter, Kirsty

    2014-04-01

    A variety of techniques exist for eliciting acute psychological stress in the laboratory; however, they vary in terms of their ease of use, reliability to elicit consistent responses and the extent to which they represent the stressors encountered in everyday life. There is, therefore, a need to develop simple laboratory techniques that reliably elicit psychobiological stress reactivity that are representative of the types of stressors encountered in everyday life. The multitasking framework is a performance-based, cognitively demanding stressor, representative of environments where individuals are required to attend and respond to several different stimuli simultaneously with varying levels of workload. Psychological (mood and perceived workload) and physiological (heart rate and blood pressure) stress reactivity was observed in response to a 15-min period of multitasking at different levels of workload intensity in a sample of 20 healthy participants. Multitasking stress elicited increases in heart rate and blood pressure, and increased workload intensity elicited dose-response increases in levels of perceived workload and mood. As individuals rarely attend to single tasks in real life, the multitasking framework provides an alternative technique for modelling acute stress and workload in the laboratory. Copyright © 2013 John Wiley & Sons, Ltd.

  18. From trees to forest: relational complexity network and workload of air traffic controllers.

    Science.gov (United States)

    Zhang, Jingyu; Yang, Jiazhong; Wu, Changxu

    2015-01-01

    In this paper, we propose a relational complexity (RC) network framework based on RC metric and network theory to model controllers' workload in conflict detection and resolution. We suggest that, at the sector level, air traffic showing a centralised network pattern can provide cognitive benefits in visual search and resolution decision which will in turn result in lower workload. We found that the network centralisation index can account for more variance in predicting perceived workload and task completion time in both a static conflict detection task (Study 1) and a dynamic one (Study 2) in addition to other aircraft-level and pair-level factors. This finding suggests that linear combination of aircraft-level or dyad-level information may not be adequate and the global-pattern-based index is necessary. Theoretical and practical implications of using this framework to improve future workload modelling and management are discussed. We propose a RC network framework to model the workload of air traffic controllers. The effect of network centralisation was examined in both a static conflict detection task and a dynamic one. Network centralisation was predictive of perceived workload and task completion time over and above other control variables.

  19. CertiCloud and JShadObf. Towards Integrity and Software Protection in Cloud Computing Platforms

    OpenAIRE

    Bertholon, Benoit

    2013-01-01

    A simple concept that has emerged out of the notion of heterogeneous distributed computing is that of Cloud Computing (CC), where customers do not own any part of the infrastructure; they simply use the available services and pay for what they use. This approach is often viewed as the next ICT revolution, similar to the birth of the Web or of e-commerce. Indeed, since its advent in the middle of the 2000s, the CC paradigm has aroused enthusiasm and interest from industry and the private secto...

  20. Development of the CarMen-Q Questionnaire for mental workload assessment.

    Science.gov (United States)

    Rubio-Valdehita, Susana; López-Núñez, María I; López-Higes, Ramón; Díaz-Ramiro, Eva M

    2017-11-01

    Mental workload has emerged as one of the most important occupational risk factors present in most psychological and physical diseases caused by work. In view of the lack of specific tools to assess mental workload, the objective of this research was to assess the construct validity and reliability of a new questionnaire for mental workload assessment (CarMen-Q). The sample was composed of 884 workers from several professional sectors, between 18 and 65 years old, 53.4% men and 46.6% women. To evaluate the validity based on relationships with other measures, the NASA-TLX scale was also administered. Confirmatory factor analysis showed an internal structure made up of four dimensions: cognitive, temporal and emotional demands and performance requirement. The results show satisfactory evidence of validity based on relationships with NASA-TLX and good reliability. The questionnaire has good psychometric properties and can be an easy, brief, useful tool for mental workload diagnosis and prevention.

  1. Predicting the Consequences of Workload Management Strategies with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Samma, Charneta

    2011-01-01

    Human performance modelers at the US Army Research Laboratory have developed an approach for establishing Soldier high workload that can be used for analyses of proposed system designs. Their technique includes three key components. To implement the approach in an experiment, the researcher would create two experimental conditions: a baseline and a design alternative. Next they would identify a scenario in which the test participants perform all their representative concurrent interactions with the system. This scenario should include any events that would trigger a different set of goals for the human operators. They would collect workload values during both the control and alternative design conditions to see if the alternative increased workload and decreased performance. They have successfully implemented this approach for military vehicle designs using the human performance modeling tool, IMPRINT. Although ARL researchers use IMPRINT to implement their approach, it can be applied to any workload analysis. Researchers using other modeling and simulation tools or conducting experiments or field tests can use the same approach.

  2. A cloud-based data network approach for translational cancer research.

    Science.gov (United States)

    Xing, Wei; Tsoumakos, Dimitrios; Ghanem, Moustafa

    2015-01-01

    We develop a new model and associated technology for constructing and managing self-organizing data to support translational cancer research studies. We employ a semantic content network approach to address the challenges of managing cancer research data. Such data is heterogeneous, large, decentralized, growing and continually being updated. Moreover, the data originates from different information sources that may be partially overlapping, creating redundancies as well as contradictions and inconsistencies. Building on the advantages of elasticity of cloud computing, we deploy the cancer data networks on top of the CELAR Cloud platform to enable more effective processing and analysis of Big cancer data.

  3. The impact of automation on workload and dispensing errors in a hospital pharmacy.

    Science.gov (United States)

    James, K Lynette; Barlow, Dave; Bithell, Anne; Hiom, Sarah; Lord, Sue; Pollard, Mike; Roberts, Dave; Way, Cheryl; Whittlesea, Cate

    2013-04-01

    To determine the effect of installing an original-pack automated dispensing system (ADS) on dispensary workload and prevented dispensing incidents in a hospital pharmacy. Data on dispensary workload and prevented dispensing incidents, defined as dispensing errors detected and reported before medication had left the pharmacy, were collected over 6 weeks at a National Health Service hospital in Wales before and after the installation of an ADS. Workload was measured by non-participant observation using the event recording technique. Prevented dispensing incidents were self-reported by pharmacy staff on standardised forms. Median workloads (measured as items dispensed/person/hour) were compared using Mann-Whitney U tests and rate of prevented dispensing incidents were compared using Chi-square test. Spearman's rank correlation was used to examine the association between workload and prevented dispensing incidents. A P value of ≤0.05 was considered statistically significant. Median dispensary workload was significantly lower pre-automation (9.20 items/person/h) compared to post-automation (13.17 items/person/h, P automation (0.28%) than pre-automation (0.64%, P automation (ρ = 0.23, P automation improves dispensing efficiency and reduces the rate of prevented dispensing incidents. It is proposed that prevented dispensing incidents frequently occurred during periods of high workload due to involuntary automaticity. Prevented dispensing incidents occurring after a busy period were attributed to staff experiencing fatigue after-effects. © 2012 The Authors. IJPP © 2012 Royal Pharmaceutical Society.

  4. Evaluating the effect of Locking on Multitenancy Isolation for Components of Cloud-hosted Services

    Directory of Open Access Journals (Sweden)

    Laud Charles Ochei

    2018-05-01

    Full Text Available Multitenancy isolation is a way of ensuring that the performance, stored data volume and access privileges required by one tenant and/or component do not affect other tenants and/or components. One of the conditions that can influence the varying degrees of isolation is when locking is enabled for a process or component that is being shared. Although the concept of locking has been extensively studied in database management, there is little or no research on how locking affects multitenancy isolation and its implications for optimizing the deployment of components of a cloud-hosted service in response to workload changes. This paper applies COMITRE (Component-based approach to Multitenancy Isolation through Request Re-routing) to evaluate the impact of enabling locking for a shared process or component of a cloud-hosted application. Results show that locking has a significant effect on the performance and resource consumption of tenants, especially for operations that interact directly with the local file system of the platform used on the cloud infrastructure. We also present recommendations for achieving the required degree of multitenancy isolation when locking is enabled for three software processes: continuous integration, version control, and bug tracking.
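
    The effect of enabling locking on a shared process or component can be illustrated with a small sketch of lock contention between tenant requests; this is a generic illustration, not COMITRE or the paper's experimental setup (Python):

        import threading
        import time

        shared_lock = threading.Lock()

        def tenant_request(tenant_id, lock_enabled, log):
            start = time.perf_counter()
            if lock_enabled:
                with shared_lock:        # the shared component serves one tenant at a time
                    time.sleep(0.05)     # simulated work on the shared process/component
            else:
                time.sleep(0.05)         # no locking: tenant requests proceed concurrently
            log.append((tenant_id, time.perf_counter() - start))

        for lock_enabled in (False, True):
            log = []
            threads = [threading.Thread(target=tenant_request, args=(i, lock_enabled, log))
                       for i in range(8)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            worst = max(latency for _, latency in log)
            print("locking" if lock_enabled else "no locking", "worst per-request latency: %.3fs" % worst)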

  5. Cloud Infrastructure & Applications - CloudIA

    Science.gov (United States)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies, by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and mentions our early experiences in building a private cloud using an existing infrastructure.

  6. Higher mental workload is associated with poorer laparoscopic performance as measured by the NASA-TLX tool.

    Science.gov (United States)

    Yurko, Yuliya Y; Scerbo, Mark W; Prabhu, Ajita S; Acker, Christina E; Stefanidis, Dimitrios

    2010-10-01

    Increased workload during task performance may increase fatigue and facilitate errors. The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is a previously validated tool for workload self-assessment. We assessed the relationship of workload and performance during simulator training on a complex laparoscopic task. NASA-TLX workload data from three separate trials were analyzed. All participants were novices (n = 28), followed the same curriculum on the fundamentals of laparoscopic surgery suturing model, and were tested in the animal operating room (OR) on a Nissen fundoplication model after training. Performance and workload scores were recorded at baseline, after proficiency achievement, and during the test. Performance, NASA-TLX scores, and inadvertent injuries during the test were analyzed and compared. Workload scores declined during training and mirrored performance changes. NASA-TLX scores correlated significantly with performance scores (r = -0.5, P NASA-TLX questionnaire accurately reflects workload changes during simulator training and may identify individuals more likely to experience high workload and more prone to errors during skill transfer to the clinical environment.

  7. The mental workload analysis of safety workers in an Indonesian oil mining industry

    Directory of Open Access Journals (Sweden)

    Indrawati Sri

    2018-01-01

    Full Text Available Occupational health and safety workers carry the demanding responsibility of ensuring the safety of other workers. This responsibility affects their jobs and can result in excessive mental workload. This research aims to determine the mental workload scores of three professions in occupational health and safety, i.e. safetyman contractor, safetyman field and safetyman officer. Six indicators in the NASA-TLX method, i.e. mental demand (MD), physical demand (PD), temporal demand (TD), performance (OP), effort (EF) and frustration level (FR), are used to determine the workers' mental workload. The results show that mental demand (MD) is the most dominant indicator affecting the mental workload of the safetyman contractor, safetyman field and safetyman officer. The highest mental workload score among the safety workers belongs to the safetyman field, with a WWL score of 62.38, because among the three types of safety workers the highest MD is found for the safetyman field due to its responsibility.
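
    The weighted workload (WWL) score reported above follows the standard NASA-TLX procedure; a hedged sketch in which the ratings and pairwise-comparison weights are invented for illustration (Python):

        # Standard NASA-TLX: each of the six dimensions is rated on a 0-100 scale and weighted by how
        # often it is chosen in the 15 pairwise comparisons; WWL = sum(rating * weight) / 15.
        ratings = {"MD": 75, "PD": 40, "TD": 60, "OP": 55, "EF": 70, "FR": 45}  # illustrative ratings
        weights = {"MD": 5, "PD": 1, "TD": 3, "OP": 2, "EF": 3, "FR": 1}        # illustrative weights, sum 15

        wwl = sum(ratings[d] * weights[d] for d in ratings) / 15.0
        print(round(wwl, 2))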

  8. Designing Cloud Infrastructure for Big Data in E-government

    Directory of Open Access Journals (Sweden)

    Jelena Šuh

    2015-03-01

    Full Text Available The development of new information services and technologies, especially in domains of mobile communications, Internet of things, and social media, has led to appearance of the large quantities of unstructured data. The pervasive computing also affects the e-government systems, where big data emerges and cannot be processed and analyzed in a traditional manner due to its complexity, heterogeneity and size. The subject of this paper is the design of the cloud infrastructure for big data storage and processing in e-government. The goal is to analyze the potential of cloud computing for big data infrastructure, and propose a model for effective storing, processing and analyzing big data in e-government. The paper provides an overview of current relevant concepts related to cloud infrastructure design that should provide support for big data. The second part of the paper gives a model of the cloud infrastructure based on the concepts of software defined networks and multi-tenancy. The final goal is to support projects in the field of big data in e-government

  9. Workload assessment of surgeons: correlation between NASA TLX and blinks.

    Science.gov (United States)

    Zheng, Bin; Jiang, Xianta; Tien, Geoffrey; Meneghetti, Adam; Panton, O Neely M; Atkins, M Stella

    2012-10-01

    Blinks are known as an indicator of visual attention and mental stress. In this study, surgeons' mental workload was evaluated utilizing a paper assessment instrument (National Aeronautics and Space Administration Task Load Index, NASA TLX) and by examining their eye blinks. Correlation between these two assessments was reported. Surgeons' eye motions were video-recorded using a head-mounted eye-tracker while the surgeons performed a laparoscopic procedure on a virtual reality trainer. Blink frequency and duration were computed using computer vision technology. The level of workload experienced during the procedure was reported by surgeons using the NASA TLX. A total of 42 valid videos were recorded from 23 surgeons. After blinks were computed, videos were divided into two groups based on the blink frequency: infrequent group (≤ 6 blinks/min) and frequent group (more than 6 blinks/min). Surgical performance (measured by task time and trajectories of tool tips) was not significantly different between these two groups, but NASA TLX scores were significantly different. Surgeons who blinked infrequently reported a higher level of frustration (46 vs. 34, P = 0.047) and higher overall level of workload (57 vs. 47, P = 0.045) than those who blinked more frequently. The correlation coefficients (Pearson test) between NASA TLX and the blink frequency and duration were -0.17 and 0.446. Reduction of blink frequency and shorter blink duration matched the increasing level of mental workload reported by surgeons. The value of using eye-tracking technology for assessment of surgeon mental workload was shown.

  10. Curriculum Change Management and Workload

    Science.gov (United States)

    Alkahtani, Aishah

    2017-01-01

    This study examines the ways in which Saudi teachers have responded or are responding to the challenges posed by a new curriculum. It also deals with issues relating to workload demands which affect teachers' performance when they apply a new curriculum in a Saudi Arabian secondary school. In addition, problems such as scheduling and sharing space…

  11. Integrating PROOF Analysis in Cloud and Batch Clusters

    International Nuclear Information System (INIS)

    Rodríguez-Marrero, Ana Y; Fernández-del-Castillo, Enol; López García, Álvaro; Marco de Lucas, Jesús; Matorras Weinig, Francisco; González Caballero, Isidro; Cuesta Noriega, Alberto

    2012-01-01

    High Energy Physics (HEP) analyses are becoming more complex and demanding due to the large amount of data collected by the current experiments. The Parallel ROOT Facility (PROOF) provides researchers with an interactive tool to speed up the analysis of huge volumes of data by exploiting parallel processing on both multicore machines and computing clusters. The typical PROOF deployment scenario is a permanent set of cores configured to run the PROOF daemons. However, this approach is incapable of adapting to the dynamic nature of interactive usage. Several initiatives seek to improve the use of computing resources by integrating PROOF with a batch system, such as Proof on Demand (PoD) or PROOF Cluster. These solutions are currently in production at Universidad de Oviedo and IFCA and are positively evaluated by users. Although they are able to adapt to the computing needs of users, they must comply with the specific configuration, OS and software installed at the batch nodes. Furthermore, they share the machines with other workloads, which may cause disruptions in the interactive service for users. These limitations make PROOF a typical use-case for cloud computing. In this work we take advantage of the Cloud Infrastructure at IFCA in order to provide a dynamic PROOF environment where users can control the software configuration of the machines. The Proof Analysis Framework (PAF) facilitates the development of new analyses and offers transparent access to PROOF resources. Several performance measurements are presented for the different scenarios (PoD, SGE and Cloud), showing a speed improvement closely correlated with the number of cores used.

  12. Subjective evaluation of physical and mental workload interactions across different muscle groups.

    Science.gov (United States)

    Mehta, Ranjana K; Agnew, Michael J

    2015-01-01

    Both physical and mental demands, and their interactions, have been shown to increase biomechanical loading and physiological reactivity as well as impair task performance. Because these interactions have been shown to be muscle-dependent, the aim of this study was to determine the sensitivity of the NASA Task Load Index (NASA TLX) and Ratings of Perceived Exertion (RPE) to evaluate physical and mental workload during muscle-specific tasks. Twenty-four participants performed upper extremity and low back exertions at three physical workload levels in the absence and presence of a mental stressor. Outcome measures included RPE and NASA TLX (six sub-scales) ratings. The findings indicate that while both RPEs and NASA TLX ratings were sensitive to muscle-specific changes in physical demand, only an additional mental stressor and its interaction with either physical demand or muscle groups influenced the effort sub-scale and overall workload scores of the NASA TLX. While additional investigations in actual work settings are warranted, the NASA TLX shows promise in evaluating perceived workload that is sensitive not only to physical and mental demands but also sensitive in determining workload for tasks that employ different muscle groups.

  13. Relationship between mental workload and musculoskeletal disorders among Alzahra Hospital nurses

    Science.gov (United States)

    Habibi, Ehsanollah; Taheri, Mohamad Reza; Hasanzadeh, Akbar

    2015-01-01

    Background: Musculoskeletal disorders (MSDs) are a serious problem among the nursing staff. Mental workload is the major cause of MSDs among nursing staff. The aim of this study was to investigate the mental workload dimensions and their association with MSDs among nurses of Alzahra Hospital, affiliated to Isfahan University of Medical Sciences. Materials and Methods: This descriptive cross-sectional study was conducted on 247 randomly selected nurses who worked in the Alzahra Hospital in Isfahan, Iran in the summer of 2013. The Persian version of National Aeronautics and Space Administration Task Load Index (NASA-TLX) (measuring mental load) specialized questionnaire and Cornell Musculoskeletal Discomfort Questionnaire (CMDQ) was used for data collection. Data were collected and analyzed by Pearson correlation coefficient and Spearman correlation coefficient tests in SPSS 20. Results: Pearson and Spearman correlation tests showed a significant association between the nurses’ MSDs and the dimensions of workload frustration, total workload, temporal demand, effort, and physical demand (r = 0.304, 0.277, 0.277, 0.216, and 0.211, respectively). However, there was no significant association between the nurses’ MSDs and the dimensions of workload performance and mental demand (P > 0.05). Conclusions: The nurses’ frustration had a direct correlation with MSDs. This shows that stress is an inseparable component in hospital workplace. Thus, reduction of stress in nursing workplace should be one of the main priorities of hospital managers. PMID:25709683

  14. Relations between mental workload and decision-making in an organizational setting

    Directory of Open Access Journals (Sweden)

    María Soria-Oliver

    2017-05-01

    Full Text Available Background: The complexity of current organizations implies a potential overload for workers. For this reason, it is of interest to study the effects that mental workload has on the performance of complex tasks in professional settings. Objective: The objective of this study is to empirically analyze the relation between the quality of decision-making, on the one hand, and the expected and real mental workload, on the other. Methods: The study uses an ex post facto prospective design with a sample of 176 professionals from a higher education organization. Expected mental workload (Pre-Task WL) and real mental workload (Post-Task WL) were measured with the unweighted NASA-Task Load Index (NASA-TLX) questionnaire; the difference between real WL and expected WL (Differential WL) was also calculated; quality of decision-making was measured by means of the Decision-Making Questionnaire. Results: The relation between general quality of decision-making and Pre-Task WL is compatible with an inverted U pattern, with slight variations depending on the specific dimension of decision-making that is considered. There were no verifiable relations between Post-Task WL and decision-making. The subjects whose expected WL matched the real WL showed worse quality in decision-making than subjects with high or low Differential WL. Conclusions: The relations between mental workload and decision-making reveal a complex pattern, with evidence of nonlinear relations.

  15. Formation of Massive Molecular Cloud Cores by Cloud-cloud Collision

    OpenAIRE

    Inoue, Tsuyoshi; Fukui, Yasuo

    2013-01-01

    Recent observations of molecular clouds around rich massive star clusters including NGC3603, Westerlund 2, and M20 revealed that the formation of massive stars could be triggered by a cloud-cloud collision. By using three-dimensional, isothermal, magnetohydrodynamics simulations with the effect of self-gravity, we demonstrate that massive, gravitationally unstable, molecular cloud cores are formed behind the strong shock waves induced by the cloud-cloud collision. We find that the massive mol...

  16. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the
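
    A light-weight MPI wrapper of the kind described, in which each rank launches one single-threaded payload so that a single batch job fills the cores of a worker node, can be sketched as follows; this is an illustrative sketch using mpi4py with a hypothetical payload command, not the actual PanDA pilot code (Python):

        from mpi4py import MPI
        import subprocess
        import sys

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each MPI rank runs one independent single-threaded payload (placeholder command and arguments).
        payload = ["python", "run_event_generation.py", "--seed", str(rank)]  # hypothetical payload
        ret = subprocess.call(payload)

        # Gather exit codes on rank 0 so the wrapper can report overall success to the batch system.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            sys.exit(0 if all(code == 0 for code in codes) else 1)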

  17. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing at over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  18. GP views on strategies to cope with increasing workload: a qualitative interview study.

    Science.gov (United States)

    Fisher, Rebecca Fr; Croxson, Caroline Hd; Ashdown, Helen F; Hobbs, Fd Richard

    2017-02-01

    The existence of a crisis in primary care in the UK is in little doubt. GP morale and job satisfaction are low, and workload is increasing. In this challenging context, finding ways for GPs to manage that workload is imperative. To explore what existing or potential strategies are described by GPs for dealing with their workload, and their views on the relative merits of each. Semi-structured, qualitative interviews with GPs working within NHS England. All GPs working within NHS England were eligible. Of those who responded to advertisements, a maximum-variation sample was selected and interviewed until data saturation was reached. Data were analysed thematically. Responses were received from 171 GPs, and, from these, 34 were included in the study. Four main themes emerged for workload management: patient-level, GP-level, practice-level, and systems-level strategies. A need for patients to take greater responsibility for self-management was clear, but many felt that GPs should not be responsible for this education. Increased delegation of tasks was felt to be key to managing workload, with innovative use of allied healthcare professionals and extended roles for non-clinical staff suggested. Telephone triage was a commonly used tool for managing workload, although not all participants found this helpful. This in-depth qualitative study demonstrates an encouraging resilience among GPs. They are proactively trying to manage workload, often using innovative local strategies. GPs do not feel that they can do this alone, however, and called repeatedly for increased recruitment and more investment in primary care. © British Journal of General Practice 2017.

  19. Patient Safety Incidents and Nursing Workload.

    Science.gov (United States)

    Carlesi, Katya Cuadros; Padilha, Kátia Grillo; Toffoletto, Maria Cecília; Henriquez-Roldán, Carlos; Juan, Monica Andrea Canales

    2017-04-06

    To identify the relationship between the workload of the nursing team and the occurrence of patient safety incidents linked to nursing care in a public hospital in Chile. Quantitative, analytical, cross-sectional research through review of medical records. The estimation of workload in Intensive Care Units (ICUs) was performed using the Therapeutic Interventions Scoring System (TISS-28), and for the other services we used the nurse/patient and nursing assistant/patient ratios. Descriptive univariate and multivariate analyses were performed. For the multivariate analysis we used principal component analysis and Pearson correlation. 879 post-discharge clinical records and the workload of 85 nurses and 157 nursing assistants were analyzed. The overall incident rate was 71.1%. A high positive correlation was found between the workload variables (r = 0.9611 to r = 0.9919) and the rate of falls (r = 0.8770). The medication error rates, mechanical containment incidents and self-removal of invasive devices were not correlated with the workload. The workload was high in all units except the intermediate care unit. Only the rate of falls was associated with the workload.

  20. Patient Workload Profile: National Naval Medical Center (NNMC), Bethesda, MD.

    Science.gov (United States)

    1980-06-01

    WESTEC Services Inc., San Diego, CA (W. T. Rasmussen, H. W. ...). This report provides site workload data for the National Naval Medical Center (NNMC) within the following functional support areas: Patient Appointment ..., on managing medical and patient data, thereby offering the health care provider and administrator more powerful capabilities in dealing with and ...

  1. QUALITY OF NURSING DOCUMENTATION AND NURSE’S OBJECTIVE WORKLOAD BASED ON TIME AND MOTION STUDY (TMS)

    Directory of Open Access Journals (Sweden)

    Mira Amelynda Prakosa

    2017-02-01

    Full Text Available Introduction. The quality of documentation can decrease because of poor completion of documentation on admission. Workload is one of the factors that can influence the completion of documentation on admission. This study aimed to analyse the correlation between nurses' objective workload and the quality of nursing documentation in RSU Haji. Method. The design of this study was descriptive correlational with a cross-sectional approach. The population was the nurses working in the Marwah 3 and 4 inpatient wards of RSU Haji Surabaya. A sample of 14 respondents was selected by simple random sampling. The independent variable was nurses' objective workload and the dependent variable was the quality of nursing documentation. The data were analysed using logistic regression. Result. Nurses' objective workload in RSU Haji was 72%. There was no correlation between nurses' objective workload and the completeness of nursing documentation (p = 0.999), nor between nurses' objective workload and the accuracy of nursing documentation (p = 0.999). Discussion. This study concluded that nurses' objective workload was low and that the quality of nursing documentation was reasonably accurate and complete. Future researchers should provide precise operational definitions so that the factors affecting the quality of documentation can be captured and the workload of the nurses in RSU Haji becomes ideal. Keywords: nurses, quality of nursing documentation, objective workload

  2. Understanding I/O workload characteristics of a Peta-scale storage system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Youngjae [ORNL]; Gunasekaran, Raghul [ORNL]

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
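    The reported Pareto behaviour of bandwidth usage and request inter-arrival times can be explored with standard fitting tools; the sketch below assumes hypothetical inter-arrival times rather than actual Spider trace data:

```python
import numpy as np
from scipy import stats

# Hypothetical I/O request inter-arrival times in seconds (real values would
# come from controller-side traces of the storage system).
inter_arrivals = np.array([0.02, 0.03, 0.05, 0.08, 0.11, 0.20, 0.35, 0.60, 1.20, 2.50])

# Fit a Pareto distribution; fixing loc=0 so the scale parameter plays the
# role of the minimum observable inter-arrival time.
shape, loc, scale = stats.pareto.fit(inter_arrivals, floc=0)
print(f"Pareto shape (alpha) = {shape:.2f}, scale (x_min) = {scale:.3f}")

# Draw a synthetic arrival process with the fitted parameters, e.g. to drive
# a synthesized workload generator.
synthetic = stats.pareto.rvs(shape, loc=loc, scale=scale, size=1000, random_state=0)
```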

  3. A Review of System Security in Cloud Computing Technology (TINJAUAN KEAMANAN SISTEM PADA TEKNOLOGI CLOUD COMPUTING)

    Directory of Open Access Journals (Sweden)

    Yuli Fauziah

    2014-01-01

    Full Text Available From an information technology perspective, cloud computing can be defined as a technology that uses the internet as a computing resource that can be requested by users, delivered as a service whose server centre is virtual, i.e. resides in the cloud (the internet itself). Many companies want to move their applications and storage into cloud computing. The technology has become a trend among IT researchers and practitioners seeking to explore the potential it can offer to the wider public. However, because the technology is still new, many security issues remain. One of them is theft of information: the theft of data stored in the storage of applications that use cloud computing technology. The losses incurred by users of this technology can be very large, because the stolen information may include confidential company data and other important data. Measures to prevent such data theft include avoiding security threats in the form of data loss or leakage and the hijacking of accounts or services; identity management and access control are primary requirements for enterprise SaaS cloud computing. One method used to secure the authentication and authorisation aspects of cloud computing applications and services is single sign-on (SSO) technology. Single sign-on (SSO) is a technology that allows network users to access resources in a network using only one user account. It is in strong demand, particularly in very large, heterogeneous networks, and also in cloud computing networks. With SSO, a user only needs to authenticate once to gain access to all the services available within the network. Keywords: Storage, Application, Software as a

  4. Federated and Cloud Enabled Resources for Data Management and Utilization

    Science.gov (United States)

    Rankin, R.; Gordon, M.; Potter, R. G.; Satchwill, B.

    2011-12-01

    The emergence of cloud computing over the past three years has led to a paradigm shift in how data can be managed, processed and made accessible. Building on the federated data management system offered through the Canadian Space Science Data Portal (www.cssdp.ca), we demonstrate how heterogeneous and geographically distributed data sets and modeling tools have been integrated to form a virtual data center and computational modeling platform that has services for data processing and visualization embedded within it. We also discuss positive and negative experiences in utilizing Eucalyptus and OpenStack cloud applications, and job scheduling facilitated by Condor and Star Cluster. We summarize our findings by demonstrating use of these technologies in the Cloud Enabled Space Weather Data Assimilation and Modeling Platform CESWP (www.ceswp.ca), which is funded through Canarie's (canarie.ca) Network Enabled Platforms program in Canada.

  5. Eucalyptus Cloud to Remotely Provision e-Governance Applications

    Directory of Open Access Journals (Sweden)

    Sreerama Prabhu Chivukula

    2011-01-01

    Full Text Available Remote rural areas are constrained by the lack of a reliable power supply, essential for setting up advanced IT infrastructure such as servers or storage; therefore, cloud computing comprising an Infrastructure-as-a-Service (IaaS) is well suited to provide such IT infrastructure in remote rural areas. Additional cloud layers of Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) can be added above IaaS. A cluster-based IaaS cloud can be set up by using the open-source middleware Eucalyptus in the data centres of NIC. Data centres of the central and state governments can be integrated with State Wide Area Networks and NICNET together to form the e-governance grid of India. Web service repositories at centre, state, and district level can be built over the national e-governance grid of India. Using Globus Toolkit, we can achieve stateful web services with speed and security. Adding the cloud layer over the e-governance grid will make a grid-cloud environment possible through Globus Nimbus. Service delivery can be in terms of web services delivery through heterogeneous client devices. Data mining using Weka4WS and DataMiningGrid can produce meaningful knowledge discovery from data. In this paper, a plan of action is provided for the implementation of the above proposed architecture.

  6. On the mechanism of Venusian atmosphere cloud layer formation

    International Nuclear Information System (INIS)

    Zhulanov, Yu.V.; Mukhin, L.M.; Nenarokov, D.F.

    1987-01-01

    Results of investigations into the aerosol component of the Venusian atmosphere using a photoelectric counter in the 63-47 km height range at the Vega-1 and Vega-2 interplanetary stations are presented. The experiment was carried out on June 11 and 15, 1985, on the night-time side of the planet. Both devices were switched on at a height of 63 km, and data on the number of detected particles >= 0.5 μm in diameter were transmitted every 0.43 s (corresponding to 8-20 m spatial resolution). Study of the particle concentration profiles obtained at an interval of 4 days (one period of rotation of the Venusian atmosphere) permits the following conclusions on the structure of the Venusian cloud layer on the night side: 1) the cloud layer includes two distinct cloud strata, the upper in the 56-60 km height range and the lower in the 49.5-46.5 km height range, separated by a zone of low particle concentrations (… cm-3); 2) this structure of the cloud layer is rather stable, and concentration profiles obtained at an interval of 4 days agree well with each other; 3) the concentration profiles, particularly in the lower cloud stratum, are subject to heavy fluctuations, indicating essential spatial heterogeneity of the particle concentration field.

  7. Workload Capacity: A Response Time-Based Measure of Automation Dependence.

    Science.gov (United States)

    Yamani, Yusuke; McCarley, Jason S

    2016-05-01

    An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, COR(t) and CAND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
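    The capacity measures referred to above are built from empirical RT distributions. The sketch below implements the standard OR-variant coefficient, C(t) = H_team(t) / [H_human(t) + H_aid(t)], where H(t) = -log S(t) is the cumulative hazard of the response times; it uses synthetic RT samples and is not the authors' exact estimator or parameterisation:

```python
import numpy as np

def survivor(rts, t_grid):
    """Empirical survivor function S(t) = P(RT > t) on a common time grid."""
    rts = np.asarray(rts, dtype=float)
    return np.array([(rts > t).mean() for t in t_grid])

def capacity_or(rt_team, rt_human, rt_aid, t_grid):
    """OR-variant capacity coefficient from cumulative hazards H(t) = -log S(t)."""
    eps = 1e-12
    H = lambda rts: -np.log(np.clip(survivor(rts, t_grid), eps, 1.0))
    return H(rt_team) / np.maximum(H(rt_human) + H(rt_aid), eps)

# Hypothetical RT samples in seconds: aided "team" trials, unaided human, aid alone.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.3, 2.0, 50)
c_t = capacity_or(rng.gamma(4, 0.15, 500), rng.gamma(5, 0.15, 500),
                  rng.gamma(5, 0.14, 500), t_grid)
# C(t) > 1 suggests efficient (super-capacity) use of the aid; C(t) < 1 limited capacity.
```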

  8. Severity and workload of nursing with patients seeking admission to an intensive care unit

    Directory of Open Access Journals (Sweden)

    Meire Cristina Novelli e Castro

    2017-12-01

    Full Text Available Abstract Objective: To identify the severity and nursing workload of adult patients seeking admission to an Intensive Care Unit (ICU). Methods: A cross-sectional study with a quantitative, exploratory and prospective approach was performed in a hospital in the state of São Paulo. Demographic data on patients were collected; the Simplified Acute Physiology Score III (SAPS III) was applied to assess patient severity and the Nursing Activities Score (NAS) was used to evaluate nursing workload, between July and August 2014. Results: The overall mean SAPS III score was 30.52 ± 18.39 and the mean NAS was 58.18 ± 22.29. Patients admitted to the ICU showed higher severity and higher nursing workload than non-admitted patients; non-admitted patients had an NAS of 53.85. Conclusion: The nursing workload of patients who were not admitted to the ICU was also high, which makes it important to evaluate workload in other contexts where severely ill patients are found.

  9. Relationship between cloud radiative forcing, cloud fraction and cloud albedo, and new surface-based approach for determining cloud albedo

    OpenAIRE

    Y. Liu; W. Wu; M. P. Jensen; T. Toto

    2011-01-01

    This paper focuses on three interconnected topics: (1) the quantitative relationship between surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo; (2) a surface-based approach for measuring cloud albedo; (3) multiscale (diurnal, annual and inter-annual) variations and covariations of surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo. An analytical expression is first derived to quantify the relationship between cloud radiative forcing, cloud fractio...

  10. The enhancement and suppression of immersion mode heterogeneous ice-nucleation by solutes.

    Science.gov (United States)

    Whale, Thomas F; Holden, Mark A; Wilson, Theodore W; O'Sullivan, Daniel; Murray, Benjamin J

    2018-05-07

    Heterogeneous nucleation of ice from aqueous solutions is an important yet poorly understood process in multiple fields, not least the atmospheric sciences, where it impacts the formation and properties of clouds. In the atmosphere, ice-nucleating particles are usually, if not always, mixed with soluble material. However, the impact of this soluble material on ice nucleation is poorly understood. In the atmospheric community the current paradigm for freezing under mixed-phase cloud conditions is that dilute solutions will not influence heterogeneous freezing. By testing combinations of nucleators and solute molecules we have demonstrated that 0.015 M solutions (predicted melting point depression …) nucleate ice up to around 3 °C warmer than they do in pure water. In contrast, dilute solutions of certain alkali metal halides can dramatically depress freezing points for the same nucleators. At 0.015 M, solutes can enhance or deactivate the ice-nucleating ability of a microcline feldspar across a range of more than 10 °C, which corresponds to a change in active site density of more than a factor of 10⁵. This concentration was chosen for a survey across multiple solute-nucleator combinations since it had a minimal colligative impact on freezing and is relevant for activating cloud droplets. Other nucleators, for instance a silica gel, are unaffected by these 'solute effects', to within experimental uncertainty. This split in response to the presence of solutes indicates that different mechanisms of ice nucleation occur on the different nucleators, or that surface modification of relevance to ice nucleation proceeds in different ways for different nucleators. These solute effects on immersion mode ice nucleation may be of importance in the atmosphere, as sea salt and ammonium sulphate are common cloud condensation nuclei (CCN) for cloud droplets and are internally mixed with ice-nucleating particles in mixed-phase clouds. In addition, we propose a pathway dependence where

  11. Shift manager workload assessment - A case study

    International Nuclear Information System (INIS)

    Berntson, K.; Kozak, A.; Malcolm, J. S.

    2006-01-01

    In early 2003, Bruce Power restarted two of its previously laid up units in the Bruce A generating station, Units 3 and 4. However, due to challenges relating to the availability of personnel with active Shift Manager licenses, an alternate shift structure was proposed to ensure the safe operation of the station. This alternate structure resulted in a redistribution of responsibility, and a need to assess the resulting changes in workload. Atomic Energy of Canada Limited was contracted to perform a workload assessment based on the new shift structure, and to provide recommendations, if necessary, to ensure Shift Managers had sufficient resources available to perform their required duties. This paper discusses the performance of that assessment, and lessons learned as a result of the work performed during the Restart project. (authors)

  12. Cardiovascular responses to plyometric exercise are affected by workload in athletes.

    Science.gov (United States)

    Arazi, Hamid; Asadi, Abbas; Mahdavi, Seyed Amir; Nasiri, Seyed Omid Mirfalah

    2014-01-01

    With regard to blood pressure responses to plyometric exercise and the decrease in blood pressure after exercise (post-exercise hypotension), the influence of different workloads of plyometric exercise on blood pressure is not clear. The purpose of this investigation was to examine the effects of low, moderate and high workloads of plyometric exercise on post-exercise systolic (SBP) and diastolic blood pressure (DBP), heart rate (HR) and rate-pressure product (RPP) responses in athletes. Ten male athletes (age: 22.6 ± 0.5 years; height: 178.2 ± 3.3 cm; body mass: 75.2 ± 2.8 kg) underwent plyometric exercise protocols involving 5 × 10 reps (Low Workload - LW), 10 × 10 reps (Moderate Workload - MW), and 15 × 10 reps (High Workload - HW) of depth jump exercise from a 50-cm box on 3 non-consecutive days. After each exercise session, SBP, DBP and HR were measured every 10 min for a period of 70 min. No significant differences were observed among post-exercise SBP and DBP when the protocols (LW, MW and HW) were compared. The MW and HW protocols showed greater increases in HR compared with LW, and HW showed greater increases than LW in RPP post-exercise. After plyometric exercise, the HW condition thus produced greater increases in HR and RPP, and strength and conditioning professionals and athletes should keep in mind that a high workload of plyometric exercise induces greater cardiovascular responses.

  13. Scaling deep learning workloads: NVIDIA DGX-1/Pascal and Intel Knights Landing

    Energy Technology Data Exchange (ETDEWEB)

    Gawande, Nitin A.; Landwehr, Joshua B.; Daily, Jeffrey A.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.

    2017-08-24

    Deep Learning (DL) algorithms have become ubiquitous in data analytics. As a result, major computing vendors --- including NVIDIA, Intel, AMD, and IBM --- have architectural road-maps influenced by DL workloads. Furthermore, several vendors have recently advertised new computing products as accelerating large DL workloads. Unfortunately, it is difficult for data scientists to quantify the potential of these different products. This paper provides a performance and power analysis of important DL workloads on two major parallel architectures: NVIDIA DGX-1 (eight Pascal P100 GPUs interconnected with NVLink) and Intel Knights Landing (KNL) CPUs interconnected with Intel Omni-Path or Cray Aries. Our evaluation consists of a cross section of convolutional neural net workloads: CifarNet, AlexNet, GoogLeNet, and ResNet50 topologies using the Cifar10 and ImageNet datasets. The workloads are vendor-optimized for each architecture. Our analysis indicates that although GPUs provide the highest overall performance, the gap can close for some convolutional networks; and the KNL can be competitive in performance/watt. We find that NVLink facilitates scaling efficiency on GPUs. However, its importance is heavily dependent on neural network architecture. Furthermore, for weak-scaling --- sometimes encouraged by restricted GPU memory --- NVLink is less important.

  14. Remotely Sensed High-Resolution Global Cloud Dynamics for Predicting Ecosystem and Biodiversity Distributions.

    Directory of Open Access Journals (Sweden)

    Adam M Wilson

    2016-03-01

    Full Text Available Cloud cover can influence numerous important ecological processes, including reproduction, growth, survival, and behavior, yet our assessment of its importance at the appropriate spatial scales has remained remarkably limited. If captured over a large extent yet at sufficiently fine spatial grain, cloud cover dynamics may provide key information for delineating a variety of habitat types and predicting species distributions. Here, we develop new near-global, fine-grain (≈1 km) monthly cloud frequencies from 15 y of twice-daily Moderate Resolution Imaging Spectroradiometer (MODIS) satellite images that expose spatiotemporal cloud cover dynamics of previously undocumented global complexity. We demonstrate that cloud cover varies strongly in its geographic heterogeneity and that the direct, observation-based nature of cloud-derived metrics can improve predictions of habitats, ecosystem, and species distributions with reduced spatial autocorrelation compared to commonly used interpolated climate data. These findings support the fundamental role of remote sensing as an effective lens through which to understand and globally monitor the fine-grain spatial variability of key biodiversity and ecosystem properties.

  15. Modelling ice microphysics of mixed-phase clouds

    Science.gov (United States)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    The low-level Arctic mixed-phase clouds have a significant role in the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proved difficult. In order to address this problem of modelling mixed-phase clouds, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good basis for a feasible solution (Tonttila et al., Geosci. Model Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with the sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. The SALSA module has recently been upgraded to also include ice microphysics. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of the formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  16. Effects of mental workload on physiological and subjective responses during traffic density monitoring: A field study.

    Science.gov (United States)

    Fallahi, Majid; Motamedzade, Majid; Heidarimoghadam, Rashid; Soltanian, Ali Reza; Miyake, Shinji

    2016-01-01

    This study evaluated operators' mental workload while they monitored traffic density in a city traffic control center. To determine mental workload, physiological signals (ECG, EMG) were recorded and the NASA Task Load Index (TLX) was administered for 16 operators. The results showed that the operators experienced a larger mental workload during high traffic density (HTD) than during low traffic density (LTD). The traffic control center stressors caused changes in heart rate variability features and EMG amplitude, and the average workload score was significantly higher in HTD conditions than in LTD conditions. The findings indicated that increasing traffic congestion had a significant effect on HR, RMSSD, SDNN, the LF/HF ratio, and EMG amplitude. The results suggest that when operators' workload increases, their mental fatigue and stress levels increase and their mental health deteriorates. Therefore, it may be necessary to implement an ergonomic program to manage mental health. Furthermore, by evaluating mental workload, the traffic control center director can organize the center's traffic congestion operators to sustain an appropriate mental workload and improve traffic control management. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
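    The time-domain heart rate variability features mentioned (SDNN, RMSSD) are standard quantities computed directly from RR intervals; a minimal sketch with hypothetical values:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV features from a sequence of RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                       # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # short-term (beat-to-beat) variability
    mean_hr = 60000.0 / rr.mean()               # beats per minute
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}

# Hypothetical RR intervals recorded during a high-traffic-density period.
print(hrv_time_domain([812, 798, 776, 804, 790, 768, 781, 795]))
```

    The LF/HF ratio additionally requires a spectral estimate of the interpolated RR series and is omitted here for brevity.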

  17. Association of physical workload and leisure time physical activity with incident mobility limitations

    DEFF Research Database (Denmark)

    Mänty, M; Møller, A; Nilsson, C

    2014-01-01

    OBJECTIVES: To examine individual as well as joint associations of physical workload and leisure time physical activity with incident mobility limitations in initially well-functioning middle-aged workers. METHODS: This study is based on 6-year follow-up data of the Danish Longitudinal Study on Work, Unemployment and Health. Physical workload was reported at baseline and categorised as light, moderate or heavy. Baseline leisure time physical activity level was categorised as sedentary or active following the current recommendations on physical activity. Incidence of mobility limitations … with higher workload regardless of level of leisure time physical activity, although the risks tended to be higher among those with sedentary leisure time compared with their active counterparts. All in all, the risk for onset of mobility limitations was highest among those with heavy workload combined…

  18. 3D Cloud Radiative Effects on Polarized Reflectances

    Science.gov (United States)

    Cornet, C.; Matar, C.; C-Labonnote, L.; Szczap, F.; Waquet, F.; Parol, F.; Riedi, J.

    2017-12-01

    As recognized in the last IPCC report, clouds have a major importance in the climate budget and need to be better characterized. Remote sensing observations are a way to obtain either global observations of clouds from satellites or a very fine description of clouds from airborne measurements. An increasing number of radiometers are planned to measure polarized reflectances in addition to total reflectances, since this information is very helpful for retrieving aerosol or cloud properties. In the near future, for example, the Multi-viewing, Multi-channel, Multi-polarization Imager (3MI) will be part of the EPS-SG Eumetsat-ESA mission. It will provide multi-angular polarimetric measurements from visible to shortwave infrared wavelengths. An airborne prototype, OSIRIS (Observing System Including Polarization in the Solar Infrared Spectrum), is also presently being developed at the Laboratoire d'Optique Atmospherique and has already participated in several measurement campaigns. In order to analyze the measured signal suitably, it is necessary to have realistic and accurate models able to simulate polarized reflectances. The 3DCLOUD model (Szczap et al., 2014) was used to generate three-dimensional synthetic clouds, and the 3D radiative transfer model 3DMCPOL (Cornet et al., 2010) to compute realistic polarized reflectances. From these simulations, we investigate the effects of 3D cloud structures and heterogeneity on the polarized angular signature often used to retrieve cloud or aerosol properties. We show that 3D effects are weak for flat clouds but become quite significant for fractional clouds above the ocean. The 3D effects are quite different according to the observation scale. At the airborne scale (a few tens of meters), solar illumination effects can lead to polarized cloud reflectance values higher than the saturation limit predicted by the homogeneous cloud assumption. In the cloud gaps, corresponding to shadowed areas of the total reflectances, the polarized signal can also be enhanced

  19. A Dynamic Resource Scheduling Method Based on Fuzzy Control Theory in Cloud Environment

    Directory of Open Access Journals (Sweden)

    Zhijia Chen

    2015-01-01

    Full Text Available The resources in a cloud environment have features such as large scale, diversity, and heterogeneity. Moreover, user requirements for cloud computing resources are commonly characterized by uncertainty and imprecision. Therefore, to improve the quality of cloud computing services, not only should traditional standards such as cost and bandwidth be satisfied, but particular emphasis should also be laid on some extended standards such as system friendliness. This paper proposes a dynamic resource scheduling method based on fuzzy control theory. Firstly, a resource requirements prediction model is established. Then the relationships between resource availability and resource requirements are derived. Afterwards, fuzzy control theory is adopted to realize a friendly match between user needs and resource availability. Results show that this approach improves resource scheduling efficiency and the quality of service (QoS) of cloud computing.
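    A toy version of such a fuzzy scaling decision, with illustrative triangular membership functions and a two-rule base (not the controller proposed in the paper), might look like this:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def scaling_decision(utilization, predicted_demand):
    """Return a scaling signal in [-1, 1]: negative releases VMs, positive adds VMs.
    Membership breakpoints are illustrative assumptions only."""
    low_u, high_u = tri(utilization, -0.2, 0.2, 0.6), tri(utilization, 0.4, 0.8, 1.2)
    low_d, high_d = tri(predicted_demand, -0.2, 0.2, 0.6), tri(predicted_demand, 0.4, 0.8, 1.2)

    # Rule 1: high utilization AND high predicted demand -> scale up (+1).
    # Rule 2: low utilization AND low predicted demand  -> scale down (-1).
    scale_up = min(high_u, high_d)
    scale_down = min(low_u, low_d)

    total = scale_up + scale_down
    return 0.0 if total == 0 else (scale_up - scale_down) / total  # weighted defuzzification

print(scaling_decision(utilization=0.85, predicted_demand=0.90))  # close to +1: add capacity
print(scaling_decision(utilization=0.15, predicted_demand=0.10))  # close to -1: release capacity
```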

  20. Effects of workload on teachers' functioning: A moderated mediation model including sleeping problems and overcommitment.

    Science.gov (United States)

    Huyghebaert, Tiphaine; Gillet, Nicolas; Beltou, Nicolas; Tellier, Fanny; Fouquereau, Evelyne

    2018-06-14

    This study investigated the mediating role of sleeping problems in the relationship between workload and outcomes (emotional exhaustion, presenteeism, job satisfaction, and performance), with overcommitment examined as a moderator of the relationship between workload and sleeping problems. We conducted an empirical study using a sample of 884 teachers. Consistent with our predictions, results revealed that the positive indirect effects of workload on emotional exhaustion and presenteeism, and the negative indirect effects of workload on job satisfaction and performance, through sleeping problems, were only significant among overcommitted teachers. Workload and overcommitment were also directly related to all four outcomes; specifically, both were positively related to emotional exhaustion and presenteeism and negatively related to job satisfaction and performance. Theoretical contributions, perspectives, and implications for practice are discussed. Copyright © 2018 John Wiley & Sons, Ltd.

  1. EEG correlates of task engagement and mental workload in vigilance, learning, and memory tasks.

    Science.gov (United States)

    Berka, Chris; Levendowski, Daniel J; Lumicao, Michelle N; Yau, Alan; Davis, Gene; Zivkovic, Vladimir T; Olmstead, Richard E; Tremoulet, Patrice D; Craven, Patrick L

    2007-05-01

    The ability to continuously and unobtrusively monitor levels of task engagement and mental workload in an operational environment could be useful in identifying more accurate and efficient methods for humans to interact with technology. This information could also be used to optimize the design of safer, more efficient work environments that increase motivation and productivity. The present study explored the feasibility of monitoring electroencephalographic (EEG) indices of engagement and workload acquired unobtrusively and quantified during performance of cognitive tests. EEG was acquired from 80 healthy participants with a wireless sensor headset (F3-F4, C3-C4, Cz-POz, F3-Cz, Fz-C3, Fz-POz) during tasks including: multi-level forward/backward digit span, grid recall, trails, mental addition, a 20-min 3-choice vigilance test, and image learning and memory tests. EEG metrics for engagement and workload were calculated for each 1-s epoch of EEG. Across participants, engagement but not workload decreased over the 20-min vigilance test. Engagement and workload were significantly increased during the encoding period of verbal and image learning and memory tests when compared with the recognition/recall period. Workload but not engagement increased linearly as the level of difficulty increased in the forward and backward digit span, grid recall, and mental addition tests. EEG measures correlated with both subjective and objective performance metrics. These data, in combination with previous studies, suggest that EEG engagement reflects information gathering, visual processing, and allocation of attention. EEG workload increases with increasing working memory load and during problem solving, integration of information, and analytical reasoning, and may be more reflective of executive functions. Inspection of EEG on a second-by-second timescale revealed associations between workload and engagement levels when aligned with specific task events, providing preliminary evidence that second
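    The engagement and workload indices used in the study come from a proprietary classifier; as a generic, hedged illustration, a coarse workload proxy is often built from per-epoch EEG band-power ratios, e.g. theta/alpha:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Power of one EEG channel in the [lo, hi] Hz band, from a Welch PSD."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def workload_proxy(epoch, fs=256):
    """Theta/alpha ratio for one epoch; a generic proxy, NOT the study's metric."""
    return band_power(epoch, fs, 4, 8) / band_power(epoch, fs, 8, 13)

# One second of synthetic single-channel EEG, for illustration only.
rng = np.random.default_rng(1)
print(workload_proxy(rng.standard_normal(256), fs=256))
```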

  2. Investigating workload and its relationship with fatigue among train drivers in Keshesh section of Iranian Railway Company

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available Introduction: Train driving is a high-responsibility job in the railway industry. Train drivers need different cognitive functions such as vigilance, object detection, memory, planning and decision-making. A high level of fatigue is one of the causal factors of accidents among train drivers. Numerous factors can affect train drivers' fatigue, but a high level of workload is a key factor. Therefore, the aim of the present study was to investigate workload and its relationship with fatigue among train drivers in the Keshesh section of the Iranian Railway Company. Material and Method: This descriptive analytical study was conducted among 100 train drivers in the Keshesh section of the Iranian railway industry, selected by simple random sampling. The NASA-TLX workload scale and the Samn-Perelli fatigue scale were used to investigate workload and fatigue, respectively. Data were analyzed by paired t-test and Spearman correlation coefficient. Result: According to the NASA-TLX results, effort and mental demand, with mean scores of 74.22 and 73.31, were the most important attributes of workload among train drivers. No significant relationship was observed between workload and level of fatigue before departure and half an hour before reaching the destination station (P>0.05). However, the relationship between workload and level of fatigue half an hour before the end of the shift (on the way back to the origin station) was statistically significant (P=0.048) among the sample population. Conclusion: Effort and mental demand were the most important attributes of workload among train drivers. By focusing on these two variables and adopting fatigue management programs, fatigue and workload can be controlled and the efficiency of the whole system can be enhanced accordingly.
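    The NASA-TLX score referred to here is conventionally computed as a weighted average of six subscale ratings, with weights obtained from 15 pairwise comparisons; a minimal sketch with made-up numbers:

```python
# Six NASA-TLX subscales, each rated 0-100 (illustrative values).
ratings = {
    "mental": 75, "physical": 35, "temporal": 60,
    "performance": 40, "effort": 80, "frustration": 55,
}

# How many of the 15 pairwise comparisons each dimension won (weights sum to 15).
weights = {
    "mental": 5, "physical": 1, "temporal": 2,
    "performance": 2, "effort": 4, "frustration": 1,
}

assert sum(weights.values()) == 15
overall_tlx = sum(ratings[d] * weights[d] for d in ratings) / 15.0
print(f"Weighted NASA-TLX workload score: {overall_tlx:.1f}")
```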

  3. Scheduling Multilevel Deadline-Constrained Scientific Workflows on Clouds Based on Cost Optimization

    Directory of Open Access Journals (Sweden)

    Maciej Malawski

    2015-01-01

    Full Text Available This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from synthetic workflows and from general-purpose cloud benchmarks, as well as from data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
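    The flavour of such a level-based cost model can be sketched as a small integer program; the version below uses PuLP instead of AMPL/CMPL, with invented instance types, prices and throughputs:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, PULP_CBC_CMD

# Hypothetical VM types: (hourly price in USD, tasks completed per hour).
vm_types = {"small": (0.10, 4), "large": (0.40, 20)}
tasks = 100          # identical tasks in one workflow level
deadline_hours = 3   # deadline for this level

prob = LpProblem("workflow_level_cost", LpMinimize)
n = {v: LpVariable(f"n_{v}", lowBound=0, cat="Integer") for v in vm_types}

# Objective: minimise the rental cost of the machines for the billed hours.
prob += lpSum(n[v] * vm_types[v][0] * deadline_hours for v in vm_types)
# Deadline constraint: the rented machines must finish all tasks in time.
prob += lpSum(n[v] * vm_types[v][1] * deadline_hours for v in vm_types) >= tasks

prob.solve(PULP_CBC_CMD(msg=False))
print({v: int(n[v].value()) for v in vm_types}, "cost:", prob.objective.value())
```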

  4. Reducing feedback requirements of workload control

    NARCIS (Netherlands)

    Henrich, Peter; Land, Martin; van der Zee, Durk; Gaalman, Gerard

    2004-01-01

    The workload control concept is known as a robust shop floor control concept. It is especially suited for the dynamic environment of small- and medium-sized enterprises (SMEs) within the make-to-order sector. Before orders are released to the shop floor, they are collected in an ‘order pool’. To

  5. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new technologies and methodologies in computing. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of the science driven by 'Big Data' will further push the need for computational resources. To fulfil their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  6. Glassy aerosols with a range of compositions nucleate ice heterogeneously at cirrus temperatures

    Directory of Open Access Journals (Sweden)

    T. W. Wilson

    2012-09-01

    Full Text Available Atmospheric secondary organic aerosol (SOA) is likely to exist in a semi-solid or glassy state, particularly at low temperatures and humidities. Previously, it has been shown that glassy aqueous citric acid aerosol is able to nucleate ice heterogeneously under conditions relevant to cirrus in the tropical tropopause layer (TTL). In this study we test whether glassy aerosol distributions with a range of chemical compositions heterogeneously nucleate ice under cirrus conditions. Three single-component aqueous solution aerosols (raffinose, 4-hydroxy-3-methoxy-DL-mandelic acid (HMMA) and levoglucosan) and one multi-component aqueous solution aerosol (raffinose mixed with five dicarboxylic acids and ammonium sulphate) were studied in both the liquid and glassy states at a large cloud simulation chamber. The investigated organic compounds have similar functionality to oxidised organic material found in atmospheric aerosol and have estimated temperature/humidity-induced glass transition thresholds that fall within the range predicted for atmospheric SOA. A small fraction of aerosol particles of all compositions were found to nucleate ice heterogeneously in the deposition mode at temperatures relevant to the TTL (<200 K). Raffinose and HMMA, which form glasses at higher temperatures, nucleated ice heterogeneously at temperatures as high as 214.6 and 218.5 K, respectively. We present the calculated ice-active surface site density, ns, of the aerosols tested here, and also of glassy citric acid aerosol, as a function of relative humidity with respect to ice (RHi). We also propose a parameterisation which can be used to estimate heterogeneous ice nucleation by glassy aerosol in cirrus cloud models up to ~220 K. Finally, we show that heterogeneous nucleation by glassy aerosol may compete with ice nucleation on mineral dust particles in mid-latitude cirrus.

  7. Automatic Scaling Hadoop in the Cloud for Efficient Process of Big Geospatial Data

    Directory of Open Access Journals (Sweden)

    Zhenlong Li

    2016-09-01

    Full Text Available Efficient processing of big geospatial data is crucial for tackling global and regional challenges such as climate change and natural disasters, but it is challenging not only due to the massive data volume but also due to the intrinsic complexity and high dimensions of the geospatial datasets. While traditional computing infrastructure does not scale well with the rapidly increasing data volume, Hadoop has attracted increasing attention in geoscience communities for handling big geospatial data. Recently, many studies were carried out to investigate adopting Hadoop for processing big geospatial data, but how to adjust the computing resources to efficiently handle the dynamic geoprocessing workload was barely explored. To bridge this gap, we propose a novel framework to automatically scale the Hadoop cluster in the cloud environment to allocate the right amount of computing resources based on the dynamic geoprocessing workload. The framework and auto-scaling algorithms are introduced, and a prototype system was developed to demonstrate the feasibility and efficiency of the proposed scaling mechanism using Digital Elevation Model (DEM) interpolation as an example. Experimental results show that this auto-scaling framework could (1) significantly reduce the computing resource utilization (by 80% in our example) while delivering similar performance as a full-powered cluster; and (2) effectively handle the spike processing workload by automatically increasing the computing resources to ensure the processing is finished within an acceptable time. Such an auto-scaling approach provides a valuable reference to optimize the performance of geospatial applications to address data- and computational-intensity challenges in GIScience in a more cost-efficient manner.
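    The core of such an auto-scaling mechanism can be sketched as a simple control loop; the workload metric, thresholds and resize hook below are assumptions for illustration, not the framework's published algorithm:

```python
import time

def desired_nodes(pending_containers, containers_per_node, min_nodes=2, max_nodes=20):
    """Cluster size needed to absorb the pending workload, clamped to a safe range."""
    needed = -(-pending_containers // containers_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))

def autoscale_loop(get_pending_containers, resize_cluster, poll_seconds=60):
    """Poll a workload metric and resize the Hadoop cluster when the target changes.
    `get_pending_containers` could wrap the YARN ResourceManager REST API and
    `resize_cluster` could add or remove cloud VMs; both are assumed callbacks."""
    current = None
    while True:
        target = desired_nodes(get_pending_containers(), containers_per_node=8)
        if target != current:
            resize_cluster(target)
            current = target
        time.sleep(poll_seconds)
```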

  8. A reliability-based maintenance technicians' workloads optimisation model with stochastic consideration

    Science.gov (United States)

    Ighravwe, D. E.; Oke, S. A.; Adebiyi, K. A.

    2016-06-01

    The growing interest in research on technicians' workloads is probably associated with the recent surge in competition, prompted by unprecedented technological development that triggers changes in customer tastes and preferences for industrial goods. In a quest for business improvement, this worldwide intense competition in industries has stimulated theories and practical frameworks that seek to optimise performance in workplaces. In line with this drive, the present paper proposes an optimisation model which considers technicians' reliability and complements the factory information obtained. The information used emerged from technicians' productivity and earned values, using a multi-objective modelling approach. Since technicians are expected to carry out routine and stochastic maintenance work, we consider these workloads as constraints. The influence of training, fatigue and the experiential knowledge of technicians on workload management was considered. These workloads were combined with maintenance policy in optimising reliability, productivity and earned values using the goal programming approach. Practical datasets were utilised in studying the applicability of the proposed model in practice. It was observed that our model was able to generate information that practising maintenance engineers can apply in making more informed decisions on technicians' management.
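    Goal programming of the kind described introduces deviation variables around each target and minimises the undesired deviations subject to hard workload constraints; a toy sketch with illustrative coefficients (not the paper's data):

```python
from pulp import LpProblem, LpMinimize, LpVariable

# Decision variables: weekly hours assigned to routine and stochastic maintenance.
routine = LpVariable("routine_hours", lowBound=0)
stochastic = LpVariable("stochastic_hours", lowBound=0)

# Targets and deviation variables for two goals (numbers are made up).
goals = {"productivity": 60, "earned_value": 50}
under = {g: LpVariable(f"under_{g}", lowBound=0) for g in goals}
over = {g: LpVariable(f"over_{g}", lowBound=0) for g in goals}

prob = LpProblem("technician_workload_goals", LpMinimize)
# Objective: minimise weighted under-achievement of the goals.
prob += 2 * under["productivity"] + 1 * under["earned_value"]

# Goal constraints: achieved value + under - over == target.
prob += 0.8 * routine + 0.6 * stochastic + under["productivity"] - over["productivity"] == 60
prob += 0.5 * routine + 0.9 * stochastic + under["earned_value"] - over["earned_value"] == 50

# Hard workload constraint: a technician's week is capped at 40 hours.
prob += routine + stochastic <= 40

prob.solve()
print(routine.value(), stochastic.value(), {g: under[g].value() for g in goals})
```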

  9. Role of Academic Managers in Workload and Performance Management of Academic Staff: A Case Study

    Science.gov (United States)

    Graham, Andrew T.

    2016-01-01

    This small-scale case study focused on academic managers to explore the ways in which they control the workload of academic staff and the extent to which they use the workload model in performance management of academic staff. The links that exist between the workload and performance management were explored to confirm or refute the conceptual…

  10. Perceptions of mental workload in Dutch university employees of different ages: a focus group study

    Science.gov (United States)

    2013-01-01

    Background: As academic workload seems to be increasing, many studies have examined factors that contribute to the mental workload of academics. Age-related differences in work motives and intellectual ability may lead to differences in experienced workload and in the way employees experience work features. This study aims to obtain a better understanding of age differences in sources of mental workload. 33 academics from one faculty discussed causes of workload during focus group interviews, stratified by age. Findings: Among our participants, the influence of ageing seems most evident in employees’ actions and reactions, while the causes of workload mentioned seemed largely similar. These individual reactions to workload may also be driven by differences in tenure. The most positively assessed work characteristics were interaction with colleagues and students, and autonomy. The aspects most often indicated as increasing the workload were organisational aspects seen as obstacles to ‘getting the best out of people’ and the feeling that overtime seems unavoidable. Many employees indicated feeling stretched between the ‘greediness’ of the organisation and their own high working standards, and many fear being assigned even less time for research if they do not meet the rigorous output criteria. Moreover, despite great efforts on their part, promotion opportunities seem limited. A more pronounced role for the supervisor seems to be appreciated by employees of all ages, although the specific interpretation varied between individuals and career stages. Conclusions: To preserve good working conditions and quality of work, it seems important to scrutinize the output requirements and tenure-based needs for employee supervision. PMID:23506458

  11. Viewing the workload of vigilance through the lenses of the NASA-TLX and the MRQ.

    Science.gov (United States)

    Finomore, Victor S; Shaw, Tyler H; Warm, Joel S; Matthews, Gerald; Boles, David B

    2013-12-01

    The aim of this study was to compare the effectiveness of a new index of perceived mental workload, the Multiple Resource Questionnaire (MRQ), with the standard measure of workload used in the study of vigilance, the NASA Task Load Index (NASA-TLX). The NASA-TLX has been used extensively to demonstrate that vigilance tasks impose a high level of workload on observers. However, this instrument does not specify the information-processing resources needed for task performance. The MRQ offers a tool to measure the workload associated with vigilance assignments in which such resources can be identified. Two experiments were performed in which factors known to influence task demand were varied. Included were the detection of stimulus presence or absence, detecting critical signals by means of successive-type (absolute judgment) and simultaneous-type (comparative judgment) discriminations, and operating under multitask vs. single-task conditions. The MRQ paralleled the NASA-TLX in showing that vigilance tasks generally induce high levels of workload and that workload scores are greater in detecting stimulus absence than presence and in making successive as compared to simultaneous-type discriminations. Additionally, the MRQ was more effective than the NASA-TLX in reflecting higher workload in the context of multitask than in single-task conditions. The resource profiles obtained with MRQ fit well with the nature of the vigilance tasks employed, testifying to the scale's content validity. The MRQ may be a meaningful addition to the NASA-TLX for measuring the workload of vigilance assignments. By uncovering knowledge representation associated with different tasks, the MRQ may aid in designing operational vigilance displays.

  12. The global influence of dust mineralogical composition on heterogeneous ice nucleation in mixed-phase clouds

    International Nuclear Information System (INIS)

    Hoose, C; Lohmann, U; Erdin, R; Tegen, I

    2008-01-01

    Mineral dust is the dominant natural ice-nucleating aerosol. Its ice nucleation efficiency depends on the mineralogical composition. We show the first sensitivity studies with a global climate model and a three-dimensional dust mineralogy. Results show that, depending on the dust mineralogical composition, coating with soluble material from anthropogenic sources can lead to quasi-deactivation of natural dust ice nuclei. This effect counteracts the increased cloud glaciation by anthropogenic black carbon particles. The resulting aerosol indirect effect through the glaciation of mixed-phase clouds by black carbon particles is small (+0.1 W m⁻² in the shortwave top-of-the-atmosphere radiation in the northern hemisphere).

  13. Medical Resident Workload at a Multidisciplinary Hospital in Iran

    Directory of Open Access Journals (Sweden)

    Anahita Sadeghi

    2014-12-01

    Full Text Available Introduction: Medical resident workload has been shown to be associated with learning efficiency and patient satisfaction. However, there is limited evidence about it in developing countries. This study aimed to evaluate medical resident workload in a multidisciplinary teaching hospital in Tehran, Iran. Methods: All medical residents at Shariati Hospital, a teaching hospital affiliated with Tehran University of Medical Sciences, who were working between November and December 2011 were enrolled in this cross-sectional study. A self-reported questionnaire was used to gather information about their duty hours (including daily activities and shifts) and financial issues. Results: 135 (52.5%) of 257 residents responded to the questionnaire. 72 (53.3%) residents were in surgical departments and 63 (46.7%) were in non-surgical departments. Mean duty hours per month were significantly higher in surgical (350.8 ± 76.7) than non-surgical (300.6 ± 74.2) departments (p=0.001). Three cardiology residents (a non-surgical group; 5.7%) and 30 residents in surgical groups (41%; p<0.001) declared a number of "on-calls in the hospital" higher than the number approved in the curriculum. The majority of residents (97.8%) declared that their salary was not sufficient to manage their lives and that they needed other financial resources. Conclusion: Medical residents at teaching hospitals in Iran suffer from high workloads and low income. There is a need to reduce medical resident workload and increase salaries to improve work-life balance and finances.

  14. Simulation of idealized warm fronts and life cycles of cirrus clouds

    Science.gov (United States)

    Bense, Vera; Spichtinger, Peter

    2013-04-01

    One of the generally accepted formation mechanisms of cirrus clouds is connected to warm fronts. As the warm air glides over the cold air mass, it cools through adiabatic expansion and reaches ice supersaturation, which eventually leads to the formation of ice clouds. Within this work, the EULAG model (see e.g. Prusa et al., 2008) was used to study the formation and life cycles of cirrus clouds in idealized 2-dimensional simulations. The microphysical processes were modelled with the double-moment bulk scheme of Spichtinger and Gierens (2009), which describes homogeneous and heterogeneous nucleation. In order to represent the gradual gliding of the air along the front, a ramp was chosen as topography. The sensitivity of cloud formation to different environmental conditions such as wind shear, aerosol distribution and the slope of the front was analyzed. In the case of cirrus cloud formation, its persistence after the passage of the front was studied, as well as changes in microphysical properties such as ice crystal number concentrations. References: Prusa, J.M., P.K. Smolarkiewicz, A.A. Wyszogrodzki, 2008: EULAG, a computational model for multiscale flows. Computers and Fluids, doi:10.1016/j.compfluid.2007.12.001. Spichtinger, P., K. M. Gierens, 2009: Modelling of cirrus clouds - Part 1a: Model description and validation, Atmos. Chem. Phys., 9, 685-706.

  15. The Influence of Nursing Faculty Workloads on Faculty Retention: A Case Study

    Science.gov (United States)

    Wood, Jennifer J.

    2013-01-01

    Nursing faculty workloads have come to the forefront of discussion in nursing education. The National League for Nursing (NLN) has made nursing faculty workloads a high priority in nursing education. Included in the priorities are areas of creating reform through innovations in nursing education, evaluating reform through evaluation research, and…

  16. An overview of the Ice Nuclei Research Unit Jungfraujoch/Cloud and Aerosol Characterization Experiment 2013 (INUIT-JFJ/CLACE-2013)

    Science.gov (United States)

    Schneider, Johannes

    2014-05-01

    Ice formation in mixed-phase tropospheric clouds is an essential prerequisite for the formation of precipitation at mid-latitudes. Ice formation at temperatures warmer than -35°C is only possible via heterogeneous ice nucleation, but up to now the exact pathways of heterogeneous ice formation are not sufficiently well understood. The research unit INUIT (Ice NUcleation research unIT), funded by the Deutsche Forschungsgemeinschaft (DFG FOR 1525), was established in 2012 with the objective of investigating heterogeneous ice nucleation by a combination of laboratory studies, model calculations and field experiments. The main field campaign of the INUIT project (INUIT-JFJ) was conducted at the High Alpine Research Station Jungfraujoch (Swiss Alps, 3580 m asl) during January and February 2013, in collaboration with several international partners in the framework of CLACE2013. The instrumentation included a large set of aerosol chemical and physical analysis instruments (particle counters, particle sizers, particle mass spectrometers, cloud condensation nuclei counters, ice nucleus counters, etc.) that were operated inside the Sphinx laboratory and sampled in mixed-phase clouds through two ice-selective inlets (Ice-CVI, ISI) as well as through a total aerosol inlet that was used for out-of-cloud aerosol measurements. Besides the on-line measurements, samples for off-line analysis (ESEM, STXM) were also taken in and out of clouds. Furthermore, several cloud microphysics instruments were operated outside the Sphinx laboratory. First results indicate that a large fraction of ice residues sampled from mixed-phase clouds contain organic material, but also mineral dust. Soot and lead were not found to be enriched in ice residues. The concentration of heterogeneous ice nuclei was found to be variable (ranging between … and 100 per liter) and to be strongly dependent on the operating conditions of the respective IN counter. The number size distribution of ice residues appears to

  17. Cognitive Workload and Psychophysiological Parameters During Multitask Activity in Helicopter Pilots

    OpenAIRE

    Gaetan, Sophie; Dousset, Erick; Marqueste, Tanguy; Bringoux, Lionel; Bourdin, Christophe; Vercher, Jean-Louis; Besson, Patricia

    2015-01-01

    BACKGROUND: Helicopter pilots are involved in a complex multitask activity, implying overuse of cognitive resources, which may result in piloting task impairment or in decision-making failure. Studies usually investigate this phenomenon in well-controlled, poorly ecological situations by focusing on the correlation between physiological values and either cognitive workload or emotional state. This study aimed at jointly exploring workload induced by a realistic simulat...

  18. Workload Management Strategies for Online Educators

    Science.gov (United States)

    Crews, Tena B.; Wilkinson, Kelly; Hemby, K. Virginia; McCannon, Melinda; Wiedmaier, Cheryl

    2008-01-01

    With increased use of online education, both students and instructors are adapting to the online environment. Online educators must adjust to the change in responsibilities required to teach online, as it is quite intensive during the designing, teaching, and revising stages. The purpose of this study is to examine and update workload management…

  19. Cognitive and affective components of mental workload: Understanding the effects of each on human decision making behavior

    Science.gov (United States)

    Nygren, Thomas E.

    1992-01-01

    Human factors and ergonomics researchers have recognized for some time the increasing importance of understanding the role of the construct of mental workload in flight research. Current models of mental workload suggest that it is a multidimensional and complex construct, but one that has proved difficult to measure. Because of this difficulty, emphasis has usually been placed on using direct reports through subjective measures such as rating scales to assess levels of mental workload. The NASA Task Load Index (NASA/TLX, Hart and Staveland) has been shown to be a highly reliable and sensitive measure of perceived mental workload. But a problem with measures like TLX is that there is still considerable disagreement as to what it is about mental workload that these subjective measures are actually measuring. The empirical use of subjective workload measures has largely been to provide estimates of the cognitive components of the actual mental workload required for a task. However, my research suggests that these measures may, in fact have greater potential in accurately assessing the affective components of workload. That is, for example, TLX may be more likely to assess the positive and negative feelings associated with varying workload levels, which in turn may potentially influence the decision making behavior that directly bears on performance and safety issues. Pilots, for example, are often called upon to complete many complex tasks that are high in mental workload, stress, and frustration, and that have significant dynamic decision making components -- often ones that involve risk as well.

  20. The impact of crosstalk on three-dimensional laparoscopic performance and workload.

    Science.gov (United States)

    Sakata, Shinichiro; Grove, Philip M; Watson, Marcus O; Stevenson, Andrew R L

    2017-10-01

    This is the first study to explore the effects of crosstalk from 3D laparoscopic displays on technical performance and workload. We studied crosstalk at magnitudes that may have been tolerated during laparoscopic surgery. Participants were 36 voluntary doctors. To minimize floor effects, participants completed their surgery rotations, and a laparoscopic suturing course for surgical trainees. We used a counterbalanced, within-subjects design in which participants were randomly assigned to complete laparoscopic tasks in one of six unique testing sequences. In a simulation laboratory, participants were randomly assigned to complete laparoscopic 'navigation in space' and suturing tasks in three viewing conditions: 2D, 3D without ghosting and 3D with ghosting. Participants calibrated their exposure to crosstalk as the maximum level of ghosting that they could tolerate without discomfort. The Randot® Stereotest was used to verify stereoacuity. The study performance metric was time to completion. The NASA TLX was used to measure workload. Normal threshold stereoacuity (40-20 second of arc) was verified in all participants. Comparing optimal 3D with 2D viewing conditions, mean performance times were 2.8 and 1.6 times faster in laparoscopic navigation in space and suturing tasks respectively (p< .001). Comparing optimal 3D with suboptimal 3D viewing conditions, mean performance times were 2.9 times faster in both tasks (p< .001). Mean workload in 2D was 1.5 and 1.3 times greater than in optimal 3D viewing, for navigation in space and suturing tasks respectively (p< .001). Mean workload associated with suboptimal 3D was 1.3 times greater than optimal 3D in both laparoscopic tasks (p< .001). There was no significant relationship between the magnitude of ghosting score, laparoscopic performance and workload. Our findings highlight the advantages of 3D displays when used optimally, and their shortcomings when used sub-optimally, on both laparoscopic performance and workload.

  1. Mental workload associated with operating an agricultural sprayer: an empirical approach.

    Science.gov (United States)

    Dey, A K; Mann, D D

    2011-04-01

    Agricultural spraying involves two major tasks: guiding a sprayer in response to a GPS navigation device, and simultaneous monitoring of rear-attached booms under various illumination and terrain difficulty levels. The aim of the present study was to investigate the effect of illumination, task difficulty, and task level on the mental workload of an individual operating an agricultural sprayer in response to a commercial GPS lightbar, and to explore the sensitivity of the NASA-TLX and SSWAT subjective rating scales in discriminating the subjective experienced workload under various task, illumination, and difficulty levels. Mental workload was measured using performance measures (lateral root mean square error and reaction time), physiological measures (0.1 Hz power of HRV, latency of the P300 component of event-related potential, and eye-glance behavior), and two subjective rating scales (NASA-TLX and SSWAT). Sixteen male university students participated in this experiment, and a fixed-base high-fidelity agricultural tractor simulator was used to create a simulated spraying task. All performance measures, the P300 latency, and subjective rating scales showed a common trend that mental workload increased with the change in illumination from day to night, with task difficulty from low to high, and with task type from single to dual. The 0.1 Hz power of HRV contradicted the performance measures. Eye-glance data showed that under night illumination, participants spent more time looking at the lightbar for guidance information. A similar trend was observed with the change in task type from single to dual. Both subjective rating scales showed a common trend of increasing mental workload with the change in illumination, difficulty, and task levels. However, the SSWAT scale was more sensitive than the NASA-TLX scale. With the change in illumination, difficulty, and task levels, participants spent more mental resources to meet the increased task demand; hence, the

  2. Climate impact of anthropogenic aerosols on cirrus clouds

    Science.gov (United States)

    Penner, J.; Zhou, C.

    2017-12-01

    Cirrus clouds have a net warming effect on the atmosphere and cover about 30% of the Earth's area. Aerosol particles initiate ice formation in the upper troposphere through modes of action that include homogeneous freezing of solution droplets, heterogeneous nucleation on solid particles immersed in a solution, and deposition nucleation of vapor onto solid particles. However, the efficacy with which particles act to form cirrus particles in a model depends on the representation of updrafts. Here, we use a representation of updrafts based on observations of gravity waves, and follow ice formation/evaporation during both updrafts and downdrafts. We examine the possible change in ice number concentration from anthropogenic soot originating from surface sources of fossil fuel and biomass burning and from aircraft particles that have previously formed ice in contrails. Results show that fossil fuel and biomass burning soot aerosols with this version exert a radiative forcing of -0.15±0.02 Wm-2 while aircraft aerosols that have been pre-activated within contrails exert a forcing of -0.20±0.06 Wm-2, but it is possible to decrease these estimates of forcing if a larger fraction of dust particles act as heterogeneous ice nuclei. In addition aircraft aerosols may warm the climate if a large fraction of these particles act as ice nuclei. The magnitude of the forcing in cirrus clouds can be comparable to the forcing exerted by anthropogenic aerosols on warm clouds. This assessment could therefore support climate models with high sensitivity to greenhouse gas forcing, while still allowing the models to fit the overall historical temperature change.

  3. EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes

    Science.gov (United States)

    Beal, Carole R.; Galan, Federico Cirett

    2012-01-01

    In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…

  4. Workload and Marital Satisfaction over Time: Testing Lagged Spillover and Crossover Effects during the Newlywed Years.

    Science.gov (United States)

    Lavner, Justin A; Clark, Malissa A

    2017-08-01

    Although many studies have found that higher workloads covary with lower levels of marital satisfaction, the question of whether workloads may also predict changes in marital satisfaction over time has been overlooked. To address this question, we investigated the lagged association between own and partner workload and marital satisfaction using eight waves of data collected every 6 months over the first four years of marriage from 172 heterosexual couples. Significant crossover, but not spillover, effects were found, indicating that partners of individuals with higher workloads at one time point experience greater declines in marital satisfaction by the following time point compared to the partners of individuals with lower workloads. These effects were not moderated by gender or parental status. These findings suggest that higher partner workloads can prove deleterious for relationship functioning over time and call for increased attention to the long-term effects of spillover and crossover from work to marital functioning.

  5. Cloud-Top Entrainment in Stratocumulus Clouds

    Science.gov (United States)

    Mellado, Juan Pedro

    2017-01-01

    Cloud entrainment, the mixing between cloudy and clear air at the boundary of clouds, constitutes one paradigm for the relevance of small scales in the Earth system: By regulating cloud lifetimes, meter- and submeter-scale processes at cloud boundaries can influence planetary-scale properties. Understanding cloud entrainment is difficult given the complexity and diversity of the associated phenomena, which include turbulence entrainment within a stratified medium, convective instabilities driven by radiative and evaporative cooling, shear instabilities, and cloud microphysics. Obtaining accurate data at the required small scales is also challenging, for both simulations and measurements. During the past few decades, however, high-resolution simulations and measurements have greatly advanced our understanding of the main mechanisms controlling cloud entrainment. This article reviews some of these advances, focusing on stratocumulus clouds, and indicates remaining challenges.

  6. Planning and management of cloud computing networks

    Science.gov (United States)

    Larumbe, Federico

    The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers were that of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce application deployment time and improve interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a
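
    The thesis abstract above frames placement of cloud workloads as a multi-objective optimization over cost, QoS, power, and CO2 emissions. As a purely illustrative sketch of that idea (not the author's model; all names, weights, and figures below are hypothetical), a weighted-sum objective can be used to compare candidate data centers:

```python
# Illustrative sketch: choosing a data center for a workload by minimizing a
# weighted sum of response time, power, and CO2. Names and numbers are made up.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    latency_ms: float      # expected response time to the user base
    power_kw: float        # incremental power draw for hosting the workload
    co2_kg_per_kwh: float  # carbon intensity of the local grid

def placement_cost(dc: DataCenter, w_latency=1.0, w_power=0.5, w_co2=2.0) -> float:
    """Weighted-sum objective combining QoS, power, and environmental impact."""
    return (w_latency * dc.latency_ms
            + w_power * dc.power_kw
            + w_co2 * dc.power_kw * dc.co2_kg_per_kwh)

candidates = [
    DataCenter("dc-hydro", latency_ms=40, power_kw=12, co2_kg_per_kwh=0.02),
    DataCenter("dc-coal",  latency_ms=15, power_kw=12, co2_kg_per_kwh=0.82),
]
best = min(candidates, key=placement_cost)
print(best.name)  # the site with the lowest weighted cost under these example weights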

  7. Cloud type comparisons of AIRS, CloudSat, and CALIPSO cloud height and amount

    Directory of Open Access Journals (Sweden)

    B. H. Kahn

    2008-03-01

    The precision of the two-layer cloud height fields derived from the Atmospheric Infrared Sounder (AIRS) is explored and quantified for a five-day set of observations. Coincident profiles of vertical cloud structure by CloudSat, a 94 GHz profiling radar, and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) are compared to AIRS for a wide range of cloud types. Bias and variability in cloud height differences are shown to have dependence on cloud type, height, and amount, as well as whether CloudSat or CALIPSO is used as the comparison standard. The CloudSat-AIRS biases and variability range from −4.3 to 0.5±1.2–3.6 km for all cloud types. Likewise, the CALIPSO-AIRS biases range from 0.6–3.0±1.2–3.6 km (−5.8 to −0.2±0.5–2.7 km) for clouds ≥7 km (<7 km). The upper layer of AIRS has the greatest sensitivity to Altocumulus, Altostratus, Cirrus, Cumulonimbus, and Nimbostratus, whereas the lower layer has the greatest sensitivity to Cumulus and Stratocumulus. Although the bias and variability generally decrease with increasing cloud amount, the ability of AIRS to constrain cloud occurrence, height, and amount is demonstrated across all cloud types for many geophysical conditions. In particular, skill is demonstrated for thin Cirrus, as well as some Cumulus and Stratocumulus, cloud types infrared sounders typically struggle to quantify. Furthermore, some improvements in the AIRS Version 5 operational retrieval algorithm are demonstrated. However, limitations in AIRS cloud retrievals are also revealed, including the existence of spurious Cirrus near the tropopause and low cloud layers within Cumulonimbus and Nimbostratus clouds. Likely causes of spurious clouds are identified and the potential for further improvement is discussed.

  8. Bitwise dimensional co-clustering for analytical workloads

    NARCIS (Netherlands)

    S. Baumann (Stephan); P.A. Boncz (Peter); K.-U. Sattler

    2016-01-01

    Analytical workloads in data warehouses often include heavy joins where queries involve multiple fact tables in addition to the typical star-patterns, dimensional grouping and selections. In this paper we propose a new processing and storage framework called Bitwise Dimensional Co-clustering (BDCC)

  9. A comparison of shock-cloud and wind-cloud interactions: effect of increased cloud density contrast on cloud evolution

    Science.gov (United States)

    Goldsmith, K. J. A.; Pittard, J. M.

    2018-05-01

    The similarities, or otherwise, of a shock or wind interacting with a cloud of density contrast χ = 10 were explored in a previous paper. Here, we investigate such interactions with clouds of higher density contrast. We compare the adiabatic hydrodynamic interaction of a Mach 10 shock with a spherical cloud of χ = 10³ with that of a cloud embedded in a wind with identical parameters to the post-shock flow. We find that initially there are only minor morphological differences between the shock-cloud and wind-cloud interactions, compared to when χ = 10. However, once the transmitted shock exits the cloud, the development of a turbulent wake and fragmentation of the cloud differs between the two simulations. On increasing the wind Mach number, we note the development of a thin, smooth tail of cloud material, which is then disrupted by the fragmentation of the cloud core and subsequent 'mass-loading' of the flow. We find that the normalized cloud mixing time (t_mix) is shorter at higher χ. However, a strong Mach number dependence on t_mix and the normalized cloud drag time, t′_drag, is not observed. Mach-number-dependent values of t_mix and t′_drag from comparable shock-cloud interactions converge towards the Mach-number-independent time-scales of the wind-cloud simulations. We find that high χ clouds can be accelerated up to 80-90 per cent of the wind velocity and travel large distances before being significantly mixed. However, complete mixing is not achieved in our simulations and at late times the flow remains perturbed.

  10. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and what a SWOT analysis of the cloud looks like. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  11. How to reduce workload--augmented reality to ease the work of air traffic controllers.

    Science.gov (United States)

    Hofmann, Thomas; König, Christina; Bruder, Ralph; Bergner, Jörg

    2012-01-01

    In the future, air traffic will rise, and the workload of the controllers will rise with it. One of the tasks of the BMWi research project is to determine how to ensure both safe air traffic and a reasonable workload for the air traffic controllers. The goal of this project was to find ways to reduce the workload (and stress) of the controllers and allow safe air traffic, especially at large hub airports, by implementing augmented reality visualization and interaction.

  12. Workload measurement: diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nuss, Wayne [The Prince Charles Hospital, Chermside, QLD (Australia). Dept. of Medical Imaging

    1993-06-01

    Departments of medical imaging, as with many other service departments in the health industry, are being asked to develop performance indicators. No longer are they assured that annual budget allocations will be forthcoming without justification or some output measurement indicators that will substantiate a claim for a reasonable share of resources. The human resource is the most valuable and the most expensive to any department. This paper provides a brief overview of the research and implementation of a radiographer workload measurement system that was commenced in the Brisbane North Health Region. 2 refs., 10 tabs.

  13. Physical workload and thoughts of retirement.

    Science.gov (United States)

    Perkiö-Mäkelä, Merja; Hirvonen, Maria

    2012-01-01

    The aim of this paper is to present Finnish employees' opinions on continuing work until retirement pension and after the age of 63, and to find out if physical workload is related to these opinions. Altogether 39% of men and 40% of women had never had thoughts of early retirement, and 59% of both men and women claimed that they would consider working beyond the age of 63. Own health (20%); financial gain such as salary and better pension (19%); meaningful, interesting and challenging work (15%); flexible working hours or part-time work (13%); lighter workload (13%); good work community (8%); and good work environment (6%) were stated as factors affecting the decision to continue working after the age of 63. Employees whose work involved low physical workload had fewer thoughts of early retirement and had considered continuing work after the age of 63 more often than those whose work involved high physical loads. Own health in particular was stated as a reason to consider continuing work by employees whose work was physically demanding.

  14. Bitwise dimensional co-clustering for analytical workloads

    NARCIS (Netherlands)

    Baumann, Stephan; Boncz, Peter; Sattler, Kai Uwe

    2016-01-01

    Analytical workloads in data warehouses often include heavy joins where queries involve multiple fact tables in addition to the typical star-patterns, dimensional grouping and selections. In this paper we propose a new processing and storage framework called bitwise dimensional co-clustering (BDCC)

  15. Dynamic workload peak detection for slack management

    NARCIS (Netherlands)

    Milutinovic, A.; Goossens, Kees; Smit, Gerardus Johannes Maria; Kuper, Jan; Kuper, J.

    2009-01-01

    This paper presents an analytical study of dynamism and the possibilities for slack exploitation by dynamic power management. We introduce a specific workload decomposition method for the work required by (streaming) applications processing data tokens (e.g. video frames) with work behaviour patterns

  16. Mental workload and cognitive task automaticity: an evaluation of subjective and time estimation metrics.

    Science.gov (United States)

    Liu, Y; Wickens, C D

    1994-11-01

    The evaluation of mental workload is becoming increasingly important in system design and analysis. The present study examined the structure and assessment of mental workload in performing decision and monitoring tasks by focusing on two mental workload measurements: subjective assessment and time estimation. The task required the assignment of a series of incoming customers to the shortest of three parallel service lines displayed on a computer monitor. The subject was either in charge of the customer assignment (manual mode) or was monitoring an automated system performing the same task (automatic mode). In both cases, the subjects were required to detect the non-optimal assignments that they or the computer had made. Time pressure was manipulated by the experimenter to create fast and slow conditions. The results revealed a multi-dimensional structure of mental workload and a multi-step process of subjective workload assessment. The results also indicated that subjective workload was more influenced by the subject's participatory mode than by the factor of task speed. The time estimation intervals produced while performing the decision and monitoring tasks had significantly greater length and larger variability than those produced while either performing no other tasks or performing a well practised customer assignment task. This result seemed to indicate that time estimation was sensitive to the presence of perceptual/cognitive demands, but not to response related activities to which behavioural automaticity has developed.

  17. Clarifying the dominant sources and mechanisms of cirrus cloud formation.

    Science.gov (United States)

    Cziczo, Daniel J; Froyd, Karl D; Hoose, Corinna; Jensen, Eric J; Diao, Minghui; Zondlo, Mark A; Smith, Jessica B; Twohy, Cynthia H; Murphy, Daniel M

    2013-06-14

    Formation of cirrus clouds depends on the availability of ice nuclei to begin condensation of atmospheric water vapor. Although it is known that only a small fraction of atmospheric aerosols are efficient ice nuclei, the critical ingredients that make those aerosols so effective have not been established. We have determined in situ the composition of the residual particles within cirrus crystals after the ice was sublimated. Our results demonstrate that mineral dust and metallic particles are the dominant source of residual particles, whereas sulfate and organic particles are underrepresented, and elemental carbon and biological materials are essentially absent. Further, composition analysis combined with relative humidity measurements suggests that heterogeneous freezing was the dominant formation mechanism of these clouds.

  18. Comparative analysis of methods for workload assessment of the main control room operators of NPP

    International Nuclear Information System (INIS)

    Georgiev, V.; Petkov, G.

    2008-01-01

    The paper presents benchmarking workload results obtained by a method for operator workload assessment, the NASA Task Load Index, and a method for human error probability assessment, Performance Evaluation of Teamwork. Based on the archives of FSS-1000 training on the accident “Main Steam Line Tube Rupture at the WWER-1000 Containment”, the capacities of the two methods for direct and indirect workload assessment are evaluated.

  19. Driving with varying secondary task levels: mental workload, behavioural effects, and task prioritization

    NARCIS (Netherlands)

    Schaap, Nina; van Arem, Bart; van der Horst, Richard; Brookhuis, Karel; Alkim, T.P.; Arentze, T.

    2010-01-01

    Advanced Driver Assistance (ADA) Systems may provide a solution for safety-critical traffic situations. But these systems are new additions into the vehicle that might increase drivers’ mental workload. How do drivers behave in situations with high mental workload, and do they actively prioritize

  20. Combat surgical workload in Operation Iraqi Freedom and Operation Enduring Freedom: The definitive analysis.

    Science.gov (United States)

    Turner, Caryn A; Stockinger, Zsolt T; Gurney, Jennifer M

    2017-07-01

    Relatively few publications exist on surgical workload in the deployed military setting. This study analyzes US military combat surgical workload in Iraq and Afghanistan to gain a more thorough understanding of surgical training gaps and personnel requirements. A retrospective analysis of the Department of Defense Trauma Registry was performed for all Role 2 (R2) and Role 3 (R3) military treatment facilities from January 2001 to May 2016. International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes were grouped into 18 categories based on functional surgical skill sets. The 189,167 surgical procedures identified were stratified by role of care, month, and year. Percentiles were calculated for the number of procedures for each skill set. A literature search was performed for publications documenting combat surgical workload during the same period. A total of 23,548 surgical procedures were performed at R2 facilities, while 165,619 surgical procedures were performed at R3 facilities. The most common surgical procedures performed overall were soft tissue (37.5%), orthopedic (13.84%), abdominal (13.01%), and vascular (6.53%). The least common surgical procedures performed overall were cardiac (0.23%), peripheral nervous system (0.53%), and spine (0.34%). Mean surgical workload at any point in time clearly underrepresented those units in highly kinetic areas, at times by an order of magnitude or more. The published literature always demonstrated workloads well in excess of the 50th percentile for the relevant time period. The published literature on combat surgical workload represents the high end of the spectrum of deployed surgical experience. These trends in surgical workload provide vital information that can be used to determine the manpower needs of future conflicts in ever-changing operational tempo environments. Our findings provide surgical types and surgical workload requirements that will be useful in surgical training and

  1. The evaluation of team lifting on physical work demands and workload in ironworkers.

    Science.gov (United States)

    van der Molen, Henk F; Visser, Steven; Kuijer, P Paul F M; Faber, Gert; Hoozemans, Marco J M; van Dieën, Jaap H; Frings-Dresen, Monique H W

    2012-01-01

    Lifting and carrying heavy loads occur frequently among ironworkers and result in high prevalence and incidence rates of low back complaints, injuries and work-disability. From a health perspective, little information is available on the effect of team lifting on work demands and workload. Therefore, the objective of this study was to compare the effects of team lifting of maximally 50 kg by two ironworkers (T50) with team lifting of maximally 100 kg by four ironworkers (T100). This study combined a field and laboratory study with the following outcome measures: duration and frequency of tasks and activities, energetic workload, perceived discomfort and maximal compression forces (Fc peak) on the low back. The physical work demands and workload of an individual ironworker during manual handling of rebar materials of 100 kg with four workers did not differ from the manual handling of rebar materials of 50 kg with two workers, with the exception of low back discomfort and Fc peak. The biomechanical workload of the low back exceeded the NIOSH threshold limit of 3400 N for both T50 and T100. Therefore, mechanical transport or other effective design solutions should be considered to reduce the biomechanical workload of the low back and the accompanying health risks among ironworkers.

  2. Estimating workload using EEG spectral power and ERPs in the n-back task

    Science.gov (United States)

    Brouwer, Anne-Marie; Hogervorst, Maarten A.; van Erp, Jan B. F.; Heffelaar, Tobias; Zimmerman, Patrick H.; Oostenveld, Robert

    2012-08-01

    Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular the alpha and theta band) and event-related potentials (ERPs) (in particular the P300) can be used as a measure of mental work or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one (n instances) before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features or a combination (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter) as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
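
    The classification approach described above (spectral power features used to distinguish high from low n-back load) can be illustrated with a minimal sketch. This is not the authors' pipeline: the sampling rate, channel count, band limits, and placeholder data are assumptions, and scipy/scikit-learn are used only as convenient stand-ins.

```python
# Sketch (not the authors' code): classifying high vs. low n-back workload
# from EEG band power. Epochs, labels, and channel layout are assumed.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 256  # sampling rate in Hz (assumed)

def band_power(epoch, lo, hi):
    """Mean power in [lo, hi] Hz for one epoch of shape (channels, samples)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=1)

def features(epoch):
    # Theta (4-8 Hz) and alpha (8-12 Hz) power per channel, as in many
    # spectral workload studies.
    return np.concatenate([band_power(epoch, 4, 8), band_power(epoch, 8, 12)])

# epochs: (n_epochs, n_channels, n_samples); labels: 0 = low load, 1 = high load
epochs = np.random.randn(40, 8, 2 * fs)   # placeholder data
labels = np.repeat([0, 1], 20)

X = np.array([features(e) for e in epochs])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())
```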

  3. A cloud-ozone data product from Aura OMI and MLS satellite measurements

    Directory of Open Access Journals (Sweden)

    J. R. Ziemke

    2017-11-01

    Ozone within deep convective clouds is controlled by several factors involving photochemical reactions and transport. Gas-phase photochemical reactions and heterogeneous surface chemical reactions involving ice, water particles, and aerosols inside the clouds all contribute to the distribution and net production and loss of ozone. Ozone in clouds is also dependent on convective transport that carries low-troposphere/boundary-layer ozone and ozone precursors upward into the clouds. Characterizing ozone in thick clouds is an important step for quantifying relationships of ozone with tropospheric H2O, OH production, and cloud microphysics/transport properties. Although measuring ozone in deep convective clouds from either aircraft or balloon ozonesondes is largely impossible due to extreme meteorological conditions associated with these clouds, it is possible to estimate ozone in thick clouds using backscattered solar UV radiation measured by satellite instruments. Our study combines Aura Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) satellite measurements to generate a new research product of monthly-mean ozone concentrations in deep convective clouds between 30° S and 30° N for October 2004–April 2016. These measurements represent mean ozone concentration primarily in the upper levels of thick clouds and reveal key features of cloud ozone including: persistent low ozone concentrations in the tropical Pacific of ∼10 ppbv or less; concentrations of up to 60 ppbv or greater over landmass regions of South America, southern Africa, Australia, and India/east Asia; connections with tropical ENSO events; and intraseasonal/Madden–Julian oscillation variability. Analysis of OMI aerosol measurements suggests a cause and effect relation between boundary-layer pollution and elevated ozone inside thick clouds over landmass regions including southern Africa and India/east Asia.

  4. Workload differences across command levels and emergency response organizations during a major joint training exercise.

    Science.gov (United States)

    Prytz, Erik G; Rybing, Jonas; Jonson, Carl-Oscar

    2016-01-01

    This study reports on an initial test using a validated workload measurement method, the NASA Task Load Index (TLX), as an indicator of joint emergency exercise effectiveness. Prior research on emergency exercises indicates that exercises must be challenging, ie, result in high workload, to be effective. However, this is often problematic with some participants being underloaded and some overloaded. The NASA TLX was used to test for differences in workload between commanders and subordinates and among three different emergency response organizations during a joint emergency exercise. Questionnaire-based evaluation with professional emergency responders. The study was performed in conjunction with a large-scale interorganizational joint emergency exercise in Sweden. A total of 20 participants from the rescue services, 12 from the emergency medical services, and 12 from the police participated in the study (N=44). Ten participants had a command-level role during the exercise and the remaining 34 were subordinates. The main outcome measures were the workload subscales of the NASA TLX: mental demands, physical demands, temporal demands, performance, effort, and frustration. The results showed that the organizations experienced different levels of workload, that the commanders experienced a higher workload than the subordinates, and that two out of three organizations fell below the twenty-fifth percentile of average workload scores compiled from 237 prior studies. The results support the notion that the NASA TLX could be a useful complementary tool to evaluate exercise designs and outcomes. This should be further explored and verified in additional studies.

  5. Hysteresis in Mental Workload and Task Performance: The Influence of Demand Transitions and Task Prioritization.

    Science.gov (United States)

    Jansen, Reinier J; Sawyer, Ben D; van Egmond, René; de Ridder, Huib; Hancock, Peter A

    2016-12-01

    We examine how transitions in task demand are manifested in mental workload and performance in a dual-task setting. Hysteresis has been defined as the ongoing influence of demand levels prior to a demand transition. Authors of previous studies predominantly examined hysteretic effects in terms of performance. However, little is known about the temporal development of hysteresis in mental workload. A simulated driving task was combined with an auditory memory task. Participants were instructed to prioritize driving or to prioritize both tasks equally. Three experimental conditions with low, high, and low task demands were constructed by manipulating the frequency of lane changing. Multiple measures of subjective mental workload were taken during experimental conditions. Contrary to our prediction, no hysteretic effects were found after the high- to low-demand transition. However, a hysteretic effect in mental workload was found within the high-demand condition, which degraded toward the end of the high condition. Priority instructions were not reflected in performance. Online assessment of both performance and mental workload demonstrates the transient nature of hysteretic effects. An explanation for the observed hysteretic effect in mental workload is offered in terms of effort regulation. An informed arrival at the scene is important in safety operations, but peaks in mental workload should be avoided to prevent buildup of fatigue. Therefore, communication technologies should incorporate the historical profile of task demand. © 2016, Human Factors and Ergonomics Society.

  6. Workload and associated factors: a study in maritime port in Brazil

    Directory of Open Access Journals (Sweden)

    Marta Regina Cezar-Vaz

    Objective: to identify the effect of mental, physical, temporal, performance, total effort, and frustration demands on the overall workload, and to analyze the overall burden of port labor and the associated factors that contribute most to its decrease or increase. Method: a cross-sectional, quantitative study conducted with 232 dock workers. For data collection, a structured questionnaire was applied covering descriptive, occupational, smoking, and illicit drug use variables, as well as variables on the load of the tasks undertaken at work, based on the NASA Task Load Index questionnaire. For data analysis, a Poisson regression model was used. Results: physical demand and total effort had the greatest effect on the overall workload, indicating a high overall load in port work (134 workers, 58.8%). The following remained statistically associated with high levels of workload: age (p = 0.044), being a wharfage employee (p = 0.006), working only at night (p = 0.025), smoking (p = 0.037), and use of illegal drugs (p = 0.029). Conclusion: the workload in this type of activity was high; professional category and work shift were the factors that contributed most to its increase, while age proved to be a factor associated with a decrease.
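
    For readers unfamiliar with the analysis mentioned above, Poisson regression with robust standard errors is a common way to estimate prevalence ratios for a binary "high workload" outcome in cross-sectional data. The sketch below is illustrative only; the variable names and data are hypothetical, not the study's.

```python
# Illustrative Poisson model for a binary "high workload" outcome
# (prevalence ratios). Variables and data are hypothetical, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 232
df = pd.DataFrame({
    "high_workload": rng.integers(0, 2, n),
    "age": rng.integers(20, 60, n),
    "night_shift": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})

# Poisson GLM with robust (HC1) standard errors; exponentiated coefficients
# are interpreted as prevalence ratios.
model = smf.glm("high_workload ~ age + night_shift + smoker", data=df,
                family=sm.families.Poisson())
result = model.fit(cov_type="HC1")
print(np.exp(result.params))  # prevalence ratios
```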

  7. +Cloud: An Agent-Based Cloud Computing Platform

    OpenAIRE

    González, Roberto; Hernández de la Iglesia, Daniel; de la Prieta Pintado, Fernando; Gil González, Ana Belén

    2017-01-01

    Cloud computing is revolutionizing the services provided through the Internet, and is continually adapting itself in order to maintain the quality of its services. This study presents the platform +Cloud, which proposes a cloud environment for storing information and files by following the cloud paradigm. This study also presents Warehouse 3.0, a cloud-based application that has been developed to validate the services provided by +Cloud.

  8. THE REAL NEED OF NURSES BASED ON WORKLOAD INDICATOR STAFF NEED (WISN

    Directory of Open Access Journals (Sweden)

    Ni Luh Ade Kusuma Ernawati

    2017-04-01

    Introduction: Nurses are health workers in hospitals who provide nursing care to patients 24 hours a day. The workload of nurses was high due to an insufficient number of nurses, which decreases work productivity and may affect the nursing care given to patients. Nursing manpower planning is needed to match human resources to demand and to increase the competitiveness of hospitals in the era of globalization. The research objective was to analyze the real need for nurses based on the workload indicators of staffing need (WISN) method. Method: The study design was observational analytic. Workload was analyzed using a time and motion study approach. The sample consisted of 24 nurses who met the inclusion criteria. The need for staff nurses was analyzed using the workload indicators of staffing need (WISN). Result: Based on the WISN calculation, 54 nurses are needed in the medical-surgical ward. The objective workload of nurses in the medical-surgical ward of the Bali general state hospital averaged 82.61%, which is categorized as high; the total time required for productive activities was more than 80%. Discussion: This study concludes that the medical-surgical ward of the Bali general hospital is still short of 30 nurses. It is suggested that hospital management gradually increase the number of nurses in this ward.
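
    The WISN method referenced above is essentially an arithmetic exercise: annual service workload divided by the available working time of one nurse, adjusted by allowance factors. The following is a hedged, illustrative sketch of that calculation; all figures are made up and are not the study's.

```python
# Illustrative WISN-style arithmetic (numbers are hypothetical):
# required staff = annual service workload / available work time per nurse,
# scaled by an allowance factor for non-patient activities.

working_days = 365 - (104 + 12 + 12 + 15)   # weekends, leave, holidays, training
hours_per_day = 7
available_work_time = working_days * hours_per_day * 60   # minutes per nurse per year

activities = {
    # activity: (minutes per occurrence, occurrences per year on the ward)
    "direct nursing care": (30, 45000),
    "medication rounds":   (10, 60000),
    "documentation":       (8,  50000),
}

annual_workload = sum(minutes * volume for minutes, volume in activities.values())
allowance_factor = 1.15   # individual/category allowances (assumed)

required_nurses = annual_workload / available_work_time * allowance_factor
print(round(required_nurses))
```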

  9. Analysis and modeling of social influence in high performance computing workloads

    KAUST Repository

    Zheng, Shuai

    2011-01-01

    Social influence among users (e.g., collaboration on a project) creates bursty behavior in the underlying high performance computing (HPC) workloads. Using representative HPC and cluster workload logs, this paper identifies, analyzes, and quantifies the level of social influence across HPC users. We show the existence of a social graph that is characterized by a pattern of dominant users and followers. This pattern also follows a power-law distribution, which is consistent with those observed in mainstream social networks. Given its potential impact on HPC workloads prediction and scheduling, we propose a fast-converging, computationally-efficient online learning algorithm for identifying social groups. Extensive evaluation shows that our online algorithm can (1) quickly identify the social relationships by using a small portion of incoming jobs and (2) can efficiently track group evolution over time. © 2011 Springer-Verlag.

  10. OpenID Connect as a security service in cloud-based medical imaging systems.

    Science.gov (United States)

    Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter

    2016-04-01

    The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles for adoption of cloud computing by healthcare domains. OpenID Connect, combining OpenID and OAuth together, is an emerging representational state transfer-based federated identity solution. It is one of the most adopted open standards to potentially become the de facto standard for securing cloud computing and mobile applications, which is also regarded as "Kerberos of cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS) as well as Web-based and mobile clients in the cloud ecosystem. The main objective is to use OpenID Connect open-source single sign-on and authorization service and in a user-centric manner, while deploying DI-r and PACS to private or community clouds should provide equivalent security levels to traditional computing model.
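
    The OpenID Connect flow described above can be sketched in a few lines: the client discovers the provider configuration from the standardized /.well-known/openid-configuration document and later exchanges an authorization code for tokens. The issuer URL, client credentials, and redirect URI below are placeholders, and this is an illustration of the standard flow rather than the authors' implementation.

```python
# Sketch of the OpenID Connect pieces described above: discover the provider
# configuration and exchange an authorization code for ID/access tokens.
# Issuer URL, client credentials, and redirect URI are placeholders.
import requests

ISSUER = "https://idp.example-hospital.org"        # hypothetical OpenID Provider
CLIENT_ID = "di-repository"
CLIENT_SECRET = "change-me"
REDIRECT_URI = "https://di-r.example-hospital.org/callback"

# 1. The discovery document is standardized by OpenID Connect Discovery.
config = requests.get(ISSUER + "/.well-known/openid-configuration").json()

# 2. After the user authenticates at config["authorization_endpoint"], the
#    client receives an authorization code and redeems it for tokens.
def exchange_code(code: str) -> dict:
    resp = requests.post(config["token_endpoint"], data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()   # contains id_token (JWT) and access_token

# tokens = exchange_code(code_from_callback)
# The access_token is then presented to the DI-r/PACS API; the id_token
# asserts the user's identity across the federated systems.
```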

  11. Silicon Photonics Cloud (SiCloud)

    DEFF Research Database (Denmark)

    DeVore, P. T. S.; Jiang, Y.; Lynch, M.

    2015-01-01

    Silicon Photonics Cloud (SiCloud.org) is the first silicon photonics interactive web tool. Here we report new features of this tool including mode propagation parameters and mode distribution galleries for user-specified waveguide dimensions and wavelengths.

  12. THE WORKLOAD ANALYSIS OF EMPLOYEE BY USING NATIONAL AERONAUTICS AND SPACE ADMINISTRATION-TASK LOAD INDEX METHOD (NASA-TLX

    Directory of Open Access Journals (Sweden)

    Nur Azemil

    2017-09-01

    Full Text Available Development of manufacturing and service institutions can not be separated from the role of human resources. Human resources have an important role in fulfilling vision and mission. University of A is one of the private educational institutions in East Java to achieve the goal must be managed properly that can be utilized optimally, this can be done by analyzing workload and performance or optimizing the number of employees. The purpose this research is measure workload and effect the employee’s performance. Measurement of workload is using National Aeronautics and Space Administration-Task Load Index (NASA-TLX method, NASA-TLX method is rating multidimentional subjective mental workload  that divides the workload based on the average load of 6 dimensions, and the measurement of performance is using questionnaire with 5 scales by likert scale. The results showed that employees who have Medium workload is 8%, High workload is 84% and Very high workload is 8%. The result of the questionnaire showed the category of employee’s performance, simply performance is 24% and satisfactory performance is 76%. From the statistical test by using Chi Square method, it is known that the value = 5,9915 and = 2,2225, the result shows  < , then  is accepted and  is rejected. Thus, there is influence between the workload of employees and the employees’s performance.

  13. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    Science.gov (United States)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing adopts the paradigm of virtualization: using this technique, memory, CPU, and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional costs, and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen, and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.
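
    As one concrete illustration of hypervisor-level monitoring of the kind surveyed above, the libvirt Python bindings can enumerate and inspect VMs on a KVM/QEMU host. The connection URI and the assumption that libvirt-python is installed are ours, not the paper's.

```python
# Sketch of hypervisor-level VM monitoring with the libvirt Python bindings
# (KVM/QEMU, one of the hypervisors named above). URI and setup are assumed.
import libvirt

conn = libvirt.open("qemu:///system")   # local KVM host
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
        print(f"{dom.name():20s} state={state} vCPUs={vcpus} "
              f"mem={mem_kib // 1024} MiB cpu_time={cpu_time_ns / 1e9:.1f} s")
finally:
    conn.close()
```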

  14. Cloud Processed CCN Suppress Stratus Cloud Drizzle

    Science.gov (United States)

    Hudson, J. G.; Noble, S. R., Jr.

    2017-12-01

    Conversion of sulfur dioxide to sulfate within cloud droplets increases the sizes and decreases the critical supersaturation, Sc, of cloud residual particles that had nucleated the droplets. Since other particles remain at the same sizes and Sc a size and Sc gap is often observed. Hudson et al. (2015) showed higher cloud droplet concentrations (Nc) in stratus clouds associated with bimodal high-resolution CCN spectra from the DRI CCN spectrometer compared to clouds associated with unimodal CCN spectra (not cloud processed). Here we show that CCN spectral shape (bimodal or unimodal) affects all aspects of stratus cloud microphysics and drizzle. Panel A shows mean differential cloud droplet spectra that have been divided according to traditional slopes, k, of the 131 measured CCN spectra in the Marine Stratus/Stratocumulus Experiment (MASE) off the Central California coast. K is generally high within the supersaturation, S, range of stratus clouds (< 0.5%). Because cloud processing decreases Sc of some particles, it reduces k. Panel A shows higher concentrations of small cloud droplets apparently grown on lower k CCN than clouds grown on higher k CCN. At small droplet sizes the concentrations follow the k order of the legend, black, red, green, blue (lowest to highest k). Above 13 µm diameter the lines cross and the hierarchy reverses so that blue (highest k) has the highest concentrations followed by green, red and black (lowest k). This reversed hierarchy continues into the drizzle size range (panel B) where the most drizzle drops, Nd, are in clouds grown on the least cloud-processed CCN (blue), while clouds grown on the most processed CCN (black) have the lowest Nd. Suppression of stratus cloud drizzle by cloud processing is an additional 2nd indirect aerosol effect (IAE) that along with the enhancement of 1st IAE by higher Nc (panel A) are above and beyond original IAE. However, further similar analysis is needed in other cloud regimes to determine if MASE was

  15. Operator’s cognitive, communicative and operative activities based workload measurement of advanced main control room

    International Nuclear Information System (INIS)

    Kim, Seunghwan; Kim, Yochan; Jung, Wondea

    2014-01-01

    Highlights: • An advanced MMIS in the advanced MCR requires new roles and tasks of operators. • A new workload evaluation framework is needed for a new MMIS environment. • This work suggests a new workload measurement approach (COCOA) for an advanced MCR. • COCOA enables 3-dimensional measurement of cognition, communication and operation. • COCOA workload evaluation of the reference plant through simulation was performed. - Abstract: An advanced man–machine interface system (MMIS) with a computer-based procedure system and high-tech control/alarm system is installed in the advanced main control room (MCR) of a nuclear power plant. Accordingly, although the tasks of the operators have changed a great deal, owing to a lack of appropriate guidelines on the role allocation or communication methods of the operators, operators still follow the operating strategies of a conventional MCR, and the problem of an unbalanced workload for each operator can arise. Thus, it is necessary to enhance operation capability and improve plant safety by developing guidelines on the role definition and communication of operators in an advanced MCR. To resolve this problem, however, a method for measuring the workload according to the work execution of the operators is needed, but an applicable method is not available. In this research, we propose a COgnitive, Communicative and Operational Activities measurement approach (COCOA) to measure and evaluate the workload of operators in an advanced MCR. This paper presents the taxonomy for additional operation activities of the operators using the computerized procedures and soft control added to an advanced MCR, which enables an integrated measurement of the operator workload in various dimensions of cognition, communication, and operation. To check the applicability of COCOA, we evaluated the operator workload of an advanced MCR of a reference power plant through simulation training experiments. As a result, the amount

  16. Evaluation of Workload and its Impact on Satisfaction Among Pharmacy Academicians in Southern India.

    Science.gov (United States)

    Ahmad, Akram; Khan, Muhammad Umair; Srikanth, Akshaya B; Patel, Isha; Nagappa, Anantha Naik; Jamshed, Shazia Qasim

    2015-06-01

    The purpose of this study was to determine the level of workload among pharmacy academicians working in public and private sector universities in India. The study also aimed to assess the satisfaction of academicians with their workload. A cross-sectional study was conducted for a period of 2 months among pharmacy academicians in the Karnataka state of Southern India. Convenience sampling was used to select the sample, and participants were contacted via email and/or social networking sites. A questionnaire designed through a thorough literature review was used as the tool to collect data on workload (teaching, research, extracurricular services) and satisfaction. Of 214 participants, 95 returned the completed questionnaire, giving a response rate of 44.39%. Private sector academicians had a greater teaching load (p=0.046) and appeared to be less involved in research activities (p=0.046) compared to public sector academicians. More than half of the respondents (57.9%) were satisfied with their workload, with Assistant Professors being the least satisfied compared to Professors (p=0.01). Overall, private sector academicians are more burdened by teaching load and are less satisfied with their workload. Revision of private university policies may help address this issue.

  17. OpenID connect as a security service in Cloud-based diagnostic imaging systems

    Science.gov (United States)

    Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter

    2015-03-01

    The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained to their own physical facilities. However, privacy and security concerns have been consistently regarded as the major obstacle for adoption of cloud computing by healthcare domains. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful technology is ideal for provisioning both mobile services and cloud computing. OpenID Connect, combining OpenID and OAuth together, is an emerging REST-based federated identity solution. It is one of the most promising open standards, with the potential to become the de facto standard for securing cloud computing and mobile applications, and has been regarded as the "Kerberos of Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. Through using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should provide a security level equivalent to the traditional computing model.

  18. Use of the RoboFlag synthetic task environment to investigate workload and stress responses in UAV operation.

    Science.gov (United States)

    Guznov, Svyatoslav; Matthews, Gerald; Funke, Gregory; Dukes, Allen

    2011-09-01

    Use of unmanned aerial vehicles (UAVs) is an increasingly important element of military missions. However, controlling UAVs may impose high stress and workload on the operator. This study evaluated the use of the RoboFlag simulated environment as a means for profiling multiple dimensions of stress and workload response to a task requiring control of multiple vehicles (robots). It tested the effects of two workload manipulations, environmental uncertainty (i.e., UAV's visual view area) and maneuverability, in 64 participants. The findings confirmed that the task produced substantial workload and elevated distress. Dissociations between the stress and performance effects of the manipulations confirmed the utility of a multivariate approach to assessment. Contrary to expectations, distress and some aspects of workload were highest in the low-uncertainty condition, suggesting that overload of information may be an issue for UAV interface designers. The strengths and limitations of RoboFlag as a methodology for investigating stress and workload responses are discussed.

  19. Multiplexing Low and High QoS Workloads in Virtual Environments

    Science.gov (United States)

    Verboven, Sam; Vanmechelen, Kurt; Broeckhove, Jan

    Virtualization technology has introduced new ways for managing IT infrastructure. The flexible deployment of applications through self-contained virtual machine images has removed the barriers for multiplexing, suspending and migrating applications with their entire execution environment, allowing for a more efficient use of the infrastructure. These developments have given rise to an important challenge regarding the optimal scheduling of virtual machine workloads. In this paper, we specifically address the VM scheduling problem in which workloads that require guaranteed levels of CPU performance are mixed with workloads that do not require such guarantees. We introduce a framework to analyze this scheduling problem and evaluate to what extent such mixed service delivery is beneficial for a provider of virtualized IT infrastructure. Traditionally providers offer IT resources under a guaranteed and fixed performance profile, which can lead to underutilization. The findings of our simulation study show that through proper tuning of a limited set of parameters, the proposed scheduling algorithm allows for a significant increase in utilization without sacrificing on performance dependability.
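
    A toy sketch may clarify the scheduling idea discussed above: guaranteed-CPU VMs receive hard reservations first, and best-effort VMs are backfilled into leftover capacity. This is not the paper's algorithm; the host capacity, the first-fit/most-free heuristics, and all figures are assumptions.

```python
# Toy sketch (not the paper's algorithm): place guaranteed-CPU VMs first,
# then backfill best-effort VMs into leftover capacity on each host.
from typing import List, Tuple

HOST_CAPACITY = 16.0   # CPU cores per host (assumed)

def schedule(guaranteed: List[float], best_effort: List[float],
             n_hosts: int) -> Tuple[dict, list]:
    free = {h: HOST_CAPACITY for h in range(n_hosts)}
    placement, rejected = {}, []

    # Guaranteed VMs get a hard reservation (first fit).
    for i, cores in enumerate(guaranteed):
        host = next((h for h in free if free[h] >= cores), None)
        if host is None:
            rejected.append(("guaranteed", i))
            continue
        free[host] -= cores
        placement[("guaranteed", i)] = host

    # Best-effort VMs share whatever is left; they may be throttled later.
    for i, cores in enumerate(best_effort):
        host = max(free, key=free.get)          # host with the most free capacity
        placement[("best_effort", i)] = host
        free[host] = max(0.0, free[host] - cores)

    return placement, rejected

plan, rejected = schedule(guaranteed=[8, 6, 4], best_effort=[2, 2, 3], n_hosts=2)
print(plan, rejected)
```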

  1. Analysis of Mental Workload in Online Shopping: Are Augmented and Virtual Reality Consistent?

    Science.gov (United States)

    Zhao, Xiaojun; Shi, Changxiu; You, Xuqun; Zong, Chenming

    2017-01-01

    A market research company (Nielsen) reported that consumers in the Asia-Pacific region have become the most active group in online shopping. Focusing on augmented reality (AR), which is one of three major techniques used to change the method of shopping in the future, this study used a mixed design to discuss the influences of the method of online shopping, user gender, cognitive style, product value, and sensory channel on mental workload in virtual reality (VR) and AR situations. The results showed that males’ mental workloads were significantly higher than females’. For males, high-value products’ mental workload was significantly higher than that of low-value products. In the VR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference was reduced under audio–visual conditions. In the AR situation, the visual mental workload of field-independent and field-dependent consumers showed a significant difference, but the difference increased under audio–visual conditions. This study provided a psychological study of online shopping with AR and VR technology with applications in the future. Based on the perspective of embodied cognition, AR online shopping may be a potential focus of research and market application. For the future design of online shopping platforms and the updating of user experience, this study provides a reference. PMID:28184207

  2. The smartphone and the driver's cognitive workload: A comparison of Apple, Google, and Microsoft's intelligent personal assistants.

    Science.gov (United States)

    Strayer, David L; Cooper, Joel M; Turrill, Jonna; Coleman, James R; Hopman, Rachel J

    2017-06-01

    The goal of this research was to examine the impact of voice-based interactions using 3 different intelligent personal assistants (Apple's Siri, Google's Google Now for Android phones, and Microsoft's Cortana) on the cognitive workload of the driver. In 2 experiments using an instrumented vehicle on suburban roadways, we measured the cognitive workload of drivers when they used the voice-based features of each smartphone to place a call, select music, or send text messages. Cognitive workload was derived from primary task performance through video analysis, secondary-task performance using the Detection Response Task (DRT), and subjective mental workload. We found that workload was significantly higher than that measured in the single-task drive. There were also systematic differences between the smartphones: The Google system placed lower cognitive demands on the driver than the Apple and Microsoft systems, which did not differ. Video analysis revealed that the difference in mental workload between the smartphones was associated with the number of system errors, the time to complete an action, and the complexity and intuitiveness of the devices. Finally, surprisingly high levels of cognitive workload were observed when drivers were interacting with the devices: "on-task" workload measures did not systematically differ from those associated with a mentally demanding Operation Span (OSPAN) task. The analysis also found residual costs associated with using each of the smartphones that took a significant time to dissipate. The data suggest that caution is warranted in the use of smartphone voice-based technology in the vehicle because of the high levels of cognitive workload associated with these interactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds

    Science.gov (United States)

    Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.

    2008-05-01

    CALIPSO and CloudSat from the A-Train provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort of merging CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud top height derived from MODIS on Aqua using the cloud algorithms of the CERES project. The cloud base from MODIS is also estimated using an empirical formula based on the cloud top height and optical thickness, which is used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not consider differences in sensitivity to thin clouds, but we will impose an optical thickness threshold on the CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed. In addition, the effect of cloud cover on the top-of-atmosphere flux over the Arctic using CERES SSF and FLASHFLUX products will be discussed.

  4. Simulation-based computation of the workload correlation function in a Levy-driven queue

    NARCIS (Netherlands)

    P. Glynn; M.R.H. Mandjes (Michel)

    2009-01-01

    In this paper we consider a single-server queue with Levy input, and in particular its workload process (Q_t), focusing on its correlation structure. With the correlation function defined as r(t) := Cov(Q_0, Q_t)/Var Q_0 (assuming the workload process is in stationarity at time 0), we
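
    Restating the definition above in display form (a restatement only; stationarity of the workload process at time 0 is assumed as stated in the abstract):

    ```latex
    % Workload correlation function of the Levy-driven queue, as defined above
    \[
      r(t) \;=\; \frac{\operatorname{Cov}\!\left(Q_0,\, Q_t\right)}{\operatorname{Var} Q_0},
      \qquad t \ge 0 .
    \]
    ```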

  5. Postural Control in Workplace Safety: Role of Occupational Footwear and Workload

    Directory of Open Access Journals (Sweden)

    Harish Chander

    2017-08-01

    Full Text Available Maintaining postural stability is crucial, especially in hazardous occupational environments. The purpose of the study was to assess the role of three types of occupational footwear (low top shoe (LT), tactical work boot (TB), and steel-toed work boot (WB)) on postural stability when exposed to an occupational workload (4 h) involving standing/walking, using sensory organization test (SOT) equilibrium (EQ) scores and comparing current results with previously published postural sway variables from the same study. Fourteen male adults were tested on three separate days, wearing one of the occupational footwear types in randomized order, at the beginning (pre) and every 30 min of the 4-h workload until the 240th min. SOT EQ scores were analyzed using a 3 × 9 repeated measures analysis of variance at an alpha level of 0.05. Significant differences between footwear were found in the eyes open (p = 0.03) and eyes closed (p = 0.001) conditions. Pairwise comparisons revealed that LT had significantly lower postural stability compared to TB and WB. No other significant differences were found between footwear or over time. The significant differences between footwear can be attributed to design characteristics of the footwear. The lack of significant differences over time suggests that, even though the average EQ scores decreased during the workload, implying less postural stability, SOT EQ scores alone may not be sufficient to detect postural stability changes over the 4-h workload.

  6. The gLite Workload Management System

    International Nuclear Information System (INIS)

    Marco, Cecchi; Fabio, Capannini; Alvise, Dorigo; Antonia, Ghiselli; Alessio, Gianelle; Francesco, Giacomini; Elisabetta, Molinari; Salvatore, Monforte; Alessandro, Maraschini; Luca, Petronzio

    2010-01-01

    The gLite Workload Management System (WMS) represents a key entry point to high-end services available on a Grid. Designed as part of the European Grid middleware within the six-year EU-funded EGEE project, now in its third phase, the WMS is meant to provide reliable and efficient distribution and management of end-user requests. This service basically translates user requirements and preferences into specific operations and decisions - dictated by the general status of all other Grid services - while taking responsibility for bringing requests to successful completion. The WMS has become a reference implementation of the 'early binding' approach to meta-scheduling as a neat, Grid-aware solution, able to optimise resource access and to satisfy requests for computation together with data. Several added-value features are provided for job submission, and different job types are supported, from simple batch jobs to a variety of compound types. In this paper we outline what has been achieved to provide adequate workload management components, suitable for deployment in a production-quality Grid, while covering the design and development of the gLite WMS and focusing on the most recently achieved results.

  7. Evaluation of Mental Workload among ICU Ward's Nurses

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-12-01

    Conclusion: Various performance obstacles are correlated with nurses' workload, which affirms the significance of nursing work system characteristics. Based on the results of this study, interventions in the work settings of ICU nurses are recommended.

  8. A cloud-based X73 ubiquitous mobile healthcare system: design and implementation.

    Science.gov (United States)

    Ji, Zhanlin; Ganchev, Ivan; O'Droma, Máirtín; Zhang, Xin; Zhang, Xueji

    2014-01-01

    Based on the user-centric paradigm for next generation networks, this paper describes a ubiquitous mobile healthcare (uHealth) system based on the ISO/IEEE 11073 personal health data (PHD) standards (X73) and cloud computing techniques. A number of design issues associated with the system implementation are outlined. The system includes middleware on the user side, which provides a plug-and-play environment for heterogeneous wireless sensors and mobile terminals utilizing different communication protocols, and a distributed "big data" processing subsystem in the cloud. The design and implementation of this system are envisaged as an efficient solution for the next generation of uHealth systems.

  9. Developing Verification Systems for Building Information Models of Heritage Buildings with Heterogeneous Datasets

    Science.gov (United States)

    Chow, L.; Fai, S.

    2017-08-01

    The digitization and abstraction of existing buildings into building information models requires the translation of heterogeneous datasets that may include CAD, technical reports, historic texts, archival drawings, terrestrial laser scanning, and photogrammetry into model elements. In this paper, we discuss a project undertaken by the Carleton Immersive Media Studio (CIMS) that explored the synthesis of heterogeneous datasets for the development of a building information model (BIM) for one of Canada's most significant heritage assets - the Centre Block of the Parliament Hill National Historic Site. The scope of the project included the development of an as-found model of the century-old, six-story building in anticipation of specific model uses for an extensive rehabilitation program. The as-found Centre Block model was developed in Revit using primarily point cloud data from terrestrial laser scanning. The data was captured by CIMS in partnership with Heritage Conservation Services (HCS), Public Services and Procurement Canada (PSPC), using a Leica C10 and P40 (exterior and large interior spaces) and a Faro Focus (small to mid-sized interior spaces). Secondary sources such as archival drawings, photographs, and technical reports were referenced in cases where point cloud data was not available. As a result of working with heterogeneous data sets, a verification system was introduced in order to communicate to model users/viewers the source of information for each building element within the model.

  10. Teaching Chinese in heterogeneous classrooms: strategies and practices

    Directory of Open Access Journals (Sweden)

    Rong Zhang Fernandez

    2014-12-01

    Full Text Available The heterogeneous nature of the Chinese classroom is a reality in the teaching of Chinese in France, both in secondary and higher education. This heterogeneity is due to several reasons: different levels of language knowledge, different origins and backgrounds of the students, different teaching/learning objectives, different cultural and family backgrounds, and social factors. Our research was conducted in a final-year LIE class (langue inter-établissement) in a French secondary school. In our study, the following questions were posed: How can the teaching of Chinese best be adapted to fit the needs of all students? Would differentiated instruction be a solution? What would be the best strategies and practices, in view of the CEFR requirements related to teaching content, tasks and assessment? Taking into account a detailed analysis of the class in question in terms of the type of students, the differences in their knowledge of the language, and their learning goals, we adopt the theory of differentiated instruction – its main ideas, strategies, overall methodology and practical techniques – to address the difficulties ensuing from classroom heterogeneity. The differentiation is implemented at the level of content, task selection, course structure and evaluation. Are there any limitations to differentiated instruction? Strong discrepancies in the levels of students’ knowledge are potentially a problem, and differences in their work pace as well as the teachers’ increased workload can also present difficulties. New ways of organizing language classes, such as grouping students on the basis of their various language skills, could help solve these issues.

  11. Determining Nurse Aide Staffing Requirements to Provide Care Based on Resident Workload: A Discrete Event Simulation Model.

    Science.gov (United States)

    Schnelle, John F; Schroyer, L Dale; Saraf, Avantika A; Simmons, Sandra F

    2016-11-01

    Nursing aides provide most of the labor-intensive activities of daily living (ADL) care to nursing home (NH) residents. Currently, most NHs do not determine nurse aide staffing requirements based on the time to provide ADL care for their unique resident population. The lack of an objective method to determine nurse aide staffing requirements suggests that many NHs could be understaffed in their capacity to provide consistent ADL care to all residents in need. Discrete event simulation (DES) mathematically models key work parameters (eg, time to provide an episode of care and available staff) to predict the ability of the work setting to provide care over time and offers an objective method to determine nurse aide staffing needs in NHs. This study had 2 primary objectives: (1) to describe the relationship between ADL workload and the level of nurse aide staffing reported by NHs; and, (2) to use a DES model to determine the relationship between ADL workload and nurse aide staffing necessary for consistent, timely ADL care. Minimum Data Set data related to the level of dependency on staff for ADL care for residents in over 13,500 NHs nationwide were converted into 7 workload categories that captured 98% of all residents. In addition, data related to the time to provide care for the ADLs within each workload category was used to calculate a workload score for each facility. The correlation between workload and reported nurse aide staffing levels was calculated to determine the association between staffing reported by NHs and workload. Simulations to project staffing requirements necessary to provide ADL care were then conducted for 65 different workload scenarios, which included 13 different nurse aide staffing levels (ranging from 1.6 to 4.0 total hours per resident day) and 5 different workload percentiles (ranging from the 5th to the 95th percentile). The purpose of the simulation model was to determine the staffing necessary to provide care within each workload
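
    To illustrate the general idea of a discrete event simulation for staffing analysis (not the authors' model; their care-time distributions, workload categories, and parameters are not reproduced here), the sketch below simulates ADL care episodes arriving over a shift and reports how often care is delayed for a given, hypothetical number of aides.

    ```python
    # Toy discrete event simulation of nurse aide staffing. All numbers (care
    # durations, episode counts, shift length) are illustrative assumptions.
    import heapq
    import random

    def simulate_shift(n_aides, n_episodes=120, shift_minutes=480, seed=0):
        rng = random.Random(seed)
        # Each care episode: a request time within the shift and a care duration (minutes).
        episodes = sorted(
            (rng.uniform(0, shift_minutes), rng.uniform(5, 25))
            for _ in range(n_episodes)
        )
        free_at = [0.0] * n_aides        # min-heap of times at which each aide is next free
        heapq.heapify(free_at)
        delays = []
        for request_time, duration in episodes:
            aide_free = heapq.heappop(free_at)
            start = max(request_time, aide_free)
            delays.append(start - request_time)
            heapq.heappush(free_at, start + duration)
        return sum(d > 0 for d in delays) / len(delays), max(delays)

    for staff in (4, 6, 8):
        frac_delayed, worst = simulate_shift(staff)
        print(f"{staff} aides: {frac_delayed:.0%} of episodes delayed, worst wait {worst:.1f} min")
    ```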

  12. A FIRE-ACE/SHEBA Case Study of Mixed-Phase Arctic Boundary Layer Clouds: Entrainment Rate Limitations on Rapid Primary Ice Nucleation Processes

    Science.gov (United States)

    Fridlind, Ann; van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Avramov, Alexander; Mrowiec, Agnieszka; Morrison, Hugh; Zuidema, Paquita; Shupe, Matthew D.

    2012-01-01

    Observations of long-lived mixed-phase Arctic boundary layer clouds on 7 May 1998 during the First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE) Arctic Cloud Experiment (ACE) / Surface Heat Budget of the Arctic Ocean (SHEBA) campaign provide a unique opportunity to test understanding of cloud ice formation. Under the microphysically simple conditions observed (apparently negligible ice aggregation, sublimation, and multiplication), the only expected source of new ice crystals is activation of heterogeneous ice nuclei (IN) and the only sink is sedimentation. Large-eddy simulations with size-resolved microphysics are initialized with the IN number concentration N_IN measured above cloud top, but details of IN activation behavior are unknown. If activated rapidly (in deposition, condensation, or immersion modes), as commonly assumed, IN are depleted from the well-mixed boundary layer within minutes. The quasi-equilibrium ice number concentration N_i is then limited to a small fraction of the overlying N_IN that is determined by the cloud-top entrainment rate w_e divided by the number-weighted ice fall speed at the surface v_f. Because w_e is far smaller than v_f (the latter of order 10 cm/s), N_i/N_IN << 1. Such conditions may be common for this cloud type, which has implications for modeling IN diagnostically, interpreting measurements, and quantifying sensitivity to increasing N_IN (when w_e/v_f < 1, entrainment rate limitations serve to buffer cloud system response). To reproduce observed ice crystal size distributions and cloud radar reflectivities with rapidly consumed IN in this case, the measured above-cloud N_IN must be multiplied by approximately 30. However, results are sensitive to assumed ice crystal properties not constrained by measurements. In addition, simulations do not reproduce the pronounced mesoscale heterogeneity in radar reflectivity that is observed.
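
    The entrainment-rate limitation described above amounts to the following quasi-equilibrium scaling (a restatement of the relation stated in the abstract, not an additional result):

    ```latex
    % Quasi-equilibrium ice concentration limited by IN entrainment versus sedimentation
    \[
      \frac{N_i}{N_{\mathrm{IN}}} \;\approx\; \frac{w_e}{v_f} \;\ll\; 1
      \qquad \text{since } w_e \ll v_f .
    \]
    ```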

  13. An Investigation of the Workload and Job Satisfaction of North Carolina's Special Education Directors

    Science.gov (United States)

    Cash, Jennifer Brown

    2013-01-01

    Keywords: special education directors, workload, job satisfaction, special education administration. The purpose of this mixed methods research study was to investigate employee characteristics, workload, and job satisfaction of special education directors employed by local education agencies in North Carolina (N = 115). This study illuminates the…

  14. Heart Rate Variability as a Measure of Airport Ramp-Traffic Controllers Workload

    Science.gov (United States)

    Hayashi, Miwa; Dulchinos, Victoria Lee

    2016-01-01

    Heart Rate Variability (HRV) has been reported to reflect a person's cognitive and emotional stress levels, and may offer an objective measure of a human operator's workload that can be recorded continuously and unobtrusively during task performance. The present paper compares HRV data collected during a human-in-the-loop simulation of airport ramp-traffic control operations with the controller participants' own verbal self-report ratings of their workload.

  15. Training loads and injury risk in Australian football—differing acute: chronic workload ratios influence match injury risk

    Science.gov (United States)

    Carey, David L; Blanch, Peter; Ong, Kok-Leong; Crossley, Kay M; Crow, Justin; Morris, Meg E

    2017-01-01

    Aims (1) To investigate whether a daily acute:chronic workload ratio informs injury risk in Australian football players; (2) to identify which combination of workload variable, acute and chronic time window best explains injury likelihood. Methods Workload and injury data were collected from 53 athletes over 2 seasons in a professional Australian football club. Acute:chronic workload ratios were calculated daily for each athlete, and modelled against non-contact injury likelihood using a quadratic relationship. 6 workload variables, 8 acute time windows (2–9 days) and 7 chronic time windows (14–35 days) were considered (336 combinations). Each parameter combination was compared for injury likelihood fit (using R2). Results The ratio of moderate speed running workload (18–24 km/h) in the previous 3 days (acute time window) compared with the previous 21 days (chronic time window) best explained the injury likelihood in matches (R2=0.79) and in the immediate 2 or 5 days following matches (R2=0.76–0.82). The 3:21 acute:chronic workload ratio discriminated between high-risk and low-risk athletes (relative risk=1.98–2.43). Using the previous 6 days to calculate the acute workload time window yielded similar results. The choice of acute time window significantly influenced model performance and appeared to reflect the competition and training schedule. Conclusions Daily workload ratios can inform injury risk in Australian football. Clinicians and conditioning coaches should consider the sport-specific schedule of competition and training when choosing acute and chronic time windows. For Australian football, the ratio of moderate speed running in a 3-day or 6-day acute time window and a 21-day chronic time window best explained injury risk. PMID:27789430
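
    As a simple illustration of the kind of daily acute:chronic workload ratio examined in the study (a sketch with made-up numbers; the authors' exact workload variables and injury modelling are not reproduced here), the ratio can be computed from a daily workload series as follows:

    ```python
    # Daily acute:chronic workload ratio, e.g. a 3-day acute window over a 21-day
    # chronic window, as reported for moderate-speed running. Values are illustrative.
    def acute_chronic_ratio(daily_load, acute_days=3, chronic_days=21):
        """Return the acute:chronic ratio for the most recent day, or None if
        there is not yet a full chronic window of data."""
        if len(daily_load) < chronic_days:
            return None
        acute = sum(daily_load[-acute_days:]) / acute_days
        chronic = sum(daily_load[-chronic_days:]) / chronic_days
        return acute / chronic if chronic > 0 else None

    # Example: 21 days of moderate-speed running distance (metres), most recent day last.
    history = [1200, 900, 0, 1500, 1100, 0, 1300, 1000, 800, 0, 1400,
               1200, 0, 900, 1600, 1100, 0, 1300, 1800, 1700, 1900]
    print(f"3:21 acute:chronic ratio = {acute_chronic_ratio(history):.2f}")
    ```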

  16. Observations of ice nuclei and heterogeneous freezing in a Western Pacific extratropical storm

    Directory of Open Access Journals (Sweden)

    J. L. Stith

    2011-07-01

    Full Text Available In situ airborne sampling of refractory black carbon (rBC) particles and Ice Nuclei (IN) was conducted in and near an extratropical cyclonic storm in the western Pacific Ocean during the Pacific Dust Experiment, PACDEX, in the spring of 2007. Airmass origins were from Eastern Asia. Clouds associated primarily with the warm sector of the storm were sampled at various locations and altitudes. Cloud hydrometeors were evaporated by a counterflow virtual impactor (CVI) and the residuals were sampled by a single particle soot photometer (SP2) instrument and a continuous flow diffusion chamber ice nucleus detector (CFDC), and collected for electron microscope analysis. In clouds containing large ice particles, multiple residual particles were observed downstream of the CVI for each ice particle sampled on average. The fraction of rBC compared to total particles in the residual particles increased with decreasing condensed water content, while the fraction of IN compared to total particles did not, suggesting that the scavenging process for rBC is different than for IN. In the warm sector storm midlevels, at temperatures where heterogeneous freezing is expected to be significant (here −24 to −29 °C), IN concentrations from ice particle residuals generally agreed with simultaneous measurements of total ice concentrations or were higher in regions where aggregates of crystals were found, suggesting heterogeneous freezing as the dominant ice formation process in the mid levels of these warm sector clouds. Lower in the storm, at warmer temperatures, ice concentrations were affected by aggregation and were somewhat less than measured IN concentrations at colder temperatures. The results are consistent with ice particles forming at storm mid-levels by heterogeneous freezing on IN, followed by aggregation and sedimentation to lower altitudes. Compositional analysis of the aerosol and back trajectories of the air in the warm sector suggested a possible biomass

  17. Effects of work zone configurations and traffic density on performance variables and subjective workload.

    Science.gov (United States)

    Shakouri, Mahmoud; Ikuma, Laura H; Aghazadeh, Fereydoun; Punniaraj, Karthy; Ishak, Sherif

    2014-10-01

    This paper investigates the effect of changing work zone configurations and traffic density on performance variables and subjective workload. Data regarding travel time, average speed, maximum percent braking force and location of lane changes were collected by using a full-size driving simulator. The NASA-TLX was used to measure self-reported workload ratings during the driving task. A conventional lane merge (CLM) and a joint lane merge (JLM) were modeled in the driving simulator, and thirty participants (seven female and 23 male) navigated through the two configurations with two levels of traffic density. The mean maximum braking force was 34% lower in the JLM configuration, and drivers going through the JLM configuration remained in the closed lane longer. However, no significant differences in speed were found between the two merge configurations. The analysis of self-reported workload ratings shows that participants reported 15.3% lower total workload when driving through the JLM. In conclusion, the changes implemented in the JLM make it a more favorable merge configuration in both high and low traffic densities, in terms of optimizing traffic flow by increasing the time and distance over which cars use both lanes, and in terms of improving safety due to lower braking forces and lower reported workload. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. VM Selection and Migration Using MCDM to Improve Service Performance in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Abdullah Fadil

    2016-08-01

    Full Text Available Cloud computing is a heterogeneous and distributed environment, composed of clusters of networked servers with different computational resource capacities in order to support the service models built on top of it. Virtual machines (VMs) serve as the representation of dynamically available computational resources that can be allocated and reallocated on demand. Live migration of VMs among the physical servers within a cloud data center is used to achieve consolidation and to maximize VM utilization. In VM consolidation procedures, VM selection and placement are often based on a single, static criterion. This study proposes VM selection and placement using multi-criteria decision making (MCDM) in a dynamic VM consolidation procedure in a cloud data center environment in order to improve cloud computing services. A practical approach was used to develop an OpenStack Cloud based environment, integrating VM selection and VM placement into the VM consolidation procedure using OpenStack-Neat. The results show that the VM selection and placement method with live migration was able to compensate for the losses caused by down-times of 11.994 seconds in response time. Response times increased by 6 ms when a VM was live-migrated from the source host to the destination host. The average response time of the VMs distributed across the compute nodes after live migration was 67 ms, which indicates load balancing in the cloud computing system.
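
    A minimal sketch of how a multi-criteria selection of a VM to migrate might look, assuming a simple weighted-sum MCDM scoring over normalized criteria (illustrative only; the paper's actual criteria, weights, and OpenStack-Neat integration are not reproduced here):

    ```python
    # Toy weighted-sum MCDM scoring for choosing which VM to live-migrate from an
    # overloaded host. Criteria and weights are illustrative assumptions.
    CRITERIA_WEIGHTS = {"cpu_util": 0.4, "ram_mb": 0.3, "migration_cost": 0.3}

    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    def select_vm(vms):
        """Pick the VM with the highest composite score (higher = better migration candidate)."""
        scores = [0.0] * len(vms)
        for criterion, weight in CRITERIA_WEIGHTS.items():
            norm = normalize([vm[criterion] for vm in vms])
            if criterion == "migration_cost":       # lower cost is better, so invert
                norm = [1.0 - n for n in norm]
            scores = [s + weight * n for s, n in zip(scores, norm)]
        best = max(range(len(vms)), key=scores.__getitem__)
        return vms[best]["name"], scores[best]

    vms = [
        {"name": "vm-a", "cpu_util": 0.85, "ram_mb": 2048, "migration_cost": 4.0},
        {"name": "vm-b", "cpu_util": 0.60, "ram_mb": 1024, "migration_cost": 1.5},
        {"name": "vm-c", "cpu_util": 0.90, "ram_mb": 4096, "migration_cost": 6.0},
    ]
    print(select_vm(vms))
    ```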

  19. Development and validation of a multilevel model for predicting workload under routine and nonroutine conditions in an air traffic management center.

    Science.gov (United States)

    Neal, Andrew; Hannah, Sam; Sanderson, Penelope; Bolland, Scott; Mooij, Martijn; Murphy, Sean

    2014-03-01

    The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.

  20. Subjective health complaints and self-rated health: are expectancies more important than socioeconomic status and workload?

    Science.gov (United States)

    Ree, Eline; Odeen, Magnus; Eriksen, Hege R; Indahl, Aage; Ihlebæk, Camilla; Hetland, Jørn; Harris, Anette

    2014-06-01

    The associations between socioeconomic status (SES), physical and psychosocial workload and health are well documented. According to the Cognitive Activation Theory of Stress (CATS), learned response outcome expectancies (coping, helplessness, and hopelessness) are also important contributors to health. They act in part as independent factors for health, but coping may also function as a buffer against the impact that different demands have on health. The purpose of this study was to investigate the relative effect of SES (as measured by level of education), physical workload, and response outcome expectancies on subjective health complaints (SHC) and self-rated health, and whether response outcome expectancies mediate the effects of education and physical workload on SHC and self-rated health. A survey was carried out among 1,746 Norwegian municipal employees (mean age 44.2, 81% females). Structural equation models with SHC and self-rated health as outcomes were conducted. Education, physical workload, and response outcome expectancies were the independent variables in the model. Helplessness/hopelessness had a stronger direct effect on self-rated health and SHC than education and physical workload, for both men and women. Helplessness/hopelessness fully mediated the effect of physical workload on SHC for men (0.121), and mediated 30% of a total effect of 0.247 for women. For women, education had a small but significant indirect effect through helplessness/hopelessness on self-rated health (0.040) and SHC (-0.040), but no direct effects were found. For men, there was no effect of education on SHC, and only a direct effect on self-rated health (0.134). The results indicated that helplessness/hopelessness is more important for SHC and health than well-established measures of SES such as years of education and perceived physical workload in this sample. Helplessness/hopelessness seems to function as a mechanism between physical workload and health.

  1. A theoretical framework for modeling dilution enhancement of non-reactive solutes in heterogeneous porous media.

    Science.gov (United States)

    de Barros, F P J; Fiori, A; Boso, F; Bellin, A

    2015-01-01

    Spatial heterogeneity of the hydraulic properties of geological porous formations leads to erratically shaped solute clouds, thus increasing the edge area of the solute body and augmenting the dilution rate. In this study, we provide a theoretical framework to quantify dilution of a non-reactive solute within a steady state flow as affected by the spatial variability of the hydraulic conductivity. Embracing the Lagrangian concentration framework, we obtain explicit semi-analytical expressions for the dilution index as a function of the structural parameters of the random hydraulic conductivity field, under the assumptions of uniform-in-the-average flow, small injection source and weak-to-mild heterogeneity. Results show how the dilution enhancement of the solute cloud is strongly dependent on both the statistical anisotropy ratio and the heterogeneity level of the porous medium. The explicit semi-analytical solution also captures the temporal evolution of the dilution rate; for the early- and late-time limits, the proposed solution recovers previous results from the literature, while at intermediate times it reflects the increasing interplay between large-scale advection and local-scale dispersion. The performance of the theoretical framework is verified with high resolution numerical results and successfully tested against the Cape Cod field data. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Changes in Stratiform Clouds of Mesoscale Convective Complex Introduced by Dust Aerosols

    Science.gov (United States)

    Lin, B.; Min, Q.-L.; Li, R.

    2010-01-01

    Aerosols influence the earth's climate through direct, indirect, and semi-direct effects. There are large uncertainties in quantifying these effects due to limited measurements and observations of aerosol-cloud-precipitation interactions. As a major terrestrial source of atmospheric aerosols, dust may serve as a significant climate forcing in the changing climate because of its effect on solar and thermal radiation as well as on cloud and precipitation processes. The latest satellite measurements enable us to determine dust aerosol loadings and cloud distributions and can potentially be used to reduce the uncertainties in the estimation of aerosol effects on climate. This study uses sensors on various satellites to investigate the impact of mineral dust on cloud microphysical and precipitation processes in a mesoscale convective complex (MCC). A trans-Atlantic dust outbreak of Saharan origin occurring in early March 2004 is considered. For the observed MCCs under a given convective strength, small hydrometeors were found to be more prevalent in the dusty stratiform regions than in those regions that were dust free. Evidence of abundant cloud ice particles in the dust regions, particularly at altitudes where heterogeneous nucleation on mineral dust prevails, further supports the observed changes in clouds and precipitation. The consequence of the microphysical effects of the dust aerosols was to shift the size spectrum of precipitation-sized hydrometeors from heavy precipitation to light precipitation and ultimately to suppress precipitation and increase the lifecycle of cloud systems, especially over stratiform areas.

  3. Using theta and alpha band power to assess cognitive workload in multitasking environments.

    Science.gov (United States)

    Puma, Sébastien; Matton, Nadine; Paubel, Pierre-V; Raufaste, Éric; El-Yagoubi, Radouane

    2018-01-01

    Cognitive workload is of central importance in the fields of human factors and ergonomics. A reliable measurement of cognitive workload could allow for improvements in human-machine interface designs and increase safety in several domains. At present, numerous studies have used electroencephalography (EEG) to assess cognitive workload, reporting that a rise in cognitive workload is associated with increases in theta band power and decreases in alpha band power. However, results have been inconsistent, with some failing to reach the required level of significance. We hypothesized that the lack of consistency could be related to individual differences in task performance and/or to the small sample sizes in most EEG studies. In the present study we used EEG to assess the increase in cognitive workload occurring in a multitasking environment while taking into account differences in performance. Twenty participants completed a task commonly used in airline pilot recruitment, which included an increasing number of concurrent sub-tasks to be processed, from one to four. Subjective ratings, performance scores, pupil size and EEG signals were recorded. Results showed that increases in EEG alpha and theta band power reflected increases in the involvement of cognitive resources for the completion of one to three subtasks in a multitasking environment. These values reached a ceiling when performance dropped. Consistent differences in levels of alpha and theta band power were associated with levels of task performance: the highest performance was related to the lowest band power. Copyright © 2017 Elsevier B.V. All rights reserved.
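
    As a rough sketch of how theta and alpha band power can be extracted from a single EEG channel (assuming the conventional 4-8 Hz and 8-12 Hz bands and a Welch power spectral density estimate; this is not the authors' processing pipeline):

    ```python
    # Theta (4-8 Hz) and alpha (8-12 Hz) band power from one EEG channel using a
    # Welch PSD estimate. Band limits and the synthetic signal are illustrative.
    import numpy as np
    from scipy.signal import welch

    def band_power(signal, fs, band):
        freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
        mask = (freqs >= band[0]) & (freqs < band[1])
        return np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band

    fs = 256  # sampling rate in Hz
    t = np.arange(0, 30, 1 / fs)
    # Synthetic channel: a 6 Hz (theta) and a 10 Hz (alpha) rhythm plus noise.
    eeg = (2.0 * np.sin(2 * np.pi * 6 * t) + 1.0 * np.sin(2 * np.pi * 10 * t)
           + 0.5 * np.random.default_rng(0).standard_normal(t.size))

    theta = band_power(eeg, fs, (4, 8))
    alpha = band_power(eeg, fs, (8, 12))
    print(f"theta power = {theta:.2f}, alpha power = {alpha:.2f}")
    ```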

  4. Scale dependence of cirrus horizontal heterogeneity effects on TOA measurements – Part I: MODIS brightness temperatures in the thermal infrared

    Directory of Open Access Journals (Sweden)

    T. Fauchez

    2017-07-01

    Full Text Available This paper presents a study of the impact of cirrus cloud heterogeneities on MODIS simulated thermal infrared (TIR) brightness temperatures (BTs) at the top of the atmosphere (TOA) as a function of spatial resolution from 50 m to 10 km. A realistic 3-D cirrus field is generated by the 3DCLOUD model (average optical thickness of 1.4; cloud-top and base altitudes at 10 and 12 km, respectively; consisting of aggregate column crystals with Deff = 20 µm), and 3-D thermal infrared radiative transfer (RT) is simulated with the 3DMCPOL code. According to previous studies, differences between 3-D BTs computed from a heterogeneous pixel and 1-D RT computed from a homogeneous pixel are considered dependent at nadir on two effects: (i) the optical thickness horizontal heterogeneity, leading to the plane-parallel homogeneous bias (PPHB), and (ii) horizontal radiative transport (HRT), leading to the independent pixel approximation error (IPAE). A single but realistic cirrus case is simulated and, as expected, the PPHB mainly impacts the low-spatial-resolution results (above ∼250 m), with averaged values of up to 5–7 K, while the IPAE mainly impacts the high-spatial-resolution results (below ∼250 m), with average values of up to 1–2 K. A sensitivity study has been performed in order to extend these results to various cirrus optical thicknesses and heterogeneities by sampling the cirrus in several ranges of parameters. For four optical thickness classes and four optical heterogeneity classes, we have found that, for nadir observations, the spatial resolution at which the combination of PPHB and HRT effects is smallest falls between 100 and 250 m. These spatial resolutions thus appear to be the best choice to retrieve cirrus optical properties with the smallest cloud heterogeneity-related total bias in the thermal infrared. For off-nadir observations, the average total effect is increased and the minimum is shifted to coarser spatial

  5. JINR CLOUD SERVICE FOR SCIENTIFIC AND ENGINEERING COMPUTATIONS

    Directory of Open Access Journals (Sweden)

    Nikita A. Balashov

    2018-03-01

    Full Text Available Quite often, small research groups do not have access to computational resources powerful enough for their research work to be productive. Global computational infrastructures used by large scientific collaborations can be challenging for small research teams because of the bureaucratic overhead as well as the complexity of the underlying tools. Some researchers buy a set of powerful servers to cover their own needs in computational resources. A drawback of such an approach is the necessity to provide a proper hosting environment and maintenance for this hardware, which requires a certain level of expertise. Moreover, much of the time such resources may be underutilized, because a researcher needs to spend a certain amount of time preparing computations and analyzing results, and does not always need all the resources of modern multi-core CPU servers. The JINR cloud team developed a service which provides access for scientists of small research groups from JINR and its Member State organizations to computational resources via a problem-oriented (i.e. application-specific) web interface. It allows a scientist to focus on his research domain by interacting with the service in a convenient way via a browser, abstracting away from the underlying infrastructure and its maintenance. A user just sets the required values for his job via the web interface and specifies a location for uploading the result. The computational workloads are run on virtual machines deployed in the JINR cloud infrastructure.

  6. Analysis of the workload of bank tellers of a Brazilian public institution.

    Science.gov (United States)

    Serikawa, Simoni S; Albieri, Ana Carolina S; Bonugli, Gustavo P; Greghi, Marina F

    2012-01-01

    During the last decades there have been many changes in the organization of the banking sector. A simultaneous growth in musculoskeletal and mental disorders has also been observed. This study investigated the workload of bank tellers at a Brazilian public institution. An Ergonomic Work Analysis (EWA) was performed. Three employees participated in this study. During the analysis process, three research instruments were applied: the Inventory of Work and Risk of Illness, the Yoshitake Fatigue Questionnaire and the Nordic Musculoskeletal Questionnaire, in addition to video recordings and self-confrontation. The results indicated an excess of workload at the evaluated workstations, mainly in relation to mental constraints, which outweigh the physical aspects. It was thereby found that the employees tend to adopt strategies to try to reduce the impact of the excess workload, in order to regulate it.

  7. Modeling and Security in Cloud Ecosystems

    Directory of Open Access Journals (Sweden)

    Eduardo B. Fernandez

    2016-04-01

    Full Text Available Clouds do not work in isolation but interact with other clouds and with a variety of systems, either developed by the same provider or by external entities, with the purpose of interacting with them, thus forming an ecosystem. A software ecosystem is a collection of software systems that have been developed to coexist and evolve together. The stakeholders of such a system need a variety of models to give them a perspective of the possibilities of the system, to evaluate specific quality attributes, and to extend the system. A powerful representation when building or using software ecosystems is the use of architectural models, which describe the structural aspects of such a system. These models have value for security and compliance, are useful to build new systems, and can be used to define service contracts, find where quality factors can be monitored, and plan further expansion. We have described a cloud ecosystem in the form of a pattern diagram where its components are patterns and reference architectures. A pattern is an encapsulated solution to a recurrent problem. We have recently expanded these models to cover fog systems and containers. Fog Computing is a highly virtualized platform that provides compute, storage, and networking services between end devices and Cloud Computing Data Centers; a Software Container provides an execution environment for applications sharing a host operating system, binaries, and libraries with other containers. We intend to use this architecture to answer a variety of questions about the security of this system, as well as a reference for designing interacting combinations of heterogeneous components. We defined a metamodel to relate security concepts, which is being expanded.

  8. Job scheduling in a heterogeneous grid environment

    Energy Technology Data Exchange (ETDEWEB)

    Oliker, Leonid; Biswas, Rupak; Shan, Hongzhang; Smith, Warren

    2004-02-11

    Computational grids have the potential for solving large-scale scientific problems using heterogeneous and geographically distributed resources. However, a number of major technical hurdles must be overcome before this potential can be realized. One problem that is critical to effective utilization of computational grids is the efficient scheduling of jobs. This work addresses this problem by describing and evaluating a grid scheduling architecture and three job migration algorithms. The architecture is scalable and does not assume control of local site resources. The job migration policies use the availability and performance of computer systems, the network bandwidth available between systems, and the volume of input and output data associated with each job. An extensive performance comparison is presented using real workloads from leading computational centers. The results, based on several key metrics, demonstrate that the performance of our distributed migration algorithms is significantly greater than that of a local scheduling framework and comparable to a non-scalable global scheduling approach.
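
    A toy sketch of the kind of decision such a migration policy makes, assuming a job is moved only when its estimated remote completion time (queue wait plus data transfer over the available bandwidth) beats the local one (the criteria and numbers are illustrative assumptions, not the paper's algorithms):

    ```python
    # Toy job-migration decision for a heterogeneous grid: migrate a job to a
    # remote site only if its estimated completion time there, including moving
    # the input/output data over the network, is shorter than staying local.
    def completion_time(queue_wait_s, runtime_s, data_gb=0.0, bandwidth_gbps=0.0):
        transfer = (data_gb * 8 / bandwidth_gbps) if bandwidth_gbps else 0.0
        return queue_wait_s + runtime_s + transfer

    def should_migrate(job, local, remote, link_gbps):
        local_t = completion_time(local["queue_wait_s"], job["runtime_s"] * local["slowdown"])
        remote_t = completion_time(remote["queue_wait_s"],
                                   job["runtime_s"] * remote["slowdown"],
                                   data_gb=job["input_gb"] + job["output_gb"],
                                   bandwidth_gbps=link_gbps)
        return remote_t < local_t, local_t, remote_t

    job = {"runtime_s": 3600, "input_gb": 20, "output_gb": 5}
    local = {"queue_wait_s": 5400, "slowdown": 1.0}    # busy local site
    remote = {"queue_wait_s": 300, "slowdown": 1.2}    # idle but slower remote site
    migrate, lt, rt = should_migrate(job, local, remote, link_gbps=1.0)
    print(f"migrate={migrate} (local {lt:.0f} s vs remote {rt:.0f} s)")
    ```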

  9. Exploring Individual Differences in Workload Assessment

    Science.gov (United States)

    2014-12-26

    recall their workload accurately. However, it has been shown that the bias shown in subjective ratings can actually provide insight into significant...or subconsciously and embark on load shedding, postponing a task to permit another decision action to be completed in a required timeframe (Smith...or slow heart rate or unique physiological measure will not add unnecessary bias to the data. Individual baseline measures are typically taken at the

  10. Assessment of mental workload and academic motivation in medical students.

    Science.gov (United States)

    Atalay, Kumru Didem; Can, Gulin Feryal; Erdem, Saban Remzi; Muderrisoglu, Ibrahim Haldun

    2016-05-01

    To investigate the level of correlation and direction of linearity between academic motivation and subjective workload. The study was conducted at Baskent University School of Medicine, Ankara, Turkey, from December 2013 to February 2014, and comprised Phase 5 and Phase 6 medical students. Subjective workload level was determined by using the National Aeronautics and Space Administration Task Load Index scale, adapted to Turkish. Academic motivation values were obtained with the help of the Academic Motivation Scale university form. SPSS 17 was used for statistical analysis. Of the total 105 subjects, 65 (62%) students were in Phase 5 and 40 (38%) were in Phase 6. Of the Phase 5 students, 18 (27.7%) were boys and 47 (72.3%) were girls, while of the Phase 6 students, 16 (40%) were boys and 24 (60%) were girls. There were significant differences between Phase 5 and Phase 6 students for mental effort (p=0.00) and physical effort (p=0.00). The highest correlation in Phase 5 was between mental effort and intrinsic motivation (r=0.343). For Phase 6, the highest correlation was between effort and amotivation (r=-0.375).

  11. Nonparametric inference from the M/G/1 workload

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    2006-01-01

    Consider an M/G/1 queue with unknown service-time distribution and unknown traffic intensity ρ. Given systematically sampled observations of the workload, we construct estimators of ρ and of the service-time distribution function, and we study asymptotic properties of these estimators....

  12. Nonparametric inference from the M/G/1 workload

    DEFF Research Database (Denmark)

    Hansen, Martin Bøgsted; Pitts, Susan M.

    Consider an M/G/1 queue with unknown service-time distribution and unknown traffic intensity $\\rho$. Given systematically sampled observations of the workload, we construct estimators of $\\rho$ and of the service-time distribution function, and we study asymptotic properties of these estimators....

  13. Quantification of crew workload imposed by communications-related tasks in commercial transport aircraft

    Science.gov (United States)

    Acton, W. H.; Crabtree, M. S.; Simons, J. C.; Gomer, F. E.; Eckel, J. S.

    1983-01-01

    Information theoretic analysis and subjective paired-comparison and task ranking techniques were employed in order to scale the workload of 20 communications-related tasks frequently performed by the captain and first officer of transport category aircraft. Tasks were drawn from taped conversations between aircraft and air traffic controllers (ATC). Twenty crewmembers performed subjective message comparisons and task rankings on the basis of workload. Information theoretic results indicated a broad range of task difficulty levels, and substantial differences between captain and first officer workload levels. Preliminary subjective data tended to corroborate these results. A hybrid scale reflecting the results of both the analytical and the subjective techniques is currently being developed. The findings will be used to select representative sets of communications for use in high fidelity simulation.

  14. Measurement and analysis of workload effects on fault latency in real-time systems

    Science.gov (United States)

    Woodbury, Michael H.; Shin, Kang G.

    1990-01-01

    The authors demonstrate the need to address fault latency in highly reliable real-time control computer systems. It is noted that the effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. The authors present experimental evidence indicating that the duration of fault latency is dependent on workload. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method makes it possible to derive the distribution of fault latency duration. Experimental results obtained from the fault-tolerant multiprocessor at the NASA Airlab are presented and discussed.

  15. Stratocumulus Cloud Top Radiative Cooling and Cloud Base Updraft Speeds

    Science.gov (United States)

    Kazil, J.; Feingold, G.; Balsells, J.; Klinger, C.

    2017-12-01

    Cloud top radiative cooling is a primary driver of turbulence in the stratocumulus-topped marine boundary layer. A functional relationship between cloud top cooling and cloud base updraft speeds may therefore exist. A correlation between cloud top radiative cooling and cloud base updraft speeds has recently been identified empirically, providing a basis for satellite retrieval of cloud base updraft speeds. Such retrievals may enable analysis of aerosol-cloud interactions using satellite observations: updraft speeds at cloud base co-determine supersaturation and therefore the activation of cloud condensation nuclei, which in turn co-determine cloud properties and precipitation formation. We use large eddy simulation and an off-line radiative transfer model to explore the relationship between cloud-top radiative cooling and cloud base updraft speeds in a marine stratocumulus cloud over the course of the diurnal cycle. We find that during daytime, at low cloud water path (CWP), the two quantities are correlated, in agreement with the reported empirical relationship. During the night, in the absence of short-wave heating, CWP builds up (CWP > 50 g m-2) and long-wave emissions from cloud top saturate, while cloud base heating increases. In combination, cloud top cooling and cloud base updrafts become weakly anti-correlated. A functional relationship between cloud top cooling and cloud base updraft speed can hence be expected for stratocumulus clouds with a sufficiently low CWP and sub-saturated long-wave emissions, in particular during daytime. At higher CWPs, in particular at night, the relationship breaks down due to saturation of long-wave emissions from cloud top.

  16. Laboratory and Cloud Chamber Studies of Formation Processes and Properties of Atmospheric Ice Particles

    Science.gov (United States)

    Leisner, T.; Abdelmonem, A.; Benz, S.; Brinkmann, M.; Möhler, O.; Rzesanke, D.; Saathoff, H.; Schnaiter, M.; Wagner, R.

    2009-04-01

    The formation of ice in tropospheric clouds controls the evolution of precipitation and thereby influences climate and weather via a complex network of dynamical and microphysical processes. At higher altitudes, ice particles in cirrus clouds or contrails modify the radiative energy budget by direct interaction with shortwave and longwave radiation. In order to improve the parameterisation of the complex microphysical and dynamical processes leading to and controlling the evolution of tropospheric ice, laboratory experiments are performed at IMK Karlsruhe both at the single-particle level and in the aerosol and cloud chamber AIDA. Single particle experiments in electrodynamic levitation lend themselves to the study of the interaction between cloud droplets and aerosol particles under extremely well characterized and static conditions in order to obtain microphysical parameters such as freezing nucleation rates for homogeneous and heterogeneous ice formation. They also allow the observation of freezing dynamics and of secondary ice formation and multiplication processes under controlled conditions and with very high spatial and temporal resolution. The inherent droplet charge in these experiments can be varied over a wide range in order to assess the influence of the electrical state of the cloud on its microphysics. In the AIDA chamber, on the other hand, these processes are observable under the realistic dynamic conditions of an expanding and cooling cloud parcel with interacting particles, and are probed simultaneously by a comprehensive set of analytical instruments. By this means, microphysical processes can be studied in their complex interplay with dynamical processes such as coagulation or particle evaporation and growth via the Bergeron-Findeisen process. Shortwave scattering and longwave absorption properties of the nucleating and growing ice crystals are probed by in situ polarised laser light scattering measurements and infrared extinction

  17. Status and Evolution of ATLAS Workload Management System PanDA

    CERN Document Server

    AUTHOR|(CDS)2067365; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the LHC uses a sophisticated workload management system, PanDA, to provide access for thousands of physicists to distributed computing resources of unprecedented scale. This system has proved to be robust and scalable during three years of LHC operations. We describe the design and performance of PanDA in ATLAS. The features which make PanDA successful in ATLAS could be applicable to other exabyte scale scientific projects. We describe plans to evolve PanDA towards a general workload management system for the new BigData initiative announced by the US government. Other planned future improvements to PanDA will also be described

  18. Aerosol processing in stratiform clouds in ECHAM6-HAM

    Science.gov (United States)

    Neubauer, David; Lohmann, Ulrike; Hoose, Corinna

    2013-04-01

    Aerosol processing in stratiform clouds by uptake into cloud particles, collision-coalescence, chemical processing inside the cloud particles and release back into the atmosphere has important effects on aerosol concentration, size distribution, chemical composition and mixing state. Aerosol particles can act as cloud condensation nuclei. Cloud droplets can take up further aerosol particles by collisions. Atmospheric gases may also be transferred into the cloud droplets and undergo chemical reactions, e.g. the production of atmospheric sulphate. Aerosol particles are also processed in ice crystals. They may be taken up by homogeneous freezing of cloud droplets below -38° C or by heterogeneous freezing above -38° C. This includes immersion freezing of already immersed aerosol particles in the droplets and contact freezing of particles colliding with a droplet. Many clouds do not form precipitation and also much of the precipitation evaporates before it reaches the ground. The water soluble part of the aerosol particles concentrates in the hydrometeors and together with the insoluble part forms a single, mixed, larger particle, which is released. We have implemented aerosol processing into the current version of the general circulation model ECHAM6 (Stevens et al., 2013) coupled to the aerosol module HAM (Stier et al., 2005). ECHAM6-HAM solves prognostic equations for the cloud droplet number and ice crystal number concentrations. In the standard version of HAM, seven modes are used to describe the total aerosol. The modes are divided into soluble/mixed and insoluble modes and the number concentrations and masses of different chemical components (sulphate, black carbon, organic carbon, sea salt and mineral dust) are prognostic variables. We extended this by an explicit representation of aerosol particles in cloud droplets and ice crystals in stratiform clouds similar to Hoose et al. (2008a,b). Aerosol particles in cloud droplets are represented by 5 tracers for the

  19. Cloud networking understanding cloud-based data center networks

    CERN Document Server

    Lee, Gary

    2014-01-01

    Cloud Networking: Understanding Cloud-Based Data Center Networks explains the evolution of established networking technologies into distributed, cloud-based networks. Starting with an overview of cloud technologies, the book explains how cloud data center networks leverage distributed systems for network virtualization, storage networking, and software-defined networking. The author offers insider perspective to key components that make a cloud network possible such as switch fabric technology and data center networking standards. The final chapters look ahead to developments in architectures

  20. Is This Work Sustainable? Teacher Turnover and Perceptions of Workload in Charter Management Organizations

    Science.gov (United States)

    Torres, A. Chris

    2016-01-01

    An unsustainable workload is considered the primary cause of teacher turnover at Charter Management Organizations (CMOs), yet most reports provide anecdotal evidence to support this claim. This study uses 2010-2011 survey data from one large CMO and finds that teachers' perceptions of workload are significantly associated with decisions to leave…