WorldWideScience

Sample records for grid today clouds

  1. Grid today, clouds on the horizon

    Science.gov (United States)

    Shiers, Jamie

    2009-04-01

By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, such as the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities in terms of stability and continuity in the medium to long term.

  2. Grids, Clouds, and Virtualization

    Science.gov (United States)

    Cafaro, Massimo; Aloisio, Giovanni

This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the business world - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventuality as rare as organizations that generate their own electricity today, even among institutions that currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.

  3. Cloud and Grid: more connected than you might think?

    CERN Multimedia

    Stephanie McClellan

    2013-01-01

You may perceive the grid and the cloud to be two separate technologies: the grid as physical hardware and the cloud as virtual hardware simulated by running software. So how are the grid and the cloud being integrated at CERN? The LHC generates a large amount of data that needs to be stored, distributed and analysed. Grid technology is used for the mass physical data processing needed for the LHC, supported by many data centres around the world as part of the Worldwide LHC Computing Grid. Beyond the technology itself, the Grid represents a collaboration of all these centres working towards a common goal. Cloud technology uses virtualisation techniques, which allow one physical machine to represent many virtual machines. This technology is being used today to develop and deploy a range of IT services (such as Service Now, a cloud-hosted service), allowing for a great deal of operational flexibility. Such services are available at CERN through OpenStack. &...

  4. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    International Nuclear Information System (INIS)

    Cass, Tony

    2012-01-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was setup with the objective to enable use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.

  5. Grid today, clouds on the horizon

    CERN Document Server

    Shiers, Jamie

    2009-01-01

By the time of CCP 2008, the largest scientific machine in the world – the Large Hadron Collider – had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our “Higgs in one basket” – that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219–223]. After many years of preparation, 2008 saw a final “Common Computing Readiness Challenge” (CCRC'08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which...

  6. Grids Today, Clouds on the Horizon

    CERN Document Server

    Shiers, J

    2008-01-01

By the time of CCP 2008, the largest scientific machine in the world – the Large Hadron Collider – had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5 + 5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7 + 7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" – that of Grid computing. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC’08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, inc...

  7. Grids Today, Clouds on the Horizon

    CERN Document Server

    Shiers, J

    2008-01-01

By the time of CCP 2008, the world’s largest scientific machine – the Large Hadron Collider – should have been cooled down to its operational temperature of below 2 K and injection tests should have started. Collisions of proton beams at 5 + 5 TeV are expected within one to two months of the initial tests, with data taking at design energy (7 + 7 TeV) now foreseen for 2009. In order to process the data from this world machine, we have put our “Higgs in one basket” – that of Grid computing. After many years of preparation, 2008 has seen a final “Common Computing Readiness Challenge” (CCRC’08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relies on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which in Europe has been through 3 generations of EGEE projects, together with related projects in other part...

  8. Grids, virtualization, and clouds at Fermilab

    International Nuclear Information System (INIS)

    Timm, S; Chadwick, K; Garzoglio, G; Noh, S

    2014-01-01

Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  9. Grids, virtualization, and clouds at Fermilab

    Science.gov (United States)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through the deployment of a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  10. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    Kennedy, J; Walker, R; Olszewski, A; Nderitu, S; Serfon, C; Duckeck, G

    2010-01-01

The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore we have defined good channels of communication between ATLAS, the T1 and the T2s and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  11. Automated Grid Monitoring for LHCb through HammerCloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The HammerCloud system is used by CERN IT to monitor the status of the Worldwide LHC Computing Grid (WLCG). HammerCloud automatically submits jobs to WLCG computing resources, closely replicating the workflow of Grid users (e.g. physicists analyzing data). This allows computation nodes and storage resources to be monitored, software to be tested (somewhat like continuous integration), and new sites to be stress tested with a heavy job load before commissioning. The HammerCloud system has been in use for ATLAS and CMS experiments for about five years. This summer's work involved porting the HammerCloud suite of tools to the LHCb experiment. The HammerCloud software runs functional tests and provides data visualizations. HammerCloud's LHCb variant is written in Python, using the Django web framework and Ganga/DIRAC for job management.
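
    To make the workflow above concrete, here is a minimal, hypothetical sketch of the kind of probe-job loop a HammerCloud-style functional test runs. The `submit_probe` function and the site names are illustrative stand-ins for the real Ganga/DIRAC job-management layer, which is not reproduced here.

    ```python
    # Minimal sketch of a HammerCloud-style functional test loop.
    # submit_probe() and SITES are hypothetical stand-ins for the
    # Ganga/DIRAC job-management layer described in the abstract.
    import random
    import time

    SITES = ["LCG.SiteA.ch", "LCG.SiteB.de", "LCG.SiteC.uk"]  # hypothetical sites

    def submit_probe(site: str) -> bool:
        """Pretend to run a short analysis job at `site`; report success."""
        time.sleep(0.01)              # stand-in for real job turnaround
        return random.random() > 0.1  # assume ~90% nominal success rate

    def run_functional_tests(rounds: int = 3) -> dict:
        """Submit one probe per site per round and tally success rates."""
        stats = {site: {"ok": 0, "total": 0} for site in SITES}
        for _ in range(rounds):
            for site in SITES:
                stats[site]["total"] += 1
                if submit_probe(site):
                    stats[site]["ok"] += 1
        return stats

    if __name__ == "__main__":
        for site, s in run_functional_tests().items():
            print(f"{site}: {s['ok']}/{s['total']} probes succeeded")
    ```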

  12. Grids, Clouds and Virtualization

    CERN Document Server

    Cafaro, Massimo

    2011-01-01

    Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference int

  13. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

The smart grid is an emerging energy system in which information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid. (paper)

  14. Generating Free-Form Grid Truss Structures from 3D Scanned Point Clouds

    Directory of Open Access Journals (Sweden)

    Hui Ding

    2017-01-01

Reconstruction according to physical shape is a novel way to generate free-form grid truss structures. 3D scanning is an effective means of acquiring physical form information, and it generates dense point clouds on the surfaces of objects. However, generating grid truss structures from point clouds is still a challenge. Based on the advancing front technique (AFT), which is widely used in the Finite Element Method (FEM), a scheme for generating grid truss structures from 3D scanned point clouds is proposed in this paper. Based on the characteristics of point cloud data, a search box is adopted to reduce the search space in grid generation. A front advancing procedure suited to point clouds is established. The Delaunay method and the Laplacian method are used to improve the quality of the generated grids, and an adjustment strategy that locates grid nodes at appointed places is proposed. Several examples of generating grid truss structures from 3D scanned point clouds of seashells are carried out to verify the proposed scheme. Physical models of the grid truss structures generated in the examples were manufactured by 3D printing, which confirms the feasibility of the scheme.
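
    As a rough illustration of the "search box" idea in the abstract, the sketch below buckets a point cloud into uniform cells so that neighbour queries during front advancing scan only nearby cells rather than the whole cloud. The cell size, radius and data are illustrative assumptions, not values from the paper.

    ```python
    # Toy "search box" spatial index: bucket points into uniform cells so
    # candidate nodes are found by scanning nearby cells only, instead of
    # the entire point cloud. All names and parameters are illustrative.
    from collections import defaultdict
    import numpy as np

    def build_search_boxes(points: np.ndarray, cell: float) -> dict:
        """Map integer cell coordinates -> indices of the points inside."""
        boxes = defaultdict(list)
        for i, p in enumerate(points):
            boxes[tuple((p // cell).astype(int))].append(i)
        return boxes

    def neighbors(points, boxes, q, cell, radius):
        """Return indices of points within `radius` of the query point `q`."""
        c = (q // cell).astype(int)
        reach = int(np.ceil(radius / cell))
        hits = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for i in boxes.get((c[0] + dx, c[1] + dy, c[2] + dz), []):
                        if np.linalg.norm(points[i] - q) <= radius:
                            hits.append(i)
        return hits

    pts = np.random.rand(10000, 3)             # stand-in for a 3D scan
    boxes = build_search_boxes(pts, cell=0.05)
    print(len(neighbors(pts, boxes, pts[0], 0.05, 0.05)), "points near pts[0]")
    ```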

  15. Can Clouds replace Grids? Will Clouds replace Grids?

    Energy Technology Data Exchange (ETDEWEB)

    Shiers, J D, E-mail: Jamie.Shiers@cern.c [CERN, 1211 Geneva 23 (Switzerland)

    2010-04-01

The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared 'open' and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently 'Cloud Computing' - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, whether in terms of funding or in the wider context of the essential but often overlooked role of science in society, education and economy.

  16. Can Clouds replace Grids? Will Clouds replace Grids?

    International Nuclear Information System (INIS)

    Shiers, J D

    2010-01-01

The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared 'open' and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently 'Cloud Computing' - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, whether in terms of funding or in the wider context of the essential but often overlooked role of science in society, education and economy.

  17. Can Clouds replace Grids? Will Clouds replace Grids?

    Science.gov (United States)

    Shiers, J. D.

    2010-04-01

The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing" - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, whether in terms of funding or in the wider context of the essential but often overlooked role of science in society, education and economy.

  18. A Survey on Cloud Security Issues and Techniques

    OpenAIRE

    Sharma, Shubhanjali; Gupta, Garima; Laxmi, P. R.

    2014-01-01

Today, cloud computing is an emerging way of computing in computer science. Cloud computing is a set of resources and services offered over a network or the internet. Cloud computing extends various computing techniques such as grid computing and distributed computing. Today cloud computing is used in both industry and academia. The cloud facilitates its users by providing virtual resources via the internet. As the field of cloud computing spreads, new techniques are developing. ...

  19. An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Kumbhare, Alok; Cao, Baohua; Prasanna, Viktor K.

    2011-07-09

Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited for Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information-rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grid software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project, underway in the largest U.S. municipal utility, to drive this analysis, which will benefit both Cloud practitioners targeting Smart Grid applications and Cloud researchers investigating security and privacy.

  20. Automated Grid Monitoring for the LHCb Experiment Through HammerCloud

    CERN Document Server

    Dice, Bradley

    2015-01-01

    The HammerCloud system is used by CERN IT to monitor the status of the Worldwide LHC Computing Grid (WLCG). HammerCloud automatically submits jobs to WLCG computing resources, closely replicating the workflow of Grid users (e.g. physicists analyzing data). This allows computation nodes and storage resources to be monitored, software to be tested (somewhat like continuous integration), and new sites to be stress tested with a heavy job load before commissioning. The HammerCloud system has been in use for ATLAS and CMS experiments for about five years. This summer's work involved porting the HammerCloud suite of tools to the LHCb experiment. The HammerCloud software runs functional tests and provides data visualizations. HammerCloud's LHCb variant is written in Python, using the Django web framework and Ganga/DIRAC for job management.

  1. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  2. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  3. Cloud feedback studies with a physics grid

    Energy Technology Data Exchange (ETDEWEB)

    Dipankar, Anurag [Max Planck Institute for Meteorology Hamburg; Stevens, Bjorn [Max Planck Institute for Meteorology Hamburg

    2013-02-07

During this project the investigators implemented a fully parallel version of the dual-grid approach in the ICON main model code, implemented a fully conservative first-order interpolation scheme for horizontal remapping, integrated the UCLA-LES micro-scale model into ICON to run in parallel in selected columns, and performed cloud feedback studies in an aqua-planet setup to evaluate the classical parameterization on a small domain. The micro-scale model may be run in parallel with the classical parameterization, or it may be run on a "physics grid" independent of the dynamics grid.
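
    The "fully conservative first-order interpolation scheme for horizontal remapping" can be illustrated with a one-dimensional toy version: cell means are transferred by overlap-weighted averaging, which preserves the domain integral exactly. The grids and field below are illustrative, not taken from ICON.

    ```python
    # 1D sketch of first-order conservative remapping: cell-mean values
    # are transferred by length-weighted overlap, so the total integral
    # over the domain is preserved exactly. Grids and field are made up.
    import numpy as np

    def conservative_remap(src_edges, src_vals, dst_edges):
        """Remap cell-mean values by overlap-weighted averaging."""
        dst_vals = np.zeros(len(dst_edges) - 1)
        for j in range(len(dst_edges) - 1):
            lo, hi = dst_edges[j], dst_edges[j + 1]
            acc = 0.0
            for i in range(len(src_edges) - 1):
                overlap = max(0.0, min(hi, src_edges[i + 1]) - max(lo, src_edges[i]))
                acc += src_vals[i] * overlap
            dst_vals[j] = acc / (hi - lo)
        return dst_vals

    src_edges = np.linspace(0, 1, 11)   # 10 source cells
    dst_edges = np.linspace(0, 1, 7)    # 6 destination cells
    src_vals = np.random.rand(10)
    dst_vals = conservative_remap(src_edges, src_vals, dst_edges)
    # Conservation check: the two domain integrals agree.
    print(np.sum(src_vals * np.diff(src_edges)),
          np.sum(dst_vals * np.diff(dst_edges)))
    ```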

  4. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  5. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  6. ATLAS operations in the GridKa T1/T2 Cloud

    International Nuclear Information System (INIS)

    Duckeck, G; Serfon, C; Walker, R; Harenberg, T; Kalinin, S; Schultes, J; Kawamura, G; Leffhalm, K; Meyer, J; Nderitu, S; Olszewski, A; Petzold, A; Sundermann, J E

    2011-01-01

The ATLAS GridKa cloud consists of the GridKa Tier1 centre and 12 Tier2 sites from five countries associated with it. Over the last few years a well-defined and tested operations model has evolved. Several core cloud services need to be operated and closely monitored: distributed data management, involving data replication, deletion and consistency checks; support for ATLAS production activities, which includes Monte Carlo simulation, reprocessing and pilot factory operation; continuous checks of data availability and performance for user analysis; software installation and database setup. Of crucial importance are good communication between sites, the operations team and ATLAS, as well as efficient cloud-level monitoring tools. The paper gives an overview of the operations model and ATLAS services within the cloud.

  7. On the influence of cloud fraction diurnal cycle and sub-grid cloud optical thickness variability on all-sky direct aerosol radiative forcing

    International Nuclear Information System (INIS)

    Min, Min; Zhang, Zhibo

    2014-01-01

The objective of this study is to understand how the cloud fraction diurnal cycle and sub-grid cloud optical thickness variability influence the all-sky direct aerosol radiative forcing (DARF). We focus on the southeast Atlantic region, where transported smoke is often observed above low-level water clouds during burning seasons. We use CALIOP observations to derive the optical properties of aerosols. We developed two diurnal cloud fraction variation models. One is based on sinusoidal fitting of MODIS observations from the Terra and Aqua satellites. The other is based on high-temporal-frequency diurnal cloud fraction observations from SEVIRI on board a geostationary satellite. Both models indicate a strong cloud fraction diurnal cycle over the southeast Atlantic region. Sensitivity studies indicate that using a constant cloud fraction corresponding to the Aqua local equatorial crossing time (1:30 PM) generally leads to an underestimated (less positive) diurnal mean DARF even if solar diurnal variation is considered. Using the cloud fraction corresponding to the Terra local equatorial crossing time (10:30 AM) generally leads to overestimation. The biases are typically around 10–20%, but can be more than 50%. The influence of sub-grid cloud optical thickness variability on DARF is studied utilizing the cloud optical thickness histogram available in the MODIS Level-3 daily data. Similar to previous studies, we found that the above-cloud smoke in the southeast Atlantic region has a strong warming effect at the top of the atmosphere. However, because of the plane-parallel albedo bias, the warming effect of above-cloud smoke could be significantly overestimated if the grid mean, instead of the full histogram, of cloud optical thickness is used in the computation. This bias generally increases with increasing above-cloud aerosol optical thickness and sub-grid cloud optical thickness inhomogeneity. Our results suggest that the cloud diurnal cycle and sub-grid cloud variability are important factors
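
    The plane-parallel albedo bias mentioned above follows from Jensen's inequality: cloud albedo is a concave function of optical thickness, so the albedo computed from the grid-mean optical thickness exceeds the mean of the albedos computed from the full histogram. A toy numerical demonstration, with an assumed concave albedo curve and an assumed lognormal sub-grid distribution (neither taken from the paper):

    ```python
    # Toy demonstration of the plane-parallel albedo bias: because albedo
    # grows concavely with cloud optical thickness tau, albedo(mean tau)
    # exceeds mean(albedo(tau)) for any sub-grid variability (Jensen's
    # inequality). The albedo form and tau distribution are illustrative.
    import numpy as np

    def toy_albedo(tau):
        return tau / (tau + 7.0)  # illustrative concave saturation curve

    rng = np.random.default_rng(0)
    tau = rng.lognormal(mean=2.0, sigma=0.8, size=100000)  # sub-grid tau

    plane_parallel = toy_albedo(tau.mean())       # uses the grid-mean tau
    independent_pixels = toy_albedo(tau).mean()   # uses the full histogram
    print(f"albedo from grid-mean tau : {plane_parallel:.3f}")
    print(f"albedo from full histogram: {independent_pixels:.3f}")  # smaller
    ```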

  8. The International Symposium on Grids and Clouds and the Open Grid Forum

    Science.gov (United States)

The International Symposium on Grids and Clouds 2011 was held at Academia Sinica in Taipei, Taiwan on 19th to 25th March 2011. A series of workshops and tutorials preceded the symposium. The aim of ISGC is to promote the use of grid and cloud computing in the Asia Pacific region. Over the 9 years that ISGC has been running, the programme has evolved to become more user-community focused, with subjects reaching out to a larger population. Research communities are making widespread use of distributed computing facilities. Linking together data centers, production grids, desktop systems or public clouds, many researchers are able to do more research and produce results more quickly. They could do much more if the computing infrastructures they use worked together more effectively. Changes in the way we approach distributed computing, and new services from commercial providers, mean that boundaries are starting to blur. This opens the way for hybrid solutions that make it easier for researchers to get their job done. Consequently the theme for ISGC 2011 was the opportunities that better integrated computing infrastructures can bring, and the steps needed to achieve the vision of a seamless global research infrastructure. 2011 is a year of firsts for ISGC. First, the title: while the acronym remains the same, its meaning has changed to reflect the evolution of computing: The International Symposium on Grids and Clouds. Secondly, the programming: ISGC has always included topical workshops and tutorials, but 2011 is the first year that ISGC has been held in conjunction with the Open Grid Forum, which held its 31st meeting with a series of working group sessions. The ISGC plenary session included keynote speakers from OGF who highlighted the relevance of standards for the research community. ISGC, with its focus on applications and operational aspects, complemented OGF's focus on standards development. ISGC brought to OGF real-life use cases and needs to be

  9. A Cloud Associated Smart Grid Admin Dashboard

    Directory of Open Access Journals (Sweden)

    P. Naveen

    2018-02-01

An intelligent smart grid system undertakes electricity demand in a sustainable, reliable, economical and environmentally friendly manner. As the smart grid evolves, it has the responsibility of meeting changing consumer needs on a day-to-day basis. Modern energy consumers want to regulate their consumption patterns more competently and intelligently than currently provided means allow. To fulfill consumers’ needs, smart meters and sensors make the grid infrastructure more efficient and resilient in energy data collection and management, even with ever-changing renewable power generation. Though the cloud acts as an outlet for energy consumers to retrieve energy data from the grid, the information systems available are technically constrained and not user-friendly. Hence, a simple technology-enabled, utility-consumer interactive information system in the form of a dashboard is presented to cater to electricity consumers’ needs.

  10. Can Clouds Replace Grids? Will Clouds Replace Grids?

    CERN Document Server

    Shiers, J

    2010-01-01

The world’s largest scientific machine – comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground – currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared “open” and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability – as seen by the experiments, as opposed to that measured by the official tools – still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently “Cloud Computing” – in terms of pay-per-use fabric provisioning – has emerged as a potentially viable al...

  11. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies at precise coordinates, each point can be classified into a grid according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
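
    A minimal sketch of this layer-and-FFT idea, assuming an angular-spectrum propagation step and illustrative wavelength, pixel pitch and geometry (none taken from the paper):

    ```python
    # Sketch of point-cloud gridding for a CGH: classify points into depth
    # layers, rasterize each layer onto a grid, and propagate every layer
    # to the hologram plane with one FFT-based (angular-spectrum) step.
    # All optical parameters below are illustrative assumptions.
    import numpy as np

    N, pitch, wav = 512, 8e-6, 633e-9     # grid size, pixel pitch, wavelength
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)

    def propagate(field, z):
        """Angular-spectrum propagation of `field` over distance z."""
        arg = 1.0 / wav**2 - FX**2 - FY**2
        H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
        return np.fft.ifft2(np.fft.fft2(field) * H)

    points = np.random.rand(2000, 3) * [N * pitch, N * pitch, 0.05]  # x, y, z
    layers = np.linspace(0.0, 0.05, 16)   # depth planes (15 slabs)
    hologram = np.zeros((N, N), dtype=complex)
    for z0, z1 in zip(layers[:-1], layers[1:]):
        sel = points[(points[:, 2] >= z0) & (points[:, 2] < z1)]
        layer = np.zeros((N, N), dtype=complex)
        ix = (sel[:, 0] / pitch).astype(int).clip(0, N - 1)
        iy = (sel[:, 1] / pitch).astype(int).clip(0, N - 1)
        layer[iy, ix] = 1.0               # point sources on this depth layer
        hologram += propagate(layer, 0.1 + z0)  # one FFT pass per layer
    print("hologram computed:", hologram.shape)
    ```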

  12. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  13. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

P2P, Grid, Cloud and Internet computing technologies have quickly become established as breakthrough paradigms for solving complex problems, by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large-scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  14. Grid site testing for ATLAS with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, J; Hönig, F; Legger, F; LLamas, R Medrano; Sciacca, F G; Ster, D van der

    2014-01-01

With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling virtual organisations (VOs) and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test workflows. These new workflows include, for example, tests of the ATLAS nightly build system, the ATLAS Monte Carlo production system, the XRootD federation (FAX) and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  15. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2014-01-01

With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows include, for example, tests of the ATLAS nightly build system, the ATLAS MC production system, the XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  16. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2013-01-01

With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO and site administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows include, for example, tests of the ATLAS nightly build system, the ATLAS MC production system, the XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  17. The International Symposium on Grids and Clouds

    Science.gov (United States)

The International Symposium on Grids and Clouds (ISGC) 2012 will be held at Academia Sinica in Taipei from 26 February to 2 March 2012, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). 2012 is the decennium anniversary of ISGC, which over the last decade has tracked the convergence, collaboration and innovation of individual researchers across the Asia Pacific region into a coherent community. With the continuous support and dedication of the delegates, ISGC has provided the primary international distributed computing platform where distinguished researchers and collaboration partners from around the world share their knowledge and experiences. The last decade has seen the wide-scale emergence of e-Infrastructure as a critical asset for the modern e-Scientist. The emergence of large-scale research infrastructures and instruments has produced a torrent of electronic data and is forcing a generational change in the scientific process and the mechanisms used to analyse the resulting data deluge. No longer can the processing of these vast amounts of data and the production of relevant scientific results be undertaken by a single scientist. Virtual Research Communities that span organisations around the world, through an integrated digital infrastructure that connects the trust and administrative domains of multiple resource providers, have become critical in supporting these analyses. Topics covered in ISGC 2012 include: High Energy Physics, Biomedicine & Life Sciences, Earth Science, Environmental Changes and Natural Disaster Mitigation, Humanities & Social Sciences, Operations & Management, Middleware & Interoperability, Security and Networking, Infrastructure Clouds & Virtualisation, Business Models & Sustainability, Data Management, Distributed Volunteer & Desktop Grid Computing, High Throughput Computing, and High Performance, Manycore & GPU Computing.

  18. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund (ERDF); and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and the Government of Cantabria.

  19. Simulation modeling of cloud computing for smart grid using CloudSim

    Directory of Open Access Journals (Sweden)

    Sandeep Mehmi

    2017-05-01

In this paper a smart grid cloud has been simulated using CloudSim. Various parameters, such as the number of virtual machines (VMs), VM image size, VM RAM, VM bandwidth and cloudlet length, and their effect on cost and cloudlet completion time under time-shared and space-shared resource allocation policies, have been studied. As the number of cloudlets increased from 68 to 178, a greater number of cloudlets completed their execution, with a higher cloudlet completion time under the time-shared allocation policy than under the space-shared allocation policy. A similar trend was observed when the VM bandwidth was increased from 1 Gbps to 10 Gbps and the VM RAM from 512 MB to 5120 MB. The cost of processing increased linearly with the number of VMs, VM image size and cloudlet length.

  20. How to deal with petabytes of data: the LHC Grid project

    International Nuclear Information System (INIS)

    Britton, D; Lloyd, S L

    2014-01-01

We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the ‘Big Data’ problem and how the Grid has been a forerunner of today's cloud computing. (review article)

  1. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.

    Science.gov (United States)

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-09-01

Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources to expedite analysis and reporting. Cloud-based computing environments can be set up with a fraction of the time and effort required by traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic.
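
    As a hedged illustration of the AWS provisioning step such a tutorial would cover, the sketch below launches one compute node with boto3; the AMI ID, instance type and bootstrap script are placeholders, and valid AWS credentials are assumed.

    ```python
    # Minimal sketch of provisioning a compute node on AWS for a
    # NONMEM/Grid Engine cluster, in the spirit of the tutorial.
    # The AMI ID, instance type and bootstrap script are placeholders;
    # valid AWS credentials and the boto3 package are required.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: your cluster AMI
        InstanceType="c5.4xlarge",        # placeholder sizing
        MinCount=1,
        MaxCount=1,
        # Hypothetical bootstrap: join the Grid Engine cluster on boot.
        UserData="#!/bin/bash\n# join Grid Engine cluster here\n",
    )
    print("launched:", [i.id for i in instances])
    ```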

  2. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    Science.gov (United States)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses, as well as academia, as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise/community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
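
    The replication trade-off the paper analyses can be sketched with a toy Monte Carlo that ignores queueing and the MAP arrival process entirely: a request completes when the fastest of its r replicas finishes, and the gain from replication depends strongly on the service-time distribution. The distributions and parameters below are illustrative.

    ```python
    # Monte Carlo sketch of the redundancy trade-off: a request is
    # replicated r times and completes when the fastest replica finishes.
    # This toy ignores queueing and MAP arrivals; it only shows how the
    # service-time distribution shapes the gain from replication.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200000

    def mean_latency(sampler, r):
        """Mean of the minimum of r i.i.d. service times."""
        return sampler((n, r)).min(axis=1).mean()

    exponential = lambda size: rng.exponential(1.0, size)
    weibull_heavy = lambda size: rng.weibull(0.7, size)  # heavier tail

    for r in (1, 2, 3):
        print(f"r={r}: exp {mean_latency(exponential, r):.3f}  "
              f"weibull(0.7) {mean_latency(weibull_heavy, r):.3f}")
    ```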

  3. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    Directory of Open Access Journals (Sweden)

    Chakravarthy Srinivas R.

    2018-03-01

Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses, as well as academia, as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise/community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase-type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.

  4. Cloud Computing and Smart Grids

    Directory of Open Access Journals (Sweden)

    Janina POPEANGĂ

    2012-10-01

Increasing concern about energy consumption is leading to infrastructure that supports real-time, two-way communication between utilities and consumers, and allows software systems at both ends to control and manage power use. To manage communications to millions of endpoints in a secure, scalable and highly available environment, and to achieve the twin goals of ‘energy conservation’ and ‘demand response’, utilities must extend the same communication network management processes and tools used in the data center to the field. This paper proposes that cloud computing technology, because of its low cost, flexible and redundant architecture, and fast response time, has the functionality needed to provide the security, interoperability and performance required for large-scale smart grid applications.

  5. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

This paper proposes a cloud computing framework in a smart grid environment, creating a small integrated energy hub that supports real-time computing for handling huge volumes of data. A stochastic programming model is developed with a cloud computing scheme for effective demand-side management (DSM) in the smart grid. Simulation results are obtained using a GUI and the Gurobi optimizer in Matlab, in order to reduce electricity demand by creating energy networks in a smart hub approach.
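
    As a toy stand-in for the full stochastic program (which the abstract solves with Gurobi in Matlab), the sketch below schedules a deferrable load into the hours with the lowest expected price over a set of random price scenarios; the scenarios and load size are invented for illustration.

    ```python
    # Toy demand-side-management scheduler: place a deferrable load in the
    # hours with the lowest expected price across stochastic price
    # scenarios. Scenarios and load size are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2)
    hours = 24
    scenarios = rng.uniform(0.05, 0.30, size=(100, hours))  # price scenarios

    expected_price = scenarios.mean(axis=0)
    load_hours = 4                                 # load needs 4 unit-hours
    schedule = np.argsort(expected_price)[:load_hours]

    cost = expected_price[schedule].sum()
    print("run load during hours:", sorted(schedule.tolist()))
    print(f"expected cost for {load_hours} unit-hours: {cost:.2f}")
    ```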

  6. Use of VMware for providing cloud infrastructure for the Grid

    International Nuclear Information System (INIS)

    Long, Robin; Storey, Matthew

    2014-01-01

    The need to maximise computing resources whilst maintaining versatile setups leads to the need for flexible, on-demand facilities through the use of cloud computing. GridPP is currently investigating the role that Cloud Computing, in the form of Virtual Machines, can play in supporting Particle Physics analyses. As part of this research we look at the ability of VMware's ESXi hypervisors [6] to provide such an infrastructure through the use of Virtual Machines (VMs), the advantages of such systems, and their potential performance compared to physical environments.

  7. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products and are no longer relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific open-source or industrial grid products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids make it possible to manage real-world objects in a service-oriented way using widespread industrial standards.

  8. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  9. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  10. An Authentication Gateway for Integrated Grid and Cloud Access

    International Nuclear Information System (INIS)

    Ciaschini, V; Salomoni, D

    2011-01-01

    The WNoDeS architecture, providing distributed, integrated access to both Cloud and Grid resources through virtualization technologies, makes use of an Authentication Gateway to support diverse authentication mechanisms. Three main use cases are foreseen, covering access via X.509 digital certificates, federated services like Shibboleth or Kerberos, and credit-based access. In this paper, we describe the structure of the WNoDeS authentication gateway.
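
    Conceptually, such a gateway dispatches each request to the handler registered for its credential type. The following skeleton is only a schematic illustration of that pattern - it is not WNoDeS code, and the handler checks are invented placeholders.

        # Hypothetical dispatch skeleton for an authentication gateway that
        # accepts several credential types (not the actual WNoDeS code).
        class AuthenticationGateway:
            def __init__(self):
                self._handlers = {}

            def register(self, credential_type, handler):
                self._handlers[credential_type] = handler

            def authenticate(self, credential_type, credential):
                handler = self._handlers.get(credential_type)
                if handler is None:
                    raise ValueError(f"unsupported credential type: {credential_type}")
                return handler(credential)

        gateway = AuthenticationGateway()
        gateway.register("x509", lambda c: c.startswith("-----BEGIN CERTIFICATE-----"))
        gateway.register("kerberos", lambda c: bool(c))          # placeholder check
        gateway.register("credit", lambda c: c.get("credits", 0) > 0)

        print(gateway.authenticate("credit", {"credits": 10}))   # True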

  11. Integration of cloud, grid and local cluster resources with DIRAC

    International Nuclear Information System (INIS)

    Fifield, Tom; Sevior, Martin; Carmona, Ana; Casajús, Adrián; Graciani, Ricardo

    2011-01-01

    Grid computing was developed to provide users with uniform access to large-scale distributed resources. This has worked well; however, there are significant resources available to the scientific community that do not follow this paradigm - those offered by cloud infrastructure providers, HPC supercomputers or local clusters. DIRAC (Distributed Infrastructure with Remote Agent Control) was originally designed to support direct submission to the Local Resource Management Systems (LRMS) of such clusters for LHCb, matured to support grid workflows, and has recently been updated to support Amazon's Elastic Compute Cloud. This raises a number of new possibilities - by opening avenues to new resources, virtual organisations can adapt their resources to usage patterns and use these dedicated facilities for a given time. For example, user communities such as High Energy Physics experiments have computing tasks with a wide variety of requirements in terms of CPU, data access or memory consumption, and their usage profile is never constant throughout the year. Having the possibility to transparently absorb peaks in the demand for these kinds of tasks using Cloud resources could allow a reduction in the overall cost of the system. This paper investigates interoperability by following a recent large-scale production exercise utilising resources from these three different paradigms, during the 2010 Belle Monte Carlo run. Through this, it discusses the challenges and opportunities of such a model.
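
    The "absorb peaks with the Cloud" idea reduces, at its simplest, to a provisioning rule like the following sketch (not DIRAC's actual logic; the slot counts, jobs-per-VM ratio and cap are invented):

        # Schematic "burst to the cloud" policy: keep a fixed pool of grid/local
        # slots and lease cloud VMs only while the queue exceeds what the fixed
        # pool can drain.
        def plan_cloud_vms(queued_jobs, fixed_slots, jobs_per_vm=8, max_vms=50):
            backlog = max(0, queued_jobs - fixed_slots)
            needed = -(-backlog // jobs_per_vm)        # ceiling division
            return min(needed, max_vms)

        for queued in (40, 120, 600):
            print(queued, "queued ->", plan_cloud_vms(queued, fixed_slots=100), "cloud VMs")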

  12. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  13. International Symposium on Grids and Clouds (ISGC) 2016

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 focuses on “Ubiquitous e-infrastructures and Applications”. Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-Infrastructures are becoming ubiquitous, providing an environment that supports large-scale collaborations that deal with global challenges, as well as smaller and temporary research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-Infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following on from last year's conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-Infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications; Biomedicine & Life Sciences Applications; Earth & Environmental Sciences & Biodiversity Applications; Humanities, Arts, and Social Sciences (HASS) Applications; Virtual Research Environment (including middleware, tools, services, workflow, etc.); Data Management; Big Data; Networking & Security; Infrastructure & Operations; Infrastructure Clouds and Virtualisation; Interoperability; Business Models & Sustainability; Highly Distributed Computing Systems; and High Performance & Technical Computing (HPTC), etc.

  14. International Symposium on Grids and Clouds (ISGC) 2014

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). “Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen phenomenal growth in the production of data in all forms by all research communities, producing a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications; Biomedicine & Life Sciences Applications; Earth & Environmental Sciences & Biodiversity Applications; Humanities & Social Sciences Applications; Virtual Research Environment (including middleware, tools, services, workflow, etc.); Data Management; Big Data; Infrastructure & Operations Management; Infrastructure Clouds and Virtualisation; Interoperability; Business Models & Sustainability; Highly Distributed Computing Systems; and High Performance & Technical Computing (HPTC).

  15. Grid and Cloud for Developing Countries

    Science.gov (United States)

    Petitdidier, Monique

    2014-05-01

    The European Grid e-infrastructure has shown the capacity to connect geographically distributed, heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, with regional organizations like Ubuntunet, WACREN or ASREN coordinating the development and improvement of the network and its interconnection. Internet connectivity in those countries is still growing rapidly. The second step has been to meet the compute needs of the scientists. Even though many of them have their own laptops, multi-core or not, for more and more applications this is not enough, because they face intensive computing due to the large amount of data to be processed and/or complex codes. So far one solution has been to go abroad, to Europe or America, to run large applications, or not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their own institute. With faster and more robust Internet connections they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, such as the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays Clouds are becoming very attractive and are starting to be developed in some of these countries. This talk presents the challenges these countries face in implementing such e-infrastructures and in developing, in parallel, scientific and technical research and education in the new technologies, illustrated by examples.

  16. Cloud Computing Benefits for Educational Institutions

    OpenAIRE

    Lakshminarayanan, Ramkumar; Kumar, Binod; Raju, M.

    2013-01-01

    Education today is becoming closely tied to Information Technology for content delivery, communication and collaboration. The demand for servers, storage and software is high in universities, colleges and schools. Cloud Computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service...

  17. The GridEcon Platform: A Business Scenario Testbed for Commercial Cloud Services

    Science.gov (United States)

    Risch, Marcel; Altmann, Jörn; Guo, Li; Fleming, Alan; Courcoubetis, Costas

    Within this paper, we present the GridEcon Platform, a testbed for designing and evaluating economics-aware services in a commercial Cloud computing setting. The Platform is based on the idea that the exact behavior of such services is difficult to predict in the context of a market, and that an environment for evaluating their behavior in an emulated market is therefore needed. To identify the components of the GridEcon Platform, a number of economics-aware services and their interactions have been envisioned. The two most important components of the platform are the Marketplace and the Workflow Engine. The Workflow Engine allows the simple composition of a market environment by describing the service interactions between economics-aware services. The Marketplace allows goods to be traded using different market mechanisms. The capabilities of these components in conjunction with the economics-aware services are described in detail in this paper. The validation of an implemented market mechanism and of a capacity planning service using the GridEcon Platform also demonstrated the Platform's usefulness.
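
    As a flavour of what a Marketplace component might plug in, the sketch below implements one of the simplest market mechanisms: pairwise matching of sorted bids and asks with midpoint pricing. It is an invented toy, not the GridEcon implementation.

        # Minimal continuous-double-auction style matcher.
        def match(bids, asks):
            """bids/asks: lists of (price, quantity); returns executed trades."""
            bids = sorted(bids, reverse=True)   # highest bid first
            asks = sorted(asks)                 # lowest ask first
            trades = []
            while bids and asks and bids[0][0] >= asks[0][0]:
                (bp, bq), (ap, aq) = bids[0], asks[0]
                qty = min(bq, aq)
                trades.append(((bp + ap) / 2, qty))   # midpoint pricing
                bids[0], asks[0] = (bp, bq - qty), (ap, aq - qty)
                if bids[0][1] == 0: bids.pop(0)
                if asks[0][1] == 0: asks.pop(0)
            return trades

        print(match(bids=[(0.12, 10), (0.10, 5)], asks=[(0.09, 8), (0.11, 12)]))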

  18. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Document Server

    Van der Ster, D; Medrano Llamas, R; Legger, F; Sciabà, A; Sciacca, G; Úbeda García, M

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion p...

  19. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion ...

  20. Cloud vector mapping using MODIS 09 Climate Modeling Grid (CMG) for the year 2010 and 2011

    International Nuclear Information System (INIS)

    Jah, Asjad Asif; Farrukh, Yousaf Bin; Ali, Rao Muhammad Saeed

    2013-01-01

    An alternate use for MODIS images was sought by mapping cloud movement directions and dissipation times during the 2010 and 2011 floods. MODIS Level-02 daily CMG (Climate Modelling Grid) land-cover images were downloaded and subsequently rectified and clipped to the study area. These images were then put together to observe the direction of cloud movement and to vectorize the observed paths. Initial findings suggest that cloud cover usually does not persist over the northern humid region of the country and dissipates in less than 24 hours. Additionally, this led to the development of a robust methodology for cloud motion analysis using FOSS and market-leading GIS utilities.

  1. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, the architectural changes and the lessons learned while enabling support for cloud infrastructures in GlideinWMS.

  2. Reaching for the cloud: on the lessons learned from grid computing technology transfer process to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Dickmann, Frank; Sax, Ulrich; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which led to the creation of the Grid. The inter-domain transfer process of this technology has hitherto been an intuitive process without in-depth analysis. Some difficulties facing the life science community in this transfer can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies which have achieved a certain stability. Grid and Cloud solutions are technologies which are still in flux. We show how Grid computing creates new difficulties in the transfer process that are not considered in Bozeman's model. We show why the success of healthgrids should be measured by the qualified scientific human capital and the opportunities created, and not primarily by the market impact. We conclude with recommendations that can help improve the adoption of Grid and Cloud solutions in the biomedical community. These results give a more concise explanation of the difficulties many life science IT projects face in their late funding periods, and show leveraging steps that can help in overcoming the "vale of tears".

  3. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...
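
    As a taste of the evolutionary metaheuristics the book covers, the toy genetic algorithm below assigns jobs to identical machines so as to minimise the makespan. It is a single-criterion, drastically simplified illustration with invented parameters, not an algorithm taken from the book.

        import random

        JOBS = [random.randint(1, 20) for _ in range(30)]   # job lengths, invented
        MACHINES = 4

        def makespan(assign):
            # Makespan = load of the busiest machine under this assignment.
            loads = [0] * MACHINES
            for job, m in zip(JOBS, assign):
                loads[m] += job
            return max(loads)

        def evolve(pop_size=40, generations=200, mutation=0.05):
            pop = [[random.randrange(MACHINES) for _ in JOBS] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=makespan)
                survivors = pop[: pop_size // 2]            # elitist selection
                children = []
                for _ in range(pop_size - len(survivors)):
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(len(JOBS))
                    child = a[:cut] + b[cut:]               # one-point crossover
                    for i in range(len(child)):             # point mutation
                        if random.random() < mutation:
                            child[i] = random.randrange(MACHINES)
                    children.append(child)
                pop = survivors + children
            return min(pop, key=makespan)

        best = evolve()
        print("best makespan:", makespan(best), "lower bound:", sum(JOBS) / MACHINES)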

  4. Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence

    Science.gov (United States)

    Parishani, Hossein; Pritchard, Michael S.; Bretherton, Christopher S.; Wyant, Matthew C.; Khairoutdinov, Marat

    2017-07-01

    Systematic biases in the representation of boundary layer (BL) clouds are a leading source of uncertainty in climate projections. A variation on superparameterization (SP) called "ultraparameterization" (UP) is developed, in which the grid spacing of the cloud-resolving models (CRMs) is fine enough (250 × 20 m) to explicitly capture the BL turbulence, associated clouds, and entrainment in a global climate model capable of multiyear simulations. UP is implemented within the Community Atmosphere Model using 2° resolution (˜14,000 embedded CRMs) with one-moment microphysics. By using a small domain and mean-state acceleration, UP is computationally feasible today and promising for exascale computers. Short-duration global UP hindcasts are compared with SP and satellite observations of top-of-atmosphere radiation and cloud vertical structure. The most encouraging improvement is a deeper BL and more realistic vertical structure of subtropical stratocumulus (Sc) clouds, due to stronger vertical eddy motions that promote entrainment. Results from 90 day integrations show climatological errors that are competitive with SP, with a significant improvement in the diurnal cycle of offshore Sc liquid water. Ongoing concerns with the current UP implementation include a dim bias for near-coastal Sc that also occurs less prominently in SP and a bright bias over tropical continental deep convection zones. Nevertheless, UP makes global eddy-permitting simulation a feasible and interesting alternative to conventionally parameterized GCMs or SP-GCMs with turbulence parameterizations for studying BL cloud-climate and cloud-aerosol feedback.

  5. Edgeware Security Risk Management: A Three Essay Thesis on Cloud, Virtualization and Wireless Grid Vulnerabilities

    Science.gov (United States)

    Brooks, Tyson T.

    2013-01-01

    This thesis identifies three essays which contribute to the foundational understanding of the vulnerabilities and risk towards potentially implementing wireless grid Edgeware technology in a virtualized cloud environment. Since communication networks and devices are subject to becoming the target of exploitation by hackers (e.g. individuals who…

  6. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Legger, Federica; Llamas, Ramón Medrano; Sciabà, Andrea; García, Mario Úbeda; Ster, Daniel van der; Sciacca, Gianfranco

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  7. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel

    2012-12-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  8. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  9. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...

  10. Intelligent battery energy management and control for vehicle-to-grid via cloud computing network

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Javadi, Bahman; Goscinski, Andrzej; Stojcevski, Alex; Bab-Hadiashar, Alireza

    2013-01-01

    Highlights: • The intelligent battery energy management substantially reduces the interactions of PEVs with parking lots. • The intelligent battery energy management improves energy efficiency. • The intelligent battery energy management predicts the road load demand for vehicles.

    Abstract: Plug-in Electric Vehicles (PEVs) provide new opportunities to reduce fuel consumption and exhaust emissions. PEVs need to draw and store energy from an electrical grid to supply the propulsive energy for the vehicle. As a result, it is important to know when PEV batteries are available for charging and discharging. Furthermore, battery energy management and control are imperative for PEVs, as the vehicle operation and even the safety of passengers depend on the battery system. Thus, scheduling grid electricity with parking lots is needed for efficient charging and discharging of PEV batteries. This paper proposes a new intelligent battery energy management and control scheduling service that utilizes a Cloud computing network. The proposed intelligent vehicle-to-grid scheduling service offers the computational scalability required to make the decisions necessary for PEV battery energy management systems to operate efficiently when the number of PEVs and charging devices is large. Experimental analyses of the proposed scheduling service, compared to a traditional scheduling service, are conducted through simulations. The results show that the proposed intelligent battery energy management scheduling service substantially reduces the required number of interactions of PEVs with parking lots and the grid, while predicting the load demand in advance with regard to their limitations. They also show that the intelligent scheduling service using a Cloud computing network is more efficient than the traditional scheduling service for battery energy management and control.
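
    The essence of such scheduling can be illustrated with a valley-filling toy: each PEV receives its required charge in the quietest hours of its own parking window. This sketch is illustrative only - the demand profile, charging power and vehicle windows are invented, and it is not the paper's Cloud-based service.

        # Toy vehicle-to-grid charging scheduler (valley filling).
        demand = [30, 28, 27, 26, 27, 30, 45, 60, 70, 65, 60, 58,
                  57, 58, 62, 68, 75, 80, 78, 70, 55, 45, 38, 33]   # MW, invented

        pevs = [  # (arrival hour, departure hour, hours of charge needed)
            (18, 24, 4),
            (20, 24, 2),
            (0, 6, 3),
        ]

        CHARGE_MW = 0.5  # per-vehicle charging power, invented
        for arrive, depart, need in pevs:
            window = sorted(range(arrive, depart), key=lambda h: demand[h])
            for h in window[:need]:                 # pick the quietest hours
                demand[h] += CHARGE_MW
            print(f"PEV [{arrive}h-{depart}h): charges at hours {sorted(window[:need])}")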

  11. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in big data analysis by adopting parallel, grid, and cloud computing environments.

  12. Challenges facing production grids

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Ruth; /Fermilab

    2007-06-01

    Today's global communities of users expect quality of service from distributed Grid systems equivalent to that of their local data centers. This must be coupled with ubiquitous access to the ensemble of processing and storage resources across multiple Grid infrastructures. We are still facing significant challenges in meeting these expectations, especially in the underlying security, a sustainable and successful economic model, and smoothing the boundaries between administrative and technical domains. Using the Open Science Grid as an example, I examine the status and challenges of Grids operating in production today.

  13. Moving HammerCloud to CERN's private cloud

    CERN Document Server

    Barrand, Quentin

    2013-01-01

    HammerCloud is a testing framework for the Worldwide LHC Computing Grid. Currently deployed on about 20 hand-managed machines, it was desirable to move it to the Agile Infrastructure, CERN's OpenStack-based private cloud.

  14. Improving ATLAS grid site reliability with functional tests using HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thus preventing user or production jobs from being sent to problematic sites.
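
    The automatic exclusion logic described above boils down to a sliding-window success-rate rule. The sketch below illustrates the principle; the window size and threshold are invented, not the tuned ATLAS policy values.

        from collections import deque

        # A site is blacklisted when its recent functional-test success rate
        # drops below a threshold, and readmitted once it recovers.
        class SiteMonitor:
            def __init__(self, window=20, threshold=0.8):
                self.results = deque(maxlen=window)
                self.threshold = threshold

            def record(self, success: bool):
                self.results.append(success)

            @property
            def excluded(self):
                if len(self.results) < self.results.maxlen:
                    return False                    # not enough evidence yet
                return sum(self.results) / len(self.results) < self.threshold

        site = SiteMonitor()
        for ok in [True] * 10 + [False] * 10:
            site.record(ok)
        print("excluded:", site.excluded)   # True: success rate 50% < 80%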

  15. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  16. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment.

    Directory of Open Access Journals (Sweden)

    Jeongsu Oh

    Full Text Available High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology - a distributed data structure to store all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability better than its ancestor, CLUSTOM, while maintaining high accuracy. Clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using a small laboratory cluster (10 nodes) and under the Amazon EC2 cloud-computing environments. Under the laboratory environment, it required only ~3 hours to process a dataset of size 200 K reads regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. The comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD

  17. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment.

    Science.gov (United States)

    Oh, Jeongsu; Choi, Chi-Hwan; Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo

    2016-01-01

    High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology-a distributed data structure to store all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability better than its ancestor, CLUSTOM, while maintaining high accuracy. Clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using the small laboratory cluster (10 nodes) and under the Amazon EC2 cloud-computing environments. Under the laboratory environment, it required only ~3 hours to process dataset of size 200 K reads regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. The comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in JAVA
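
    The core IMDG idea - hash-partitioning a dataset across the main memory of several nodes so that the whole of it stays in RAM - can be sketched in a few lines. Plain Python dicts stand in for remote nodes here; this illustrates the concept, not CLUSTOM-CLOUD's actual data grid.

        # Minimal illustration of an In-Memory Data Grid: key/value pairs are
        # hash-partitioned across nodes, so a dataset larger than one machine's
        # RAM can remain fully in memory across the cluster.
        class InMemoryDataGrid:
            def __init__(self, n_nodes):
                self.nodes = [{} for _ in range(n_nodes)]

            def _node(self, key):
                return self.nodes[hash(key) % len(self.nodes)]

            def put(self, key, value):
                self._node(key)[key] = value

            def get(self, key):
                return self._node(key).get(key)

        grid = InMemoryDataGrid(n_nodes=4)
        for i in range(1000):
            grid.put(f"read_{i}", "ACGT" * 25)      # fake 16S rRNA reads
        print([len(n) for n in grid.nodes])          # roughly balanced partitions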

  18. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment

    Science.gov (United States)

    Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo

    2016-01-01

    High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology–a distributed data structure to store all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability better than its ancestor, CLUSTOM, while maintaining high accuracy. Clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using the small laboratory cluster (10 nodes) and under the Amazon EC2 cloud-computing environments. Under the laboratory environment, it required only ~3 hours to process dataset of size 200 K reads regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. The comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in

  19. Using a New Event-Based Simulation Framework for Investigating Resource Provisioning in Clouds

    Directory of Open Access Journals (Sweden)

    Simon Ostermann

    2011-01-01

    Full Text Available Today, Cloud computing offers an attractive alternative to building large-scale distributed computing environments, in which resources are no longer hosted by the scientists' computational facilities but leased from specialised data centres only when and for as long as they are needed. This new class of Cloud resources raises new and interesting research questions in the fields of resource management, scheduling, fault tolerance and quality of service, requiring hundreds to thousands of experiments to find valid solutions. To enable such research, a scalable simulation framework is typically required for early prototyping, extensive testing and validation of results before the real deployment is performed. The scope of this paper is twofold. In the first part we present GroudSim, a Grid and Cloud simulation toolkit for scientific computing based on a scalable, simulation-independent discrete-event engine. GroudSim provides a comprehensive set of features for complex simulation scenarios, from simple job executions on leased computing resources to file transfers, calculation of costs and background load on resources. Simulations can be parameterised and are easily extendable by probability distribution packages for the failures which normally occur in complex distributed environments. Experimental results demonstrate the improved scalability of GroudSim compared to a related process-based simulation approach. In the second part, we show the use of the GroudSim simulator to analyse the problem of dynamic provisioning of Cloud resources for scientific workflows that do not benefit from sufficient Grid resources as required by their computational demands. We propose and study four strategies for provisioning and releasing Cloud resources that take into account the general leasing model encountered in today's commercial Cloud environments, based on resource bulks, fuzzy descriptions and hourly payment intervals. We study the impact of our techniques on the
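
    The hourly payment intervals mentioned above are exactly what makes release strategies interesting: a VM released early still costs the full started hour. A minimal sketch of that cost model, with invented prices and task lengths (not GroudSim code):

        import math

        PRICE_PER_HOUR = 0.10   # invented price

        def cost(busy_minutes):
            # Billing is per started hour, as in typical commercial Clouds.
            return math.ceil(busy_minutes / 60) * PRICE_PER_HOUR

        tasks = [35, 90, 20, 130]   # minutes of work arriving over time

        # Strategy A: provision one fresh VM per task.
        per_task = sum(cost(t) for t in tasks)

        # Strategy B: keep a single VM and run the tasks back to back,
        # exploiting the already-paid remainder of each hour.
        pooled = cost(sum(tasks))

        print(f"one VM per task: ${per_task:.2f}, reused VM: ${pooled:.2f}")

    With these invented numbers, reusing one VM costs $0.50 against $0.70 for one VM per task, which is why provisioning and release strategies matter under hourly billing.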

  20. Power grid complex network evolutions for the smart grid

    NARCIS (Netherlands)

    Pagani, Giuliano Andrea; Aiello, Marco

    2014-01-01

    The shift towards an energy grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the electricity distribution infrastructure. Today the grid is a hierarchical one delivering energy from large scale facilities to end-users. Tomorrow it will be a

  1. The Determination of Jurisdiction in Grid and Cloud Service Level Agreements

    Science.gov (United States)

    Parrilli, Davide Maria

    Service Level Agreements in Grid and Cloud scenarios can be a source of disputes particularly in case of breach of the obligations arising under them. It is then important to determine where parties can litigate in relation with such agreements. The paper deals with this question in the peculiar context of the European Union, and so taking into consideration Regulation 44/2001. According to the rules on jurisdiction provided by the Regulation, two general distinctions are drawn in order to determine which (European) courts are competent to adjudicate disputes arising out of a Service Level Agreement. The former is between B2B and B2C transactions, and the latter regards contracts which provide a jurisdiction clause and contracts which do not.

  2. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Andreazza, A; Annovi, A; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Campana, S; Girolamo, A Di; Carlino, G; Doria, A; Merola, L; Musto, E; Ciocca, C; Jha, M K; Cobal, M; Pascolo, F; Salvo, A De; Luminari, L; Sanctis, U De; Galeazzi, F

    2011-01-01

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performance.

  3. First Tuesday@CERN - THE GRID GETS REAL !

    CERN Document Server

    2003-01-01

    A few years ago, "the Grid" was just a vision dreamt up by some computer scientists who wanted to share processor power and data storage capacity between computers around the world - in much the same way as today's Web shares information seamlessly between millions of computers. Today, Grid technology is a huge enterprise, involving hundreds of software engineers, and generating exciting opportunities for industry. "Computing on demand", "utility computing", "web services", and "virtualisation" are just a few of the buzzwords in the IT industry today that are intimately connected to the development of Grid technology. For this third First Tuesday @CERN, the panel will survey some of the latest major breakthroughs in building international computer Grids for science. It will also provide a snapshot of Grid-related industrial activities, with contributions from both major players in the IT sector as well as emerging Grid technology start-ups. Panel: - Les Robertson, Head of the LHC Computing Grid Project, IT ...

  4. First Tuesday - CERN, The Grid gets real

    CERN Multimedia

    Robertson, Leslie

    2003-01-01

    A few years ago, "the Grid" was just a vision dreamt up by some computer scientists who wanted to share processor power and data storage capacity between computers around the world - in much the same way as today's Web shares information seamlessly between millions of computers. Today, Grid technology is a huge enterprise, involving hundreds of software engineers, and generating exciting opportunities for industry. "Computing on demand", "utility computing", "web services", and "virtualisation" are just a few of the buzzwords in the IT industry today that are intimately connected to the development of Grid technology. For this third First Tuesday @CERN, the panel will survey some of the latest major breakthroughs in building international computer Grids for science. It will also provide a snapshot of Grid-related industrial activities, with contributions from both major players in the IT sector as well as emerging Grid technology start-ups.

  5. Thundercloud: Domain specific information security training for the smart grid

    Science.gov (United States)

    Stites, Joseph

    In this paper, we describe a cloud-based virtual smart grid test bed, ThunderCloud, which is intended to be used for domain-specific security training applicable to the smart grid environment. The test bed consists of virtual machines connected using a virtual internal network. ThunderCloud is remotely accessible, allowing students to undergo educational exercises online. We also describe a series of practical exercises that we have developed for providing domain-specific training using ThunderCloud. The training exercises and attacks are designed to be realistic and to reflect known vulnerabilities and attacks reported in the smart grid environment. We were able to use ThunderCloud to offer practical domain-specific security training for the smart grid environment to computer science students at little or no cost to the department and no risk to any real networks or systems.

  6. When STAR meets the Clouds-Virtualization and Cloud Computing Experiences

    International Nuclear Information System (INIS)

    Lauret, J; Hajdu, L; Walker, M; Balewski, J; Goasguen, S; Stout, L; Fenn, M; Keahey, K

    2011-01-01

    In recent years, Cloud computing has become a very attractive paradigm and a popular model for accessing distributed resources; the Cloud has emerged as the next big trend. The burst of platforms and projects providing Cloud resources and interfaces, at the very same time that Grid projects are entering a production phase in their life cycle, has however raised the question of the best approach to handling distributed resources. In particular, do Cloud resources scale at the levels shown by Grids? Do they perform at the same level? What is their overhead on the IT teams and infrastructure? Rather than seeing the two as orthogonal, the STAR experiment has viewed them as complementary and has studied merging the best of the two worlds, with Grid middleware providing the aggregation of both Cloud and traditional resources. Since its first use of Cloud resources on Amazon EC2 in 2008/2009 using a Nimbus/EC2 interface, the STAR software team has tested and experimented with many novel approaches: from a traditional, native EC2 approach to the Virtual Organization Cluster (VOC) at Clemson University and Condor/VM on the GLOW resources at the University of Wisconsin. The STAR team is also planning to run as part of the DOE/Magellan project. In this paper, we present an overview of our findings from using truly opportunistic resources and scaling out by two orders of magnitude in both tests and practical usage.

  7. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword. Preface. Computing Paradigms: Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading. Cloud Computing Fundamentals: Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact...

  8. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and very promising technology. It has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  9. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    Science.gov (United States)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present the findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) for time-demanding but accurate kriging interpolation. The performance of the approaches is compared by varying the size of the grid and the input data. In our empirical experiment, we demonstrate significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we discuss the pros and cons of each method in terms of usability, complexity of infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
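
    The data-parallel structure common to all three approaches - partition the output grid and interpolate each part independently - can be sketched with Python's multiprocessing. Inverse-distance weighting is used below as a simpler stand-in for kriging, with invented sample points.

        import math
        from multiprocessing import Pool

        POINTS = [(1.0, 2.0, 5.0), (4.0, 4.0, 9.0), (7.0, 1.0, 3.0)]  # (x, y, z)

        def idw(cell):
            # Inverse-distance-weighted estimate at one grid cell.
            x, y = cell
            num = den = 0.0
            for px, py, pz in POINTS:
                d2 = (x - px) ** 2 + (y - py) ** 2
                if d2 == 0:
                    return pz              # cell coincides with a sample point
                w = 1.0 / d2
                num += w * pz
                den += w
            return num / den

        if __name__ == "__main__":
            cells = [(x * 0.5, y * 0.5) for x in range(16) for y in range(16)]
            with Pool(4) as pool:          # split the grid across 4 workers
                surface = pool.map(idw, cells)
            print(f"{len(surface)} cells, z range {min(surface):.2f}-{max(surface):.2f}")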

  10. Cloud@Home: A New Enhanced Computing Paradigm

    Science.gov (United States)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)) and Green computing (a new frontier of ethical computing, starting from the assumption that in the near future energy costs will be tied to environmental pollution).

  11. The Benefits of Grid Networks

    Science.gov (United States)

    Tennant, Roy

    2005-01-01

    In the article, the author talks about the benefits of grid networks. In speaking of grid networks the author is referring to both networks of computers and networks of humans connected together in a grid topology. Examples are provided of how grid networks are beneficial today and the ways in which they have been used.

  12. Grid interoperability: joining grid information systems

    International Nuclear Information System (INIS)

    Flechl, M; Field, L

    2008-01-01

    A grid is defined as being 'coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations'. Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which are using slightly different middleware. Grid interoperation is trying to bridge these differences and enable Virtual Organizations to access resources at the institutions independent of their grid project affiliation. Grid interoperation is usually a bilateral activity between two grid infrastructures. Recently within the Open Grid Forum, the Grid Interoperability Now (GIN) Community Group is trying to build upon these bilateral activities. The GIN group is a focal point where all the infrastructures can come together to share ideas and experiences on grid interoperation. It is hoped that each bilateral activity will bring us one step closer to the overall goal of a uniform grid landscape. A fundamental aspect of a grid is the information system, which is used to find available grid services. As different grids use different information systems, interoperation between these systems is crucial for grid interoperability. This paper describes the work carried out to overcome these differences between a number of grid projects and the experiences gained. It focuses on the different techniques used and highlights the important areas for future standardization

  13. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  14. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope is hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  15. Integration of End-User Cloud Storage for CMS Analysis

    CERN Document Server

    Riahi, Hassen; Álvarez Ayllón, Alejandro; Balcas, Justas; Ciangottini, Diego; Hernández, José M; Keeble, Oliver; Magini, Nicolò; Manzi, Andrea; Mascetti, Luca; Mascheroni, Marco; Tanasijczuk, Andres Jorge; Vaandering, Eric Wayne

    2018-01-01

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage, named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of the end-user Cloud storage for the distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storages in the Grid, which is implemented and commissioned over the world’s largest computing Grid infrastructure, Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with...

  16. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  17. Security and Cloud Outsourcing Framework for Economic Dispatch

    International Nuclear Information System (INIS)

    Sarker, Mushfiqur R.; Wang, Jianhui

    2017-01-01

    The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these issues consists of in-house high performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance costs, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains: the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs of cloud outsourcing outperform the in-house infrastructure.

  18. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the most widely used technologies for providing cloud services to users, who are charged for the services they receive. With a large number of resources at play, evaluating the performance of Cloud resource management policies and optimizing them efficiently is difficult. Different simulation toolkits are available for simulating and modelling the Cloud computing environment, such as GridSim, CloudAnalyst, CloudSim, GreenCloud, and CloudAuction. In the proposed Efficient Resource Manage...

  19. Research on cloud computing solutions

    OpenAIRE

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang outlined six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  20. Grids in Europe - a computing infrastructure for science

    International Nuclear Information System (INIS)

    Kranzlmueller, D.

    2008-01-01

    Grids provide virtually unlimited computing power and access to a variety of resources to today's scientists. Moving from a research topic of computer science to a commodity tool for science and research in general, grid infrastructures are being built all around the world. This talk provides an overview of the development of grids in Europe, the status of the so-called national grid initiatives, as well as the efforts towards an integrated European grid infrastructure. The latter, summarized under the title of the European Grid Initiative (EGI), promises a permanent and reliable grid infrastructure and its services in a way similar to research networks today. The talk describes the status of these efforts, the plans for the setup of this pan-European e-Infrastructure, and the benefits for the application communities. (author)

  1. CERES Monthly Gridded Single Satellite Fluxes and Clouds (FSW) in HDF (CER_FSW_TRMM-PFM-VIRS_Beta1)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator)

    The Monthly Gridded Radiative Fluxes and Clouds (FSW) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The FSW is also produced for combinations of scanner instruments. All instantaneous fluxes from the CERES CRS product for a month are sorted by 1-degree spatial regions and by the Universal Time (UT) hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the FSW along with other flux statistics and scene information. The mean adjusted fluxes at the four atmospheric levels defined by CRS are also included for both clear-sky and total-sky scenes. In addition, four cloud height categories are defined by dividing the atmosphere into four intervals with boundaries at the surface, 700-, 500-, 300-hPa, and the Top-of-the-Atmosphere (TOA). The cloud layers from CRS are put into one of the cloud height categories and averaged over the region. The cloud properties are also column averaged and included on the FSW. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
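
    The region-hour averaging described in this record is essentially a binned mean. A minimal sketch of that calculation, with synthetic data standing in for the CERES CRS instantaneous fluxes:

        import numpy as np

        # Synthetic instantaneous fluxes with observation latitude, longitude and UT hour.
        rng = np.random.default_rng(1)
        n = 100_000
        lat, lon = rng.uniform(-90, 90, n), rng.uniform(-180, 180, n)
        hour = rng.integers(0, 24, n)
        flux = rng.normal(240.0, 30.0, n)                 # W m^-2, synthetic

        # Sort each observation into a 1-degree region and a UT-hour bin, then average.
        ilat = np.clip((lat + 90).astype(int), 0, 179)
        ilon = np.clip((lon + 180).astype(int), 0, 359)
        sums = np.zeros((180, 360, 24))
        counts = np.zeros((180, 360, 24))
        np.add.at(sums, (ilat, ilon, hour), flux)
        np.add.at(counts, (ilat, ilon, hour), 1)
        with np.errstate(invalid="ignore"):
            mean_flux = sums / counts                     # NaN where a region-hour bin is empty
        print(np.nanmean(mean_flux))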

  2. Cloud Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the buzzword of recent years for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?" by identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization and Utility Computing will be discussed and analyzed.

  3. Cloud Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the buzzword of recent years for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?" by identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization and Utility Computing will be discussed and analyzed.

  4. Dynamic federation of grid and cloud storage

    Science.gov (United States)

    Furano, Fabrizio; Keeble, Oliver; Field, Laurence

    2016-09-01

    The Dynamic Federations project ("Dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol-agnostic, we have focused our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of the ATLAS and LHCb data, and supports geography-aware replica selection. The work done exploits the federation potential of HTTP to build systems that offer uniform, scalable, catalogue-less access to the storage and metadata ensemble and the possibility of seamless integration of other compatible resources such as those from cloud providers. Dynafed can exploit the potential of the S3 delegation scheme, effectively federating on the fly any number of S3 buckets from different providers and applying a uniform authorization to them. This feature has been used to deploy in production the BOINC Data Bridge, which uses the Uniform Generic Redirector with S3 buckets to harmonize the BOINC authorization scheme with the Grid/X509. The Data Bridge has been deployed in production with good results. We believe that the features of a loosely coupled federation of open-protocol-based storage elements open many possibilities of smoothly evolving the current computing models and of supporting new scientific computing projects that rely on massive distribution of data and that would appreciate systems that can more easily be interfaced with commercial providers and can work natively with Web browsers and clients.
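
    As an illustration of the redirector concept (not Dynafed's actual implementation), the following minimal sketch answers each GET request with an HTTP 302 pointing at one of several storage endpoints; the endpoint URLs are hypothetical, and a real federator would also consult replica availability and client geography:

        from http.server import BaseHTTPRequestHandler, HTTPServer
        import itertools

        # Hypothetical storage endpoints holding replicas of the same namespace.
        ENDPOINTS = itertools.cycle([
            "https://site-a.example.org/data",
            "https://s3.example.com/bucket",
        ])

        class Redirector(BaseHTTPRequestHandler):
            def do_GET(self):
                # Round-robin across endpoints and redirect the client there.
                self.send_response(302)
                self.send_header("Location", next(ENDPOINTS) + self.path)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), Redirector).serve_forever()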

  5. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    Science.gov (United States)

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environments characteristic of a Grid. In our study, we have found that for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables, across a Grid environment composed of different clusters, with and without virtualization. The uniform computer environment provided by virtual machines eliminated inconsistent DOCK VS results caused by heterogeneous clusters, however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to be an average of 41% and 2% in execution time for two different clusters, while the actual magnitudes of the execution time

  6. Eucalyptus Cloud to Remotely Provision e-Governance Applications

    Directory of Open Access Journals (Sweden)

    Sreerama Prabhu Chivukula

    2011-01-01

    Remote rural areas are constrained by lack of reliable power supply, essential for setting up advanced IT infrastructure such as servers or storage; therefore, cloud computing comprising an Infrastructure-as-a-Service (IaaS) is well suited to provide such IT infrastructure in remote rural areas. Additional cloud layers of Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) can be added above IaaS. Cluster-based IaaS cloud can be set up by using the open-source middleware Eucalyptus in data centres of NIC. Data centres of the central and state governments can be integrated with State Wide Area Networks and NICNET together to form the e-governance grid of India. Web service repositories at centre, state, and district level can be built over the national e-governance grid of India. Using Globus Toolkit, we can achieve stateful web services with speed and security. Adding the cloud layer over the e-governance grid will make a grid-cloud environment possible through Globus Nimbus. Service delivery can be in terms of web services delivery through heterogeneous client devices. Data mining using Weka4WS and DataMiningGrid can produce meaningful knowledge discovery from data. In this paper, a plan of action is provided for the implementation of the above proposed architecture.

  7. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  8. Aerosol-cloud interactions in a multi-scale modeling framework

    Science.gov (United States)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves the cloud/precipitation in the cloud resolved model (CRM) embedded in the GCM grid column. In the MMF version of community atmospheric model version 5 (CAM5), aerosol processes are treated with a parameterization, called the Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this treatment treats clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome the limitation, here, we propose a new aerosol treatment in the MMF: Explicit Clouds Explicit Aerosols (ECEP), in which we resolve both clouds and aerosols explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to have an MMF version of ACME. Further, we also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with the ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than ECPP simulations, because of the more efficient vertical transport from the surface to the higher atmosphere but the less efficient wet removal. We also found that the cloud droplet number concentrations are also different between the

  9. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Cloud computing was, and will continue to be, a new way of providing Internet services and computing. This computing approach builds on many existing services, such as the Internet, grid computing, and Web services. Cloud computing as a system aims to provide on-demand services at a more acceptable price and infrastructure cost. It is precisely the transition from the computer as a product to computing as a service delivered to consumers online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics they offer. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  10. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Iov, Florin; Hägerling, Christian

    2014-01-01

    The expected growth in distributed generation will significantly affect the operation and control of today's distribution grids. Being confronted with short-time power variations of distributed generation, the assurance of a reliable service (grid stability, avoidance of energy losses) and the qu...

  11. A Development of Lightweight Grid Interface

    International Nuclear Information System (INIS)

    Iwai, G; Kawai, Y; Sasaki, T; Watase, Y

    2011-01-01

    In order to help rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for a particle therapy simulation and demonstrated a large scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
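
    For readers unfamiliar with SAGA, the sketch below shows the flavour of the standardized job API that UGAPI builds upon. It assumes the radical.saga Python bindings (the successor of saga-python) and a reachable SSH endpoint; the endpoint and job details are purely illustrative:

        import radical.saga as rs  # Python implementation of the OGF SAGA standard

        # Describe a job in a middleware-independent way.
        jd = rs.job.Description()
        jd.executable = "/bin/date"
        jd.output = "job.out"

        # The same code targets Grid, Cloud or local resources by changing the URL scheme.
        js = rs.job.Service("ssh://login.example.org")   # hypothetical endpoint
        job = js.create_job(jd)
        job.run()
        job.wait()
        print("job finished with state:", job.state)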

  12. GEWEX cloud assessment: A review

    Science.gov (United States)

    Stubenrauch, Claudia; Rossow, William B.; Kinne, Stefan; Ackerman, Steve; Cesana, Gregory; Chepfer, Hélène; Di Girolamo, Larry; Getzewich, Brian; Guignard, Anthony; Heidinger, Andy; Maddux, Brent; Menzel, Paul; Minnis, Patrick; Pearl, Cindy; Platnick, Steven; Poulsen, Caroline; Riedi, Jérôme; Sayer, Andrew; Sun-Mack, Sunny; Walther, Andi; Winker, Dave; Zeng, Shen; Zhao, Guangyu

    2013-05-01

    Clouds cover about 70% of the Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the entire globe and across the wide range of spatial and temporal scales that comprise weather and climate variability. Satellite cloud data records now exceed more than 25 years; however, climatologies compiled from different satellite datasets can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors. The Global Energy and Water cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel, provides the first coordinated intercomparison of publicly available, global cloud products (gridded, monthly statistics) retrieved from measurements of multi-spectral imagers (some with multi-angle view and polarization capabilities), IR sounders and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature or altitude), cloud radiative properties (optical depth or emissivity), cloud thermodynamic phase and bulk microphysical properties (effective particle size and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. The monthly, gridded database presented here facilitates further assessments, climate studies, and the evaluation of climate models.

  13. Characterization of Cloud Water-Content Distribution

    Science.gov (United States)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions in climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
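
    As a toy illustration of the maximum likelihood step, the sketch below fits a lognormal probability density to synthetic cloud water-content samples with SciPy; real CloudSat retrievals, stratified by phase, type, precipitation occurrence and location, would replace the synthetic draw:

        import numpy as np
        from scipy import stats

        # Synthetic cloud water-content samples (g m^-3) standing in for CloudSat data.
        rng = np.random.default_rng(2)
        cwc = rng.lognormal(mean=-1.0, sigma=0.6, size=10_000)

        # Maximum likelihood estimation of the lognormal parameters.
        shape, loc, scale = stats.lognorm.fit(cwc, floc=0.0)
        print(f"sigma = {shape:.3f}, median = {scale:.3f} g m^-3")

        # The fitted PDF then characterizes the sub-grid distribution in a model cell.
        x = np.linspace(0.01, 2.0, 200)
        pdf = stats.lognorm.pdf(x, shape, loc=loc, scale=scale)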

  14. Dynamic virtual AliEn Grid sites on Nimbus with CernVM

    International Nuclear Information System (INIS)

    Harutyunyan, A; Buncic, P; Freeman, T; Keahey, K

    2010-01-01

    We describe the work on enabling one click deployment of Grid sites of AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of computing resources of the cloud with the resource pool of AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker developed at Argonne National Laboratory and the University of Chicago, and CernVM - a baseline virtual software appliance for LHC experiments developed at CERN. Two approaches of dynamic virtual AliEn Grid site deployment are presented.

  15. Grid Security

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users and application developers to take security seriously is equally difficult. This paper will describe the main Grid security issues, both in terms of technology and policy, that have been tackled over recent years in LCG and related Grid projects. Achievements to date will be described and opportunities for future improvements will be addressed.

  16. Smart grid communication-enabled intelligence for the electric power grid

    CERN Document Server

    Bush, Stephen F

    2014-01-01

    This book bridges the divide between the fields of power systems engineering and computer communication through the new field of power system information theory. Written by an expert with vast experience in the field, this book explores the smart grid from generation to consumption, both as it is planned today and how it will evolve tomorrow. The book focuses upon what differentiates the smart grid from the "traditional" power grid as it has been known for the last century. Furthermore, the author provides the reader with a fundamental understanding of both power systems and communication ne

  17. Smart grid security

    Energy Technology Data Exchange (ETDEWEB)

    Cuellar, Jorge (ed.) [Siemens AG, Muenchen (Germany). Corporate Technology

    2013-11-01

    The engineering, deployment and security of the future smart grid will be an enormous project requiring the consensus of many stakeholders with different views on the security and privacy requirements, not to mention methods and solutions. The fragmentation of research agendas and proposed approaches or solutions for securing the future smart grid becomes apparent observing the results from different projects, standards, committees, etc, in different countries. The different approaches and views of the papers in this collection also witness this fragmentation. This book contains the following papers: 1. IT Security Architecture Approaches for Smart Metering and Smart Grid. 2. Smart Grid Information Exchange - Securing the Smart Grid from the Ground. 3. A Tool Set for the Evaluation of Security and Reliability in Smart Grids. 4. A Holistic View of Security and Privacy Issues in Smart Grids. 5. Hardware Security for Device Authentication in the Smart Grid. 6. Maintaining Privacy in Data Rich Demand Response Applications. 7. Data Protection in a Cloud-Enabled Smart Grid. 8. Formal Analysis of a Privacy-Preserving Billing Protocol. 9. Privacy in Smart Metering Ecosystems. 10. Energy Rate at Home: Leveraging ZigBee to Enable Smart Grid in Residential Environment.

  18. Meet the Grid

    CERN Multimedia

    Yurkewicz, Katie

    2005-01-01

    Today's cutting-edge scientific projects are larger, more complex, and more expensive than ever. Grid computing provides the resources that allow researchers to share knowledge, data, and computer processing power across boundaries

  19. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  20. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  1. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only input information on CCN needed by these parameterizations is a few values of the CCN spectrum (as given, for example, by a CCN counter). This is more convenient than conventional parameterizations, which need quantities characterizing the CCN spectrum, such as C and k in the equation N = C S^k, or the breadth, total number, and median radius. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforesaid hybrid microphysical model. The newly developed parameterizations will save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
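
    For concreteness, the conventional activation spectrum mentioned above, N = C S^k, is a one-line function; the sketch below evaluates it for illustrative (not measured) values of C and k:

        def activated_ccn(supersaturation_pct: float, C: float, k: float) -> float:
            """Twomey-type activation spectrum N = C * S^k (N in cm^-3, S in percent)."""
            return C * supersaturation_pct ** k

        # Illustrative maritime-like parameters; real values come from CCN counter data.
        for s in (0.1, 0.3, 1.0):
            print(f"S = {s:>4} %  ->  N = {activated_ccn(s, C=100.0, k=0.7):7.1f} cm^-3")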

  2. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  3. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  4. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
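
    The dynamic VM allocation described in both records boils down to booting a pre-configured image and handing the node to the batch system. Here is a hedged sketch of the first step using the openstacksdk Python bindings; the cloud entry, image, flavour and network names are placeholders, and the batch-system registration would be done by site-specific scripts:

        import openstack  # openstacksdk; credentials come from clouds.yaml or env vars

        conn = openstack.connect(cloud="research-cloud")     # hypothetical cloud entry

        # Boot a worker from a pre-built Scientific Linux image; names are placeholders.
        server = conn.create_server(
            name="tier2-worker-001",
            image="sl6-worker-puppetized",      # image pre-configured to contact Puppet
            flavor="m1.large",
            network="tier2-net",
            wait=True,                          # block until the VM is ACTIVE
        )
        print(server.name, server.status)
        # A site-specific script would now register the node with the Torque server.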

  5. CERES Monthly Gridded Single Satellite Fluxes and Clouds (FSW) in HDF (CER_FSW_Terra-FM1-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator)

    The Monthly Gridded Radiative Fluxes and Clouds (FSW) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The FSW is also produced for combinations of scanner instruments. All instantaneous fluxes from the CERES CRS product for a month are sorted by 1-degree spatial regions and by the Universal Time (UT) hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the FSW along with other flux statistics and scene information. The mean adjusted fluxes at the four atmospheric levels defined by CRS are also included for both clear-sky and total-sky scenes. In addition, four cloud height categories are defined by dividing the atmosphere into four intervals with boundaries at the surface, 700-, 500-, 300-hPa, and the Top-of-the-Atmosphere (TOA). The cloud layers from CRS are put into one of the cloud height categories and averaged over the region. The cloud properties are also column averaged and included on the FSW. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  6. CERES Monthly Gridded Single Satellite Fluxes and Clouds (FSW) in HDF (CER_FSW_Terra-FM2-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator)

    The Monthly Gridded Radiative Fluxes and Clouds (FSW) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The FSW is also produced for combinations of scanner instruments. All instantaneous fluxes from the CERES CRS product for a month are sorted by 1-degree spatial regions and by the Universal Time (UT) hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the FSW along with other flux statistics and scene information. The mean adjusted fluxes at the four atmospheric levels defined by CRS are also included for both clear-sky and total-sky scenes. In addition, four cloud height categories are defined by dividing the atmosphere into four intervals with boundaries at the surface, 700-, 500-, 300-hPa, and the Top-of-the-Atmosphere (TOA). The cloud layers from CRS are put into one of the cloud height categories and averaged over the region. The cloud properties are also column averaged and included on the FSW. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2001-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  7. HP advances Grid Strategy for the adaptive enterprise

    CERN Multimedia

    2003-01-01

    "HP today announced plans to further enable its enterprise infrastructure technologies for grid computing. By leveraging open grid standards, HP plans to help customers simplify the use and management of distributed IT resources. The initiative will integrate industry grid standards, including the Globus Toolkit and Open Grid Services Architecture (OGSA), across HP's enterprise product lines" (1 page).

  8. Satin: A high-level and efficient grid programming model

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.; Wrzesinska, G.; Jacobs, C.J.H.; Bal, H.E.

    2010-01-01

    Computational grids have an enormous potential to provide compute power. However, this power remains largely unexploited today for most applications, except trivially parallel programs. Developing parallel grid applications simply is too difficult. Grids introduce several problems not encountered

  9. Security and privacy in smart grids

    CERN Document Server

    Xiao, Yang

    2013-01-01

    Presenting the work of prominent researchers working on smart grids and related fields around the world, Security and Privacy in Smart Grids identifies state-of-the-art approaches and novel technologies for smart grid communication and security. It investigates the fundamental aspects and applications of smart grid security and privacy and reports on the latest advances in the range of related areas-making it an ideal reference for students, researchers, and engineers in these fields. The book explains grid security development and deployment and introduces novel approaches for securing today'

  10. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. Cloud plays an important role in large organizations by maintaining huge volumes of data with limited resources. Cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of the cloud organization and some of the basic security issues pertaining to the cloud.

  11. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  12. Self-Awareness of Cloud Applications

    NARCIS (Netherlands)

    Iosup, Alexandru; Zhu, Xiaoyun; Merchant, Arif; Kalyvianaki, Eva; Maggio, Martina; Spinner, Simon; Abdelzaher, Tarek; Mengshoel, Ole; Bouchenak, Sara

    2016-01-01

    Cloud applications today deliver an increasingly larger portion of the Information and Communication Technology (ICT) services. To address the scale, growth, and reliability of cloud applications, self-aware management and scheduling are becoming commonplace. How are they used in practice? In this

  13. Making the most of cloud storage - a toolkit for exploitation by WLCG experiments

    Science.gov (United States)

    Alvarez Ayllon, Alejandro; Arsuaga Rios, Maria; Bitzes, Georgios; Furano, Fabrizio; Keeble, Oliver; Manzi, Andrea

    2017-10-01

    Understanding how cloud storage can be effectively used, either standalone or in support of its associated compute, is now an important consideration for WLCG. We report on a suite of extensions to familiar tools targeted at enabling the integration of cloud object stores into traditional grid infrastructures and workflows. Notable updates include support for a number of object store flavours in FTS3, Davix and gfal2, including mitigations for lack of vector reads; the extension of Dynafed to operate as a bridge between grid and cloud domains; protocol translation in FTS3; the implementation of extensions to DPM (also implemented by the dCache project) to allow 3rd party transfers over HTTP. The result is a toolkit which facilitates data movement and access between grid and cloud infrastructures, broadening the range of workflows suitable for cloud. We report on deployment scenarios and prototype experience, explaining how, for example, an Amazon S3 or Azure allocation can be exploited by grid workflows.
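
    As a flavour of the toolkit, the sketch below copies a file from a grid storage element to a cloud object store with the gfal2 Python bindings; both URLs are hypothetical, and actual protocol support depends on the gfal2 plugins installed at a site:

        import gfal2  # Python bindings of the gfal2 data-management library

        ctx = gfal2.creat_context()          # note: 'creat_context' is the actual spelling

        params = ctx.transfer_parameters()
        params.overwrite = True
        params.timeout = 300

        # Hypothetical source (grid SE over WebDAV) and destination (cloud object store).
        src = "davs://se.example.org:443/vo/data/file.root"
        dst = "s3://bucket.s3.example.com/file.root"
        ctx.filecopy(params, src, dst)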

  14. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack) – instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  15. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm, and model of the software running on computer hardware included in the Grid network, which will allow a cloud computing environment to be implemented using Grid technologies.

  16. Security in cloud computing and virtual environments

    OpenAIRE

    Aarseth, Raymond

    2015-01-01

    Cloud computing is a big buzzword today. Just watch the commercials on TV and I can promise that you will hear the words cloud service at least once. With the growth of cloud technology steadily rising, and everything from cellphones to cars connected to the cloud, how secure is cloud technology? What are the caveats of using cloud technology? And how does it all work? This thesis will discuss cloud security and the underlying technology called Virtualization to ...

  17. Privacy Protection in Cloud Using Rsa Algorithm

    OpenAIRE

    Amandeep Kaur; Manpreet Kaur

    2014-01-01

    The cloud computing architecture has been in high demand nowadays. The cloud has succeeded over grid and distributed environments due to its low cost and high reliability, along with strong security. However, research shows that cloud computing still has some security issues regarding privacy. The cloud broker provides cloud services to the general public and ensures that data is protected; however, brokers sometimes lack security and privacy. Thus in this work...
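
    To make the RSA mechanism referred to above concrete, here is a deliberately tiny textbook sketch with toy primes; real deployments rely on vetted libraries, padding schemes, and keys of 2048 bits or more:

        # Textbook RSA with toy primes -- for illustration only, never for real privacy.
        p, q = 61, 53
        n = p * q                      # public modulus (3233)
        phi = (p - 1) * (q - 1)        # Euler's totient (3120)
        e = 17                         # public exponent, coprime with phi
        d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

        message = 65                   # a plaintext already encoded as an integer < n
        cipher = pow(message, e, n)    # encryption: c = m^e mod n
        plain = pow(cipher, d, n)      # decryption: m = c^d mod n
        assert plain == message
        print(f"cipher = {cipher}, recovered = {plain}")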

  18. Community Cloud Computing

    Science.gov (United States)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  19. Can Clouds Replace Grids? A Real-Life Exabyte-Scale Test-Case

    CERN Document Server

    Shiers, J

    2008-01-01

    The world’s largest scientific machine – comprising dual 27 km circular proton accelerators cooled to 1.9 K and located some 100 m underground – currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared “open” and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability – as seen by the experiments, as opposed to that measured by the official tools – still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently “Cloud Computing” – in terms of pay-per-use fabric provisioning – has...

  20. Security Audit Compliance for Cloud Computing

    OpenAIRE

    Doelitzscher, Frank

    2014-01-01

    Cloud computing has grown rapidly over the past three years and is widely popular in today's IT landscape. In a comparative study among 250 IT decision makers of UK companies, respondents said that they already use cloud services for 61% of their systems. Cloud vendors promise "infinite scalability and resources" combined with on-demand access from everywhere. This lets cloud users quickly forget that there is still a real IT infrastructure behind a cloud. Due to virtualization and multi-ten...

  1. MULTI TENANCY SECURITY IN CLOUD COMPUTING

    OpenAIRE

    Manjinder Singh; Charanjit Singh

    2017-01-01

    The word “cloud” is used as a metaphor for the Internet, based on the standardised use of a cloud-like shape to denote a network. Cloud computing is an advanced technology for resource sharing over a network at lower cost compared to other technologies. Cloud infrastructure supports various service models: IaaS, SaaS and PaaS. Virtualization is very useful in cloud computing today: with its help, more than one operating system can be supported, with full resources, on a single piece of hardware. We can al...

  2. Hidden in the Clouds: New Ideas in Cloud Computing

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the fullest degree. Please bring questions and opinions, and be ready to share both!   Bio: S...

  3. Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors

    Directory of Open Access Journals (Sweden)

    Claire E. Bulgin

    2018-04-01

    Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km² (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of a magnitude of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.
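
    One way to make the ensemble idea concrete is sketched below: apply each independent cloud mask to the same retrieved LST field, average over the clear pixels of a grid cell, and take the ensemble spread as the cloud-detection uncertainty. The array shapes and the use of the ensemble standard deviation are assumptions for illustration, not the actual AATSR processing chain.

        import numpy as np

        def cell_lst_uncertainty(lst, masks):
            """Spread of area-average LST across an ensemble of cloud masks.

            lst   : 2-D array of retrieved LST over one grid cell (K)
            masks : 3-D bool array (n_masks, ny, nx), True where clear sky
            Returns (mean of the ensemble means, ensemble std. dev. in K).
            """
            means = np.array([lst[m].mean() for m in masks if m.any()])
            return means.mean(), means.std(ddof=1)

        # Hypothetical 25 km x 25 km cell at 1 km pixels, 3 independent masks.
        rng = np.random.default_rng(0)
        lst = 290.0 + 5.0 * rng.standard_normal((25, 25))
        masks = rng.random((3, 25, 25)) > 0.4   # roughly 60% clear-sky fraction
        print(cell_lst_uncertainty(lst, masks))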

  4. The StratusLab cloud distribution: Use-cases and support for scientific applications

    Science.gov (United States)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions like KVM, the OpenNebula virtual machine manager, the Claudia service manager and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely Computing (life-cycle management of virtual machines), Storage, Appliance management and Networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been the support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities like elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. To this end, we have developed and currently provide support for setting up general purpose computing solutions like Hadoop, MPI and Torque clusters. As far as scientific applications are concerned, the project is collaborating closely with the Bioinformatics community in order to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner additional scientific disciplines like Earth Science can take

  5. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of the hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  6. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of the hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  7. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources, such as computer software and hardware, on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of the hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues, which need to be addressed to ensure the success of Cloud computing in the future.

  8. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has strong computing demands, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects at the Institute of High Energy Physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and BESⅢ elastic cloud, are also described briefly. (authors)

  9. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish the status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, and Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on a generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud, with fast and efficient access to the required information, starting from a high-level summary for the whole cloud down to detailed diagnostics for individual site services. This approach provides efficient identification of correlated site problems and simplifies administration at both cloud and site level.
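
    The architecture is described above only at a high level; a much-reduced sketch of the pattern (collect per-site check results, render one summary HTML table, let the worst check define the overall site state) might look as follows, with site names, checks and statuses invented for illustration:

        # Minimal cloud-summary page generator; all data here is hypothetical.
        from html import escape

        sites = {
            "GoeGrid":  {"transfers": "OK",   "sw_install": "OK",   "SAM": "OK"},
            "DESY-HH":  {"transfers": "WARN", "sw_install": "OK",   "SAM": "OK"},
            "LRZ-LMU":  {"transfers": "OK",   "sw_install": "FAIL", "SAM": "WARN"},
        }

        def render(sites):
            order = ["OK", "WARN", "FAIL"]            # increasing severity
            rows = []
            for name, checks in sorted(sites.items()):
                worst = max(checks.values(), key=order.index)
                cells = "".join(f"<td>{escape(v)}</td>" for v in checks.values())
                rows.append(f"<tr><td>{escape(name)}</td><td>{worst}</td>{cells}</tr>")
            head = ("<tr><th>Site</th><th>Overall</th><th>Transfers</th>"
                    "<th>SW install</th><th>SAM</th></tr>")
            return f"<table>{head}{''.join(rows)}</table>"

        print(render(sites))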

  10. Cloud Computing and Security Issues

    OpenAIRE

    Rohan Jathanna; Dhanamma Jagli

    2017-01-01

    Cloud computing has become one of the most interesting topics in the IT world today. The cloud model of computing as a resource has changed the landscape of computing, as its promises of greater reliability, massive scalability, and decreased costs have attracted businesses and individuals alike. It adds new capabilities to information technology. Over the last few years, cloud computing has grown considerably in importance. As more and more information of individuals and compan...

  11. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_TRMM-PFM-VIRS_Beta4)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=100] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].
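
    The region-hour averaging described here is straightforward to sketch: accumulate instantaneous fluxes into 1-degree x local-hour bins and take the per-bin mean. The (lat, lon, local_hour, flux) input format is assumed for illustration; the real SFC product additionally separates clear- and total-sky scenes and records many more statistics.

        import numpy as np
        from collections import defaultdict

        def region_hour_means(samples):
            """Mean flux per 1-degree region and local observation hour.

            samples: iterable of (lat, lon, local_hour, flux), lat in [-90, 90),
                     lon in [-180, 180), local_hour in 0..23.
            Returns {(ilat, ilon, hour): mean flux}.
            """
            sums, counts = defaultdict(float), defaultdict(int)
            for lat, lon, hour, flux in samples:
                key = (int(np.floor(lat + 90.0)), int(np.floor(lon + 180.0)), hour)
                sums[key] += flux
                counts[key] += 1
            return {k: sums[k] / counts[k] for k in sums}

        # Two hypothetical TOA longwave samples falling in the same bin:
        print(region_hour_means([(10.2, 45.7, 13, 255.0), (10.8, 45.1, 13, 265.0)]))
        # -> {(100, 225, 13): 260.0}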

  12. The Future of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Anamaroa Siclovan

    2011-12-01

    Cloud computing was, and will remain, a new way of providing Internet services and computing. This approach builds on many existing services, such as the Internet, grid computing and Web services. As a system, cloud computing aims to provide on-demand services at a more acceptable price and infrastructure. It is precisely the transition from the computer to a service offered to consumers as a product delivered online. This represents an advantage for organizations, both in terms of cost and of opportunities for new business. This theoretical paper presents future perspectives in cloud computing and discusses some issues of the cloud computing paradigm. Keywords: Cloud Computing, Pay-per-use

  13. Evaluation results of the optimal estimation based, multi-sensor cloud property data sets derived from AVHRR heritage measurements in the Cloud_cci project.

    Science.gov (United States)

    Stapelberg, S.; Jerg, M.; Stengel, M.; Hollmann, R.

    2014-12-01

    In 2010 the ESA Climate Change Initiative (CCI) Cloud project was started with the objective of generating a long-term coherent data set of cloud properties. The cloud properties considered are cloud mask, cloud top estimates, cloud optical thickness, cloud effective radius and post-processed parameters such as cloud liquid and ice water path. During the first phase of the project, 3 years of data spanning 2007 to 2009 have been produced on a global gridded daily and monthly mean basis. Alongside the processing, an extended evaluation study was started in order to gain a first understanding of the quality of the retrieved data. The critical discussion of the evaluation results holds a key role for the further development and improvement of the dataset's quality. The presentation will give a short overview of the evaluation study undertaken in the Cloud_cci project. The focus will be on the evaluation of gridded, monthly mean cloud fraction and cloud top data from the Cloud_cci AVHRR-heritage dataset against CLARA-A1, MODIS-Coll5, PATMOS-X and ISCCP data. Exemplary results will be shown. Strengths and shortcomings of the retrieval scheme as well as possible impacts of averaging approaches on the evaluation will be discussed. An overview of Cloud_cci Phase 2 will also be given.

  14. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitalizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and its implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.
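
    For reference, the speedup and parallel-efficiency metrics implied here are the standard ones, with T_1 the sequential runtime and T_p the runtime on p grid/cloud workers (general definitions, not values taken from the article):

        % Speedup and parallel efficiency; ideally S(p) -> p and E(p) -> 1.
        \[ S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} \]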

  15. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) has shaped the success of organizations, giving them a solid foundation that increases both their level of efficiency and their productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises of accessing their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  16. The Neighboring Column Approximation (NCA) – A fast approach for the calculation of 3D thermal heating rates in cloud resolving models

    International Nuclear Information System (INIS)

    Klinger, Carolin; Mayer, Bernhard

    2016-01-01

    Due to computational costs, radiation is usually neglected or solved in a plane-parallel 1D approximation in today's numerical weather forecast and cloud resolving models. We present a fast and accurate method to calculate 3D heating and cooling rates in the thermal spectral range that can be used in cloud resolving models. The parameterization considers net fluxes across horizontal box boundaries in addition to the top and bottom boundaries. Since the largest heating and cooling rates occur inside the cloud, close to the cloud edge, the method needs, to a first approximation, only the information of whether a grid box is at the edge of a cloud or not. Therefore, in order to calculate the heating or cooling rates of a specific grid box, only the directly neighboring columns are used. Our so-called Neighboring Column Approximation (NCA) is an analytical consideration of cloud side effects, which can be considered a convolution of a 1D radiative transfer result with a kernel of radius 1 grid box (5-pt stencil) and which usually does not break the parallelization of a cloud resolving model. The NCA can be easily applied to any cloud resolving model that includes a 1D radiation scheme. Due to the neglect of horizontal transport of radiation further away than one model column, the NCA works best for model resolutions of about 100 m or larger. In this paper we describe the method and show a set of applications to LES cloud field snapshots. Correction terms, gains and restrictions of the NCA are described. Comprehensive comparisons to the 3D Monte Carlo model MYSTIC and a 1D solution are shown. In realistic cloud fields, the full 3D simulation with MYSTIC shows cooling rates up to −150 K/d (100 m resolution) while the 1D solution shows maximum cooling of only −100 K/d. The NCA is capable of reproducing the larger 3D cooling rates. The spatial distribution of the heating and cooling is improved considerably. Computational costs are only a factor of 1.5–2 higher compared to a 1D
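
    The 5-point-stencil data access pattern is easy to visualize in code. The sketch below is only a schematic (a 1D heating-rate field adjusted using the four directly neighbouring columns, with corrections confined to cloud edges and an assumed coupling constant k); the actual NCA correction terms are derived analytically in the paper and are not reproduced here.

        import numpy as np

        def nca_like_correction(hr_1d, cloud, k=0.5):
            """Schematic 5-point-stencil adjustment of 1D heating rates (K/day).

            hr_1d : 2-D float array (nx, ny), 1D-approximation heating rate
                    at one model level
            cloud : 2-D bool array, True where a grid box is cloudy
            k     : assumed coupling strength of the neighbouring-column term

            Only neighbour pairs that differ in cloud state contribute, so the
            adjustment is confined to cloud edges, as in the NCA. Boundaries
            are periodic (via np.roll) purely for brevity.
            """
            hr = hr_1d.astype(float).copy()
            for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4 direct neighbours
                nb_hr = np.roll(hr_1d, shift, axis=(0, 1))
                nb_cloud = np.roll(cloud, shift, axis=(0, 1))
                edge = cloud != nb_cloud                      # cloud-edge pairs only
                hr += np.where(edge, k * (nb_hr - hr_1d) / 4.0, 0.0)
            return hr

        # Example: one cloudy block in an otherwise clear layer. The strong
        # in-cloud cooling partially "leaks" into the boxes at the cloud edge.
        cloud = np.zeros((6, 6), dtype=bool)
        cloud[2:4, 2:4] = True
        hr_1d = np.where(cloud, -100.0, 0.0)
        print(nca_like_correction(hr_1d, cloud))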

  17. The ATLAS Software Installation System v2: a highly available system to install and validate Grid and Cloud sites via Panda

    Science.gov (United States)

    De Salvo, A.; Kataoka, M.; Sanchez Pineda, A.; Smirnov, Y.

    2015-12-01

    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original Workload Management Service (WMS) and the new PanDA modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been optimized to allow a full High-Availability (HA) solution over a Wide Area Network. The servlets, running on each frontend, have also been decoupled from local settings, to allow easy scalability of the system, including the possibility of an HA system with multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation Database is used as a source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system has been in production for ATLAS since 2013, with the INFN Roma Tier 2 and the CERN Agile Infrastructure as the main HA sites. The Light Job Submission Framework for Installation (LJSFi) v2 engine interfaces directly with PanDA for job management, the ATLAS Grid Information System (AGIS) for site parameter configuration, and CVMFS for both the core components and the installation of the software itself. LJSFi2 is also able to use other plugins, and is essentially Virtual Organization (VO) agnostic, so it can be directly used and extended to cope with the requirements of any Grid- or Cloud-enabled VO. In this work we will present the architecture, performance, status and possible evolutions of the system for the LHC Run 2 and beyond.

  18. Interoperable Resource Management for establishing Federated Clouds

    OpenAIRE

    Kecskeméti, Gábor; Kertész, Attila; Marosi, Attila; Kacsuk, Péter

    2012-01-01

    Cloud Computing builds on the latest achievements of diverse research areas, such as Grid Computing, Service-oriented computing, business process modeling and virtualization. As this new computing paradigm was mostly led by companies, several proprietary systems arose. Recently, alongside these commercial systems, several smaller-scale privately owned systems have been maintained and developed. This chapter focuses on issues faced by users with an interest in Multi-Cloud use and by Cloud providers w...

  19. Current Grid operation and future role of the Grid

    Science.gov (United States)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. Reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  20. Current Grid operation and future role of the Grid

    International Nuclear Information System (INIS)

    Smirnova, O

    2012-01-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the greater the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. Reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  1. Development and Usage of Software as a Service for a Cloud and Non-Cloud Based Environment- An Empirical Study

    OpenAIRE

    Pratiyush Guleria Guleria; Vikas Sharma; Manish Arora

    2012-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. Computer applications nowadays are becoming more and more complex; there is an ever-increasing demand for computing resources. As this demand has risen, the concepts of cloud computing and grid computing...

  2. Experience in using commercial clouds in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. [Fermilab; Bockelman, B. [Nebraska U.; Dykstra, D. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Girone, M. [CERN; Gutsche, O. [Fermilab; Holzman, B. [Fermilab; Hugnagel, D. [Fermilab; Kim, H. [Fermilab; Kennedy, R. [Fermilab; Mason, D. [Fermilab; Spentzouris, P. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab; Vaandering, E. [Fermilab

    2017-10-03

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and comparisons of cost and operational efficiency with our dedicated resources. Finally, we will consider how the working model of HEP computing changes when large-scale resources can be scheduled at peak times.

  3. GStat 2.0: Grid Information System Status Monitoring

    OpenAIRE

    Field, L; Huang, J; Tsai, M

    2009-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE info...

  4. Deliverable 1.1 Smart grid scenario

    DEFF Research Database (Denmark)

    Korman, Matus; Ekstedt, Mathias; Gehrke, Oliver

    2015-01-01

    The purpose of the SALVAGE project is to develop better support for managing and designing a secure future smart grid. This approach includes cyber security technologies dedicated to power grid operation as well as support for the migration to the future smart grid solutions, including the legacy...... of ICT that necessarily will be part of it. The objective is further to develop cyber security technology and methodology optimized with the particular needs and context of the power industry, something that is to a large extent lacking in general cyber security best practices and technologies today...

  5. Interoperable Cloud Networking for intelligent power supply; Interoperables Cloud Networking fuer intelligente Energieversorgung

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Dave [Invensys Operations Management, Foxboro, MA (United States)

    2010-09-15

    Intelligent power supply by a so-called Smart Grid will make it possible to control consumption by market-based pricing and signals for load reduction. This necessitates that both the energy rates and the energy information are distributed reliably and in real time to automation systems in domestic and other buildings and in industrial plants, over a wide geographic range and across the most varied grid infrastructures. Effective communication at this level of complexity necessitates computer and grid resources that are normally only available in the computer centers of big industries. Cloud computing technology, which is described here in some detail, has all the features needed to provide reliability, interoperability and efficiency for large-scale smart grid applications, at lower cost than traditional computer centers. (orig.)
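
    As a toy illustration of the price-signal mechanism described above (the tariff threshold, loads and numbers are all invented; a real controller would receive its signal from the utility's cloud service):

        # Toy demand-response controller: curtail deferrable loads whenever
        # the broadcast price signal exceeds a threshold. Values are made up.
        from dataclasses import dataclass

        @dataclass
        class Load:
            name: str
            kw: float
            deferrable: bool   # can this load be shifted when prices spike?

        def respond(loads, price_per_kwh, threshold=0.30):
            """Return the loads to keep running under the current price signal."""
            if price_per_kwh <= threshold:
                return loads
            return [ld for ld in loads if not ld.deferrable]

        home = [Load("heat pump", 3.0, True), Load("fridge", 0.2, False)]
        print([ld.name for ld in respond(home, price_per_kwh=0.45)])  # ['fridge']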

  6. Progress in Grid Generation: From Chimera to DRAGON Grids

    Science.gov (United States)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach has evolved. The chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high-quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient for generating grids about complex geometries and has been demonstrated to deliver accurate aerodynamic prediction of complex flows. While its geometrical flexibility is attractive, interpolation of data in the overlapped regions - which in today's 3D practice is done in a nonconservative fashion - is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adopts the strengths of the unstructured grid, while at the same time keeping its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow the governing equations, we use unstructured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation of the chimera thinking is coined the DRAGON grid. The unstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) preserving the strengths of the chimera grid; (2) eliminating difficulties sometimes encountered in the chimera scheme, such as orphan points and bad quality of interpolation stencils; and (3) making grid communication fully conservative and consistent insofar as the governing equations are concerned. To demonstrate its use, the governing equations are

  7. Cloud Computing for Technical and Online Organizations

    OpenAIRE

    Hagos Tesfahun Gebremichael; Dr.Vuda Sreenivasa Rao

    2016-01-01

    Cloud computing is a new computing model in which grid computing, distributed computing, parallel computing and virtualization technologies together define the shape of a new technology. It is the core technology of the next generation of network computing platforms, especially in the field of education and online services. Cloud computing is an exciting development from the perspective of educational institutes and online services. Cloud computing services are a growing necessity for business organizations as well ...

  8. Securing the Data in Clouds with Hyperelliptic Curve Cryptography

    OpenAIRE

    Mukhopadhyay, Debajyoti; Shirwadkar, Ashay; Gaikar, Pratik; Agrawal, Tanmay

    2014-01-01

    In today's world, Cloud computing has attracted research communities as it provides services at reduced cost by virtualizing all the necessary resources. Even modern business architecture depends upon Cloud computing. As an Internet-based utility which provides various services over a network, it is prone to network-based attacks, and hence security in clouds is of the utmost importance in cloud computing. Cloud security concerns make customers hesitant to fully rely on storing data in clouds. Th...

  9. Micro Grid: A Smart Technology

    OpenAIRE

    Naveenkumar, M; Ratnakar, N

    2012-01-01

    Distributed Generation (DG) is an approach that employs small-scale technologies to produce electricity close to the end users of power. Today's DG technologies often consist of renewable generators and offer a number of potential benefits. This paper presents a design of a micro grid, as part of smart grid technologies, with renewable energy resources such as solar, wind and a diesel generator. The design of the microgrid with integration of renewable energy sources is done in PSCAD/EMTDC. This paper...

  10. Integrating Flexible Sensor and Virtual Self-Organizing DC Grid Model With Cloud Computing for Blood Leakage Detection During Hemodialysis.

    Science.gov (United States)

    Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung

    2017-08-01

    Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events do occur, and they demand the attention of nephrology nurses and of patients themselves. When the venous needle and blood line are disconnected, it takes only a few minutes for an adult patient to lose over 40% of his or her blood, a loss sufficient to cause the patient to die. Therefore, we propose integrating a flexible sensor and a self-organizing algorithm to design a cloud computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct-current grid-based alarm unit in an embedded system. This warning device is employed to identify blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for its commercial designs. The proposed model can also be implemented in an embedded system.

  11. A principled approach to grid middleware

    DEFF Research Database (Denmark)

    Berthold, Jost; Bardino, Jonas; Vinter, Brian

    2011-01-01

    This paper provides an overview of MiG, a Grid middleware for advanced job execution, data storage and group collaboration in an integrated, yet lightweight solution using standard software. In contrast to most other Grid middlewares, MiG is developed with a particular focus on usability and minimal system requirements, applying strict principles to keep the middleware free of legacy burdens and overly complicated design. We provide an overview of MiG and describe its features in view of the Grid vision and its relation to more recent cloud computing trends.

  12. Smart grid applications and developments

    CERN Document Server

    Mah, Daphne; Li, Victor OK; Balme, Richard

    2014-01-01

    Meeting today's energy and climate challenges requires not only technological advancement but also a good understanding of stakeholders' perceptions, political sensitivity, well-informed policy analyses and innovative interdisciplinary solutions. This book fills that gap. It is an interdisciplinary, informative book that provides a holistic and integrated understanding of the technology-stakeholder-policy interactions of smart grid technologies. The unique features of the book include the following: (a) an interdisciplinary approach - bringing the policy dimensions into smart grid technologi

  13. International Symposium on Grids and Clouds (ISGC) 2017

    Science.gov (United States)

    2017-03-01

    The International Symposium on Grids and Clouds (ISGC) 2017 will be held at Academia Sinica in Taipei, Taiwan from 5-10 March 2017, with co-located events and workshops. The main theme of ISGC 2017 is "Global Challenges: From Open Data to Open Science". The unprecedented progress in ICT has transformed the way education is conducted and research is carried out. The emerging global e-Infrastructure, championed by global science communities such as High Energy Physics, Astronomy, and Biomedicine, must permeate into other sciences. Many areas, such as climate change, disaster mitigation, and human sustainability and well-being, represent global challenges where collaboration over e-Infrastructure will presumably help resolve the common problems of the people who are impacted. Access to global e-Infrastructure helps also the less globally organized, long-tail sciences, with their own collaboration challenges. Open data are not only a political phenomenon serving government transparency; they also create an opportunity to eliminate access barriers to all scientific data, specifically data from global sciences and regional data that concern natural phenomena and people. In this regard, the purpose of open data is to improve sciences, accelerating specifically those that may benefit people. Nevertheless, to eliminate barriers to open data is itself a daunting task and the barriers to individuals, institutions and big collaborations are manifold. Open science is a step beyond open data, where the tools and understanding of scientific data must be made available to whoever is interested to participate in such scientific research. The promotion of open science may change the academic tradition practiced over the past few hundred years. This change of dynamics may contribute to the resolution of common challenges of human sustainability where the current pace of scientific progress is not sufficiently fast. ISGC 2017 created a face-to-face venue where individual

  14. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Rinaldi, L; Ciocca, C; K, M; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to evolution of the ATLAS Computing Model.

  15. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Terra-FM1-MODIS_Edition2B)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2003-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=100] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  16. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Terra-FM2-MODIS_Edition2A)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2003-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=100] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  17. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Aqua-FM3-MODIS_Edition2A)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=100] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  18. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Terra-FM2-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=100] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  19. Cloud Based Educational Systems and Its Challenges and Opportunities and Issues

    Science.gov (United States)

    Paul, Prantosh Kr.; Lata Dangwal, Kiran

    2014-01-01

    Cloud Computing (CC) is actually a set of hardware, software, networks, storage and services that an interface combines to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. In practice, Cloud Computing (CC) is an extension of grid computing with independence and…

  20. Statistical thermodynamics and the size distributions of tropical convective clouds.

    Science.gov (United States)

    Garrett, T. J.; Glenn, I. B.; Krueger, S. K.; Ferlay, N.

    2017-12-01

    Parameterizations for sub-grid cloud dynamics are commonly developed by using fine-scale modeling or measurements to explicitly resolve the mechanistic details of clouds to the best extent possible, and then formulating these behaviors in terms of a grid-scale cloud state for use within a coarser grid. A second approach is to invoke physical intuition and some very general theoretical principles from equilibrium statistical thermodynamics. This second approach is quite widely used elsewhere in the atmospheric sciences: for example, to explain the heat capacity of air, blackbody radiation, or even the density profile of air in the atmosphere. Here we describe how entrainment and detrainment across cloud perimeters is limited by the amount of available air and the range of moist static energy in the atmosphere, and how that constrains cloud perimeter distributions to a power law with a -1 exponent along isentropes and to a Boltzmann distribution across isentropes. Further, the total cloud perimeter density in a cloud field is directly tied to the buoyancy frequency of the column. These simple results are shown to be reproduced within a complex dynamic simulation of a tropical convective cloud field and in passive satellite observations of cloud 3D structures. The implication is that equilibrium tropical cloud structures can be inferred from the bulk thermodynamic structure of the atmosphere without having to analyze computationally expensive dynamic simulations.
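
    In symbols, the two stated constraints take a compact form (the notation is assumed here for illustration and not taken verbatim from the paper: n denotes the number density of clouds with perimeter p, theta labels an isentrope, h is moist static energy, and b plays the role of an inverse temperature):

        % Along an isentrope: a power law in cloud perimeter with exponent -1.
        \[ n(p)\,\big|_{\theta} \;\propto\; p^{-1} \]
        % Across isentropes: a Boltzmann distribution in moist static energy.
        \[ n(h) \;\propto\; \exp(-b\,h) \]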

  1. Military clouds: utilization of cloud computing systems at the battlefield

    Science.gov (United States)

    Süleyman, Sarıkürk; Volkan, Karaca; İbrahim, Kocaman; Ahmet, Şirzai

    2012-05-01

    Cloud computing is known as a novel information technology (IT) concept, which involves facilitated and rapid access to networks, servers, data saving media, applications and services via the Internet with minimum hardware requirements. Use of information systems and technologies on the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means to decision makers and users in order to gain information superiority. These developments in information technologies lead to a new term, which is known as network centric capability. Similar to network centric capable systems, cloud computing systems are operational today. In the near future extensive use of military clouds on the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It was concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential of improving network centric capabilities, increasing situational awareness at the battlefield and facilitating the settlement of information superiority.

  2. Horizontal Variability of Water and Its Relationship to Cloud Fraction near the Tropical Tropopause: Using Aircraft Observations of Water Vapor to Improve the Representation of Grid-scale Cloud Formation in GEOS-5

    Science.gov (United States)

    Selkirk, Henry B.; Molod, Andrea M.

    2014-01-01

    Large-scale models such as GEOS-5 typically calculate grid-scale fractional cloudiness through a PDF parameterization of the sub-gridscale distribution of specific humidity. The GEOS-5 moisture routine uses a simple rectangular PDF varying in height that follows a tanh profile. While below 10 km this profile is informed by moisture information from the AIRS instrument, there is relatively little empirical basis for the profile above that level. ATTREX provides an opportunity to refine the profile using estimates of the horizontal variability of measurements of water vapor, total water and ice particles from the Global Hawk aircraft at or near the tropopause. These measurements will be compared with estimates of large-scale cloud fraction from CALIPSO and lidar retrievals from the CPL on the aircraft. We will use the variability measurements to perform studies of the sensitivity of the GEOS-5 cloud-fraction to various modifications to the PDF shape and to its vertical profile.
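
    A minimal sketch of the grid-scale cloud-fraction calculation described here, assuming a rectangular (uniform) sub-grid PDF of total water whose relative half-width follows a tanh profile in height; the profile coefficients below are placeholders, not the GEOS-5 values:

        import numpy as np

        def cloud_fraction(q_mean, q_sat, z_km, w0=0.2, z0=10.0, dz=2.0):
            """Cloud fraction from a rectangular sub-grid PDF of total water.

            The PDF is uniform on [q_mean*(1-w), q_mean*(1+w)], where the
            relative half-width w follows an assumed tanh profile in height
            z (km). Cloud fraction is the part of the PDF above saturation.
            """
            w = 0.5 * w0 * (1.0 + np.tanh((z_km - z0) / dz))  # placeholder profile
            lo, hi = q_mean * (1.0 - w), q_mean * (1.0 + w)
            if hi <= lo:                      # zero-width PDF: all or nothing
                return float(q_mean >= q_sat)
            return float(np.clip((hi - q_sat) / (hi - lo), 0.0, 1.0))

        # Slightly sub-saturated mean state near the tropical tropopause (~16 km):
        print(cloud_fraction(q_mean=3.0e-6, q_sat=3.2e-6, z_km=16.0))  # ~0.33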

  3. Cloud Computing: Architecture and Services

    OpenAIRE

    Ms. Ravneet Kaur

    2018-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. It is a method for delivering information technology (IT) services where resources are retrieved from the Internet through web-based tools and applications, as opposed to a direct connection to a server. Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage makes it possib...

  4. Academic Training Lecture Regular Programme: Cloud Computing

    CERN Multimedia

    2012-01-01

    Cloud Computing (1/2), by Belmiro Rodrigues Moreira (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part).   Wednesday, May 30, 2012 from 11:00 to 12:00 (Europe/Zurich) at CERN ( 500-1-001 - Main Auditorium ) Cloud computing, recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?", identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization and Utility Computing will be discussed and analyzed.

  5. A European Federated Cloud: Innovative distributed computing solutions by EGI

    Science.gov (United States)

    Sipos, Gergely; Turilli, Matteo; Newhouse, Steven; Kacsuk, Peter

    2013-04-01

    The European Grid Infrastructure (EGI) is the result of pioneering work that has, over the last decade, built a collaborative production infrastructure of uniform services through the federation of national resource providers that supports multi-disciplinary science across Europe and around the world. This presentation will provide an overview of the recently established 'federated cloud computing services' that the National Grid Initiatives (NGIs), operators of EGI, offer to scientific communities. The presentation will explain the technical capabilities of the 'EGI Federated Cloud' and the processes whereby earth and space science researchers can engage with it. EGI's resource centres have been providing services for collaborative, compute- and data-intensive applications for over a decade. Besides the well-established 'grid services', several NGIs already offer privately run cloud services to their national researchers. Many of these researchers recently expressed the need to share these cloud capabilities within their international research collaborations - a model similar to the way the grid emerged through the federation of institutional batch computing and file storage servers. To facilitate the setup of a pan-European cloud service from the NGIs' resources, the EGI-InSPIRE project established a Federated Cloud Task Force in September 2011. The Task Force has a mandate to identify and test technologies for a multinational federated cloud that could be provisioned within EGI by the NGIs. A guiding principle for the EGI Federated Cloud is to remain technology neutral and flexible for both resource providers and users: • Resource providers are allowed to use any cloud hypervisor and management technology to join virtualised resources into the EGI Federated Cloud as long as the site is subscribed to the user-facing interfaces selected by the EGI community. • Users can integrate high level services - such as brokers, portals and customised Virtual Research

  6. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  7. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang identified six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the public cloud. A private cloud is suited for sensitive data, where the customer is dependent on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). The main cloud computing solutions are web applications, data hosting, virtualization, database clusters and terminal services. The advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization; without a clustering solution, a service may fail the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  8. Re-thinking Grid Security Architecture

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Koeroo, O.; Groep, D.; van Engelen, R.; Govindaraju, M.; Cafaro, M.

    2008-01-01

    The security models used in Grid systems today strongly bear the marks of their diverse origin. Historically retrofitted to the distributed systems they are designed to protect and control, the security model is usually limited in scope and applicability, and its implementation tailored towards a

  9. A REVIEW ON SECURITY AND PRIVACY ISSUES IN CLOUD COMPUTING

    OpenAIRE

    Gulshan Kumar*, Dr. Vijay Laxmi

    2017-01-01

    Cloud computing is an upcoming paradigm that offers tremendous advantages in economical aspects, such as reduced time to market, flexible computing capabilities, and limitless computing power. To use the full potential of cloud computing, data is transferred, processed and stored by external cloud providers. However, data owners are very skeptical to place their data outside their own control sphere. Cloud computing is a new development of grid, parallel, and distributed computing with visual...

  10. Cloud portability and interoperability issues and current trends

    CERN Document Server

    Di Martino, Beniamino; Esposito, Antonio

    2015-01-01

    This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks

  11. Commercial trading of IaaS cloud resources

    CERN Multimedia

    CERN. Geneva; Dr. Watzl, Johannes

    2014-01-01

    Dr. Johannes Watzl is responsible for Product Management at Deutsche Börse Cloud Exchange. His work is focused on the specification and introduction of new tradable products and product features. Prior to his role at Deutsche Börse Cloud Exchange, Johannes was a researcher at Ludwig-Maximilians-Universität München, where he worked on European Commission funded projects in the field of distributed computing and standardisation in grid and cloud computing and obtained his PhD. He started research on the...

  12. THE EXPANSION OF ACCOUNTING TO THE CLOUD

    OpenAIRE

    Otilia DIMITRIU; Marian MATEI

    2014-01-01

    The world today is witnessing an explosion of technologies that are remodelling our entire reality. The traditional way of thinking in the business field has shifted towards a new IT breakthrough: cloud computing. The cloud paradigm has emerged as a natural step in the evolution of the internet and has captivated everyone’s attention. The accounting profession itself has found a means to optimize its activity through cloud-based applications. By reviewing the latest and most relevant studies a...

  13. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of a Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
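
To make the multi-interface idea concrete, here is a minimal, hypothetical sketch of provider-agnostic VM instantiation using the Apache Libcloud library; it is not VMDIRAC code, and every credential, image and size identifier below is a placeholder.

```python
# Hedged sketch: start one VM on any of several cloud back-ends through a
# single code path, in the spirit of VMDIRAC's multi-interface support.
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def boot_worker(provider, args, kwargs, image_id, size_id, name="vm-worker"):
    """Instantiate one VM on the given cloud and return the libcloud Node."""
    driver = get_driver(provider)(*args, **kwargs)
    image = next(i for i in driver.list_images() if i.id == image_id)
    size = next(s for s in driver.list_sizes() if s.id == size_id)
    return driver.create_node(name=name, image=image, size=size)

# The same call shape works against different back-ends (all placeholders):
# boot_worker(Provider.EC2, ("KEY", "SECRET"), {"region": "us-east-1"},
#             "ami-12345678", "m1.small")
# boot_worker(Provider.OPENSTACK, ("user", "password"),
#             {"ex_force_auth_url": "https://keystone.example.org:5000"},
#             "image-uuid", "flavor-id")
```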

  14. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing, which so far has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of a Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  15. Evaluation of a stratiform cloud parameterization for general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  16. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  17. Simulation For Synchronization Of A Micro-Grid With Three-Phase Systems

    OpenAIRE

    Mohammad Jafari Far

    2015-01-01

    Today, owing to their high reliability, micro-grids have developed significantly. They have two states of operation: the island state and connection to the main grid. Under certain circumstances the micro-grid is connected to or disconnected from the network. Synchronization of a micro-grid with the network must be done when its voltage is synchronized with the voltage of the main grid. Phase-locked loops are responsible for identifying the voltage phase of the micro-grid and the main...

  18. A Classification-oriented Method of Feature Image Generation for Vehicle-borne Laser Scanning Point Clouds

    Directory of Open Access Journals (Sweden)

    YANG Bisheng

    2016-02-01

    An efficient method of feature image generation from point clouds is proposed to automatically classify dense point clouds into different categories, such as terrain points and building points. The method first uses planar projection to sort points into different grids, then calculates the weights and feature values of the grids according to the distribution of the laser scanning points, and finally generates the feature image of the point cloud. The proposed method then adopts contour extraction and tracing to extract the boundaries and point clouds of man-made objects (e.g. buildings and trees) in 3D based on the generated image. Experiments show that the proposed method provides a promising solution for classifying and extracting man-made objects from vehicle-borne laser scanning point clouds.
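
The grid-projection step described above can be illustrated with a short NumPy sketch; the paper's actual weights and feature definitions are richer, so the per-cell point density used here, and the cell size, are simplifying assumptions.

```python
import numpy as np

def feature_image(points, cell=0.5):
    """Toy feature-image generation: bin the (N, 3) point array into a
    horizontal grid of the given cell size and return a 2D image whose
    pixel value is the point count (density) of each grid cell."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)   # horizontal grid indices
    img = np.zeros(idx.max(axis=0) + 1)
    np.add.at(img, (idx[:, 0], idx[:, 1]), 1.0)      # accumulate density
    return img

# Contours extracted from such an image (e.g. with OpenCV's findContours)
# would then trace building and tree boundaries, as the paper describes.
```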

  19. Security Issues Model on Cloud Computing: A Case of Malaysia

    OpenAIRE

    Komeil Raisian; Jamaiah Yahaya

    2015-01-01

    With the development of cloud computing, the viewpoint of many people regarding infrastructure architectures, software distribution and improvement models has changed significantly. Cloud computing is associated with a pioneering deployment architecture, which could be achieved through grid computing, utility computing and autonomic computing. The fast transition towards it has increased worries regarding a critical issue for the effective transition to cloud computing. From the security v...

  20. The Pose Estimation of Mobile Robot Based on Improved Point Cloud Registration

    Directory of Open Access Journals (Sweden)

    Yanzi Miao

    2016-03-01

    Due to GPS restrictions, an inertial sensor is usually used to estimate the location of indoor mobile robots. However, it is difficult to achieve high-accuracy localization and control by inertial sensors alone. In this paper, a new method is proposed to estimate an indoor mobile robot pose with six degrees of freedom based on an improved 3D Normal Distributions Transform algorithm (3D-NDT). First, point cloud data are captured by a Kinect sensor and segmented according to the distance to the robot. After the segmentation, the input point cloud data are processed by the Approximate Voxel Grid Filter algorithm with different-sized voxel grids. Second, initial registration and precise registration are performed according to the distance to the sensor: the most distant point cloud data use the 3D-NDT algorithm with large-sized voxel grids for initial registration, based on the transformation matrix from the odometry method, while the closest point cloud data use the 3D-NDT algorithm with small-sized voxel grids for precise registration. After the registrations above, a final transformation matrix is obtained. Based on this transformation matrix, the pose estimation problem of the indoor mobile robot is solved. Test results show that this method can obtain accurate robot pose estimation and has better robustness.
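
The coarse-to-fine registration strategy can be sketched with Open3D. One hedge is needed: Open3D does not ship a 3D-NDT implementation (NDT is available in PCL), so point-to-point ICP stands in for the paper's registration step, and the voxel sizes and distance thresholds are illustrative guesses rather than the paper's settings.

```python
import numpy as np
import open3d as o3d

def coarse_to_fine_register(source, target, init=np.eye(4)):
    """Align two o3d.geometry.PointCloud objects: large voxels for a
    rough initial alignment, small voxels for refinement."""
    T = init  # e.g. the odometry-based initial guess used in the paper
    for voxel, max_dist in [(0.5, 1.0), (0.05, 0.1)]:
        src = source.voxel_down_sample(voxel_size=voxel)
        tgt = target.voxel_down_sample(voxel_size=voxel)
        result = o3d.pipelines.registration.registration_icp(
            src, tgt, max_dist, T,
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        T = result.transformation
    return T  # 4x4 matrix from which the 6-DOF pose is read off
```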

  1. GridWise Standards Mapping Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bosquet, Mia L.

    2004-04-01

    "GridWise" is a concept of how advanced communications, information and controls technology can transform the nation's energy system--across the spectrum from large-scale, central generation to common consumer appliances and equipment--into a collaborative network, rich in the exchange of decision-making information and an abundance of market-based opportunities (Widergren and Bosquet 2003), accompanying the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts related to GridWise--those which could ultimately contribute significantly to advancements toward the GridWise vision, or those which represent today's technological basis upon which this vision must build.

  2. A performance analysis of EC2 cloud computing services for scientific computing

    NARCIS (Netherlands)

    Ostermann, S.; Iosup, A.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.; Avresky, D.; Diaz, M.; Bode, A.; Bruno, C.; Dekel, E.

    2010-01-01

    Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to address with the same shared set of physical resources a large user base with different needs. Thus, clouds

  3. Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project

    Science.gov (United States)

    Rodila, D.; Bacu, V.; Gorgan, D.

    2012-04-01

    The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications addresses important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud, Multicore, etc. seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, but also the development of standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. To achieve these objectives, the enviroGRIDS project deals with the execution of different Earth Science applications, such as hydrological models, Geospatial Web services standardized by the Open Geospatial Consortium (OGC) and others, on parallel and distributed architectures to maximize the obtained performance. This presentation analyses the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures based on application characteristics and user requirements through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases, such as the execution of Geospatial Web services on both Web and Grid infrastructures [2] and the execution of SWAT hydrological models on both Grid and Multicore architectures [3]. The current

  4. Early experience on using glideinWMS in the cloud

    International Nuclear Information System (INIS)

    Andrews, W; Dost, J; Martin, T; McCrea, A; Pi, H; Sfiligoi, I; Würthwein, F; Bockelman, B; Weitzel, D; Bradley, D; Frey, J; Livny, M; Tannenbaum, T; Evans, D; Fisk, I; Holzman, B; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Cloud computing is steadily gaining traction in both the commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both the code changes that were needed to make it work in the cloud world, and the architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  5. Early experience on using glidein WMS in the cloud

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, W. [UC, San Diego; Bockelman, B. [Nebraska U.; Bradley, D. [Wisconsin U., Madison; Dost, J. [UC, San Diego; Evans, D. [Fermilab; Fisk, I. [Fermilab; Frey, J. [Wisconsin U., Madison; Holzman, B. [Fermilab; Livny, M. [Wisconsin U., Madison; Martin, T. [UC, San Diego; McCrea, A. [UC, San Diego; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Pi, H. [UC, San Diego; Sfiligoi, I. [UC, San Diego; Sheldon, P. [Vanderbilt U.; Tannenbaum, T. [Wisconsin U., Madison; Tiradani, A. [Fermilab; Wurthwein, F. [UC, San Diego; Weitzel, D. [Nebraska U.

    2011-01-01

    Cloud computing is steadily gaining traction in both the commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both the code changes that were needed to make it work in the cloud world, and the architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  6. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). A MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization (“CLUBB”) within a MMF model. This involved interfacing CLUBB’s clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation with satellite observations. The chief benefit of the project is to provide a MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  7. MCloud: Secure Provenance for Mobile Cloud Users

    Science.gov (United States)

    2016-10-03

    Feasibility of Smartphone Clouds, 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 04-MAY-15, Shenzhen, China... compromised kernel, with the highest privilege. MCloud context data gathered by smartphone sensors can now be relayed correctly and with integrity... aspects of people's daily online and physical activities. Yet, in critical settings it is especially difficult to ascertain and assert an acceptable level

  8. Distributed Optimization of Sustainable Power Dispatch and Flexible Consumer Loads for Resilient Power Grid Operations

    Science.gov (United States)

    Srikantha, Pirathayini

    Today's electric grid is rapidly evolving to provision for heterogeneous system components (e.g. intermittent generation, electric vehicles, storage devices, etc.) while catering to diverse consumer power demand patterns. In order to accommodate this changing landscape, the widespread integration of cyber communication with physical components can be witnessed in all tenets of the modern power grid. This ubiquitous connectivity provides an elevated level of awareness and decision-making ability to system operators. Moreover, devices that were typically passive in the traditional grid are now `smarter' as these can respond to remote signals, learn about local conditions and even make their own actuation decisions if necessary. These advantages can be leveraged to reap unprecedented long-term benefits that include sustainable, efficient and economical power grid operations. Furthermore, challenges introduced by emerging trends in the grid such as high penetration of distributed energy sources, rising power demands, deregulations and cyber-security concerns due to vulnerabilities in standard communication protocols can be overcome by tapping onto the active nature of modern power grid components. In this thesis, distributed constructs in optimization and game theory are utilized to design the seamless real-time integration of a large number of heterogeneous power components such as distributed energy sources with highly fluctuating generation capacities and flexible power consumers with varying demand patterns to achieve optimal operations across multiple levels of hierarchy in the power grid. Specifically, advanced data acquisition, cloud analytics (such as prediction), control and storage systems are leveraged to promote sustainable and economical grid operations while ensuring that physical network, generation and consumer comfort requirements are met. Moreover, privacy and security considerations are incorporated into the core of the proposed designs and these

  9. The evolution of cloud computing how to plan for change

    CERN Document Server

    Longbottom, Clive

    2017-01-01

    Cloud computing has been positioned as today's ideal IT platform. This book looks at what cloud promises and how it's likely to evolve in the future. Readers will be able to ensure that decisions made now will hold them in good stead in the future and will gain an understanding of how cloud can deliver the best outcome for their organisations.

  10. Solar Energy Grid Integration Systems (SEGIS): adding functionality while maintaining reliability and economics

    Science.gov (United States)

    Bower, Ward

    2011-09-01

    An overview is provided of the activities and progress made under the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation, whose goal was to add functionality while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to the intelligent utility grids and micro-grids of the future. In addition to new capabilities, the designs deliver "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation rather than just unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow, for intermediate grid scenarios, AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and control developed for today meets existing standards and codes AND provides for future connection to a "smart grid" mode that enables utility control and optimized performance.

  11. Usage of Cloud Computing Simulators and Future Systems For Computational Research

    OpenAIRE

    Lakshminarayanan, Ramkumar; Ramalingam, Rajasekar

    2016-01-01

    Cloud Computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) are used as business models for Cloud Computing. Nowadays, the adoption and deployment of Cloud Computing is increasing in various domains, forcing researchers to conduct research in the area of Cloud Computing ...

  12. Transition to the Cloud

    DEFF Research Database (Denmark)

    Hedman, Jonas; Xiao, Xiao

    2016-01-01

    The rise of cloud computing has dramatically changed the way software companies provide and distribute their IT products and related services over the last decades. Today, most software is bought off-the-shelf and distributed over the Internet. This transition is greatly influencing how software companies operate. In this paper, we present a case study of an ERP vendor for SMB (small and medium-size business) making a transition towards a cloud-based business model. Through the theoretical lens of ecosystems, we are able to analyze the evolution of the vendor and its business network as a whole, and find that the relationship between the vendor and its Value-added Resellers (VARs) is greatly affected. We conclude by presenting critical issues and challenges for managing such a cloud transition.

  13. Testing as a Service with HammerCloud

    CERN Document Server

    Medrano Llamas, Ramón; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco; Sciabà, Andrea; van der Ster, Daniel

    2014-01-01

    HammerCloud was designed and born out of the needs of the grid community to test resources and automate operations from a user perspective. The recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation and operation of big systems like the grid. This area is not escaping the paradigm shift, and we are starting to perceive as natural the Testing as a Service (TaaS) offerings, which allow testing any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed in many grid sites, from both the functional and stress perspectives. This work will review the recent developments in HammerCloud and its evolution to a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements....

  14. Sharing lessons learned on developing and operating smart grid pilots with households

    NARCIS (Netherlands)

    Kobus, C.B.A.; Klaassen, E.A.M.; Kohlmann, J.; Knigge, J.D.; Boots, S.

    2013-01-01

    Today, technology is still leading Smart Grid development. Nevertheless, the awareness that it should be a multidisciplinary effort to foster public acceptance and even desirability of Smart Grids is increasing. This paper illustrates the added value of a multidisciplinary approach by sharing the

  15. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    Science.gov (United States)

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for the consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with those from running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared against running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple virtual node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework

  16. Simulation For Synchronization Of A Micro-Grid With Three-Phase Systems

    Directory of Open Access Journals (Sweden)

    Mohammad Jafari Far

    2015-08-01

    Today, owing to their high reliability, micro-grids have developed significantly. They have two states of operation: the island state and connection to the main grid. Under certain circumstances the micro-grid is connected to or disconnected from the network. Synchronization of a micro-grid with the network must be done when its voltage is synchronized with the voltage of the main grid. Phase-locked loops are responsible for identifying the voltage phases of the micro-grid and the main grid, and when these two voltages are in phase they connect the micro-grid to the main grid. In this research, the connection of a micro-grid to the main grid in the two cases of synchronous and asynchronous voltage is simulated and investigated.
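
A drastically simplified, one-shot version of this synchronisation check can be written in NumPy: estimate each waveform's phase at the fundamental and close the breaker only when the offset is small. A real phase-locked loop tracks phase continuously and also matches frequency and amplitude; the 50 Hz fundamental and the tolerance below are assumptions for illustration.

```python
import numpy as np

F_GRID = 50.0  # assumed fundamental frequency in Hz

def phase_offset(v_micro, v_main, fs):
    """Phase difference (rad) between two sampled voltage waveforms,
    taken from the fundamental bin of their Fourier transforms."""
    n = len(v_micro)
    k = int(round(F_GRID * n / fs))            # FFT bin of the fundamental
    phase = lambda v: np.angle(np.fft.rfft(v)[k])
    d = phase(v_micro) - phase(v_main)
    return np.angle(np.exp(1j * d))            # wrap into [-pi, pi]

def ok_to_connect(v_micro, v_main, fs, tol=0.05):
    """Allow connection only when the voltages are (nearly) in phase."""
    return abs(phase_offset(v_micro, v_main, fs)) < tol
```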

  17. CERN Computing Colloquium | Hidden in the Clouds: New Ideas in Cloud Computing | 30 May

    CERN Multimedia

    2013-01-01

    by Dr. Shevek (NEBULA) Thursday 30 May 2013 from 2 p.m. to 4 p.m. at CERN ( 40-S2-D01 - Salle Dirac ) Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the ful...

  18. Cloud Based Educational Systems And Its Challenges And Opportunities And Issues

    OpenAIRE

    PAUL, Prantosh Kr.; DANGWAL, Kiran LATA

    2014-01-01

    Cloud Computing (CC) is actually a set of hardware, software, networks, storage, services and interfaces that combine to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independence and smarter tools and technological gradients. Healthy Cloud Computing helps in the sharing of software, hardware, applications and other packages with the help o...

  19. THE EXPANSION OF ACCOUNTING TO THE CLOUD

    Directory of Open Access Journals (Sweden)

    Otilia DIMITRIU

    2014-06-01

    The world today is witnessing an explosion of technologies that are remodelling our entire reality. The traditional way of thinking in the business field has shifted towards a new IT breakthrough: cloud computing. The cloud paradigm has emerged as a natural step in the evolution of the internet and has captivated everyone’s attention. The accounting profession itself has found a means to optimize its activity through cloud-based applications. By reviewing the latest and most relevant studies and practitioners’ reports, this paper focuses on the implications of cloud accounting, the fusion of cloud technologies and accounting. We address this innovative topic through a business-oriented approach and bring forward a new accounting model that might revolutionize the economic landscape.

  20. Exploiting the Potential of Data Centers in the Smart Grid

    Science.gov (United States)

    Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei

    As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they have become a large and noticeable electric load. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research in the area of letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, which could provide more chances for researchers to explore this field.

  1. A Comparison of MODIS/VIIRS Cloud Masks over Ice-Bearing River: On Achieving Consistent Cloud Masking and Improved River Ice Mapping

    Directory of Open Access Journals (Sweden)

    Simon Kraatz

    2017-03-01

    The capability of frequently and accurately monitoring ice on rivers is important, since it may make it possible to identify, in a timely manner, ice accumulations corresponding to ice jams. Ice jams are dam-like structures formed from arrested ice floes, and may cause rapid flooding. To inform on this potential hazard, the CREST River Ice Observing System (CRIOS) produces ice cover maps based on MODIS and VIIRS overpass data at several locations, including the Susquehanna River. CRIOS uses the respective platform's automatically produced cloud masks to discriminate ice/snow-covered grid cells from clouds. However, since cloud masks are produced using each instrument's data, and owing to differences in detector performance, it is quite possible that identical algorithms applied to even nearly identical instruments may produce substantially different cloud masks. Besides detector performance, cloud identification can be biased by local conditions (e.g., land cover), viewing geometry, and transient conditions (snow and ice). Snow/cloud confusion and large view angles can result in substantial overestimates of cloud and ice. This impacts algorithms such as CRIOS, since false cloud cover precludes the determination of whether an otherwise reasonably cloud-free grid cell consists of water or ice. Especially for applications aiming to frequently classify or monitor a location, it is important to evaluate cloud masking, including false cloud detections. We present an assessment of three cloud masks via the parameter of effective revisit time. A 100 km stretch of up to 1.6 km wide river was examined with daily data sampled at 500 m resolution, over 317 days during winter. Results show that there are substantial differences between each of the cloud mask products, especially while the river bears ice. A contrast-based cloud screening approach was found to provide improved and consistent cloud and ice identification within the reach (95%–99% correlations, and 3%–7% mean

  2. RACORO Extended-Term Aircraft Observations of Boundary-Layer Clouds

    Science.gov (United States)

    Vogelmann, Andrew M.; McFarquhar, Greg M.; Ogren, John A.; Turner, David D.; Comstock, Jennifer M.; Feingold, Graham; Long, Charles N.; Jonsson, Haflidi H.; Bucholtz, Anthony; Collins, Don R.

    2012-01-01

    Small boundary-layer clouds are ubiquitous over many parts of the globe and strongly influence the Earth's radiative energy balance. However, our understanding of these clouds is insufficient to solve pressing scientific problems. For example, cloud feedback represents the largest uncertainty amongst all climate feedbacks in general circulation models (GCMs). Several issues complicate understanding boundary-layer clouds and simulating them in GCMs. The high spatial variability of boundary-layer clouds poses an enormous computational challenge, since their horizontal dimensions and internal variability occur at spatial scales much finer than the computational grids used in GCMs. Aerosol-cloud interactions further complicate boundary-layer cloud measurement and simulation. Additionally, aerosols influence processes such as precipitation and cloud lifetime. An added complication is that at small scales (of order meters to tens of meters) distinguishing cloud from aerosol is increasingly difficult, due to the effects of aerosol humidification, cloud fragments and photon scattering between clouds.

  3. Cloud Interaction and Safety Features of Mobile Devices

    Directory of Open Access Journals (Sweden)

    Mirsat Yeşiltepe

    2018-02-01

    In this paper, two currently popular mobile operating systems are examined in relation to the concept of the cloud, which today has almost begun to supplant the word "internet"; their differences and the cloud security mechanisms they use for themselves are dealt with in this environment. One of the compared mobile operating systems represents open source and the other closed source. The other issue discussed in this article is how the mobile environment interacts with the cloud, as compared with cloud communication with computers.

  4. Cloud Computing: Should It Be Integrated into the Curriculum?

    Science.gov (United States)

    Changchit, Chuleeporn

    2015-01-01

    Cloud computing has become increasingly popular among users and businesses around the world, and education is no exception. Cloud computing can bring an increased number of benefits to an educational setting, not only for its cost effectiveness, but also for the thirst for technology that college students have today, which allows learning and…

  5. Automated cloud tracking system for the Akatsuki Venus Climate Orbiter data

    Science.gov (United States)

    Ogohara, Kazunori; Kouyama, Toru; Yamamoto, Hiroki; Sato, Naoki; Takagi, Masahiro; Imamura, Takeshi

    2012-02-01

    The Japanese Venus Climate Orbiter, Akatsuki, is cruising to approach Venus again, although its first Venus orbital insertion (VOI) failed. At present, we focus on the next opportunity for VOI and the following scientific observations. We have constructed an automated cloud tracking system for processing the data obtained by Akatsuki. In this system, correction of the pointing of the satellite is essential for improving the accuracy of the cloud motion vectors derived from the cloud tracking. Attitude errors of the satellite are reduced by fitting an ellipse to the limb of the imaged Venus disk. Next, longitude-latitude distributions of brightness (cloud patterns) are calculated to make it easier to derive the cloud motion vectors. The grid points are distributed at regular intervals in the longitude-latitude coordinate system. After applying the solar zenith correction and a high-pass filter to the derived longitude-latitude distributions of brightness, the cloud features are tracked using pairs of images. As a result, we obtain cloud motion vectors on equally spaced longitude-latitude grid points. These processes are pipelined and automated, and are applied to all data obtained by combinations of the cameras and filters onboard Akatsuki. Several tests show that the cloud motion vectors are determined with sufficient accuracy. We expect that the longitude-latitude data sets created by the automated cloud tracking system will contribute to Venus meteorology.
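
After limb fitting, map projection and high-pass filtering, the tracking step itself reduces to template matching between image pairs. The sketch below is a hypothetical stand-in for that step, not the Akatsuki pipeline code: it finds the shift that maximizes normalized cross-correlation around one grid point, with arbitrary window sizes, and assumes the windows stay inside the image.

```python
import numpy as np

def motion_vector(img1, img2, r0, c0, half=16, search=8):
    """Displacement (rows, cols) of the cloud feature centred at (r0, c0)
    between two brightness maps, by maximizing the normalized
    cross-correlation over a small search window."""
    norm = lambda w: (w - w.mean()) / (w.std() + 1e-12)
    t = norm(img1[r0 - half:r0 + half, c0 - half:c0 + half])
    best, shift = -np.inf, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            w = norm(img2[r0 + dr - half:r0 + dr + half,
                          c0 + dc - half:c0 + dc + half])
            score = (t * w).mean()             # correlation coefficient
            if score > best:
                best, shift = score, (dr, dc)
    return shift  # divide by the image time separation for a velocity
```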

  6. a Point Cloud Classification Approach Based on Vertical Structures of Ground Objects

    Science.gov (United States)

    Zhao, Y.; Hu, Q.; Hu, W.

    2018-04-01

    This paper proposes a novel method for point cloud classification using the vertical structural characteristics of ground objects. Since urbanization is developing rapidly nowadays, urban ground objects also change frequently. Conventional photogrammetric methods cannot satisfy the requirements of updating ground object information efficiently, so LiDAR (Light Detection and Ranging) technology is employed to accomplish this task. LiDAR data, namely point cloud data, provide detailed three-dimensional coordinates of ground objects, but this kind of data is discrete and unorganized. To accomplish ground object classification with a point cloud, we first construct horizontal grids and vertical layers to organize the point cloud data, then calculate vertical characteristics, including density and measures of dispersion, and form a characteristic curve for each grid. With the help of PCA processing and the K-means algorithm, we analyze the similarities and differences of the characteristic curves. Curves that have similar features are classified into the same class, and the point clouds corresponding to these curves are classified as well. The whole process is simple but effective, and this approach does not need the assistance of other data sources. In this study, point cloud data are classified into three classes: vegetation, buildings, and roads. When the horizontal grid spacing and vertical layer spacing are 3 m and 1 m respectively, the vertical characteristic is set as density, and the number of dimensions after PCA processing is 11, the overall precision of the classification result is about 86.31%. The result can help us quickly understand the distribution of various ground objects.
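
The recipe in this abstract (3 m horizontal grids, 1 m vertical layers, per-layer density as the characteristic curve, PCA to 11 dimensions, K-means into three classes) maps almost directly onto NumPy and scikit-learn. The sketch below follows that recipe under simplifying assumptions; the paper also uses measures of dispersion, which are omitted here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def classify_columns(points, cell=3.0, layer=1.0, n_dims=11, k=3):
    """points: (N, 3) array. Returns the grid coordinates and a class
    label (0..k-1) for each occupied horizontal grid cell."""
    mins = points.min(axis=0)
    ij = np.floor((points[:, :2] - mins[:2]) / cell).astype(int)
    lz = np.floor((points[:, 2] - mins[2]) / layer).astype(int)
    cols, inv = np.unique(ij, axis=0, return_inverse=True)
    curves = np.zeros((len(cols), lz.max() + 1))
    np.add.at(curves, (inv, lz), 1.0)     # per-grid, per-layer density curve
    feats = PCA(n_components=n_dims).fit_transform(curves)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    return cols, labels                   # e.g. vegetation/building/road
```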

  7. The Grid is operational – it’s official!

    CERN Multimedia

    2008-01-01

    On Friday, 3 October, CERN and its many partners around the world officially marked the end of seven years of development and deployment of the Worldwide LHC Computing Grid (WLCG) and the beginning of continuous operations with an all-day Grid Fest. Wolfgang von Rüden unveils the WLCG sculpture. Les Robertson speaking at the Grid Fest. At the LHC Grid Fest, Bob Jones highlights the far-reaching uses of grid computing. Over 250 grid-enthusiasts gathered in the Globe, including large delegations from the press and from industrial partners, as well as many of the people around the world who manage the distributed operations of the WLCG, which today comprises more than 140 computer centres in 33 countries. As befits a cutting-edge information technology, many participants joined virtually, by video, to mark the occasion. Unlike the start-up of the LHC, there was no single moment of high dram...

  8. Practical Experiences With Torque Meta-Scheduling In The Czech National Grid

    Directory of Open Access Journals (Sweden)

    Simon Toth

    2012-01-01

    The Czech National Grid Infrastructure went through a complex transition in the last year. The production environment was switched from the commercial batch system PBSPro to an open source alternative, the Torque batch system. This paper concentrates on two aspects of this transition. First, we present our practical experience with Torque used as a production-ready batch system. Our modified version of Torque, with all the necessary PBSPro-exclusive features re-implemented and further extended with new features like cloud-like behaviour, was deployed across the entire production environment, covering the whole Czech Republic, for almost a full year. In the second part, we present our work on meta-scheduling. This involves our work on distributed architecture and cloud-grid convergence. The distributed architecture was designed to overcome the limitations of a central-server setup, which was originally used and presented stability and performance issues. While this paper does not discuss the inclusion of cloud interfaces into grids, it does present the dynamic infrastructure, which is a requirement for sharing the grid infrastructure between a batch system and a cloud gateway. We also invite everyone to try out our fork of the Torque batch system, which is now publicly available.

  9. Testing as a service with HammerCloud

    International Nuclear Information System (INIS)

    Llamas, Ramón Medrano; Barrand, Quentin; Sciabà, Andrea; Ster, Daniel van der; Elmsheuser, Johannes; Legger, Federica; Sciacca, Gianfranco

    2014-01-01

    HammerCloud was designed and born out of the needs of the grid community to test resources and automate operations from a user perspective. The recent developments in the IT space propose a shift to software-defined data centres, in which every layer of the infrastructure can be offered as a service. Testing and monitoring are an integral part of the development, validation and operation of big systems like the grid. This area is not escaping the paradigm shift, and we are starting to perceive as natural the Testing as a Service (TaaS) offerings, which allow testing any infrastructure service, such as the Infrastructure as a Service (IaaS) platforms being deployed in many grid sites, from both the functional and stress perspectives. This work reviews the recent developments in HammerCloud and its evolution to a TaaS conception, in particular its deployment on the Agile Infrastructure platform at CERN and the testing of many IaaS providers across Europe in the context of experiment requirements. The first section reviews the architectural changes that a service running in the cloud needs, such as an orchestration service or new storage requirements, in order to provide functional and stress testing. The second section reviews the first tests of infrastructure providers from the perspective of the challenges discovered at the architectural level. Finally, the third section evaluates future requirements of scalability and features to increase testing productivity.

  10. The Integration of CloudStack and OCCI/OpenNebula with DIRAC

    International Nuclear Information System (INIS)

    Méndez Muñoz, Víctor; Merino Arévalo, Gonzalo; Fernández Albor, Víctor; Saborido Silva, Juan José; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás

    2012-01-01

    The increasing availability of Cloud resources is emerging as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing offers an easy way to access resources from both systems efficiently. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing the creation of virtual clusters on demand, including public, private and hybrid clouds. This approach has required the development of an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, like the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.

  11. Smart grid in Denmark 2.0. Implementing three key recommendations from the Smart Grid Network. [DanGrid]; Smart Grid i Danmark 2.0. Implementering af tre centrale anbefalinger fra Smart Grid netvaerket

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-11-01

    smart grid technology. The second barrier is that network companies today do not have a real opportunity to use price signals as an instrument to recover customers' flexibility. This report has developed a roadmap with special focus on grid companies' role, describing the most important steps towards a smart grid. (LN)

  12. SIRTA, a ground-based atmospheric observatory for cloud and aerosol research

    Directory of Open Access Journals (Sweden)

    M. Haeffelin

    2005-02-01

    Ground-based remote sensing observatories have a crucial role to play in providing data to improve our understanding of atmospheric processes, to test the performance of atmospheric models, and to develop new methods for future space-borne observations. Institut Pierre Simon Laplace, a French research institute in environmental sciences, created the Site Instrumental de Recherche par Télédétection Atmosphérique (SIRTA), an atmospheric observatory with these goals in mind. Today SIRTA, located 20 km south of Paris, operates a suite of state-of-the-art active and passive remote sensing instruments dedicated to routine monitoring of cloud and aerosol properties and key atmospheric parameters. A detailed description of the state of the atmospheric column is progressively archived and made accessible to the scientific community. This paper describes the SIRTA infrastructure and database, and provides an overview of the scientific research associated with the observatory. Researchers using SIRTA data conduct research on atmospheric processes involving complex interactions between clouds, aerosols, and radiative and dynamic processes in the atmospheric column. Atmospheric modellers working with SIRTA observations develop new methods to test their models and innovative analyses to improve parametric representations of sub-grid processes that must be accounted for in the model. SIRTA provides the means to develop data interpretation tools for future active remote sensing missions in space (e.g. CloudSat and CALIPSO). SIRTA observation and research activities take place in networks of atmospheric observatories that allow scientists to access consistent data sets from diverse regions of the globe.

  13. Monitoring the EGEE/WLCG grid services

    International Nuclear Information System (INIS)

    Duarte, A; Nyczyk, P; Retico, A; Vicinanza, D

    2008-01-01

    Grids have the potential to revolutionise computing by providing ubiquitous, on-demand access to computational services and resources. They promise to allow on-demand access to, and composition of, computational services provided by multiple independent sources. Grids can also provide unprecedented levels of parallelism for high-performance applications. On the other hand, grid characteristics, such as high heterogeneity, complexity and distribution, create many new technical challenges. Among these technical challenges, failure management is a key area that demands much progress. A recent survey revealed that fault diagnosis is still a major problem for grid users. When a failure appears on the user's screen, it becomes very difficult for the user to identify whether the problem is in the application, somewhere in the grid middleware, or even lower in the fabric that comprises the grid. In this paper we present a tool able to check if a given grid service works as expected for a given set of users (a Virtual Organisation) on the different resources available on a grid. Our solution deals with grid services as single components that should produce an expected output for a pre-defined input, which is quite similar to unit testing. The tool, called Service Availability Monitoring or SAM, is currently being used by several different Virtual Organisations to monitor more than 300 grid sites belonging to the largest grids available today. We also discuss how this tool is being used by some of those VOs and how it is helping in the operation of the EGEE/WLCG grid
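
The unit-testing analogy can be made concrete with a toy availability probe: a service passes only if a pre-defined request produces the expected response. Real SAM tests submit actual grid jobs and storage operations on behalf of each Virtual Organisation, so the HTTP endpoints below are purely hypothetical placeholders.

```python
import urllib.request

# Hypothetical probe table: one pre-defined request per service.
ENDPOINTS = {
    "site-bdii": "http://bdii.example.org:2170/",
    "storage": "http://se.example.org:8443/status",
}

def check_service(name, expected_status=200, timeout=30):
    """Unit-test-style check: the service is 'available' only if the
    probe request yields the expected response."""
    try:
        with urllib.request.urlopen(ENDPOINTS[name], timeout=timeout) as r:
            return r.status == expected_status
    except OSError:  # DNS failure, refused connection, timeout, HTTP error
        return False
```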

  14. Security in Cloud Computing For Service Delivery Models: Challenges and Solutions

    OpenAIRE

    Preeti Barrow; Runni Kumari; Prof. Manjula R

    2016-01-01

    Cloud computing, undoubtedly, is a path to expanding the limits or adding powerful capabilities on demand with almost no investment in new infrastructure, training for new staff, or authorization of new software. Though everyone is talking about the cloud today, organizations are still in a dilemma over whether it is safe to deploy their business on the cloud. The reason behind this is nothing but security. No cloud service provider provides a 100% security assurance to its customers and therefore, businesses are h...

  15. Using Cloud-to-Ground Lightning Climatologies to Initialize Gridded Lightning Threat Forecasts for East Central Florida

    Science.gov (United States)

    Lambert, Winnie; Sharp, David; Spratt, Scott; Volkmer, Matthew

    2005-01-01

    Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, the forecast propagation and interaction of sea/lake/outflow boundaries, the expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC (0700 AM EST) each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve the efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to increase consistency between forecasters while enabling them to focus on
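
    As a rough illustration of the first-guess idea, the sketch below accumulates a flow-regime-stratified CG strike density grid from synthetic strike records; the array layout, regime count and per-regime sample sizes are assumptions, not the AMU's actual data format.

        import numpy as np

        # Synthetic archive: each strike is (lat_idx, lon_idx, regime) on the
        # 5 km x 5 km forecast grid, labelled by synoptic-scale flow regime.
        NLAT, NLON, NREGIMES = 100, 80, 4
        rng = np.random.default_rng(0)
        strikes = rng.integers(0, [NLAT, NLON, NREGIMES], size=(10000, 3))
        days_per_regime = np.array([500, 300, 200, 100])  # assumed sample sizes

        density = np.zeros((NREGIMES, NLAT, NLON))
        for lat, lon, regime in strikes:
            density[regime, lat, lon] += 1
        density /= days_per_regime[:, None, None]  # mean strikes/day per cell

        # First guess for today's threat map: the grid for the forecast regime.
        first_guess = density[2]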

  16. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Gustafson Jr., William I; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective and stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.
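
    The single-column sketch below illustrates the general idea of mass-flux-driven vertical tracer transport; the profiles, the non-entraining plume assumption and the simple flux-divergence tendency are illustrative stand-ins, not the ECPP scheme itself.

        import numpy as np

        nz, dz, dt = 20, 500.0, 60.0        # layers, thickness [m], step [s]
        rho = np.full(nz, 1.0)              # air density [kg m-3], simplified
        Mu = np.linspace(0.02, 0.0, nz)     # CRM-derived updraft mass flux
        q = np.linspace(1.0, 0.0, nz)       # tracer mixing ratio, surface-heavy

        # A non-entraining plume carries surface-layer air upward; compensating
        # subsidence returns environmental air:
        #   dq/dt = -(1/rho) d[Mu * (q_up - q)]/dz
        q_up = np.full(nz, q[0])
        flux = Mu * (q_up - q)              # net convective tracer flux
        q_new = q - dt * np.gradient(flux, dz) / rho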

  17. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr., William I; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective and stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.

  18. Datacenter Changes vs. Employment Rates for Datacenter Managers In the Cloud Computing Era

    OpenAIRE

    Mirzoev, Timur; Benson, Bruce; Hillhouse, David; Lewis, Mickey

    2014-01-01

    Due to the evolving cloud computing paradigm, there is a prevailing concern that in the near future data center managers may be in short supply. Cloud computing, as a whole, is becoming more prevalent in today's computing world. In fact, cloud computing has become so popular that some are now referring to data centers as cloud centers. How does this interest in cloud computing translate into employment rates for data center managers? The popularity of the public and private cloud models are...

  19. INTERFACING INTERACTIVE DATA ANALYSIS TOOLS WITH THE GRID: THE PPDG CS-11 ACTIVITY

    International Nuclear Information System (INIS)

    Perl, Joseph

    2003-01-01

    For today's physicists, who work in large geographically distributed collaborations, the data grid promises significantly greater capabilities for analysis of experimental data and production of physics results than is possible with today's "remote access" technologies. The goal of letting scientists at their home institutions interact with and analyze data as if they were physically present at the major laboratory that houses their detector and computer center has yet to be accomplished. The Particle Physics Data Grid project (www.ppdg.net) has recently embarked on an effort to "Interface and Integrate Interactive Data Analysis Tools with the grid and identify Common Components and Services". The initial activities are to collect known, and identify new, requirements for grid services and analysis tools from a range of current and future experiments, to determine if existing plans for tools and services meet these requirements. Follow-on activities will foster the interaction between grid service developers, analysis tool developers, experiment analysis framework developers and end-user physicists, and will identify and carry out specific development/integration work so that interactive analysis tools utilizing grid services actually provide the capabilities that users need. This talk will summarize what we know of requirements for analysis tools and grid services, as well as describe the identified areas where more development work is needed.

  20. Evaluation of NCMRWF unified model vertical cloud structure with CloudSat over the Indian summer monsoon region

    Science.gov (United States)

    Jayakumar, A.; Mamgain, Ashu; Jisesh, A. S.; Mohandas, Saji; Rakhi, R.; Rajagopal, E. N.

    2016-05-01

    The representation of rainfall distribution and monsoon circulation in the high-resolution versions of the NCMRWF Unified Model (NCUM-REG) for short-range forecasting of extreme rainfall events depends strongly on key factors such as the vertical cloud distribution, convection, and the convection/cloud relationship in the model. Hence it is highly relevant to evaluate the vertical structure of cloud and precipitation of the model over the monsoon environment. In this regard, we utilized the long observational record of CloudSat by conditioning it on the synoptic situation of the model simulation period. Simulations were run at 4-km grid length with the convective parameterization effectively switched off and on. Since the sample of CloudSat overpasses through the monsoon domain is small, this methodology can only qualitatively evaluate the vertical cloud structure for the model simulation period. It is envisaged that the present study will open up the possibility of further improvement in the high-resolution version of NCUM in the tropics for Indian summer monsoon rainfall events.

  1. The Czech National Grid Infrastructure

    Science.gov (United States)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe the computing infrastructure in more detail; it is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent statistics of usage will be given.

  2. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long-baseline neutrino experiments like the NOvA experiment at Fermilab require large-scale, compute-intensive simulations of their neutrino beam fluxes and of backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  3. Cloud Computing - A Unified Approach for Surveillance Issues

    Science.gov (United States)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attraction of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location, through networks. Cloud computing is gradually replacing traditional Information Technology infrastructure. Securing data is one of the leading concerns and the biggest issue for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal or sensitive information is being stored in the organization. It is indeed true that today's cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and the techniques to overcome the challenges in the cloud environment.

  4. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing High Throughput Computing applications, common in High Energy Physics, work on cloud-provided resources, as well as opening the possibility of running new applications. The work is divided into two parts: first, we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top, along with many others, in a more flexible way; integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue, and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  5. POINT CLOUD ORIENTED SHOULDER LINE EXTRACTION IN LOESS HILLY AREA

    Directory of Open Access Journals (Sweden)

    L. Min

    2016-06-01

    Full Text Available Shoulder line is a significant terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point cloud. The workflow is as follows: (i) ground points are selected by using a grid filter in order to remove most of the noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the test area of 0.23 km2, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation, and shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.

  6. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    Science.gov (United States)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    Shoulder line is a significant terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation removal methods for P-N terrains differ, there is an imperative need for shoulder line extraction. In this paper, we propose an automatic shoulder line extraction method based on point cloud. The workflow is as follows: (i) ground points are selected by using a grid filter in order to remove most of the noisy points; (ii) based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Breaks classification method; (iii) the common boundary between the two slope classes is extracted as the shoulder line candidate; (iv) the filter grid size is adjusted and steps i-iii are repeated until the shoulder line candidate matches its real location; (v) the shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County of Shaanxi Province, China. A total of 600 million points were acquired in the test area of 0.23 km2, using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for filter grid size optimization. The experimental results show that the optimal filter grid size varies across sample areas, and that a power function relation exists between filter grid size and point density. The optimal grid size was determined by the above relation, and shoulder lines of the 60 blocks were then extracted. Compared with manual interpretation results, the accuracy of the whole result reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
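
    Steps (i)-(iii) of the workflow in the two records above can be sketched as follows; the fixed slope threshold standing in for the Natural Breaks classification and all parameter values are assumptions for illustration only.

        import numpy as np

        def grid_min_filter(points, cell):
            """Step (i): keep the lowest return per grid cell as ground."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            ground = {}
            for key, z in zip(map(tuple, ij), points[:, 2]):
                if key not in ground or z < ground[key]:
                    ground[key] = z
            return ground  # sparse DEM: {(i, j): minimum elevation}

        def steep_mask(dem, cell, threshold_deg=30.0):
            """Steps (ii)-(iii): map slope and split cells into two classes;
            the class boundary is the shoulder line candidate."""
            keys = np.array(list(dem))
            idx = keys - keys.min(axis=0)
            grid = np.full(idx.max(axis=0) + 1, np.nan)
            grid[idx[:, 0], idx[:, 1]] = np.array(list(dem.values()))
            gy, gx = np.gradient(grid, cell)
            slope = np.degrees(np.arctan(np.hypot(gx, gy)))
            return slope > threshold_deg

        rng = np.random.default_rng(0)
        pts = np.c_[rng.uniform(0, 100, (20000, 2)), rng.uniform(0, 50, 20000)]
        steep = steep_mask(grid_min_filter(pts, cell=2.0), cell=2.0)
        # Steps (iv)-(v): shrink `cell` and repeat until the class boundary
        # matches the real shoulder line, then mosaic over all blocks.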

  7. Exploring the factors influencing the cloud computing adoption: a systematic study on cloud migration.

    Science.gov (United States)

    Rai, Rashmi; Sahoo, Gadadhar; Mehfuz, Shabana

    2015-01-01

    Today, most organizations rely on their age-old legacy applications to support their business-critical systems. However, there are several critical concerns, such as maintainability and scalability issues, associated with legacy systems. Against this background, cloud services offer a more agile and cost-effective platform to support business applications and IT infrastructure. The adoption of cloud services has been increasing recently, and so has academic research in cloud migration; however, there is a genuine need for a secondary study to further strengthen this research. The primary objective of this paper is to scientifically and systematically identify, categorize and compare the existing research work in the area of legacy-to-cloud migration. The paper has also endeavored to consolidate the research on security issues, which are a prime factor hindering the adoption of the cloud, by classifying the studies on secure cloud migration. An SLR (Systematic Literature Review) of thirty selected papers, published from 2009 to 2014, was conducted to properly understand the nuances of the security framework. To categorize the selected studies, the authors have proposed a conceptual model for cloud migration, which has resulted in a resource base of existing solutions for cloud migration. This study concludes that cloud migration research is at a seminal stage, but it is simultaneously evolving and maturing, with increasing participation from academics and industry alike. The paper also identifies the need for a secure migration model that can fortify an organization's trust in cloud migration and facilitate the necessary tool support to automate the migration process.

  8. Digital Forensics in Cloud Computing

    Directory of Open Access Journals (Sweden)

    PATRASCU, A.

    2014-05-01

    Full Text Available Cloud Computing is a rather new technology whose goal is the efficient usage of datacenter resources, offering them to users on a pay-per-use model. In this equation we need to know exactly where and how a piece of information is stored or processed. In today's cloud deployments this task is becoming a necessity and a must, because we need a way to monitor user activity and, furthermore, in case of legal action, we must be able to present digital evidence in a form in which it is accepted. In this paper we present a modular and distributed architecture that can be used to implement a cloud digital forensics framework on top of new or existing datacenters.

  9. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. In this perspective, the new categories of services introduced will slowly replace many types of computational resources currently used, and grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper attempts to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics) tools, UG, CATIA, and so on.

  10. Subtropical Low Cloud Response to a Warmer Climate in a Superparameterized Climate Model: Part I. Regime Sorting and Physical Mechanisms

    Directory of Open Access Journals (Sweden)

    Peter N Blossey

    2009-07-01

    Full Text Available The subtropical low cloud response to a climate with SST uniformly warmed by 2 K is analyzed in the SP-CAM superparameterized climate model, in which each grid column is replaced by a two-dimensional cloud-resolving model (CRM). Intriguingly, SP-CAM shows substantial low cloud increases over the subtropical oceans in the warmer climate. The paper aims to understand the mechanism for these increases. The subtropical low cloud increase is analyzed by sorting grid-column months of the climate model into composite cloud regimes using percentile ranges of lower tropospheric stability (LTS). LTS is observed to be well correlated with subtropical low cloud amount and boundary layer vertical structure. The low cloud increase in SP-CAM is attributed to boundary-layer destabilization due to increased clear-sky radiative cooling in the warmer climate. This drives more shallow cumulus convection and a moister boundary layer, inducing cloud increases and further increasing the radiative cooling. The boundary layer depth does not change substantially, due to compensation between increased radiative cooling (which promotes more turbulent mixing and boundary-layer deepening) and slight strengthening of the boundary-layer top inversion (which inhibits turbulent entrainment and promotes a shallower boundary layer). The widespread changes in low clouds do not appear to be driven by changes in mean subsidence.
    In a companion paper we use column-mode CRM simulations based on LTS-composite profiles to further study the low cloud response mechanisms and to explore the sensitivity of the low cloud response to grid resolution in SP-CAM.
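
    The regime-sorting metric is easy to illustrate. Below is a minimal sketch, on synthetic data, of compositing a low-cloud diagnostic by percentile ranges of LTS (taken here as the 700 hPa potential temperature minus the surface potential temperature, a standard definition); it is not the SP-CAM analysis code.

        import numpy as np

        rng = np.random.default_rng(1)
        theta_700 = rng.normal(310.0, 3.0, 5000)  # 700 hPa potential temp [K]
        theta_sfc = rng.normal(298.0, 2.0, 5000)  # surface potential temp [K]
        low_cloud = rng.uniform(0.0, 1.0, 5000)   # stand-in low cloud fraction

        lts = theta_700 - theta_sfc
        edges = np.percentile(lts, [20, 40, 60, 80])
        regime = np.digitize(lts, edges)          # five LTS percentile regimes

        # Composite the low-cloud diagnostic within each LTS regime.
        composites = [low_cloud[regime == r].mean() for r in range(5)]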

  11. The Impact of Cloud Computing Technologies in E-learning

    Directory of Open Access Journals (Sweden)

    Hosam Farouk El-Sofany

    2013-01-01

    Full Text Available Cloud computing is a new computing model based on grid computing, distributed computing, parallel computing and virtualization technologies, which together define the shape of a new technology. It is the core technology of the next generation of network computing platforms and, especially in the field of education, cloud computing is the basic environment and platform of future E-learning. It provides secure data storage, convenient internet services and strong computing power. This article mainly focuses on research into the application of cloud computing in the E-learning environment. The research study shows that the cloud platform is valuable to both students and instructors for achieving the course objectives. The paper presents the nature, benefits and services of cloud computing as a platform for the E-learning environment.

  12. Cloudbus Toolkit for Market-Oriented Cloud Computing

    Science.gov (United States)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  13. Grid deformation strategies for CFD analysis of screw compressors

    OpenAIRE

    Rane, S.; Kovacevic, A.; Stosic, N.; Kethidi, M.

    2013-01-01

    Customized grid generation of twin screw machines for CFD analysis is widely used by the refrigeration and air-conditioning industry today, but is currently not suitable for topologies such as those of single screw, variable pitch or tri screw rotors. This paper investigates a technique called key-frame re-meshing that supplies pre-generated unstructured grids to the CFD solver at different time steps. To evaluate its accuracy, the results of an isentropic compression-expansion process in a r...

  14. Grid production with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2018-01-01

    ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and from otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility on the grid and conventional clusters for exploiting otherwise unused cycles also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study its performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.

  15. Security and Privacy Issues in Cloud Computing

    OpenAIRE

    Sen, Jaydip

    2013-01-01

    Today, cloud computing is defined and talked about across the ICT industry in different contexts and with different definitions attached to it. It is a new paradigm in the evolution of Information Technology, as it is one of the biggest revolutions in this field to have taken place in recent times. According to the National Institute of Standards and Technology (NIST), “cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing ...

  16. GStat 2.0: Grid Information System Status Monitoring

    CERN Document Server

    Field, L; Tsai, M; CERN. Geneva. IT Department

    2010-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE information system, which is a hierarchical system built out of more than 260 site-level and approximately 70 global aggregation services. It also checks the information content and presents summary and history displays for Grid Operators and System Administrators. A major new version, GStat 2.0, aims to build on the production experience of GStat and provides additional functionality, which enables it to be extended and combined with other tools.

  17. New experiment to investigate cosmic connection to clouds

    CERN Multimedia

    United Kingdom. Particle Physics and Astronomy Research Council

    2006-01-01

    "A novel experiment, known as CLOUD (Cosmics Leaving OUtdoor Droplets), begins taking its first data today with a prototype detector in a prticle beam at CERN, the world's largest laboratory for particle physics." (1,5 page)

  18. Contrasting the co-variability of daytime cloud and precipitation over tropical land and ocean

    Science.gov (United States)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin; Cho, Nayeong; Tan, Jackson

    2018-03-01

    The co-variability of cloud and precipitation in the extended tropics (35° N-35° S) is investigated using contemporaneous data sets for a 13-year period. The goal is to quantify potential relationships between cloud type fractions and precipitation events of particular strength. Particular attention is paid to whether the relationships exhibit different characteristics over tropical land and ocean. A primary analysis metric is the correlation coefficient between fractions of individual cloud types and frequencies within precipitation histogram bins that have been matched in time and space. The cloud type fractions are derived from Moderate Resolution Imaging Spectroradiometer (MODIS) joint histograms of cloud top pressure and cloud optical thickness in 1° grid cells, and the precipitation frequencies come from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) data set aggregated to the same grid. It is found that the strongest coupling (positive correlation) between clouds and precipitation occurs over ocean for cumulonimbus clouds and the heaviest rainfall. While the same cloud type and rainfall bin are also best correlated over land compared to other combinations, the correlation magnitude is weaker than over ocean. The difference is attributed to the greater size of convective systems over ocean. It is also found that both over ocean and land the anti-correlation of strong precipitation with weak (i.e., thin and/or low) cloud types is of greater absolute strength than positive correlations between weak cloud types and weak precipitation. Cloud type co-occurrence relationships explain some of the cloud-precipitation anti-correlations. Weak correlations between weaker rainfall and clouds indicate poor predictability for precipitation when cloud types are known, and this is even more true over land than over ocean.
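
    The primary analysis metric reduces to a correlation over matched grid-cell samples of one cloud type's fraction and one precipitation bin's frequency, as in this synthetic sketch (the data and the strength of the relationship are invented):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 2000                                  # matched 1-degree cell samples
        cb_fraction = rng.beta(2, 8, n)           # cumulonimbus cloud fraction
        heavy_rain_freq = 0.6 * cb_fraction + rng.normal(0.0, 0.05, n)

        r = np.corrcoef(cb_fraction, heavy_rain_freq)[0, 1]
        print(f"correlation coefficient: {r:.2f}")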

  19. Assessment of Global Cloud Datasets from Satellites: Project and Database Initiated by the GEWEX Radiation Panel

    Science.gov (United States)

    Stubenrauch, C. J.; Rossow, W. B.; Kinne, S.; Ackerman, S.; Cesana, G.; Chepfer, H.; Getzewich, B.; Di Girolamo, L.; Guignard, A.; Heidinger, A.; hide

    2012-01-01

    Clouds cover about 70% of the Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the whole globe and across the wide range of spatial and temporal scales that comprise weather and climate variability. Satellite cloud data records now exceed more than 25 years in length. However, climatologies compiled from different satellite datasets can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors. The Global Energy and Water cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel, provided the first coordinated intercomparison of publically available, standard global cloud products (gridded, monthly statistics) retrieved from measurements of multi-spectral imagers (some with multiangle view and polarization capabilities), IR sounders and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature or altitude), cloud radiative properties (optical depth or emissivity), cloud thermodynamic phase and bulk microphysical properties (effective particle size and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. A monthly, gridded database, in common format, facilitates further assessments, climate studies and the evaluation of climate models.

  20. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing has brought parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and MapReduce respectively. Finally, it compares the MPI and OpenMP models with MapReduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.

  1. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case

    International Nuclear Information System (INIS)

    Ciaschini, Vincenzo; Dal Pra, Stefano; Dell'Agnello, Luca

    2015-01-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still fulfils its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing in the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack-based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static split of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from a partition according to suitable policies for the request and release of computing resources. Nodes requested into a partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as a Worker Node in the batch farm to being a cloud compute node made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF. (paper)
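
    A minimal sketch of the partitioning loop follows, assuming a simple demand-comparison policy; the draining command named in the comments is LSF-style, while the site-specific re-registration steps are elided.

        def drain_and_move(host, target):
            """Close the host to new batch jobs, drain it, then switch role."""
            print(f"would run: badmin hclose {host}")  # stop new LSF dispatch
            # ...wait for running jobs to finish, then re-register the host as
            # an OpenStack compute node (or as a batch Worker Node going back).
            print(f"{host} -> {target} partition")

        def rebalance(batch_demand, cloud_demand, batch_hosts, cloud_hosts):
            """One policy step: move a node toward the partition in deficit."""
            if cloud_demand > batch_demand and len(batch_hosts) > 1:
                drain_and_move(batch_hosts.pop(), "cloud")
            elif batch_demand > cloud_demand and len(cloud_hosts) > 1:
                drain_and_move(cloud_hosts.pop(), "batch")

        rebalance(batch_demand=120, cloud_demand=300,
                  batch_hosts=["wn-01", "wn-02"], cloud_hosts=["cn-01"])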

  2. Dynamic partitioning as a way to exploit new computing paradigms: the cloud use case.

    Science.gov (United States)

    Ciaschini, Vincenzo; Dal Pra, Stefano; dell'Agnello, Luca

    2015-12-01

    The WLCG community and many groups in the HEP community have based their computing strategy on the Grid paradigm, which proved successful and still fulfils its goals. However, Grid technology has not spread much to other communities; in the commercial world, the cloud paradigm is the emerging way to provide computing services. WLCG experiments aim to integrate their existing computing model with cloud deployments and to take advantage of so-called opportunistic resources (including HPC facilities), which are usually not Grid compliant. One feature missing in the most common cloud frameworks is the concept of a job scheduler, which plays a key role in a traditional computing centre by enabling fair-share-based access to the resources for the experiments in a scenario where demand greatly outstrips availability. At CNAF we are investigating the possibility of accessing the Tier-1 computing resources as an OpenStack-based cloud service. The system, exploiting the dynamic partitioning mechanism already being used to enable multicore computing, allowed us to avoid a static split of the computing resources in the Tier-1 farm while permitting a share-friendly approach. The hosts in a dynamically partitioned farm may be moved to or from a partition according to suitable policies for the request and release of computing resources. Nodes requested into a partition switch their role and become available to play a different one. In the cloud use case, hosts may switch from acting as a Worker Node in the batch farm to being a cloud compute node made available to tenants. In this paper we describe the dynamic partitioning concept, its implementation and its integration with our current batch system, LSF.

  3. The Representation of Tropical Cyclones Within the Global Non-Hydrostatic Goddard Earth Observing System Model (GEOS-5) at Cloud-Permitting Resolutions

    Science.gov (United States)

    Putman, William M.

    2010-01-01

    The Goddard Earth Observing System Model (GEOS-5), an earth system model developed in the NASA Global Modeling and Assimilation Office (GMAO), has integrated the non-hydrostatic finite-volume dynamical core on the cubed-sphere grid. The extension to a non-hydrostatic dynamical framework and the quasi-uniform cubed-sphere geometry permit the efficient exploration of global weather and climate modeling at cloud-permitting resolutions of 10- to 4-km on today's high-performance computing platforms. We have explored a series of incremental increases in global resolution with GEOS-5 from its standard 72-level 27-km resolution (approx. 5.5 million cells covering the globe from the surface to 0.1 hPa) down to 3.5-km (approx. 3.6 billion cells).

  4. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-Cloud Aerosols Using CALIOP and MODIS Data

    Science.gov (United States)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2014-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts rigorously for the overlap of aerosol and cloud by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid-scale cloud and aerosol variations on the DRE are accounted for. It is computationally efficient because it uses grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in the radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with the more rigorous pixel-level computation within 4. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over the global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.
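
    The grid-level computation amounts to weighting a pre-computed look-up table by the joint histogram of cloud optical depth (COT) and cloud top pressure (CTP). The sketch below shows only the bookkeeping, with invented LUT values and bin definitions:

        import numpy as np

        cot_bins, ctp_bins = 6, 7
        rng = np.random.default_rng(3)
        # Joint histogram of (COT, CTP) for one grid cell; fractions sum to 1.
        joint_hist = rng.dirichlet(np.ones(cot_bins * ctp_bins))
        joint_hist = joint_hist.reshape(cot_bins, ctp_bins)

        aod = 0.4                              # above-cloud aerosol optical depth
        cot_mid = np.array([0.5, 2.0, 5.0, 10.0, 25.0, 60.0])
        # Placeholder LUT: DRE [W m-2] grows with the underlying cloud's COT.
        lut = (30.0 * aod * np.log1p(cot_mid))[:, None] * np.ones((1, ctp_bins))

        grid_dre = np.sum(joint_hist * lut)    # grid-cell mean DRE [W m-2]
        print(f"above-cloud aerosol DRE: {grid_dre:.1f} W m-2")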

  5. Cold H I clouds near the supernova remnant W44

    International Nuclear Information System (INIS)

    Sato, F.

    1986-01-01

    The cold H I clouds near the supernova remnant W44 are investigated using the Maryland-Green Bank Survey (Westerhout 1973). Several clouds with a mean diameter of about 20 pc are distributed in the region. They do not seem to form a shell around W44, contrary to the suggestion by Knapp and Kerr (1974) based on low-resolution data at coarse grids. Some of them form a chain, about 100 pc in length, extending approximately along the galactic equator. It resembles the cold H I clouds near W3 and W4. The major constituent of the clouds is probably the hydrogen molecule, and the total mass of the entire complex amounts to 25,000-81,000 solar masses. The estimated Jeans mass indicates that they will contract into dense molecular clouds. Therefore, it may safely be concluded that the cold H I cloud complex near W44 is a giant molecular cloud at an early evolutionary stage. 14 references

  6. Technical Research on the Electric Power Big Data Platform of Smart Grid

    OpenAIRE

    Ruiguang MA; Haiyan Wang; Quanming Zhang; Yuan Liang

    2017-01-01

    By elaborating on the relationship among electric power big data, cloud computing and the smart grid, this paper puts forward a general framework for an electric power big data platform based on the smart grid. The general framework of the platform is divided into five layers, namely the data source layer, the data integration and storage layer, the data processing and scheduling layer, the data analysis layer and the application layer. This paper makes an in-depth exploration and studies the integrated manage...

  7. Smart grids clouds, communications, open source, and automation

    CERN Document Server

    Bakken, David

    2014-01-01

    The utilization of sensors, communications, and computer technologies to create greater efficiency in the generation, transmission, distribution, and consumption of electricity will enable better management of the electric power system. As the use of smart grid technologies grows, utilities will be able to automate meter reading and billing and consumers will be more aware of their energy usage and the associated costs. The results will require utilities and their suppliers to develop new business models, strategies, and processes. With an emphasis on reducing costs and improving return on inve

  8. An Informatics Approach to Demand Response Optimization in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Aman, Saima; Cao, Baohua; Giakkoupis, Mike; Kumbhare, Alok; Zhou, Qunzhi; Paul, Donald; Fern, Carol; Sharma, Aditya; Prasanna, Viktor K

    2011-03-03

    Power utilities are increasingly rolling out “smart” grids with the ability to track consumer power usage in near real-time using smart meters that enable bidirectional communication. However, the true value of smart grids is unlocked only when the veritable explosion of data that will become available is ingested, processed, analyzed and translated into meaningful decisions. These include the ability to forecast electricity demand, respond to peak load events, and improve sustainable use of energy by consumers, and are made possible by energy informatics. Information and software system techniques for a smarter power grid include pattern mining and machine learning over complex events and integrated semantic information, distributed stream processing for low latency response, Cloud platforms for scalable operations and privacy policies to mitigate information leakage in an information rich environment. Such an informatics approach is being used in the DoE sponsored Los Angeles Smart Grid Demonstration Project, and the resulting software architecture will lead to an agile and adaptive Los Angeles Smart Grid.

  9. Scheduling strategies for cycle scavenging in multicluster grid systems

    NARCIS (Netherlands)

    Sonmez, O.O.; Grundeken, B.; Mohamed, H.H.; Iosup, A.; Epema, D.H.J.

    2009-01-01

    The use of today's multicluster grids exhibits periods of submission bursts alternating with periods of normal use and even idleness. To avoid resource contention, many users employ observational scheduling, that is, they postpone the submission of relatively low-priority jobs until a cluster becomes

  10. A Novel Market-Oriented Dynamic Collaborative Cloud Service Platform

    Science.gov (United States)

    Hassan, Mohammad Mehedi; Huh, Eui-Nam

    In today's world the emerging Cloud computing (Weiss, 2007) offers a new computing model where resources such as computing power, storage, online applications and networking infrastructures can be shared as "services" over the internet. Cloud providers (CPs) are incentivized by the profits to be made by charging consumers for access to these services. Consumers, such as enterprises, are attracted by the opportunity to reduce or eliminate costs associated with "in-house" provision of these services.

  11. A Review of Systems and Technologies for Smart Homes and Smart Grids

    Directory of Open Access Journals (Sweden)

    Gabriele Lobaccaro

    2016-05-01

    Full Text Available In the current era of smart homes and smart grids, advanced technological systems that allow the automation of domestic tasks are developing rapidly. There are numerous technologies and applications that can be installed in smart homes today. They enable communication between home appliances and users, and enhance home appliances' automation, monitoring and remote control capabilities. This review article, introducing the concept of the smart home and the advent of the smart grid, investigates technologies for smart homes. Technical descriptions of the systems are presented, pointing out the advantages and disadvantages of each technology and product available on the market today. Barriers, challenges, benefits and future trends regarding the technologies and the role of users are also discussed.

  12. Hierarchical Threshold Adaptive for Point Cloud Filter Algorithm of Moving Surface Fitting

    Directory of Open Access Journals (Sweden)

    ZHU Xiaoxiao

    2018-02-01

    Full Text Available In order to improve the accuracy, efficiency and adaptability of point cloud filtering algorithms, a hierarchical threshold-adaptive point cloud filtering algorithm based on moving surface fitting is proposed. Firstly, noisy points are removed using a statistical histogram method. Secondly, a grid index is established by grid segmentation, and the surface equation is fitted through the lowest points among the neighbourhood grids. The real and fitted heights are calculated, and their elevation difference is tested against a threshold. Finally, in order to improve the filtering accuracy, hierarchical filtering is used to change the grid size and automatically set the neighbourhood size and threshold until the filtering result reaches the accuracy requirement. The test data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) are used to verify the algorithm. The type I, type II and total errors are 7.33%, 10.64% and 6.34%, respectively. The algorithm is compared with the eight classical filtering algorithms published by ISPRS. The experimental results show that the method is well adapted and produces highly accurate filtering results.
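
    One pass of the method can be sketched as follows, with a least-squares plane standing in for the moving surface and all thresholds illustrative:

        import numpy as np

        def classify_ground(points, cell, dz_max):
            """Fit a plane through the lowest point of each grid cell and flag
            points whose elevation difference exceeds dz_max as non-ground."""
            ij = np.floor(points[:, :2] / cell).astype(int)
            seeds = {}
            for k, key in enumerate(map(tuple, ij)):
                if key not in seeds or points[k, 2] < points[seeds[key], 2]:
                    seeds[key] = k
            seed_pts = points[list(seeds.values())]
            A = np.c_[seed_pts[:, :2], np.ones(len(seed_pts))]
            coef, *_ = np.linalg.lstsq(A, seed_pts[:, 2], rcond=None)
            fitted = points[:, :2] @ coef[:2] + coef[2]
            return np.abs(points[:, 2] - fitted) <= dz_max

        rng = np.random.default_rng(4)
        pts = np.c_[rng.uniform(0, 50, (5000, 2)), rng.normal(0.0, 0.3, 5000)]
        keep = pts[classify_ground(pts, cell=5.0, dz_max=0.5)]
        # Hierarchical pass: smaller grid size and tighter threshold on `keep`.
        keep = keep[classify_ground(keep, cell=2.5, dz_max=0.3)]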

  13. On transferring the grid technology to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Sax, Ulrich; Dickmann, Frank; Lippert, Joerg; Solodenko, Juri; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which resulted in the Grid. The inter-domain transfer of this technology has been an intuitive process. Some difficulties facing the life science community can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies that have achieved a certain stability. Grid and Cloud solutions are technologies that are still in flux. We illustrate how Grid computing creates new difficulties for the technology transfer process that are not considered in Bozeman's model. We show why the success of health Grids should be measured by the qualified scientific human capital and opportunities created, and not primarily by the market impact. With two examples we show how the Grid technology transfer theory corresponds to reality. We conclude with recommendations that can help improve the adoption of Grid solutions in the biomedical community. These results give a more concise explanation of the difficulties most life science IT projects face in their late funding periods, and show some leveraging steps that can help to overcome the "vale of tears".

  14. CLOUD-BASED PLATFORM FOR CREATING AND SHARING WEB MAPS

    Directory of Open Access Journals (Sweden)

    Jean Pierre Gatera

    2014-01-01

    Full Text Available The rise of cloud computing is one of the most important things happening in information technology today. While many things are moving into the cloud, this trend has also reached the Geographic Information System (GIS) world. For users of GIS technology, the cloud opens new possibilities for sharing web maps, applications and spatial data. The goal of this presentation/demo is to demonstrate ArcGIS Online, a cloud-based collaborative platform that allows you to easily and quickly create interactive web maps that you can share with anyone. With ready-to-use content, apps, and templates you can produce web maps right away. And no matter what you use - desktops, browsers, smartphones, or tablets - you always have access to your content.

  15. NASA Cloud-Based Climate Data Services

    Science.gov (United States)

    McInerney, M. A.; Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, W. D., III; Thompson, J. H.; Gill, R.; Jasen, J. E.; Samowich, B.; Pobre, Z.; Salmon, E. M.; Rumney, G.; Schardt, T. D.

    2012-12-01

    Cloud-based scientific data services are becoming an important part of NASA's mission. Our technological response is built around the concept of specialized virtual climate data servers, repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service (VaaS). A virtual climate data server (vCDS) is an Open Archive Information System (OAIS) compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have deployed vCDS Version 1.0 in the Amazon EC2 cloud using S3 object storage and are using the system to deliver a subset of NASA's Intergovernmental Panel on Climate Change (IPCC) data products to the latest CentOS federated version of Earth System Grid Federation (ESGF), which is also running in the Amazon cloud. vCDS-managed objects are exposed to ESGF through FUSE (Filesystem in User Space), which presents a POSIX-compliant filesystem abstraction to applications such as the ESGF server that require such an interface. A vCDS manages data as a distinguished collection for a person, project, lab, or other logical unit. A vCDS can manage a collection across multiple storage resources using rules and microservices to enforce collection policies. And a vCDS can federate with other vCDSs to manage multiple collections over multiple resources, thereby creating what can be thought of as an ecosystem of managed collections. With the vCDS approach, we are trying to enable the full information lifecycle management of scientific data collections and make tractable the task of providing diverse climate data services. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. [Figures: (A) vCDS/ESG system stack. (B) Conceptual architecture for NASA cloud-based data services.]

  16. GStat 2.0: Grid Information System Status Monitoring

    International Nuclear Information System (INIS)

    Field, Laurence; Huang, Joanna; Tsai, Min

    2010-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE information system, which is a hierarchical system built out of more than 260 site-level and approximately 70 global aggregation services. It also checks the information content and presents summary and history displays for Grid Operators and System Administrators. A major new version, GStat 2.0, aims to build on the production experience of GStat and provides additional functionality, which enables it to be extended and combined with other tools. This paper describes the new architecture used for GStat 2.0 and how it can be used at all levels to help provide a reliable information system.

  17. Fine-scale application of WRF-CAM5 during a dust storm episode over East Asia: Sensitivity to grid resolutions and aerosol activation parameterizations

    Science.gov (United States)

    Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin

    2018-03-01

    An advanced online-coupled meteorology and chemistry model WRF-CAM5 has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36-, 12-, and 4-km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance and investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations and to examine aerosol-cloud interactions and their impacts on the air quality. A comprehensive model evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model can well predict major meteorological variables such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation across different resolutions with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP) due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolutions, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations with an alternative (i.e., the Fountoukis and Nenes (FN) series scheme) and the default (i.e., AG scheme) aerosol activation scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. Sensitivity simulations using the FN series scheme also have large impacts on

  18. Designing a Secure Storage Repository for Sharing Scientific Datasets using Public Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Kumbhare, Alok [Univ. of Southern California, Los Angeles, CA (United States); Simmhan, Yogesth [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor [Univ. of Southern California, Los Angeles, CA (United States)

    2011-11-14

    As Cloud platforms gain increasing traction among scientific and business communities for outsourcing storage, computing and content delivery, there is also growing concern about the associated loss of control over private data hosted in the Cloud. In this paper, we present an architecture for a secure data repository service designed on top of a public Cloud infrastructure to support multi-disciplinary scientific communities dealing with personal and human subject data, motivated by the smart power grid domain. Our repository model allows users to securely store and share their data in the Cloud without revealing the plain text to unauthorized users, the Cloud storage provider or the repository itself. The system masks file names, user permissions and access patterns while providing auditing capabilities with provable data updates.
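
    The paper's cryptographic design is more elaborate than can be shown here, but the core idea of keeping plain text and file names away from the provider can be illustrated with a minimal client-side sketch; the Fernet scheme and the hashing of names are our stand-ins, not the repository's actual construction:

        import hashlib
        from cryptography.fernet import Fernet

        # Minimal sketch of client-side protection before upload. The paper's
        # actual scheme adds shared permissions, auditing and provable updates;
        # here we only illustrate masking names and hiding plain text.
        key = Fernet.generate_key()   # held by the data owner, never uploaded
        cipher = Fernet(key)

        def prepare_for_upload(filename: str, payload: bytes):
            masked_name = hashlib.sha256(filename.encode()).hexdigest()
            ciphertext = cipher.encrypt(payload)
            return masked_name, ciphertext  # only these reach the provider

        name, blob = prepare_for_upload("subject_042_readings.csv", b"kWh,0.42\n")
        print(name[:16], len(blob))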

  19. Scanning Cloud Radar Observations at Azores: Preliminary 3D Cloud Products

    Energy Technology Data Exchange (ETDEWEB)

    Kollias, P.; Johnson, K.; Jo, I.; Tatarevic, A.; Giangrande, S.; Widener, K.; Bharadwaj, N.; Mead, J.

    2010-03-15

    The deployment of the Scanning W-Band ARM Cloud Radar (SWACR) during the AMF campaign at Azores signals the first deployment of an ARM Facility-owned scanning cloud radar and offers a prelude for the type of 3D cloud observations that ARM will have the capability to provide at all the ARM Climate Research Facility sites by the end of 2010. The primary objective of the deployment of Scanning ARM Cloud Radars (SACRs) at the ARM Facility sites is to map continuously (operationally) the 3D structure of clouds and shallow precipitation and to provide 3D microphysical and dynamical retrievals for cloud life cycle and cloud-scale process studies. This is a challenging task, never attempted before, and requires significant research and development efforts in order to understand the radar's capabilities and limitations. At the same time, we need to look beyond the radar meteorology aspects of the challenge and ensure that the hardware and software capabilities of the new systems are utilized for the development of 3D data products that address the scientific needs of the new Atmospheric System Research (ASR) program. The SWACR observations at Azores provide a first look at such observations and the challenges associated with their analysis and interpretation. The set of scan strategies applied during the SWACR deployment and their merit is discussed. The scan strategies were adjusted for the detection of marine stratocumulus and shallow cumulus that were frequently observed at the Azores deployment. Quality control procedures for the radar reflectivity and Doppler products are presented. Finally, preliminary 3D-Active Remote Sensing of Cloud Locations (3D-ARSCL) products on a regular grid will be presented, and the challenges associated with their development discussed. In addition to data from the Azores deployment, limited data from the follow-up deployment of the SWACR at the ARM SGP site will be presented. This effort provides a blueprint for the effort required

  20. Off grid Solar power supply: the real green development

    International Nuclear Information System (INIS)

    Dellinger, B.; Mansard, M.

    2010-01-01

    Solar experience now spans 30 years. In spite of the tremendous growth of the developed world's grid-connect market, quite a number of companies remain seriously involved in the off-grid sector. Solar started in the field as the sole solution for giving rural communities access to energy and water. With major actors involved at an early stage, a number of reliable technical solutions were developed and implemented. These solutions have gradually drawn the attention of industrial companies investing in emerging countries and needing reliable energy sources. On top of improving standards of living, off-grid solar solutions also create economic opportunities for the local private sector, which gets involved in maintenance and services around the energy system. Today, hundreds of thousands of sites are in daily operation. However, the needs remain extremely high. That is why off-grid solar remains a major tool for sustainable development. (author)

  1. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Full Text Available Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications, such as matrix–vector operations and the NAS parallel benchmarks, as well as DOUG (Domain decomposition On Unstructured Grids) on the cloud. DOUG is an open source software package for the parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit considerably and scale reasonably on the cloud. We could also observe the limitations of the cloud and compare it with clusters in terms of performance. However, to run scientific applications efficiently on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, such as the MapReduce framework. Several iterative and embarrassingly parallel algorithms are reduced to the MapReduce model and their performance is measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
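
    The Hadoop finding follows from the fact that an iterative solver needs one MapReduce job per iteration, re-reading its state from storage each round. A toy sketch (plain Python, not Hadoop; the matrix and iteration count are invented) makes the job-per-iteration structure explicit:

        # Toy sketch: Jacobi-style iteration expressed as repeated map/collect
        # rounds. On Hadoop, each round would be a separate job that re-reads
        # its input from disk, which is the overhead penalizing iterative solvers.
        A = [[4.0, 1.0], [2.0, 5.0]]   # diagonally dominant, so Jacobi converges
        b = [1.0, 2.0]
        x = [0.0, 0.0]

        def jacobi_row(i):
            off_diag = sum(A[i][j] * x[j] for j in range(len(x)) if j != i)
            return i, (b[i] - off_diag) / A[i][i]

        for _round in range(25):                     # one "job" per iteration
            mapped = map(jacobi_row, range(len(x)))  # map: update each row
            x = [v for _i, v in sorted(mapped)]      # reduce/collect new state
        print(x)  # approaches the solution of A x = b, i.e. [1/6, 1/3]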

  2. Strengthen Cloud Computing Security with Federal Identity Management Using Hierarchical Identity-Based Cryptography

    Science.gov (United States)

    Yan, Liang; Rong, Chunming; Zhao, Gansen

    More and more companies are beginning to provide different kinds of cloud computing services for Internet users; at the same time, these services also bring security problems. Currently, the majority of cloud computing systems provide a digital identity for users to access their services, which brings some inconvenience for a hybrid cloud that includes multiple private and/or public clouds. Today most cloud computing systems use traditional asymmetric public key cryptography to provide data security and mutual authentication. Identity-based cryptography has some attractive characteristics that seem to fit the requirements of cloud computing well. In this paper, by adopting federated identity management together with hierarchical identity-based cryptography (HIBC), not only the key distribution but also the mutual authentication can be simplified in the cloud.
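
    Real HIBC relies on pairing-based public-key cryptography, which is beyond a short example, but the hierarchical delegation idea can be illustrated with a toy symmetric analogue: each level derives a child key from its own key and the child's identity string. All names and the HMAC construction below are ours, not the paper's scheme:

        import hmac, hashlib

        # Toy analogue of hierarchical key delegation (NOT real HIBC): each
        # level derives its child's key from its own key and the child's
        # identity, so the root can serve a whole hierarchy of identities.
        def derive(parent_key: bytes, child_id: str) -> bytes:
            return hmac.new(parent_key, child_id.encode(), hashlib.sha256).digest()

        root_key   = b"federation-root-master-secret"       # hypothetical root
        cloud_key  = derive(root_key, "cloud.example.org")   # level-1 delegation
        tenant_key = derive(cloud_key, "tenant-42")          # level-2 delegation

        # The root can recompute any descendant key along the identity path,
        # mirroring how a root key generator serves the whole hierarchy.
        assert tenant_key == derive(derive(root_key, "cloud.example.org"),
                                    "tenant-42")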

  3. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    Science.gov (United States)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work remains, however, to scale current analyses, and recent lessons learned need to be integrated into the next generation of Grid middleware.

  4. Multi-Spectral Cloud Retrievals from Moderate Image Spectrometer (MODIS)

    Science.gov (United States)

    Platnick, Steven

    2004-01-01

    MODIS observations from the NASA EOS Terra spacecraft (1030 local time equatorial sun-synchronous crossing), launched in December 1999, have provided a unique set of Earth observation data. With the launch of the NASA EOS Aqua spacecraft (1330 local time crossing) in May 2002, two MODIS daytime (sunlit) and nighttime observations are now available in a 24-hour period, allowing some measure of diurnal variability. A comprehensive set of remote sensing algorithms for cloud masking and the retrieval of cloud physical and optical properties has been developed by members of the MODIS atmosphere science team. The archived products from these algorithms have applications in climate modeling, climate change studies, numerical weather prediction, as well as fundamental atmospheric research. In addition to an extensive cloud mask, products include cloud-top properties (temperature, pressure, effective emissivity), cloud thermodynamic phase, cloud optical and microphysical parameters (optical thickness, effective particle radius, water path), as well as derived statistics. An overview of the instrument and cloud algorithms will be presented along with various examples, including an initial analysis of several operational global gridded (Level-3) cloud products from the two platforms. Statistics of cloud optical and microphysical properties as a function of latitude for land and ocean regions will be shown. Current algorithm research efforts will also be discussed.

  5. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-cloud Aerosols over Ocean Using CALIOP and MODIS Data

    Science.gov (United States)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2013-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.
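
    The grid-level aggregation described above can be sketched as a histogram-weighted look-up: the joint histogram of cloud optical depth and cloud top pressure weights a pre-computed DRE table. A minimal sketch with invented numbers:

        import numpy as np

        # Sketch of the grid-level aggregation: a joint histogram of cloud
        # optical depth (COT) x cloud top pressure (CTP) weights a pre-computed
        # DRE look-up table. All values below are invented for illustration.
        cot_bins = np.array([1.0, 5.0, 20.0])    # bin-centre COT values
        ctp_bins = np.array([900.0, 700.0])      # bin-centre CTP values (hPa)

        # Fraction of the grid cell in each (COT, CTP) bin; sums to the
        # low-cloud fraction of the cell.
        joint_hist = np.array([[0.10, 0.05],
                               [0.30, 0.10],
                               [0.15, 0.05]])

        # Pre-computed instantaneous shortwave DRE (W m^-2) for the cell's
        # above-cloud aerosol optical depth, one entry per (COT, CTP) bin.
        dre_lut = np.array([[ 2.0,  1.5],
                            [ 8.0,  6.0],
                            [15.0, 12.0]])

        grid_dre = np.sum(joint_hist * dre_lut)  # histogram-weighted mean
        print(f"grid-mean above-cloud DRE ~ {grid_dre:.1f} W m^-2")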

  6. Processing of Cloud Databases for the Development of an Automated Global Cloud Climatology

    Science.gov (United States)

    1991-06-30

    cloud amounts in each DOE grid box. The actual population values were coded into one- and two-digit codes primarily for printing purposes. [Station listing table (WMO identifiers, coordinates and station names) omitted.] According to Lund, Grantham, and Davis (1980), the quality of the whole sky photographs used in producing the WSP digital data ensemble was

  7. A Review of Systems and Technologies for Smart Homes and Smart Grids

    OpenAIRE

    Lobaccaro, Gabriele; Carlucci, Salvatore; Löfström, Erica

    2016-01-01

    In the current era of smart homes and smart grids, advanced technological systems that allow the automation of domestic tasks are developing rapidly. Numerous technologies and applications can be installed in smart homes today. They enable communication between home appliances and users, and enhance home appliances’ automation, monitoring and remote control capabilities. This review article, by introducing the concept of the smart home and the advent of the smart grid, investiga...

  8. Analisis Perbandingan Antara Cloud Computing Dengan Sistem Informasi Konvensional

    OpenAIRE

    Harsono, Bagoes

    2011-01-01

    In this era of globalization, nothing can be separated from technology. The development of advanced technologies makes things easier and cheaper. This is what is happening in the development of information technology today. The presence of a new paradigm keeps everyone interested in something new. Cloud computing has arrived in the midst of the community, presenting several highlights. Although still a novelty, quite a few people have already benefited from this cloud. Still...

  9. Integration of cloud-based storage in BES III computing environment

    International Nuclear Information System (INIS)

    Wang, L; Hernandez, F; Deng, Z

    2014-01-01

    We present an on-going work that aims to evaluate the suitability of cloud-based storage as a supplement to the Lustre file system for storing experimental data for the BES III physics experiment and as a backend for storing files belonging to individual members of the collaboration. In particular, we discuss our findings regarding the support of cloud-based storage in the software stack of the experiment. We report on our development work that improves the support of CERN's ROOT data analysis framework and allows efficient remote access to data through several cloud storage protocols. We also present our efforts providing the experiment with efficient command line tools for navigating and interacting with cloud storage-based data repositories both from interactive sessions and grid jobs.

  10. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  11. Prognostic cloud water in the Los Alamos general circulation model

    International Nuclear Information System (INIS)

    Kristjansson, J.E.; Kao, C.Y.J.

    1993-01-01

    Most of today's general circulation models (GCMs) have a greatly simplified treatment of condensation and clouds. Recent observational studies of the earth's radiation budget have suggested cloud-related feedback mechanisms to be of tremendous importance for the issue of global change. Thus, there has arisen an urgent need for improvements in the treatment of clouds in GCMs, especially as the clouds relate to radiation. In the present paper, we investigate the effects of introducing prognostic cloud water into the Los Alamos GCM. The cloud water field, produced by both stratiform and convective condensation, is subject to 3-dimensional advection and vertical diffusion. The cloud water enters the radiation calculations through the longwave emissivity calculations. Results from several sensitivity simulations show that realistic cloud water and precipitation fields can be obtained with the applied method. Comparisons with observations show that the most realistic results are obtained when more sophisticated schemes for moist convection are introduced at the same time. The model's cold bias is reduced and the zonal winds become stronger, due to more realistic tropical convection

  12. A fuzzy neural network model to forecast the percent cloud coverage and cloud top temperature maps

    Directory of Open Access Journals (Sweden)

    Y. Tulunay

    2008-12-01

    Full Text Available Atmospheric processes are highly nonlinear. A small group at the METU in Ankara has been working on a fuzzy data driven generic model of nonlinear processes. The model developed is called the Middle East Technical University Fuzzy Neural Network Model (METU-FNN-M). The METU-FNN-M consists of a Fuzzy Inference System (METU-FIS), a data driven Neural Network module (METU-FNN) of one hidden layer and several neurons, and a mapping module, which employs the Bezier Surface Mapping technique. In this paper, the percent cloud coverage (%CC) and cloud top temperatures (CTT) are forecast one month ahead of time at 96 grid locations. The probable influence of cosmic rays and sunspot numbers on cloudiness is considered by using the METU-FNN-M.

  13. THE MASS-LOSS RETURN FROM EVOLVED STARS TO THE LARGE MAGELLANIC CLOUD. IV. CONSTRUCTION AND VALIDATION OF A GRID OF MODELS FOR OXYGEN-RICH AGB STARS, RED SUPERGIANTS, AND EXTREME AGB STARS

    International Nuclear Information System (INIS)

    Sargent, Benjamin A.; Meixner, M.; Srinivasan, S.

    2011-01-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the 'Grid of Red supergiant and Asymptotic giant branch star ModelS' (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10³ to 10⁶ L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10⁻⁴ to 26. From an initial grid of ∼1200 2Dust models, we create a larger grid of ∼69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.

  14. Electron cloud observations at the ISIS Proton Synchrotron

    CERN Document Server

    Pertica, A.

    2013-04-22

    The build up of electron clouds inside a particle accelerator vacuum chamber can produce strong transverse and longitudinal beam instabilities which in turn can lead to high levels of beam loss often requiring the accelerator to be run below its design specification. To study the behaviour of electron clouds at the ISIS Proton Synchrotron, a Micro-Channel Plate (MCP) based electron cloud detector has been developed. The detector is based on the Retarding Field Analyser (RFA) design and consists of a retarding grid, which allows energy analysis of the electron signal, and a MCP assembly placed in front of the collector plate. The MCP assembly provides a current gain over the range 300 to 25K, thereby increasing the signal to noise ratio and dynamic range of the measurements. This paper presents the first electron cloud observations at the ISIS Proton Synchrotron. These results are compared against signals from a beam position monitor and a fast beam loss monitor installed at the same location.

  15. Towards autonomous vehicular clouds

    Directory of Open Access Journals (Sweden)

    Stephan Olariu

    2011-09-01

    Full Text Available The dawn of the 21st century has seen a growing interest in vehicular networking and its myriad potential applications. The initial view of practitioners and researchers was that radio-equipped vehicles could keep the drivers informed about potential safety risks and increase their awareness of road conditions. The view then expanded to include access to the Internet and associated services. This position paper proposes and promotes a novel and more comprehensive vision, namely that advances in vehicular networks, embedded devices and cloud computing will enable the formation of autonomous clouds of vehicular computing, communication, sensing, power and physical resources. Hence, we coin the term autonomous vehicular clouds (AVCs). A key feature distinguishing AVCs from conventional cloud computing is that mobile AVC resources can be pooled dynamically to serve authorized users and to enable autonomy in real-time service sharing and management on terrestrial, aerial, or aquatic pathways or theaters of operations. In addition to general-purpose AVCs, we also envision the emergence of specialized AVCs such as mobile analytics laboratories. Furthermore, we envision that the integration of AVCs with ubiquitous smart infrastructures including intelligent transportation systems, smart cities and smart electric power grids will have an enormous societal impact, enabling ubiquitous utility cyber-physical services at the right place, right time and with right-sized resources.

  16. Cloud Computing Boosts Business Intelligence of Telecommunication Industry

    Science.gov (United States)

    Xu, Meng; Gao, Dan; Deng, Chao; Luo, Zhiguo; Sun, Shaoling

    Business Intelligence has become an attractive topic in today's data-intensive applications, especially in the telecommunication industry. Meanwhile, Cloud Computing, which provides IT infrastructure with excellent scalability, large-scale storage, and high performance, has become an effective way to implement parallel data processing and data mining algorithms. BC-PDM (Big Cloud based Parallel Data Miner) is a new MapReduce-based parallel data mining platform developed by CMRI (China Mobile Research Institute) to fit the urgent requirements of business intelligence in the telecommunication industry. In this paper, the architecture, functionality and performance of BC-PDM are presented, together with an experimental evaluation and case studies of its applications. The evaluation results demonstrate both the usability and the cost-effectiveness of a Cloud Computing based Business Intelligence system in telecommunication industry applications.

  17. Understanding the Benefits of Dispersed Grid-Connected Photovoltaics: From Avoiding the Next Major Outage to Taming Wholesale Power Markets

    International Nuclear Information System (INIS)

    Letendre, Steven E.; Perez, Richard

    2006-01-01

    Thanks to new solar resource assessment techniques using cloud cover data available from geostationary satellites, it is apparent that grid-connected PV installations can serve to enhance electric grid reliability, preventing or hastening recovery from major power outages and serving to mitigate extreme price spikes in wholesale energy markets. (author)

  18. Enterprise content management in the cloud

    Directory of Open Access Journals (Sweden)

    Jaroslava Klegová

    2013-01-01

    Full Text Available At present, the attention of many organizations is turning to Enterprise Content Management (ECM) systems. Unstructured content grows exponentially, and an ECM system helps to capture, store, manage, integrate and deliver all forms of content across the company. Today, decision makers have the possibility to move ECM systems to the cloud and take advantage of cloud computing. A cloud solution can provide a crucial competitive advantage; for example, it can reduce fixed IT department costs and ensure faster ECM implementation. To achieve the maximum benefit from implementing ECM in the cloud, it is important to understand all possibilities and actions during the implementation. In this paper, a general model of ECM implementation in the cloud is proposed and described. Risk may relate to all aspects of the implementation, such as cost, schedule or quality, which is why the introduced model places emphasis on risk. The aim of the article is to identify the risks of ECM implementation in the cloud and quantify their impact. The article focuses on the Monte Carlo method, a technique that uses random numbers and probability to solve problems. Based on interviews with IT managers, an example of possible scenarios is created and the risk is evaluated using the Monte Carlo method.
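
    The Monte Carlo step can be sketched in a few lines: each identified risk occurs with some probability and, when it occurs, draws an impact from a range; repeating many trials yields the aggregate risk distribution. All probabilities and costs below are invented for illustration:

        import random

        # Minimal sketch of Monte Carlo risk quantification for an ECM-in-the-
        # cloud project. Each risk occurs with probability p and, if it occurs,
        # adds a cost drawn uniformly from a range. All figures are invented.
        risks = [
            {"name": "data migration overrun",  "p": 0.30, "cost": (5_000, 20_000)},
            {"name": "integration rework",      "p": 0.20, "cost": (10_000, 40_000)},
            {"name": "vendor lock-in exit fee", "p": 0.05, "cost": (30_000, 80_000)},
        ]

        def one_trial():
            total = 0.0
            for r in risks:
                if random.random() < r["p"]:
                    total += random.uniform(*r["cost"])
            return total

        trials = sorted(one_trial() for _ in range(10_000))
        print("mean risk cost :", sum(trials) / len(trials))
        print("95th percentile:", trials[int(0.95 * len(trials))])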

  19. Grid computing and e-science: a view from inside

    Directory of Open Access Journals (Sweden)

    Stefano Cozzini

    2008-06-01

    Full Text Available My intention is to analyze how, where and if grid computing technology is truly enabling a new way of doing science (so-called ‘e-science’). I will base my views on the experiences accumulated thus far in a number of scientific communities, which we have provided with the opportunity of using grid computing. I shall first define some basic terms and concepts and then discuss a number of specific cases in which the use of grid computing has actually made possible a new method for doing science. I will then present a case in which this did not result in a change in research methods. I will try to identify the reasons for these failures and analyze the future evolution of grid computing. I will conclude by introducing and commenting on the concept of ‘cloud computing’, the approach offered and provided by major industrial actors (Google/IBM and Amazon being among the most important) and what impact this technology might have on the world of research.

  20. Ten Years of Cloud Optical and Microphysical Retrievals from MODIS

    Science.gov (United States)

    Platnick, Steven; King, Michael D.; Wind, Galina; Hubanks, Paul; Arnold, G. Thomas; Amarasinghe, Nandana

    2010-01-01

    The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) has undergone extensive improvements and enhancements since the launch of Terra. These changes have included: improvements in the cloud thermodynamic phase algorithm; substantial changes in the ice cloud light scattering look up tables (LUTs); a clear-sky restoral algorithm for flagging heavy aerosol and sunglint; greatly improved spectral surface albedo maps, including the spectral albedo of snow by ecosystem; inclusion of pixel-level uncertainty estimates for cloud optical thickness, effective radius, and water path derived for three error sources that includes the sensitivity of the retrievals to solar and viewing geometries. To improve overall retrieval quality, we have also implemented cloud edge removal and partly cloudy detection (using MOD35 cloud mask 250m tests), added a supplementary cloud optical thickness and effective radius algorithm over snow and sea ice surfaces and over the ocean, which enables comparison with the "standard" 2.1 μm effective radius retrieval, and added a multi-layer cloud detection algorithm. We will discuss the status of the MOD06 algorithm and show examples of pixel-level (Level-2) cloud retrievals for selected data granules, as well as gridded (Level-3) statistics, notably monthly means and histograms (1D and 2D, with the latter giving correlations between cloud optical thickness and effective radius, and other cloud product pairs).

  1. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate models intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (of multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  2. Prognostic cloud water in the Los Alamos general circulation model

    International Nuclear Information System (INIS)

    Kristjansson, J.E.; Kao, C.Y.J.

    1994-01-01

    Most of today's general circulation models (GCMs) have a greatly simplified treatment of condensation and clouds. Recent observational studies of the earth's radiation budget have suggested cloud-related feedback mechanisms to be of tremendous importance for the issue of global change. Thus, an urgent need for improvements in the treatment of clouds in GCMs has arisen, especially as the clouds relate to radiation. In this paper, we investigate the effects of introducing prognostic cloud water into the Los Alamos GCM. The cloud water field, produced by both stratiform and convective condensation, is subject to 3-dimensional advection and vertical diffusion. The cloud water enters the radiation calculations through the longwave emissivity calculations. Results from several sensitivity simulations show that realistic water and precipitation fields can be obtained with the applied method. Comparisons with observations show that the most realistic results are obtained when more sophisticated schemes for moist convection are introduced at the same time. The model's cold bias is reduced and the zonal winds become stronger because of more realistic tropical convection

  3. 3D Cloud Field Prediction using A-Train Data and Machine Learning Techniques

    Science.gov (United States)

    Johnson, C. L.

    2017-12-01

    Validation of cloud process parameterizations used in global climate models (GCMs) would greatly benefit from observed 3D cloud fields at the size comparable to that of a GCM grid cell. For the highest resolution simulations, surface grid cells are on the order of 100 km by 100 km. CloudSat/CALIPSO data provides 1 km width of detailed vertical cloud fraction profile (CFP) and liquid and ice water content (LWC/IWC). This work utilizes four machine learning algorithms to create nonlinear regressions of CFP, LWC, and IWC data using radiances, surface type and location of measurement as predictors and applies the regression equations to off-track locations generating 3D cloud fields for 100 km by 100 km domains. The CERES-CloudSat-CALIPSO-MODIS (C3M) merged data set for February 2007 is used. Support Vector Machines, Artificial Neural Networks, Gaussian Processes and Decision Trees are trained on 1000 km of continuous C3M data. Accuracy is computed using existing vertical profiles that are excluded from the training data and occur within 100 km of the training data. Accuracy of the four algorithms is compared. Average accuracy for one day of predicted data is 86% for the most successful algorithm. The methodology for training the algorithms, determining valid prediction regions and applying the equations off-track is discussed. Predicted 3D cloud fields are provided as inputs to the Ed4 NASA LaRC Fu-Liou radiative transfer code and resulting TOA radiances compared to observed CERES/MODIS radiances. Differences in computed radiances using predicted profiles and observed radiances are compared.
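
    One of the four algorithm families (decision trees) is enough to sketch the regression setup: radiance predictors map to a multi-level cloud profile target, and the fitted model is then applied at off-track locations. A minimal scikit-learn sketch with synthetic stand-ins for the C3M inputs:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        # Minimal sketch of the regression setup described above, using one of
        # the four algorithm families (a decision tree). Synthetic stand-ins:
        # 6 radiance channels as predictors, a 10-level cloud fraction profile
        # as the multi-output target.
        rng = np.random.default_rng(0)
        radiances = rng.uniform(0.0, 1.0, size=(500, 6))   # along-track samples
        profiles = np.clip(radiances @ rng.uniform(0, 1, size=(6, 10)), 0, 1)

        model = DecisionTreeRegressor(max_depth=8)
        model.fit(radiances[:400], profiles[:400])         # train on-track

        # Predict cloud fraction profiles at held-out (off-track) locations.
        predicted = model.predict(radiances[400:])
        print(predicted.shape)  # (100, 10): one vertical profile per location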

  4. Intelligence by design in an entropic power grid

    Science.gov (United States)

    Negrete-Pincetic, Matias Alejandro

    In this work, the term Entropic Grid is coined to describe a power grid with increased levels of uncertainty and dynamics. These new features will require the reconsideration of well-established paradigms in the way of planning and operating the grid and its associated markets. New tools and models able to handle uncertainty and dynamics will form the required scaffolding to properly capture the behavior of the physical system, along with the value of new technologies and policies. The leverage of this knowledge will facilitate the design of new architectures to organize power and energy systems and their associated markets. This work presents several results, tools and models with the goal of contributing to that design objective. A central idea of this thesis is that the definition of products is critical in electricity markets. When markets are constructed with appropriate product definitions in mind, the interference between the physical and the market/financial systems seen in today's markets can be reduced. A key element of evaluating market designs is understanding the impact that salient features of an entropic grid---uncertainty, dynamics, constraints---can have on the electricity markets. Dynamic electricity market models tailored to capture such features are developed in this work. Using a multi-settlement dynamic electricity market, the impact of volatility is investigated. The results show the need to implement policies and technologies able to cope with the volatility of renewable sources. Similarly, using a dynamic electricity market model in which ramping costs are considered, the impacts of those costs on electricity markets are investigated. The key conclusion is that those additional ramping costs, in average terms, are not reflected in electricity prices. These results reveal several difficulties with today's real-time markets. Elements of an alternative architecture to organize these markets are also discussed.

  5. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    Science.gov (United States)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3-D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data to improve the 3-D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting with the requisite spatial resolution and latency needed to predict high-ramp-rate events, obtained from a bottom-up perspective, is strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost upward-looking visible-light CCD sky cameras positioned at 2 km spacing over an area of 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of $200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array a team of 100 citizen scientists using self-owned PDA cameras is being

  6. Intercomparison of aerosol-cloud-precipitation interactions in stratiform orographic mixed-phase clouds

    Science.gov (United States)

    Muhlbauer, A.; Hashino, T.; Xue, L.; Teller, A.; Lohmann, U.; Rasmussen, R. M.; Geresdi, I.; Pan, Z.

    2010-09-01

    Anthropogenic aerosols serve as a source of both cloud condensation nuclei (CCN) and ice nuclei (IN) and affect microphysical properties of clouds. Increasing aerosol number concentrations is hypothesized to retard the cloud droplet coalescence and the riming in mixed-phase clouds, thereby decreasing orographic precipitation. This study presents results from a model intercomparison of 2-D simulations of aerosol-cloud-precipitation interactions in stratiform orographic mixed-phase clouds. The sensitivity of orographic precipitation to changes in the aerosol number concentrations is analysed and compared for various dynamical and thermodynamical situations. Furthermore, the sensitivities of microphysical processes such as coalescence, aggregation, riming and diffusional growth to changes in the aerosol number concentrations are evaluated and compared. The participating numerical models are the model from the Consortium for Small-Scale Modeling (COSMO) with bulk microphysics, the Weather Research and Forecasting (WRF) model with bin microphysics and the University of Wisconsin modeling system (UWNMS) with a spectral ice habit prediction microphysics scheme. All models are operated on a cloud-resolving scale with 2 km horizontal grid spacing. The results of the model intercomparison suggest that the sensitivity of orographic precipitation to aerosol modifications varies greatly from case to case and from model to model. Neither a precipitation decrease nor a precipitation increase is found robustly in all simulations. Qualitative robust results can only be found for a subset of the simulations but even then quantitative agreement is scarce. Estimates of the aerosol effect on orographic precipitation are found to range from -19% to 0% depending on the simulated case and the model. Similarly, riming is shown to decrease in some cases and models whereas it increases in others, which implies that a decrease in riming with increasing aerosol load is not a robust result

  7. The Gas-Grain Chemistry of Galactic Translucent Clouds

    Science.gov (United States)

    Maffucci, Dominique M.; Herbst, Eric

    2016-01-01

    We employ a combination of traditional and modified rate equation approaches to simulate the time-dependent gas-grain chemistry that pertains to molecular species observed in absorption in Galactic translucent clouds towards Sgr B2(N). We solve the kinetic rate laws over a range of relevant physical conditions (gas and grain temperatures, particle density, visual extinction, cosmic ray ionization rate) characteristic of translucent clouds by implementing a new grid module that allows for parallelization of the astrochemical simulations. Gas-phase and grain-surface synthetic pathways, chemical timescales, and associated physical sensitivities are discussed for selected classes of species including the cyanopolyynes, complex cyanides, and simple aldehydes.
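
    Solving kinetic rate laws of this kind amounts to integrating a stiff system of ordinary differential equations. A toy two-reaction sketch with SciPy (rate coefficients and abundances invented, far simpler than the paper's full gas-grain network):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy rate-equation network (NOT the paper's full gas-grain chemistry):
        # A + B -> C with rate coefficient k1, and C destroyed at rate k2,
        # e.g. by cosmic-ray-driven processes. All values are illustrative.
        k1, k2 = 1.0e-9, 3.0e-6   # cm^3 s^-1 and s^-1, invented

        def rates(_t, y):
            nA, nB, nC = y
            form = k1 * nA * nB
            dest = k2 * nC
            return [-form, -form, form - dest]

        sol = solve_ivp(rates, t_span=(0.0, 1.0e7), y0=[1.0e4, 1.0e4, 0.0],
                        method="LSODA", rtol=1e-8)
        print(sol.y[:, -1])   # abundances (cm^-3) at the final time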

  8. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Science.gov (United States)

    Shatil, Anwar S.; Younas, Sohail; Pourreza, Hossein; Figley, Chase R.

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications. PMID:27279746

  9. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Directory of Open Access Journals (Sweden)

    Anwar S. Shatil

    2015-01-01

    Full Text Available With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  10. Formation of Silicate and Titanium Clouds on Hot Jupiters

    Science.gov (United States)

    Powell, Diana; Zhang, Xi; Gao, Peter; Parmentier, Vivien

    2018-06-01

    We present the first application of a bin-scheme microphysical and vertical transport model to determine the size distribution of titanium and silicate cloud particles in the atmospheres of hot Jupiters. We predict particle size distributions from first principles for a grid of planets at four representative equatorial longitudes, and investigate how observed cloud properties depend on the atmospheric thermal structure and vertical mixing. The predicted size distributions are frequently bimodal and irregular in shape. There is a negative correlation between the total cloud mass and equilibrium temperature as well as a positive correlation between the total cloud mass and atmospheric mixing. The cloud properties on the east and west limbs show distinct differences that increase with increasing equilibrium temperature. Cloud opacities are roughly constant across a broad wavelength range, with the exception of features in the mid-infrared. Forward-scattering is found to be important across the same wavelength range. Using the fully resolved size distribution of cloud particles as opposed to a mean particle size has a distinct impact on the resultant cloud opacities. The particle size that contributes the most to the cloud opacity depends strongly on the cloud particle size distribution. We predict that it is unlikely that silicate or titanium clouds are responsible for the optical Rayleigh scattering slope seen in many hot Jupiters. We suggest that cloud opacities in emission may serve as sensitive tracers of the thermal state of a planet’s deep interior through the existence or lack of a cold trap in the deep atmosphere.

  11. Mapping of the extinction in Giant Molecular Clouds using optical star counts

    OpenAIRE

    Cambresy, L.

    1999-01-01

    This paper presents large-scale extinction maps of most nearby Giant Molecular Clouds of the Galaxy (Lupus, rho-Ophiuchus, Scorpius, Coalsack, Taurus, Chamaeleon, Musca, Corona Australis, Serpens, IC 5146, Vela, Orion, Monoceros R1 and R2, Rosette, Carina) derived from a star count method using an adaptive grid and a wavelet decomposition applied to the optical data provided by the USNO-Precision Measuring Machine. The distribution of the extinction in the clouds leads to an estimate of their total...

  12. Fluctuations in a quasi-stationary shallow cumulus cloud ensemble

    Directory of Open Access Journals (Sweden)

    M. Sakradzija

    2015-01-01

    Full Text Available We propose an approach to stochastic parameterisation of shallow cumulus clouds to represent the convective variability and its dependence on the model resolution. To collect information about the individual cloud lifecycles and the cloud ensemble as a whole, we employ a large eddy simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level. In the case of a shallow cumulus ensemble, the cloud-base mass flux distribution is bimodal, due to the different shallow cloud subtypes, active and passive clouds. Each distribution mode can be approximated using a Weibull distribution, which is a generalisation of exponential distribution by accounting for the change in distribution shape due to the diversity of cloud lifecycles. The exponential distribution of cloud mass flux previously suggested for deep convection parameterisation is a special case of the Weibull distribution, which opens a way towards unification of the statistical convective ensemble formalism of shallow and deep cumulus clouds. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate a shallow convective cloud ensemble. It is formulated as a compound random process, with the number of convective elements drawn from a Poisson distribution, and the cloud mass flux sampled from a mixed Weibull distribution. Convective memory is accounted for through the explicit cloud lifecycles, making the model formulation consistent with the choice of the Weibull cloud mass flux distribution function. The memory of individual shallow clouds is required to capture the correct convective variability. The resulting distribution of the subgrid convective states in the considered shallow cumulus case is scale-adaptive – the smaller the grid size, the broader the distribution.
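
    The compound random process described above can be sketched directly: draw the number of clouds from a Poisson distribution, then draw each cloud-base mass flux from a two-component Weibull mixture for the passive and active modes. All parameter values below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(1)

        # Sketch of the compound random process: the number of clouds in a
        # grid box is Poisson, and each cloud-base mass flux is drawn from a
        # two-component Weibull mixture (passive vs. active clouds).
        mean_clouds = 40.0               # Poisson mean per grid box (invented)
        modes = [                        # (weight, shape k, scale lambda)
            (0.6, 0.7, 0.5e6),           # passive clouds: small fluxes
            (0.4, 1.5, 2.0e6),           # active clouds: larger fluxes
        ]

        def sample_grid_box_mass_flux():
            n = rng.poisson(mean_clouds)
            weights = [w for w, _k, _lam in modes]
            picks = rng.choice(len(modes), size=n, p=weights)
            fluxes = np.array([modes[i][2] * rng.weibull(modes[i][1])
                               for i in picks])
            return fluxes.sum()          # total grid-box mass flux

        totals = [sample_grid_box_mass_flux() for _ in range(1000)]
        print(np.mean(totals), np.std(totals))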

  13. Claims and Identity: On-Premise and Cloud Solutions

    Science.gov (United States)

    Bertocci, Vittorio

    Today's identity-management practices are often a patchwork of partial solutions, which somehow accommodate but never really integrate applications and entities separated by technology and organizational boundaries. The rise of Software as a Service (SaaS) and cloud computing, however, will force organizations to cross such boundaries so often that ad hoc solutions will simply be untenable. A new approach that tears down identity silos and supports a de-perimeterized IT by design is in order. This article will walk you through the principles of claims-based identity management, a model which addresses both traditional and cloud scenarios with the same efficacy. We will explore the most common token exchange patterns, highlighting the advantages and opportunities they offer when applied to cloud computing solutions and generic distributed systems.
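
    In practice, claims commonly travel as signed tokens. A minimal sketch with PyJWT illustrating the general pattern (our illustration, not the article's specific protocol; the issuer URL and key are placeholders):

        import jwt  # PyJWT

        # Minimal sketch of a claims-based exchange: an issuer signs a set of
        # claims; a relying party validates the signature and issuer and reads
        # the claims instead of consulting its own identity silo.
        SECRET = "shared-issuer-secret"   # hypothetical symmetric key

        token = jwt.encode(
            {"sub": "alice", "role": "analyst", "iss": "https://sts.example.org"},
            SECRET,
            algorithm="HS256",
        )

        claims = jwt.decode(token, SECRET, algorithms=["HS256"],
                            issuer="https://sts.example.org")
        print(claims["sub"], claims["role"])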

  14. Editorial for special section of grid computing journal on “Cloud Computing and Services Science‿

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Ivanov, Ivan I.

    This editorial briefly discusses characteristics, technology developments and challenges of cloud computing. It then introduces the papers included in the special issue on "Cloud Computing and Services Science" and positions the work reported in these papers with respect to the previously mentioned

  15. Enforcement of Security and Privacy in a Service-Oriented Smart Grid

    DEFF Research Database (Denmark)

    Mikkelsen, Søren Aagaard

    inhabitants. With this vision, it is therefore necessary to enforce privacy and security of the data in all phases of its life cycle, which starts from acquiring the data and ends when it is stored. This dissertation therefore follows a system-level and application-level approach to managing data with respect to privacy and security. This includes first the design of a service-oriented architecture that allows for the deployment of home-oriented and grid-oriented IASs on a Home Energy Management System (HEMS) and in the cloud, respectively. Privacy and security of electricity data are addressed by letting the residential consumer control data dissemination in a two-stage process: first from the HEMS to the cloud, and then from the cloud to the IASs. The dissertation then focuses on the critical phases in securing the residential home as well as securing the cloud. It presents a system-level threat model of the HEMS

  16. GridPP - Preparing for LHC Run 2 and the Wider Context

    Science.gov (United States)

    Coles, Jeremy

    2015-12-01

    This paper elaborates upon the operational status and directions within the UK Computing for Particle Physics (GridPP) project as it approaches LHC Run 2. It details the pressures that have been gradually reshaping the deployed hardware and middleware environments at GridPP sites - from the increasing adoption of larger multicore nodes to the move towards alternative batch systems and cloud alternatives - as well as changes being driven by funding considerations. The paper highlights work being done with non-LHC communities and describes some of the early outcomes of adopting a generic DIRAC based job submission and management framework. The paper presents results from an analysis of how GridPP effort is distributed across various deployment and operations tasks and how this may be used to target further improvements in efficiency.

  17. Cloud Based Educational Systems And Its Challenges And Opportunities And Issues

    Directory of Open Access Journals (Sweden)

    Prantosh Kr. PAUL

    2014-01-01

    Full Text Available Cloud Computing (CC) is a set of hardware, software, networks, storage and services that an interface combines to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independence and smarter tools and technological gradients. Cloud Computing helps in the sharing of software, hardware, applications and other packages with the help of Internet tools and wireless media. Cloud Computing has benefits in several fields and application domains such as Agriculture, Business and Commerce, Health Care, Hospitality and Tourism, Education and Training, and so on. In Education Systems, it may be applicable to general regular education and other education systems, including general and vocational training. This paper talks about the opportunities that Cloud Computing (CC) provides; however, the focus is on the challenges and issues in relation to Education, Education Systems and Training programmes.

  18. SECURITY AND PRIVACY ISSUES IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Amina AIT OUAHMAN

    2014-10-01

    Full Text Available Today, cloud computing is defined and talked about across the ICT industry under different contexts and with different definitions attached to it. It is a new paradigm in the evolution of Information Technology, as it is one of the biggest revolutions in this field to have taken place in recent times. According to the National Institute for Standards and Technology (NIST), “cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction” [1]. The importance of Cloud Computing is increasing and it is receiving growing attention in the scientific and industrial communities. A study by Gartner [2] ranked Cloud Computing first among the top 10 most important technologies, with better prospects in successive years for companies and organizations. Clouds bring tremendous benefits for both individuals and enterprises. Clouds support economic savings, outsourcing mechanisms, resource sharing, any-where any-time accessibility, on-demand scalability, and service flexibility. Clouds minimize the need for user involvement by masking technical details such as software upgrades, licenses, and maintenance from their customers. Clouds could also offer better security advantages over individual server deployments. Since a cloud aggregates resources, cloud providers charter expert security personnel, while typical companies could be limited to a network administrator who might not be well versed in cyber security issues. The new concepts introduced by the clouds, such as computation outsourcing, resource sharing, and external data warehousing, increase the security and privacy concerns and create new security challenges. Moreover, the large scale of the clouds, the proliferation of mobile access devices (e

  19. Communication tools between Grid virtual organisations, middleware deployers and sites

    CERN Document Server

    Dimou, Maria

    2008-01-01

    Grid Deployment suffers today from the difficulty of reaching users and site administrators when a package or a configuration parameter changes. Release notes, twiki pages and news broadcasts are not efficient enough. The message presented here to the user community is the interest of using GGUS as an efficient and effective intra-project communication tool. The purpose of GGUS is to bring together End Users and Supporters in the Regions where the Grid is deployed and in operation. Today’s Grid usage is still very far from the simplicity and functionality of the web. While pressing for middleware usability, we try to turn the Global Grid User Support (GGUS) into the central tool for identifying areas in the support environment that need attention. To do this, we exploit GGUS' capacity to expand, by including new Support Units that follow the project's operational structure. Using tailored GGUS database searches, we obtain concrete results that prove where we need to improve procedures, Service Level Agreemen...

  20. Risk perception and risk management in cloud computing: results from a case study of Swiss companies

    OpenAIRE

    Brender, Nathalie; Markov, Iliya

    2013-01-01

    In today's economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, inve...

  1. Using In Situ Observations and Satellite Retrievals to Constrain Large-Eddy Simulations and Single-Column Simulations: Implications for Boundary-Layer Cloud Parameterization in the NASA GISS GCM

    Science.gov (United States)

    Remillard, J.

    2015-12-01

    Two low-cloud periods from the CAP-MBL deployment of the ARM Mobile Facility at the Azores are selected through a cluster analysis of ISCCP cloud property matrices, so as to represent two low-cloud weather states that the GISS GCM severely underpredicts not only in that region but also globally. The two cases represent (1) shallow cumulus clouds occurring in a cold-air outbreak behind a cold front, and (2) stratocumulus clouds occurring when the region was dominated by a high-pressure system. Observations and MERRA reanalysis are used to derive specifications used for large-eddy simulations (LES) and single-column model (SCM) simulations. The LES captures the major differences in horizontal structure between the two low-cloud fields, but there are unconstrained uncertainties in cloud microphysics and challenges in reproducing W-band Doppler radar moments. The SCM run on the vertical grid used for CMIP-5 runs of the GCM does a poor job of representing the shallow cumulus case and is unable to maintain an overcast deck in the stratocumulus case, providing some clues regarding problems with low-cloud representation in the GCM. SCM sensitivity tests with a finer vertical grid in the boundary layer show substantial improvement in the representation of cloud amount for both cases. GCM simulations with CMIP-5 versus finer vertical gridding in the boundary layer are compared with observations. The adoption of a two-moment cloud microphysics scheme in the GCM is also tested in this framework. The methodology followed in this study, with the process-based examination of different time and space scales in both models and observations, represents a prototype for GCM cloud parameterization improvements.

  2. Grid-scale Indirect Radiative Forcing of Climate due to aerosols over the northern hemisphere simulated by the integrated WRF-CMAQ model: Preliminary results

    Science.gov (United States)

    In this study, indirect aerosol effects on grid-scale clouds were implemented in the integrated WRF3.3-CMAQ5.0 modeling system by including parameterizations for both cloud droplet and ice number concentrations calculated from the CMAQ-predicted aerosol particles. The resulting c...

  3. Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics

    Science.gov (United States)

    Schäfer, Benjamin; Beck, Christian; Aihara, Kazuyuki; Witthaut, Dirk; Timme, Marc

    2018-02-01

    Multiple types of fluctuations impact the collective dynamics of power grids and thus challenge their robust operation. Fluctuations result from processes as different as dynamically changing demands, energy trading and an increasing share of renewable power feed-in. Here we analyse principles underlying the dynamics and statistics of power grid frequency fluctuations. Considering frequency time series for a range of power grids, including grids in North America, Japan and Europe, we find a strong deviation from Gaussianity best described as Lévy-stable and q-Gaussian distributions. We present a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach that systematically interprets heavy tails and skewed distributions. We identify energy trading as a substantial contribution to today's frequency fluctuations and effective damping of the grid as a controlling factor enabling reduction of fluctuation risks, with enhanced effects for small power grids.
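
    As a rough illustration of the distribution comparison described above (not the authors' code), the Python sketch below fits both a Gaussian and a Lévy-stable law to a synthetic frequency-deviation series with scipy and compares log-likelihoods; the synthetic series and every parameter value are assumptions made purely for demonstration.

```python
# Sketch: compare Gaussian vs. Levy-stable fits to grid-frequency deviations.
# The synthetic series below stands in for a measured frequency time series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Heavy-tailed surrogate for deviations from the nominal 50 Hz (in Hz):
data = stats.levy_stable.rvs(alpha=1.8, beta=0.0, loc=0.0, scale=0.01,
                             size=1000, random_state=rng)

# Gaussian fit for reference
mu, sigma = stats.norm.fit(data)
ll_norm = np.sum(stats.norm.logpdf(data, mu, sigma))

# Levy-stable maximum-likelihood fit (numerically heavy; fine for a sketch)
alpha, beta, loc, scale = stats.levy_stable.fit(data)
ll_stable = np.sum(stats.levy_stable.logpdf(data, alpha, beta, loc, scale))

print(f"Gaussian    log-likelihood: {ll_norm:10.1f}")
print(f"Levy-stable log-likelihood: {ll_stable:10.1f} (alpha = {alpha:.2f})")
# A clearly higher stable-law likelihood (with alpha < 2) signals the heavy,
# non-Gaussian tails that the paper reports for real grid-frequency data.
```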

  4. Joint flow routing-scheduling for energy efficient software defined data center networks : A prototype of energy-aware network management platform

    NARCIS (Netherlands)

    Zhu, H.; Liao, X.; de Laat, C.; Grosso, P.

    Data centers are a cost-effective infrastructure for hosting Cloud and Grid applications, but they do incur tremendous energy cost and CO2 emissions. Today's data center network architectures such as Fat-tree and BCube are over-provisioned to guarantee large network capacity and meet peak

  5. Evaluation of cumulus cloud – radiation interaction effects on air quality –relevant meteorological variables from WRF, from a regional climate perspective

    Science.gov (United States)

    Aware only of the resolved, grid-scale clouds, the Weather Research & Forecasting model (WRF) does not consider the interactions between subgrid-scale convective clouds and radiation. One consequence of this omission may be WRF’s overestimation of surface precipitation during sum...

  6. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    Science.gov (United States)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. The usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present the proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, for the LHC experiments hosted at the site, onto an external OpenStack infrastructure. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage.

  7. Attacks and Intrusion Detection in Cloud Computing Using Neural Networks and Particle Swarm Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Ahmad Shokuh Saljoughi

    2018-01-01

    Full Text Available Today, cloud computing has become popular among users in organizations and companies. Security and efficiency are the two major issues facing cloud service providers and their customers. Since cloud computing is a virtual pool of resources provided in an open environment (the Internet), cloud-based services entail security risks. Detection of intrusions and attacks by unauthorized users is one of the biggest challenges for both cloud service providers and cloud users. In the present study, artificial intelligence techniques, e.g. MLP neural networks and the particle swarm optimization algorithm, were used to detect intrusions and attacks. The methods were tested on the NSL-KDD and KDD-CUP datasets. The results showed improved accuracy in detecting attacks and intrusions by unauthorized users.

  8. Security Enhancement for Data Migration in the Cloud

    Directory of Open Access Journals (Sweden)

    Jean Raphael Ngnie Sighom

    2017-06-01

    Full Text Available In today’s society, cloud computing has significantly impacted nearly every section of our lives and business structures. Cloud computing is, without any doubt, one of the strategic directions for many companies and the most dominant infrastructure for enterprises as well as end users. Instead of buying IT equipment (hardware and/or software) and managing it themselves, many organizations today prefer to buy services from IT service providers. The number of service providers has increased dramatically, and the cloud is becoming the tool of choice for more and more storage services. However, as more personal information and data are moved to the cloud, into social media sites, DropBox, Baidu WangPan, etc., data security and privacy issues are questioned. Daily, academia and industry seek to find an efficient way to secure data migration in the cloud. Various solution approaches and encryption techniques have been implemented. In this work, we discuss some of these approaches and evaluate the popular ones in order to find the elements that affect system performance. Finally, we propose a model that enhances data security and privacy by combining Advanced Encryption Standard-256, Information Dispersal Algorithms and Secure Hash Algorithm-512. Our protocol achieves provable security assessments and fast execution times for medium thresholds.
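
    To make two of the three named primitives concrete, the sketch below pairs AES-256 (GCM mode, via the pyca/cryptography package) with a SHA-512 digest of the ciphertext; this is a minimal illustration under assumed key handling, not the authors' protocol, and the Information Dispersal step is omitted.

```python
# Sketch: encrypt a payload with AES-256 (GCM mode) and record a SHA-512
# digest of the ciphertext for integrity checks after migration.
# The Information Dispersal step of the paper's model is omitted here.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect(plaintext: bytes, key: bytes) -> dict:
    nonce = os.urandom(12)  # 96-bit nonce, fresh for every message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return {
        "nonce": nonce,
        "ciphertext": ciphertext,
        "sha512": hashlib.sha512(ciphertext).hexdigest(),
    }

def recover(record: dict, key: bytes) -> bytes:
    # Verify the integrity digest before attempting decryption.
    if hashlib.sha512(record["ciphertext"]).hexdigest() != record["sha512"]:
        raise ValueError("ciphertext digest mismatch: data corrupted in transit")
    return AESGCM(key).decrypt(record["nonce"], record["ciphertext"], None)

key = AESGCM.generate_key(bit_length=256)  # 256-bit AES key (illustrative)
record = protect(b"customer data bound for the cloud", key)
assert recover(record, key) == b"customer data bound for the cloud"
```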

  9. Web-based CERES Clouds QC Property Viewing Tool

    Science.gov (United States)

    Smith, R. A.; Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Minnis, P.

    2014-12-01

    This presentation will display the capabilities of a web-based CERES cloud property viewer. Terra data will be chosen for the examples. It will demonstrate the viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Images can be manipulated by the user to narrow the boundaries of the map, adjust color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view their data with the tool. A laptop will hopefully be available to allow conference attendees to try navigating the tool.

  10. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP

    Science.gov (United States)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-01

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing data during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
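
    The key point - that running the SCM in each subcolumn and then averaging differs from running it once on the domain-mean forcing whenever the model response is nonlinear - can be made concrete with a toy example; the threshold response below is invented and merely stands in for SCAM5.

```python
# Toy illustration: response-to-mean-forcing vs. mean-of-responses for a
# nonlinear column model. A threshold precipitation rule stands in for SCAM5.
import numpy as np

def toy_scm_precip(forcing_w: float) -> float:
    """Invented nonlinear response: rain only above a lifting threshold."""
    return max(forcing_w - 1.0, 0.0)

# Gridded large-scale forcing across 4 subcolumns (values are made up);
# only one subcolumn sits under the front with strong ascent:
subcolumn_w = np.array([0.2, 0.5, 2.4, 0.9])

precip_mean_forcing = toy_scm_precip(subcolumn_w.mean())
precip_subcolumns = np.mean([toy_scm_precip(w) for w in subcolumn_w])

print(f"domain-mean forcing         -> {precip_mean_forcing:.2f} (front missed)")
print(f"per-subcolumn, then average -> {precip_subcolumns:.2f}")
```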

  11. Simulation of Electrical Grid with Omnet++ Open Source Discrete Event System Simulator

    Directory of Open Access Journals (Sweden)

    Sőrés Milán

    2016-12-01

    Full Text Available The simulation of electrical networks is very important before the development and servicing of electrical networks and grids can occur. There is software that can simulate the behaviour of electrical grids under different operating conditions, but these simulation environments cannot be used in a single cloud-based project, because they are not GNU-licensed software products. In this paper, an integrated framework is proposed that models and simulates communication networks. The design and operation of the simulation environment are investigated and a model of electrical components is proposed. After simulation, the simulation results were compared to manually computed results.

  12. Microbase2.0: A Generic Framework for Computationally Intensive Bioinformatics Workflows in the Cloud

    OpenAIRE

    Flanagan Keith; Nakjang Sirintra; Hallinan Jennifer; Harwood Colin; Hirt Robert P.; Pocock Matthew R.; Wipat Anil

    2012-01-01

    As bioinformatics datasets grow ever larger, and analyses become increasingly complex, there is a need for data handling infrastructures to keep pace with developing technology. One solution is to apply Grid and Cloud technologies to address the computational requirements of analysing high throughput datasets. We present an approach for writing new, or wrapping existing applications, and a reference implementation of a framework, Microbase2.0, for executing those applications using Grid and C...

  13. Safe Grid

    Science.gov (United States)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies come from the science and technology communities. These consist of government, industry and academia (national and international). The NASA GRID is moving into a higher technology readiness level (TRL) today; and as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate to solve important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domain" provides a large number of professionally managed "micro firewalls" that allow NASA centers to accept remote IPG access without the worry of damaging other center resources. The SAFE policy-driven capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  14. Buildings and Terrain of Urban Area Point Cloud Segmentation based on PCL

    International Nuclear Information System (INIS)

    Liu, Ying; Zhong, Ruofei

    2014-01-01

    One current problem in laser radar point data classification is building and urban terrain segmentation; this paper proposes a point cloud segmentation method based on the PCL library. PCL is a large cross-platform open source C++ programming library, which implements a large number of efficient point cloud data structures and generic algorithms involving point cloud retrieval, filtering, segmentation, registration, feature extraction, curved surface reconstruction, visualization, etc. Because laser radar point clouds are characterized by large data volumes and unsymmetrical distribution, this paper proposes using a kd-tree data structure to organize the data; a Voxel Grid filter is then used for point cloud resampling, namely to reduce the amount of point cloud data while keeping the shape characteristics of the point cloud; finally, using the PCL Segmentation Module, a Euclidean Cluster Extraction class performs Euclidean clustering for the three-dimensional point cloud segmentation of buildings and ground. The experimental results show that this method avoids the multiple data copies that existing systems require, saves program storage space through calls to PCL library methods and classes, shortens program compilation time and improves the running speed of the program
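
    The same pipeline shape (spatial indexing, voxel-grid resampling, clustering) can be sketched compactly; since the paper itself uses PCL's C++ classes, the Python sketch below substitutes the Open3D library and density-based clustering for Euclidean Cluster Extraction, and the file path and parameter values are assumptions.

```python
# Analogous pipeline in Python with Open3D (the paper uses C++ PCL):
# voxel-grid resampling followed by clustering of the remaining points.
import numpy as np
import open3d as o3d

# "urban_scene.pcd" is a placeholder path for an airborne-laser point cloud.
pcd = o3d.io.read_point_cloud("urban_scene.pcd")

# Voxel-grid filter: shrinks the data volume while keeping the cloud's shape
# (Open3D maintains the spatial index internally, much as PCL uses a kd-tree).
down = pcd.voxel_down_sample(voxel_size=0.5)  # 0.5 m voxels, illustrative

# Density-based clustering as a stand-in for Euclidean Cluster Extraction;
# DBSCAN labels noise points -1 and clusters 0..n-1.
labels = np.asarray(down.cluster_dbscan(eps=1.5, min_points=20))

n_clusters = labels.max() + 1
print(f"{n_clusters} candidate building/terrain segments")
for k in range(n_clusters):
    segment = down.select_by_index(np.flatnonzero(labels == k))
    o3d.io.write_point_cloud(f"segment_{k:03d}.pcd", segment)
```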

  15. Heat grids today and after the German Renewable Energies Act (EEG). A business segment for the agriculture?

    International Nuclear Information System (INIS)

    Clemens, Dietrich; Billerbeck, Hagen

    2016-01-01

    The development of a centralised and sustainable heat supply through the construction of heat grids offers consumers numerous advantages compared to a decentralised energy supply of residential and commercial properties. Where the migration to centralised heat supply relegates fossil fuels through the long-term incorporation of sustainable renewable energy sources, the projects make an important contribution towards meeting the government's climate protection goals. Heat generation and heat sales from renewable energy sources should be ensured in the long term. In the countryside, biogas plant operators are frequently the initiators of heat grid investments, or they take on the role of supplier for the provision of low-cost CHP heat from cogeneration units. In view of the limited remuneration period under the terms of the German Renewable Energy Act, the clock is ticking for the establishment of a centralised heat supply. This paper presents the advantages and disadvantages of a centralised, sustainable heat supply and additionally considers the flexibi/isation of biogas plants in view of the construction of the heat grid and the associated infrastructure. A focus is placed on the security of supply for customers after the discontinuation of remuneration under the German Renewable Energy Act and on how a competitive heat price from alternative energy sources can continue to be ensured.

  16. A Proposed Model for Improving Performance and Reducing Costs of IT Through Cloud Computing of Egyptian Business Enterprises

    OpenAIRE

    Mohamed M.El Hadi; Azza Monir Ismail

    2016-01-01

    Information technologies affect today's big business enterprises, from data processing to transactions, helping them achieve their goals efficiently and effectively, creating new business opportunities and new competitive advantages; services must be sufficient to match recent trends in IT such as cloud computing. Cloud computing technology can provide all IT services. Therefore, cloud computing offers an adaptable alternative to the current technology model, creating reduci...

  17. Application of Cloud Computing at KTU: MS Live@Edu Case

    Science.gov (United States)

    Miseviciene, Regina; Budnikas, Germanas; Ambraziene, Danute

    2011-01-01

    Cloud computing is a significant alternative in today's educational landscape. The technology gives students and teachers the opportunity to quickly access various application platforms and resources through web pages on demand. Unfortunately, not all educational institutions are able to take full advantage of the newest…

  18. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. Results This survey reviews the latest grid technologies from the viewpoints of the computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation to share tacit knowledge among a community. Conclusion Extending the concept of the grid from computing grid to knowledge grid, it is possible to use a grid not only as sharable computing resources, but also as a time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  19. Using the Atmospheric Radiation Measurement (ARM) Datasets to Evaluate Climate Models in Simulating Diurnal and Seasonal Variations of Tropical Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hailong [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Burleyson, Casey D. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Fast, Jerome D. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Rasch, Philip J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington

    2018-04-01

    We use the long-term Atmospheric Radiation Measurement (ARM) datasets collected at the three Tropical Western Pacific (TWP) sites as a tropical testbed to evaluate the ability of the Community Atmosphere Model (CAM5) to simulate the various types of clouds, their seasonal and diurnal variations, and their impact on surface radiation. We conducted a series of CAM5 simulations at various horizontal grid spacings (around 2°, 1°, 0.5°, and 0.25°) with meteorological constraints from reanalysis. Model biases in the seasonal cycle of cloudiness are found to be weakly dependent on model resolution. Positive biases (up to 20%) in the annual mean total cloud fraction appear mostly in stratiform ice clouds. Higher-resolution simulations do reduce the positive bias in the frequency of ice clouds, but they inadvertently increase the negative biases in convective clouds and low-level liquid clouds, leading to a positive bias in annual mean shortwave fluxes at the sites, as high as 65 W m⁻² in the 0.25° simulation. Such resolution-dependent biases in clouds can adversely lead to biases in ambient thermodynamic properties and, in turn, feed back on clouds. Both the CAM5 model and ARM observations show distinct diurnal cycles in total, stratiform and convective cloud fractions; however, they are out of phase by 12 hours and the biases vary by site. Our results suggest that biases in deep convection affect the vertical distribution and diurnal cycle of stratiform clouds through the transport of vapor and/or the detrainment of liquid and ice. We also found that the modeled grid-mean surface longwave fluxes are systematically larger than site measurements when the grid cell that the ARM sites reside in is partially covered by ocean. The modeled longwave fluxes at such sites also lack a discernible diurnal cycle because the ocean part of the grid cell is warmer and less sensitive to radiative heating/cooling compared to land. Higher spatial resolution is more helpful in this regard. Our

  20. IO strategies and data services for petascale data sets from a global cloud resolving model

    International Nuclear Information System (INIS)

    Schuchardt, K L; Palmer, B J; Daily, J A; Elsethagen, T O; Koontz, A S

    2007-01-01

    Global cloud resolving models at resolutions of 4km or less create significant challenges for simulation output, data storage, data management, and post-simulation analysis and visualization. To support efficient model output as well as data analysis, new methods for IO and data organization must be evaluated. The model we are supporting, the Global Cloud Resolving Model being developed at Colorado State University, uses a geodesic grid. The non-monotonic nature of the grid's coordinate variables requires enhancements to existing data processing tools and community standards for describing and manipulating grids. The resolution, size and extent of the data suggest the need for parallel analysis tools and allow for the possibility of new techniques in data mining, filtering and comparison to observations. We describe the challenges posed by various aspects of data generation, management, and analysis, our work exploring IO strategies for the model, and a preliminary architecture, web portal, and tool enhancements which, when complete, will enable broad community access to the data sets in familiar ways to the community

  1. Effects of sea surface temperature, cloud radiative and microphysical processes, and diurnal variations on rainfall in equilibrium cloud-resolving model simulations

    International Nuclear Information System (INIS)

    Jiang Zhe; Li Xiao-Fan; Zhou Yu-Shu; Gao Shou-Ting

    2012-01-01

    The effects of sea surface temperature (SST), cloud radiative and microphysical processes, and diurnal variations on rainfall statistics are documented with grid data from the two-dimensional equilibrium cloud-resolving model simulations. For a rain rate of higher than 3 mm·h⁻¹, water vapor convergence prevails. The rainfall amount decreases with the decrease of SST from 29 °C to 27 °C, the inclusion of diurnal variation of SST, or the exclusion of microphysical effects of ice clouds and radiative effects of water clouds, which are primarily associated with the decreases in water vapor convergence. However, the amount of rainfall increases with the increase of SST from 29 °C to 31 °C, the exclusion of diurnal variation of solar zenith angle, and the exclusion of the radiative effects of ice clouds, which are primarily related to increases in water vapor convergence. For a rain rate of less than 3 mm·h⁻¹, water vapor divergence prevails. Unlike rainfall statistics for rain rates of higher than 3 mm·h⁻¹, the decrease of SST from 29 °C to 27 °C and the exclusion of radiative effects of water clouds in the presence of radiative effects of ice clouds increase the rainfall amount, which corresponds to the suppression in water vapor divergence. The exclusion of microphysical effects of ice clouds decreases the amount of rainfall, which corresponds to the enhancement in water vapor divergence. The amount of rainfall is less sensitive to the increase of SST from 29 °C to 31 °C and to the radiative effects of water clouds in the absence of the radiative effects of ice clouds.

  2. The ATLAS Software Installation System v2: a highly available system to install and validate Grid and Cloud sites via Panda

    CERN Document Server

    De Salvo, Alessandro; The ATLAS collaboration; Sanchez, Arturo; Smirnov, Yuri

    2015-01-01

    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original WMS and the new Panda modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been optimized to allow a full High-Availability (HA) solution over WAN. The servlets, running on each frontend, have also been decoupled from local settings, to allow easy scalability of the system, including the possibility of an HA system with multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation DB is used as a source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system i...

  3. Cost-effective GPU-grid for genome-wide epistasis calculations.

    Science.gov (United States)

    Pütz, B; Kam-Thong, T; Karbalai, N; Altmann, A; Müller-Myhsok, B

    2013-01-01

    Until recently, genotype studies were limited to the investigation of single SNP effects due to the computational burden incurred when studying pairwise interactions of SNPs. However, some genetic effects as simple as coloring (in plants and animals) cannot be ascribed to a single locus but only understood when epistasis is taken into account [1]. It is expected that such effects are also found in complex diseases where many genes contribute to the clinical outcome of affected individuals. Only recently have such problems become computationally feasible. The inherently parallel structure of the problem makes it a perfect candidate for massive parallelization on either grid or cloud architectures. Since we are also dealing with confidential patient data, we were not able to consider a cloud-based solution but had to find a way to process the data in-house, and aimed to build a local GPU-based grid structure. Sequential epistasis calculations were ported to GPU using CUDA at various levels. Parallelization on the CPU was compared to corresponding GPU counterparts with regard to performance and cost. A cost-effective solution was created by combining custom-built nodes equipped with relatively inexpensive consumer-level graphics cards with highly parallel GPUs in a local grid. The GPU method outperforms current cluster-based systems on a price/performance criterion, as a single GPU shows speed performance comparable to up to 200 CPU cores. The outlined approach will work for problems that easily lend themselves to massive parallelization. Code for various tasks has been made available and ongoing development of tools will further ease the transition from sequential to parallel algorithms.
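
    The "inherently parallel structure" is simply that every SNP pair can be scored independently; the CPU-only NumPy sketch below scores each pair by correlating an interaction term with a synthetic phenotype. The CUDA port itself is not reproduced, and all data and names here are invented.

```python
# Sketch of the pairwise-epistasis scan (CPU/NumPy only; the paper ports the
# same embarrassingly parallel loop to CUDA). All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_snps = 500, 40
# Genotypes as 0/1/2 minor-allele counts (synthetic):
geno = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)
# Plant a genuine interaction between SNPs 3 and 17 in the phenotype:
pheno = geno[:, 3] * geno[:, 17] + rng.normal(size=n_samples)
z_pheno = (pheno - pheno.mean()) / pheno.std()  # standardize once

best, best_score = None, 0.0
for i in range(n_snps):             # every (i, j) pair is independent work,
    for j in range(i + 1, n_snps):  # which is what maps well onto GPU threads
        inter = geno[:, i] * geno[:, j]
        sd = inter.std()
        if sd == 0.0:
            continue  # monomorphic interaction term, nothing to test
        r = np.dot((inter - inter.mean()) / sd, z_pheno) / n_samples
        if abs(r) > best_score:
            best, best_score = (i, j), abs(r)

print(f"top-ranked pair: {best} with |r| = {best_score:.2f}")  # expect (3, 17)
```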

  4. Cloud services for the Fermilab scientific stakeholders

    International Nuclear Information System (INIS)

    Timm, S; Garzoglio, G; Mhashilkar, P

    2015-01-01

    As part of the Fermilab/KISTI cooperative research project, Fermilab has successfully run an experimental simulation workflow at scale on a federation of Amazon Web Services (AWS), FermiCloud, and local FermiGrid resources. We used the CernVM-FS (CVMFS) file system to deliver the application software. We established Squid caching servers in AWS as well, using the Shoal system to let each individual virtual machine find the closest squid server. We also developed an automatic virtual machine conversion system so that we could transition virtual machines made on FermiCloud to Amazon Web Services. We used this system to successfully run a cosmic ray simulation of the NOvA detector at Fermilab, making use of both AWS spot pricing and network bandwidth discounts to minimize the cost. On FermiCloud we also were able to run the workflow at the scale of 1000 virtual machines, using a private network routable inside of Fermilab. We present in detail the technological improvements that were used to make this work a reality.

  5. Cooperative Strategy for Optimal Management of Smart Grids by Wavelet RNNs and Cloud Computing.

    Science.gov (United States)

    Napoli, Christian; Pappalardo, Giuseppe; Tina, Giuseppe Marco; Tramontana, Emiliano

    2016-08-01

    Advanced smart grids have several power sources that contribute with their own irregular dynamic to the power production, while load nodes have another dynamic. Several factors have to be considered when using the owned power sources for satisfying the demand, e.g., production rate, battery charge and status, variable cost of externally bought energy, and so on. The objective of this paper is to develop appropriate neural network architectures that automatically and continuously govern power production and dispatch, in order to maximize the overall benefit over a long time. Such a control will improve the fundamental work of a smart grid. For this, status data of several components have to be gathered, and then an estimate of future power production and demand is needed. Hence, the neural network-driven forecasts are apt in this paper for renewable nonprogrammable energy sources. Then, the produced energy as well as the stored one can be supplied to consumers inside a smart grid, by means of digital technology. Among the sought benefits, reduced costs and increased reliability and transparency are paramount.

  6. LingoBee--Crowd-Sourced Mobile Language Learning in the Cloud

    Science.gov (United States)

    Petersen, Sobah Abbas; Procter-Legg, Emma; Cacchione, Annamaria

    2013-01-01

    This paper describes three case studies, where language learners were invited to use "LingoBee" as a means of supporting their language learning. LingoBee is a mobile app that provides user-generated language content in a cloud-based shared repository. Assuming that today's students are mobile savvy and "Digital Natives" able…

  7. Cloud-Based Collaborative Decision Making: Design Considerations and Architecture of the GRUPO-MOD System

    OpenAIRE

    Heiko Thimm

    2012-01-01

    The complexity of many decision problems of today’s globalized world requires new innovative solutions that are built upon proven decision support technology and also recent advancements in the area of information and communication technology (ICT) such as Cloud Computing and Mobile Communication. A combination of the cost-effective Cloud Computing approach with extended group decision support system technology bears several interesting unprecedented opportunities for the development of suc...

  8. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    Science.gov (United States)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
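
    The lognormal representation of the aerosol number size distribution mentioned above can be sketched with scipy's curve_fit; the bin centers, counts, and mode parameters below are invented stand-ins for the RACORO aircraft data.

```python
# Sketch: fit one lognormal mode n(logD) = dN/dlog10(D) to an aerosol number
# size distribution. The "observed" bins below are synthetic stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(D, N, Dg, sigma_g):
    """dN/dlog10(D) for total number N, median diameter Dg, spread sigma_g."""
    s = np.log10(sigma_g)
    return N / (np.sqrt(2.0 * np.pi) * s) * np.exp(
        -np.log10(D / Dg) ** 2 / (2.0 * s ** 2))

diam = np.logspace(-2, 0, 30)  # bin centers in micrometres
truth = lognormal_mode(diam, N=800.0, Dg=0.12, sigma_g=1.6)
noise = 1.0 + 0.05 * np.random.default_rng(1).normal(size=diam.size)
obs = truth * noise            # pretend these came from the aircraft probes

popt, _ = curve_fit(lognormal_mode, diam, obs, p0=[500.0, 0.1, 1.5])
N_fit, Dg_fit, sg_fit = popt
print(f"N = {N_fit:.0f} cm^-3, Dg = {Dg_fit:.3f} um, sigma_g = {sg_fit:.2f}")
```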

  10. 77 FR 58416 - Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team

    Science.gov (United States)

    2012-09-20

    ... Grid, and cloud projects. The MAGIC Team reports to the Large Scale Networking (LSN) Coordinating Group... AGENCY: The Networking and Information Technology Research and Development (NITRD)... Dates/Location: The MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00 pm, at...

  11. Adaptation of Powerline Communications-Based Smart Metering Deployments to the Requirements of Smart Grids

    Directory of Open Access Journals (Sweden)

    Alberto Sendin

    2015-11-01

    Full Text Available Powerline communications (PLC)-based smart meter deployments are now a reality in many regions of the world. Although PLC elements are generally incorporated in smart meters and data concentrators, the underlying PLC network allows the integration of other smart grid services directly over it. The remote control capabilities that automation programs need, and that are today deployed over the medium voltage (MV) grid, can be extended to the low voltage (LV) grid through these existing PLC networks. This paper demonstrates the capabilities of narrowband high data rate (NB HDR) PLC technologies deployed over LV grids for smart metering purposes to support internet protocol (IP) communications in the LV grid. The paper demonstrates these possibilities with the presentation of simulation and laboratory results of IP communications over ITU-T G.9904 PLC technology, and the definition of a PLC Network Management System based on a simple network management protocol (SNMP) management information base (MIB) definition and applicable use cases.

  12. Impact of external grid disturbances on nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Arains, Robert; Arnold, Simone; Brueck, Benjamin; Mueller, Christian; Quester, Claudia; Sommer, Dagmar

    2017-06-15

    The electrical design of nuclear power plants and the reliability of their electrical power supply, including the offsite power supply, are of high importance for the safe operation of the plants. The operating experience of recent years has shown that disturbances in the external grid can have an impact on the electrical equipment of nuclear power plants. In the course of this project, possible causes and types of grid disturbances were identified. Based on these, scenarios of grid disturbances were developed. In order to investigate the impact of the developed grid disturbance scenarios on the electrical equipment of nuclear power plants, the auxiliary power supply of a German pressurized water reactor of type Konvoi was simulated using the simulation tool NEPLAN. On the basis of the results of the analyses, it was determined whether there are measures to prevent the spread of grid disturbances into the plants that have not yet been implemented in today's nuclear power plants.

  13. Shallow layer modelling of dense gas clouds

    Energy Technology Data Exchange (ETDEWEB)

    Ott, S.; Nielsen, M.

    1996-11-01

    The motivation for making shallow layer models is that they can deal with the dynamics of gravity-driven flow in complex terrain at a modest computational cost compared to 3D codes. The main disadvantage is that the air-cloud interactions still have to be added 'by hand', whereas 3D models inherit the correct dynamics from the fundamental equations. The properties of the inviscid shallow water equations are discussed, focusing on existence and uniqueness of solutions. It is demonstrated that breaking waves and fronts pose severe problems, which can only be overcome if the hydrostatic approximation is given up and internal friction is added to the model. A set of layer-integrated equations is derived starting from the Navier-Stokes equations. The various steps in the derivation are accompanied by plausibility arguments. These form the scientific basis of the model. The principle of least action is introduced as a means of generating consistent models, and as a tool for making discrete equations for numerical models, which automatically obey conservation laws. A numerical model called SLAM (Shallow LAyer Model) is presented. SLAM has some distinct features compared to other shallow layer models: a Lagrangian, moving grid; explicit accounting for the turbulent kinetic energy budget; an entrainment rate estimated on the basis of the local turbulent kinetic energy; non-hydrostatic pressure; and numerical methods that respect conservation laws even for coarse grids. Thorney Island trial 8 is used as a reference case for model tuning. The model reproduces the doughnut shape of the cloud and yields concentrations in reasonable agreement with observations, even when a small number of cells (e.g. 16) is used. It is concluded that lateral exchange of matter within the cloud caused by shear is important, and that the model should be improved on this point.
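
    For a feel of the equation family involved (this is emphatically not the SLAM code), the sketch below integrates the 1D shallow-water equations for a collapsing dense column with a conservative Lax-Friedrichs scheme, so mass and momentum are conserved by construction; the reduced gravity and initial condition are assumptions.

```python
# Minimal 1D shallow-water sketch (illustrative only, not SLAM): conservative
# Lax-Friedrichs update of layer depth h and momentum h*u for a collapsing
# dense column over flat terrain.
import numpy as np

g = 9.81 * 0.1          # reduced gravity of a dense layer in air (assumed)
nx, L = 200, 100.0      # number of cells, domain length [m]
dx = L / nx
x = (np.arange(nx) + 0.5) * dx

h = np.where(np.abs(x - L / 2) < 10.0, 2.0, 0.01)  # dense column, thin ambient
hu = np.zeros(nx)                                  # initially at rest

def flux(h, hu):
    """Physical flux of the conservative shallow-water system."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

t, t_end = 0.0, 20.0
while t < t_end:
    c = np.max(np.abs(hu / h) + np.sqrt(g * h))  # fastest signal speed
    dt = 0.4 * dx / c                            # CFL-limited time step
    U = np.array([h, hu])
    F = flux(h, hu)
    # Lax-Friedrichs interface fluxes; the conservative form preserves
    # total mass and momentum up to the (untouched) boundary cells.
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])
    h, hu = U
    t += dt

print(f"max depth {h.max():.2f} m, total volume {h.sum() * dx:.1f} m^2")
```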

  14. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels, in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly, by considering the corresponding labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
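
    The voting step can be sketched in plain NumPy: quantize the annotated reference points to a voxel grid, majority-vote one label per voxel, then label a new sensor's cloud by voxel lookup. A single fixed resolution is used for brevity (the paper uses an octree of resolutions), and all names and values are illustrative.

```python
# Sketch of the voting-based label transfer: quantize reference points to a
# voxel grid, majority-vote a label per voxel, then annotate a new point
# cloud by voxel lookup. One fixed resolution here; the paper uses an octree.
from collections import Counter, defaultdict
import numpy as np

def voxel_keys(points: np.ndarray, size: float):
    """Integer voxel index (as a tuple) for each 3D point."""
    return [tuple(idx) for idx in np.floor(points / size).astype(int)]

def build_label_grid(ref_points, ref_labels, size):
    votes = defaultdict(Counter)
    for key, label in zip(voxel_keys(ref_points, size), ref_labels):
        votes[key][label] += 1
    # Majority vote inside every occupied voxel:
    return {key: counter.most_common(1)[0][0] for key, counter in votes.items()}

def annotate(new_points, grid, size, unknown=-1):
    return np.array([grid.get(key, unknown) for key in voxel_keys(new_points, size)])

rng = np.random.default_rng(7)
ref = rng.uniform(0, 10, size=(1000, 3))
ref_labels = (ref[:, 2] > 5).astype(int)   # toy labels: "roof" vs. "ground"
grid = build_label_grid(ref, ref_labels, size=1.0)

new = rng.uniform(0, 10, size=(200, 3))    # e.g. a second sensor's cloud
labels = annotate(new, grid, size=1.0)
print(np.bincount(labels + 1))             # counts: unknown, label 0, label 1
```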

  15. Cryptonite: A Secure and Performant Data Repository on Public Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor

    2012-06-29

    Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs of secure data storage and sharing for domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100 MB.

  16. Cloud and Radiation Studies during SAFARI 2000

    Science.gov (United States)

    Platnick, Steven; King, M. D.; Hobbs, P. V.; Osborne, S.; Piketh, S.; Bruintjes, R.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Though the emphasis of the Southern Africa Regional Science Initiative 2000 (SAFARI-2000) dry season campaign was largely on emission sources and transport, the assemblage of aircraft (including the high-altitude NASA ER-2 remote sensing platform and the University of Washington CV-580, UK MRF C130, and South African Weather Bureau JRA in situ aircraft) provided a unique opportunity for cloud studies. Therefore, as part of the SAFARI initiative, investigations were undertaken to assess regional aerosol-cloud interactions and cloud remote sensing algorithms. In particular, the latter part of the experiment concentrated on marine boundary layer stratocumulus clouds off the southwest coast of Africa. Associated with cold water upwelling along the Benguela current, the Namibian stratocumulus regime has received limited attention but appears to be unique for several reasons. During the dry season, outflow of continental fires and industrial pollution over this area can be extreme. From below, upwelling provides a rich nutrient source for phytoplankton (a source of atmospheric sulphur through DMS production as well as from decay processes). The impact of these natural and anthropogenic sources on the microphysical and optical properties of the stratocumulus is unknown. Continental and Indian Ocean cloud systems of opportunity were also studied during the campaign. Aircraft flights were coordinated with NASA Terra satellite overpasses for synergy with the Moderate Resolution Imaging Spectroradiometer (MODIS) and other Terra instruments. An operational MODIS algorithm for the retrieval of cloud optical and physical properties (including optical thickness, effective particle radius, and water path) has been developed. Pixel-level MODIS retrievals (11 km spatial resolution at nadir) and gridded statistics of clouds in the SAFARI region will be presented. In addition, the MODIS Airborne Simulator flown on the ER-2 provided high spatial resolution retrievals (50 m at nadir

  17. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    Science.gov (United States)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will be more decentralized, for instance with wind parks connected to the grid. Furthermore, future large-scale production units are expected to be installed near coastal regions. This creates some potential grid issues, such as large amounts of power to be transmitted to consumers from west to east, and grid stability. High temperature superconductors (HTS) can help solve these grid problems. The advantages of integrating HTS components at Extra High Voltage (EHV) and High Voltage (HV) levels are numerous: more power with lower losses and fewer emissions, intrinsic fault current limiting capability, better control of power flow, reduced footprint, etc. Today's main obstacle is the relatively high price of HTS. Nevertheless, as the price goes down, initial market penetration for several HTS components is expected by the year 2015 (e.g. cables, fault current limiters). In this paper we present a design of an intrinsically compensated EHV HTS cable for future grid integration. Discussed are the parameters of such a cable providing optimal power transmission in the future network.

  18. Connecting multiple clouds and mixing real and virtual resources via the open source WNoDeS framework

    CERN Multimedia

    CERN. Geneva; Italiano, Alessandro

    2012-01-01

    In this paper we present the latest developments introduced in the WNoDeS framework (http://web.infn.it/wnodes); we will in particular describe inter-cloud connectivity, support for multiple batch systems, and the coexistence of virtual and real environments on a single piece of hardware. Specific effort has been dedicated to the work needed to deploy a "multi-site" WNoDeS installation. The goal is to give end users the possibility to submit requests for resources using cloud interfaces on several sites in a transparent way. To this end, we will show how we have exploited already existing and deployed middleware within the framework of the IGI (Italian Grid Initiative) and EGI (European Grid Infrastructure) services. In this context, we will also describe the developments that have taken place in order to have the possibility to dynamically exploit public cloud services like Amazon EC2. The latter gives WNoDeS the capability to serve, for example, part of the user requests through external computing resources when ne...

  19. Individual aerosol particles in ambient and updraft conditions below convective cloud bases in the Oman mountain region

    Science.gov (United States)

    Semeniuk, T. A.; Bruintjes, R. T.; Salazar, V.; Breed, D. W.; Jensen, T. L.; Buseck, P. R.

    2014-03-01

    An airborne study of cloud microphysics provided an opportunity to collect aerosol particles in ambient and updraft conditions of natural convection systems for transmission electron microscopy (TEM). Particles were collected simultaneously on lacey carbon and calcium-coated carbon (Ca-C) TEM grids, providing information on particle morphology and chemistry and a unique record of the particle's physical state on impact. In total, 22 particle categories were identified, including single, coated, aggregate, and droplet types. The fine fraction comprised up to 90% mixed cation sulfate (MCS) droplets, while the coarse fraction comprised up to 80% mineral-containing aggregates. Insoluble (dry), partially soluble (wet), and fully soluble particles (droplets) were recorded on Ca-C grids. Dry particles were typically silicate grains; wet particles were mineral aggregates with chloride, nitrate, or sulfate components; and droplets were mainly aqueous NaCl and MCS. Higher numbers of droplets were present in updrafts (80% relative humidity (RH)) compared with ambient conditions (60% RH), and almost all particles activated at cloud base (100% RH). Greatest changes in size and shape were observed in NaCl-containing aggregates (>0.3 µm diameter) along updraft trajectories. Their abundance was associated with high numbers of cloud condensation nuclei (CCN) and cloud droplets, as well as large droplet sizes in updrafts. Thus, compositional dependence was observed in activation behavior recorded for coarse and fine fractions. Soluble salts from local pollution and natural sources clearly affected aerosol-cloud interactions, enhancing the spectrum of particles forming CCN and by forming giant CCN from aggregates, thus, making cloud seeding with hygroscopic flares ineffective in this region.

  20. Analysis of the current use, benefit, and value of the Open Science Grid

    International Nuclear Information System (INIS)

    Pordes, R; Weichel, J

    2010-01-01

    The Open Science Grid usage has ramped up more than 25% in the past twelve months due to both the increase in throughput of the core stakeholders - US LHC, LIGO and Run II - and the increase in usage by non-physics communities. It is important to understand the value that collaborative projects, such as the OSG, contribute to the scientific community. This needs to be cognizant of the environment of commercial cloud offerings, the evolving and maturing middleware for grid-based distributed computing, and the evolution in science and research dependence on computation. We present a first categorization of OSG value and an analysis across several different aspects of the Consortium's goals and activities. Lastly, we present some of the upcoming challenges of the LHC data analysis ramp-up and our ongoing contributions to the Worldwide LHC Computing Grid.

  1. Analysis of the Current Use, Benefit, and Value of the Open Science Grid

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, R.; /Fermilab

    2009-04-01

    The Open Science Grid usage has ramped up more than 25% in the past twelve months due both to the increase in throughput of the core stakeholders - US LHC, LIGO and Run II - and to increased usage by non-physics communities. It is important to understand the value that collaborative projects such as the OSG contribute to the scientific community. This assessment needs to be cognizant of the environment of commercial cloud offerings, the evolving and maturing middleware for grid-based distributed computing, and the evolution in science and research dependence on computation. We present a first categorization of OSG value and an analysis across several different aspects of the Consortium's goals and activities. Lastly, we present some of the upcoming challenges of the LHC data analysis ramp-up and our ongoing contributions to the Worldwide LHC Computing Grid.

  2. An architecture based on SOA and virtual enterprise principles: OpenNebula for cloud deployment

    CSIR Research Space (South Africa)

    Mvelase, P

    2012-04-01

    Full Text Available Today, enterprises have to survive in a dynamically changing business environment. Cloud computing presents a new business model in which the Information Technology services supporting the business are provided by partners rather than in-house. The idea...

  3. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    Science.gov (United States)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products, due to the large data volume (~10 TB) and the complexity of the CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those of the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop, (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs, (3) a technique that visualizes Hadoop-resident data with IDL, (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface, (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system, PVFS2 or CephFS, where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a
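
The NetCDF-to-CSV conversion step mentioned in the record is easy to picture with a small serial sketch using the netCDF4 package; the SCL's actual converter is parallel, so this only illustrates the idea, and the file and variable names are hypothetical.

```python
import csv
import numpy as np
from netCDF4 import Dataset

def netcdf_var_to_csv(nc_path, var_name, csv_path):
    """Flatten one model-output variable to CSV rows of (index..., value)."""
    with Dataset(nc_path) as ds:
        var = ds.variables[var_name]
        dims = list(var.dimensions)
        data = np.asarray(var[:])
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(dims + [var_name])           # header: dimension names + variable
        for idx in np.ndindex(data.shape):
            writer.writerow(list(idx) + [data[idx]])

netcdf_var_to_csv("gce_output.nc", "temperature", "temperature.csv")
```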

  4. Smart grid, household consumers and asymmetries: Energy visualization and scripting of technology

    DEFF Research Database (Denmark)

    Hansen, Meiken

    This paper will focus on the asymmetries that occur when different consumer groups are presented with the same energy visualisation equipment. The studied technology is home automation/control equipment, designed to contribute to the general set-up of the smart grid (facilitate a flexible use......-technologies applied in the human actor's homes) and how the consumers interpret the technology (the De-scription of the object). In relation to the general goals of the smart grid to change the consumption of electricity into being more flexible, it is relevant to investigate whether different consumer groups accept...... of electricity and accommodate demand response). Large smart grid pilot projects suggest that energy visualisation technology will be a common part of households in the future. There exist numerous different visualisation technologies within the area of electricity and private consumers today. This study seeks...

  5. Near-Body Grid Adaption for Overset Grids

    Science.gov (United States)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
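
As a concrete, much-simplified picture of inserting refined points by cubic interpolation, the sketch below doubles a 1D point distribution using the standard four-point cubic midpoint formula, falling back to linear interpolation at the ends. It only illustrates the flavor of the approach; OVERFLOW's actual scheme is parametric, curvature-biased, and multi-dimensional.

```python
import numpy as np

def refine_midpoints(x):
    """Double a 1D grid by inserting midpoints via 4-point cubic interpolation."""
    n = len(x)
    mids = np.empty(n - 1)
    for i in range(n - 1):
        if 1 <= i <= n - 3:
            # Standard cubic midpoint stencil over x[i-1..i+2].
            mids[i] = (-x[i - 1] + 9 * x[i] + 9 * x[i + 1] - x[i + 2]) / 16.0
        else:
            mids[i] = 0.5 * (x[i] + x[i + 1])  # linear fallback at the boundaries
    refined = np.empty(2 * n - 1)
    refined[::2] = x
    refined[1::2] = mids
    return refined

print(refine_midpoints(np.array([0.0, 1.0, 2.5, 4.5, 7.0])))
```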

  6. Modelling ice microphysics of mixed-phase clouds

    Science.gov (United States)

    Ahola, J.; Raatikainen, T.; Tonttila, J.; Romakkaniemi, S.; Kokkola, H.; Korhonen, H.

    2017-12-01

    Low-level Arctic mixed-phase clouds play a significant role in the Arctic climate due to their ability to absorb and reflect radiation. Since climate change is amplified in polar areas, it is vital to understand mixed-phase cloud processes. From a modelling point of view, this requires a high spatiotemporal resolution to capture turbulence and the relevant microphysical processes, which has proven difficult. To address this modelling problem, a new ice microphysics description has been developed. The recently published large-eddy simulation cloud model UCLALES-SALSA offers a good base for a feasible solution (Tonttila et al., Geosci. Mod. Dev., 10:169-188, 2017). The model includes aerosol-cloud interactions described with the sectional SALSA module (Kokkola et al., Atmos. Chem. Phys., 8, 2469-2483, 2008), which represents a good compromise between detail and computational expense. The SALSA module has recently been upgraded to include ice microphysics as well. The dynamical part of the model is based on the well-known UCLA-LES model (Stevens et al., J. Atmos. Sci., 56, 3963-3984, 1999), which can be used to study cloud dynamics on a fine grid. The microphysical description of ice is sectional, and the included processes consist of the formation, growth and removal of ice and snow particles. Ice cloud particles are formed by parameterized homogeneous or heterogeneous nucleation. The growth mechanisms of ice particles and snow include coagulation and condensation of water vapor. Autoconversion from cloud ice particles to snow is parameterized. The removal of ice particles and snow happens by sedimentation and melting. The implementation of ice microphysics is tested by initializing the cloud simulation with atmospheric observations from the Indirect and Semi-Direct Aerosol Campaign (ISDAC). The results are compared to the model results shown in the paper of Ovchinnikov et al. (J. Adv. Model. Earth Syst., 6, 223-248, 2014) and they show a good

  7. gLExec: gluing grid computing to the Unix world

    Science.gov (United States)

    Groep, D.; Koeroo, O.; Venekamp, G.

    2008-07-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with the site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.
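
The core translation step described here, from a global grid identity to a local Unix account, can be pictured with a small sketch that resolves a certificate subject through a grid-mapfile-style lookup. This is only an illustration of the concept, not gLExec or LCMAPS code; the file name and DN format shown are hypothetical.

```python
import pwd

def map_grid_identity(dn, mapfile="grid-mapfile"):
    """Resolve a grid certificate subject (DN) to a local (uid, gid) pair."""
    with open(mapfile) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Illustrative entry: "/DC=org/DC=example/CN=Jane Doe" janedoe
            subject, _, account = line.rpartition(" ")
            if subject.strip('"') == dn:
                entry = pwd.getpwnam(account)   # local Unix account lookup
                return entry.pw_uid, entry.pw_gid
    raise LookupError("no local account mapped for %s" % dn)
```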

  8. gLExec: gluing grid computing to the Unix world

    International Nuclear Information System (INIS)

    Groep, D; Koeroo, O; Venekamp, G

    2008-01-01

    The majority of compute resources in today's scientific grids are based on Unix and Unix-like operating systems. In this world, user and user-group management are based around the concepts of a numeric 'user ID' and 'group ID' that are local to the resource. In contrast, grid concepts of user and group management are centered around globally assigned identifiers and VO membership, structures that are independent of any specific resource. At the fabric boundary, these 'grid identities' have to be translated to Unix user IDs. New job submission methodologies, such as job-execution web services, community-deployed local schedulers, and the late binding of user jobs in a grid-wide overlay network of 'pilot jobs', push this fabric boundary ever further down into the resource. gLExec, a light-weight (and thereby auditable) credential mapping and authorization system, addresses these issues. It can be run both on the fabric boundary, as part of an execution web service, and on the worker node in a late-binding scenario. In this contribution we describe the rationale for gLExec, how it interacts with the site authorization and credential mapping frameworks such as LCAS, LCMAPS and GUMS, and how it can be used to improve site control and traceability in a pilot-job system.

  9. A Cloud-Computing Service for Environmental Geophysics and Seismic Data Processing

    Science.gov (United States)

    Heilmann, B. Z.; Maggi, P.; Piras, A.; Satta, G.; Deidda, G. P.; Bonomi, E.

    2012-04-01

    Cloud computing is establishing itself worldwide as a new high-performance computing paradigm that offers formidable possibilities to industry and science. The presented cloud-computing portal, part of the Grida3 project, provides an innovative approach to seismic data processing by combining open-source state-of-the-art processing software and cloud-computing technology, making possible the effective use of distributed computation and data management with administratively distant resources. We replaced demanding user-side hardware and software requirements with remote access to high-performance grid-computing facilities. As a result, data processing can be done quasi in real time, controlled ubiquitously via the Internet through a user-friendly web-browser interface. Besides the obvious advantages over locally installed seismic-processing packages, the presented cloud-computing solution creates completely new possibilities for scientific education, collaboration, and the presentation of reproducible results. The web-browser interface of our portal is based on the commercially supported grid portal EnginFrame, an open framework based on Java, XML, and Web Services. We selected the hosted applications with the objective of allowing the construction of typical 2D time-domain seismic-imaging workflows as used for environmental studies and, originally, for hydrocarbon exploration. For data visualization and pre-processing, we chose the free software package Seismic Un*x. We ported tools for trace balancing, amplitude gaining, muting, frequency filtering, dip filtering, deconvolution and rendering, with a customized choice of options, as services onto the cloud-computing portal. For structural imaging and velocity-model building, we developed a grid version of the Common-Reflection-Surface stack, a data-driven imaging method that requires no user interaction at run time such as manual picking in prestack volumes or velocity spectra. Due to its high level of automation, CRS stacking

  10. The GridPP DIRAC project - DIRAC for non-LHC communities

    CERN Document Server

    Bauer, D; Currie, R; Fayer, S; Huffman, A; Martyniak, J; Rand, D; Richards, A

    2015-01-01

    The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communiti...

  11. The GridPP DIRAC project - DIRAC for non-LHC communities

    Science.gov (United States)

    Bauer, D.; Colling, D.; Currie, R.; Fayer, S.; Huffman, A.; Martyniak, J.; Rand, D.; Richards, A.

    2015-12-01

    The GridPP consortium in the UK is currently testing a multi-VO DIRAC service aimed at non-LHC VOs. These VOs (Virtual Organisations) are typically small and generally do not have a dedicated computing support post. The majority of these represent particle physics experiments (e.g. NA62 and COMET), although the scope of the DIRAC service is not limited to this field. A few VOs have designed bespoke tools around the EMI-WMS & LFC, while others have so far eschewed distributed resources as they perceive the overhead for accessing them to be too high. The aim of the GridPP DIRAC project is to provide an easily adaptable toolkit for such VOs in order to lower the threshold for access to distributed resources such as Grid and cloud computing. As well as hosting a centrally run DIRAC service, we will also publish our changes and additions to the upstream DIRAC codebase under an open-source license. We report on the current status of this project and show increasing adoption of DIRAC within the non-LHC communities.
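
For readers unfamiliar with DIRAC, job submission through its Python API typically looks like the sketch below. This is a generic, hedged example of the public DIRAC API rather than GridPP-specific code; exact initialization calls can vary between DIRAC releases.

```python
from DIRAC.Core.Base import Script
Script.parseCommandLine(ignoreErrors=True)   # initialize the DIRAC client environment

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("hello-grid")
job.setExecutable("/bin/echo", arguments="Hello from a small VO")
job.setCPUTime(500)                          # requested CPU time in seconds

result = Dirac().submitJob(job)
print(result)                                # S_OK-style dict with the job ID on success
```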

  12. Sensitivity of warm-frontal processes to cloud-nucleating aerosol concentrations

    Science.gov (United States)

    Igel, Adele L.; Van Den Heever, Susan C.; Naud, Catherine M.; Saleeby, Stephen M.; Posselt, Derek J.

    2013-01-01

    An extratropical cyclone that crossed the United States on 9-11 April 2009 was successfully simulated at high resolution (3-km horizontal grid spacing) using the Colorado State University Regional Atmospheric Modeling System. The sensitivity of the associated warm front to increasing pollution levels was then explored by conducting the same experiment with three different background profiles of cloud-nucleating aerosol concentration. To the authors' knowledge, no study has examined the indirect effects of aerosols on warm fronts. The budgets of ice, cloud water, and rain in the simulation with the lowest aerosol concentrations were examined. The ice mass was found to be produced in equal amounts through vapor deposition and riming, and the melting of ice produced approximately 75% of the total rain. Conversion of cloud water to rain accounted for the other 25%. When cloud-nucleating aerosol concentrations were increased, significant changes were seen in the budget terms, but total precipitation remained relatively constant. Vapor deposition onto ice increased, but riming of cloud water decreased such that there was only a small change in the total ice production and hence there was no significant change in melting. These responses can be understood in terms of a buffering effect in which smaller cloud droplets in the mixed-phase region lead to both an enhanced vapor deposition and decreased riming efficiency with increasing aerosol concentrations. Overall, while large changes were seen in the microphysical structure of the frontal cloud, cloud-nucleating aerosols had little impact on the precipitation production of the warm front.

  13. Accounting for Unresolved Spatial Variability in Large Scale Models: Development and Evaluation of a Statistical Cloud Parameterization with Prognostic Higher Order Moments

    Energy Technology Data Exchange (ETDEWEB)

    Robert Pincus

    2011-05-17

    This project focused on the variability of clouds that is present across a wide range of scales, from the synoptic to the millimeter. In particular, there is substantial variability in cloud properties at scales smaller than the grid spacing of the models used to make climate projections (GCMs) and weather forecasts. These models represent clouds and other small-scale processes with parameterizations that describe how those processes respond to and feed back on the large-scale state of the atmosphere.

  14. The MSG-SEVIRI-based cloud property data record CLAAS-2

    Directory of Open Access Journals (Sweden)

    N. Benas

    2017-07-01

    Full Text Available Clouds play a central role in the Earth's atmosphere, and satellite observations are crucial for monitoring clouds and understanding their impact on the energy budget and water cycle. Within the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Satellite Application Facility on Climate Monitoring (CM SAF), a new cloud property data record was derived from geostationary Meteosat Spinning Enhanced Visible and Infrared Imager (SEVIRI) measurements for the time frame 2004–2015. The resulting CLAAS-2 (CLoud property dAtAset using SEVIRI, Edition 2) data record is publicly available via the CM SAF website (https://doi.org/10.5676/EUM_SAF_CM/CLAAS/V002). In this paper we present an extensive evaluation of the CLAAS-2 cloud products, which include cloud fractional coverage, thermodynamic phase, cloud top properties, liquid/ice cloud water path and corresponding optical thickness and particle effective radius. Data validation and comparisons were performed on both level 2 (native SEVIRI grid and repeat cycle) and level 3 (daily and monthly averages and histograms) with reference datasets derived from lidar, microwave and passive imager measurements. The evaluation results show very good overall agreement, with matching spatial distributions and temporal variability and small biases attributed mainly to differences in sensor characteristics, retrieval approaches, spatial and temporal samplings and viewing geometries. No major discrepancies were found. Underpinned by the good evaluation results, CLAAS-2 demonstrates that it is fit for the envisaged applications, such as process studies of the diurnal cycle of clouds and the evaluation of regional climate models. The data record is planned to be extended and updated in the future.

  15. Behavior Life Style Analysis for Mobile Sensory Data in Cloud Computing through MapReduce

    Science.gov (United States)

    Hussain, Shujaat; Bang, Jae Hun; Han, Manhyung; Ahmed, Muhammad Idris; Amin, Muhammad Bilal; Lee, Sungyoung; Nugent, Chris; McClean, Sally; Scotney, Bryan; Parr, Gerard

    2014-01-01

    Cloud computing has revolutionized healthcare in today's world, as it can be seamlessly integrated into mobile applications and sensor devices. The sensory data is then transferred from these devices to public and private clouds. In this paper, a hybrid and distributed environment is built which is capable of collecting data from a mobile phone application and storing it in the cloud. We developed an activity recognition application and transferred the data to the cloud for further processing. The big data technology Hadoop MapReduce is employed to analyze the data and create a timeline of the user's activities. These activities are visualized to find useful health analytics and trends. In this paper a big data solution is proposed to analyze the sensory data and give insights into user behavior and lifestyle trends. PMID:25420151
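
The timeline-building step maps naturally onto the MapReduce pattern the record describes. The sketch below simulates a mapper/shuffle/reducer pipeline locally in plain Python; it illustrates the pattern only, not the authors' Hadoop job, and the record schema is invented.

```python
from itertools import groupby
from operator import itemgetter

# Invented schema: (hour-of-day, recognized activity) pairs from the phone app.
records = [(9, "walking"), (9, "sitting"), (10, "walking"), (10, "walking")]

def mapper(record):
    hour, activity = record
    yield (hour, activity), 1          # emit one count per observation

def reducer(key, counts):
    yield key, sum(counts)             # aggregate counts per (hour, activity)

# Simulate the MapReduce shuffle/sort phase locally.
mapped = sorted(kv for rec in records for kv in mapper(rec))
timeline = [out
            for key, group in groupby(mapped, key=itemgetter(0))
            for out in reducer(key, (v for _, v in group))]
print(timeline)  # [((9, 'sitting'), 1), ((9, 'walking'), 1), ((10, 'walking'), 2)]
```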

  16. The MammoGrid Project Grids Architecture

    CERN Document Server

    McClatchey, Richard; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri; Buncic, Predrag; Manset, David

    2003-01-01

    The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and to investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid, clinicians will be able to harness massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and, ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distri...

  17. The German experience with grid-connected PV-systems

    International Nuclear Information System (INIS)

    Erge, T.; Hoffmann, V.U.; Kiefer, K.

    2001-01-01

    Grid-connected photovoltaics attracted increasing attention in Germany in recent years and are expected to receive a major boost at the beginning of the new millennium. Highlights like the German 100,000-Roofs-Solar-Programme, PV programmes at schools financed by utilities and governments (e.g. 'SONNEonline' by PreussenElektra, 'Sonne in der Schule' by BMWi and 'Sonne in der Schule' by Bayernwerk) and large centralised installations of MW size ('Neue Messe München' by Bayernwerk and 'Energiepark Mont-Cenis' by the state of Nordrhein-Westfalen, Stadtwerke Herne and the European Union) attest to the potential of grid-connected PV. Today in Germany a typical grid-connected PV installation of 1 kW nominal power produces an average annual energy yield of 700 kWh (depending on location and system components) and shows high operating availability. The price per kWh from PV installations is still significantly higher than the price of conventional energy, but new funding schemes and cost models (like the large increase of the feed-in tariff in Germany due to the Act on Granting Priority to Renewable Energy Sources in 2000) give optimism about the future. (Author)
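
The quoted specific yield implies a capacity factor that is easy to verify; as a back-of-envelope check from the figures above (700 kWh per year from 1 kW of nominal power):

```latex
\text{capacity factor} \approx \frac{700\ \text{kWh}}{1\ \text{kW} \times 8760\ \text{h}} \approx 0.08
```

That is, about 700 full-load hours per year, or a capacity factor of roughly 8%, which is plausible for central European insolation.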

  18. Women in engineering conference: capitalizing on today's challenges

    Energy Technology Data Exchange (ETDEWEB)

    Metz, S.S.; Martins, S.M. [eds.]

    1996-06-01

    This document contains the conference proceedings of the Women in Engineering Conference: Capitalizing on Today's Challenges, held June 1-4, 1996 in Denver, Colorado. Topics included engineering and science education, career paths, workplace issues, and affirmative action.

  19. An adaptive multi-agent-based approach to smart grids control and optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Marco [Florida Institute of Technology, Melbourne, FL (United States); Perez, Carlos; Granados, Adrian [Institute for Human and Machine Cognition, Ocala, FL (United States)

    2012-03-15

    In this paper, we describe a reinforcement learning-based approach to power management in smart grids. The scenarios we consider are smart grid settings where renewable power sources (e.g. Photovoltaic panels) have unpredictable variations in power output due, for example, to weather or cloud transient effects. Our approach builds on a multi-agent system (MAS)-based infrastructure for the monitoring and coordination of smart grid environments with renewable power sources and configurable energy storage devices (battery banks). Software agents are responsible for tracking and reporting power flow variations at different points in the grid, and to optimally coordinate the engagement of battery banks (i.e. charge/idle/discharge modes) to maintain energy requirements to end-users. Agents are able to share information and coordinate control actions through a parallel communications infrastructure, and are also capable of learning, from experience, how to improve their response strategies for different operational conditions. In this paper we describe our approach and address some of the challenges associated with the communications infrastructure for distributed coordination. We also present some preliminary results of our first simulations using the GridLAB-D simulation environment, created by the US Department of Energy (DoE) at Pacific Northwest National Laboratory (PNNL). (orig.)
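
To make the learning loop concrete, the sketch below shows tabular Q-learning over the charge/idle/discharge actions the record mentions. It is a generic Q-learning skeleton under invented state and reward definitions, not the authors' multi-agent implementation.

```python
import random

ACTIONS = ["charge", "idle", "discharge"]
q = {}  # (state, action) -> learned value

def choose_action(state, eps=0.1):
    """Epsilon-greedy policy over the battery-bank actions."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward reward + discounted best next value."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Hypothetical usage: state = sign of net load (surplus/deficit); reward =
# negative cost of unmet demand for the agent's section of the grid.
a = choose_action("surplus")
update("surplus", a, reward=1.0, next_state="deficit")
```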

  20. A review of metaheuristic scheduling techniques in cloud computing

    Directory of Open Access Journals (Sweden)

    Mala Kalra

    2015-11-01

    Full Text Available Cloud computing has become a buzzword in the area of high performance distributed computing, as it provides on-demand access to a shared pool of resources over the Internet in a self-service, dynamically scalable and metered manner. Cloud computing is still in its infancy, so to reap its full benefits, much research is required across a broad array of topics. One of the important research issues that needs attention for efficient performance is scheduling. The goal of scheduling is to map tasks to appropriate resources in a way that optimizes one or more objectives. Scheduling in cloud computing belongs to a category of problems known as NP-hard, due to the large solution space, and thus it takes a long time to find an optimal solution. There are no known algorithms that can produce an optimal solution within polynomial time for such problems. In a cloud environment, it is preferable to find a suboptimal solution in a short period of time. Metaheuristic-based techniques have been proven to achieve near-optimal solutions within reasonable time for such problems. In this paper, we provide an extensive survey and comparative analysis of various scheduling algorithms for cloud and grid environments based on three popular metaheuristic techniques: Ant Colony Optimization (ACO), Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), and two novel techniques: League Championship Algorithm (LCA) and the BAT algorithm.
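
As a flavor of how one of the surveyed metaheuristics applies to cloud scheduling, the sketch below runs a tiny genetic algorithm that assigns tasks to VMs to minimize makespan. It is a toy illustration of GA-based scheduling in general, not an algorithm from the survey; the runtimes and VM speeds are invented.

```python
import random

def makespan(assign, runtime, speed):
    """Completion time of the most loaded VM under a task->VM assignment."""
    load = [0.0] * len(speed)
    for task, vm in enumerate(assign):
        load[vm] += runtime[task] / speed[vm]
    return max(load)

def ga_schedule(runtime, speed, pop_size=30, gens=100, pmut=0.1):
    n, m = len(runtime), len(speed)
    pop = [[random.randrange(m) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: makespan(ind, runtime, speed))
        survivors = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:              # point mutation
                child[random.randrange(n)] = random.randrange(m)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: makespan(ind, runtime, speed))

best = ga_schedule(runtime=[4, 2, 7, 1, 3, 5], speed=[1.0, 2.0])
print(best, makespan(best, [4, 2, 7, 1, 3, 5], [1.0, 2.0]))
```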

  1. Study of Cloud Computing in HealthCare Industry

    OpenAIRE

    Reddy, G. Nikhita; Reddy, G. J. Ugander

    2014-01-01

    In today's world, technology has become a dominant and crucial component of every industry, including healthcare. Storing patient records electronically has increased the productivity of patient care and eased accessibility and usage. A recent technological innovation in health care is the invention of cloud-based technology. But many fears and security concerns regarding the remote storage of patient records remain for many in the health care industry. One n...

  2. Global spectroscopic survey of cloud thermodynamic phase at high spatial resolution, 2005-2015

    Science.gov (United States)

    Thompson, David R.; Kahn, Brian H.; Green, Robert O.; Chien, Steve A.; Middleton, Elizabeth M.; Tran, Daniel Q.

    2018-02-01

    The distribution of ice, liquid, and mixed phase clouds is important for Earth's planetary radiation budget, impacting cloud optical properties, evolution, and solar reflectivity. Most remote orbital thermodynamic phase measurements observe kilometer scales and are insensitive to mixed phases. This under-constrains important processes with outsize radiative forcing impact, such as spatial partitioning in mixed phase clouds. To date, the fine spatial structure of cloud phase has not been measured at global scales. Imaging spectroscopy of reflected solar energy from 1.4 to 1.8 µm can address this gap: it directly measures ice and water absorption, a robust indicator of cloud top thermodynamic phase, with spatial resolution of tens to hundreds of meters. We report the first such global high spatial resolution survey based on data from 2005 to 2015 acquired by the Hyperion imaging spectrometer onboard NASA's Earth Observing-1 (EO-1) spacecraft. Seasonal and latitudinal distributions corroborate observations by the Atmospheric Infrared Sounder (AIRS). For extratropical cloud systems, just 25 % of the variance observed at GCM grid scales of 100 km was related to irreducible measurement error, while 75 % was explained by spatial correlations possible at finer resolutions.

  3. The method of a joint intraday security check system based on cloud computing

    Science.gov (United States)

    Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng

    2017-01-01

    The intraday security check is the core application in the dispatching control system. The existing security check calculation uses only the dispatch center's local model and data as the functional margin. This paper introduces the design and implementation of an all-grid intraday joint security check system based on cloud computing. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment against the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.
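
A security check ultimately rests on a power flow solution; the sketch below solves a linear DC power flow for a three-bus toy network with NumPy. This is a generic textbook DC power flow step, not the authors' comparison-and-adjustment algorithm, and the network data are invented.

```python
import numpy as np

# Toy 3-bus network: (from_bus, to_bus, reactance in p.u.); bus 0 is the slack.
lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
p_inj = np.array([0.8, -0.8])  # net injections at buses 1 and 2 (p.u.)

n = 3
B = np.zeros((n, n))           # DC susceptance matrix
for i, j, x in lines:
    B[i, i] += 1 / x; B[j, j] += 1 / x
    B[i, j] -= 1 / x; B[j, i] -= 1 / x

theta = np.zeros(n)            # bus voltage angles; slack angle fixed at 0
theta[1:] = np.linalg.solve(B[1:, 1:], p_inj)

for i, j, x in lines:
    print("flow %d->%d: %+.3f p.u." % (i, j, (theta[i] - theta[j]) / x))
```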

  4. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    Science.gov (United States)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when not in use; it ensures compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server load thresholds, and therefore lack any predictive intelligence to make optimal decisions; B) they do not decide on the right size of resources and thereby do not produce a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs → the Hadoop/Big Data case; B. transactional applications → any application that processes continuous transactions (request/response). With reference to classical queueing models, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add or delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue and use its metrics to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to track server-level load (CPU/data) to characterize the size requirement? How do we learn that based on job type?
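
A minimal version of the queue-length-driven sizing decision discussed above can be written as a pure function; the target-per-server parameter and bounds are invented for illustration, and a real controller would add hysteresis and prediction on top.

```python
import math

def desired_servers(queue_length, per_server_target, min_n=1, max_n=64):
    """Size the pool so each server carries at most `per_server_target` queued jobs."""
    n = math.ceil(queue_length / per_server_target)
    return max(min_n, min(max_n, n))

# E.g. 120 queued jobs with a target of 10 jobs per server -> 12 servers.
print(desired_servers(120, 10))
```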

  5. Can Nuclear Installations and Research Centres Adopt Cloud Computing Platform?

    International Nuclear Information System (INIS)

    Pichan, A.; Lazarescu, M.; Soh, S.T.

    2015-01-01

    Cloud Computing is arguably one of the more recent and highly significant advances in information technology today. It produces transformative changes in the history of computing and presents many promising technological and economic opportunities. The pay-per-use model, the computing power, abundance of storage, skilled resources, fault tolerance and the economy of scale it offers provide significant advantages to enterprises adopting the cloud platform for their business needs. However, customers, especially those dealing with national security, high-end scientific research institutions, and critical national infrastructure service providers (like power and water), remain reluctant to move their business systems to the cloud. One of the main concerns is the question of information security in the cloud and the threat of the unknown. Cloud Service Providers (CSPs) indirectly encourage this perception by not letting their customers see what is behind their virtual curtain. Jurisdiction (information assets being stored elsewhere), data duplication, multi-tenancy, virtualisation and the decentralized nature of data processing are the default characteristics of cloud computing. Therefore the traditional approach of enforcing and implementing security controls remains a big challenge and largely depends upon the service provider. The other biggest challenge and open issue is the ability to perform digital forensic investigations in the cloud in case of security breaches. Traditional approaches to evidence collection and recovery are no longer practical, as they rely on unrestricted access to the relevant systems and user data, something that is not available in the cloud model. This continues to fuel high insecurity for cloud customers. In this paper we analyze the cyber security and digital forensics challenges, issues and opportunities for nuclear facilities adopting cloud computing. We also discuss the due diligence process and applicable industry best practices, which shall be

  6. Cloud Computing Strategy

    Science.gov (United States)

    2012-07-01

    regardless of access point or the device being used across the Global Information Grid (GIG). These data centers will host existing applications...state. It illustrates that the DoD Enterprise Cloud is an integrated environment on the GIG, consisting of DoD Components, commercial entities...Operations and Maintenance (O&M) costs by leveraging economies of scale, and automate monitoring and provisioning to reduce the human cost of service

  7. Remote Sensing of Cloud Top Heights Using the Research Scanning Polarimeter

    Science.gov (United States)

    Sinclair, Kenneth; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John; Wasilewski, Andrzej

    2015-01-01

    Clouds cover roughly two thirds of the globe and act as an important regulator of Earth's radiation budget. Of these, multilayered clouds occur about half of the time and are predominantly two-layered. Changes in cloud top height (CTH) have been predicted by models to have a globally averaged positive feedback; however, observed changes in CTH have shown uncertain results. Additional CTH observations are necessary to better understand and quantify the effect. Improved CTH observations will also allow for improved sub-grid parameterizations in large-scale models, and accurate CTH information is important when studying variations in freezing point and cloud microphysics. NASA's airborne Research Scanning Polarimeter (RSP) is able to measure cloud top height using a novel multi-angular contrast approach. The RSP scans along the aircraft track and obtains measurements at 152 viewing angles at any aircraft location. The approach presented here aggregates measurements from multiple scans to a single location at cloud altitude using a correlation function designed to identify the location-distinct features in each scan. During NASA's SEAC4RS air campaign, the RSP was mounted on the ER-2 aircraft along with the Cloud Physics Lidar (CPL), which made simultaneous measurements of CTH. The RSP's unique method of determining CTH is presented. The capabilities of using single channels and combinations of channels within the approach are investigated. A detailed comparison of RSP-retrieved CTHs with those of CPL reveals the accuracy of the approach. Results indicate a strong ability of the RSP to accurately identify cloud heights. Interestingly, the analysis reveals an ability of the approach to identify multiple cloud layers in a single scene and estimate the CTH of each layer. Capabilities and limitations of identifying single and multiple cloud layer heights are explored. Special focus is given to sources of error in the method, including optically thin clouds, physically thick clouds, multi

  8. Grid accounting service: state and future development

    International Nuclear Information System (INIS)

    Levshina, T; Sehgal, C; Bockelman, B; Weitzel, D; Guru, A

    2014-01-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and Holland Computing Center at University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machines provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  9. Grid accounting service: state and future development

    Science.gov (United States)

    Levshina, T.; Sehgal, C.; Bockelman, B.; Weitzel, D.; Guru, A.

    2014-06-01

    During the last decade, large-scale federated distributed infrastructures have been continually developed and expanded. One of the crucial components of a cyber-infrastructure is an accounting service that collects data related to resource utilization and identity of users using resources. The accounting service is important for verifying pledged resource allocation per particular groups and users, providing reports for funding agencies and resource providers, and understanding hardware provisioning requirements. It can also be used for end-to-end troubleshooting as well as billing purposes. In this work we describe Gratia, a federated accounting service jointly developed at Fermilab and Holland Computing Center at University of Nebraska-Lincoln. The Open Science Grid, Fermilab, HCC, and several other institutions have used Gratia in production for several years. The current development activities include expanding Virtual Machines provisioning information, XSEDE allocation usage accounting, and Campus Grids resource utilization. We also identify the direction of future work: improvement and expansion of Cloud accounting, persistent and elastic storage space allocation, and the incorporation of WAN and LAN network metrics.

  10. Cloud Infrastructure & Applications - CloudIA

    Science.gov (United States)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and for small- and medium-sized enterprises, the Hochschule Furtwangen University established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and mentions our early experiences in building a private cloud using an existing infrastructure.

  11. Evaluating Lightning-generated NOx (LNOx) Parameterization based on Cloud Top Height at Resolutions with Partially-resolved Convection for Upper Tropospheric Chemistry Studies

    Science.gov (United States)

    Wong, J.; Barth, M. C.; Noone, D. C.

    2012-12-01

    Lightning-generated nitrogen oxides (LNOx) are an important precursor to tropospheric ozone production. With a meteorological time-scale variability similar to that of the ozone chemical lifetime, LNOx can nonlinearly perturb the tropospheric ozone concentration. Coupled with upper-air circulation patterns, LNOx can accumulate in significant amounts in the upper troposphere along with other precursors, thus enhancing ozone production. While LNOx emission has been included and tuned extensively in global climate models, its inclusion in regional chemistry models is seldom tested. Here we present a study that evaluates the frequently used Price and Rind parameterization, based on cloud-top height, at resolutions that partially resolve deep convection, using the Weather Research and Forecasting model with Chemistry (WRF-Chem) over the contiguous United States. With minor modifications, the parameterization is shown to generate integrated flash counts close to those observed. However, the modeled frequency distribution of cloud-to-ground flashes does not represent storms with high flash rates well, bringing into question the applicability of the intra-cloud/ground partitioning (IC:CG) formulation of Price and Rind in some studies. Resolution dependency also requires attention when sub-grid cloud tops are used instead of the originally intended grid-averaged cloud top. [Figure: LNOx passive tracers being gathered by the monsoonal upper-tropospheric anticyclone.]
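
For reference, the Price and Rind cloud-top-height parameterization evaluated here is, as commonly cited, a simple power law in cloud-top height; the sketch below implements the frequently quoted coefficients, which should be checked against Price and Rind (1992) before reuse.

```python
def pr92_flash_rate(cloud_top_km, land=True):
    """Price & Rind (1992) total flash rate (flashes per minute) from
    convective cloud-top height in km, as the formula is commonly cited."""
    if land:
        return 3.44e-5 * cloud_top_km ** 4.9   # continental storms
    return 6.4e-4 * cloud_top_km ** 1.73       # marine storms

# A 12 km continental cloud top gives roughly 6.7 flashes per minute.
print(round(pr92_flash_rate(12.0), 1))
```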

  12. Smart Grid Risk Management

    Science.gov (United States)

    Abad Lopez, Carlos Adrian

    Current electricity infrastructure is being stressed from several directions -- high demand, unreliable supply, extreme weather conditions, and accidents, among others. Infrastructure planners have traditionally focused only on the cost of the system; today, resilience and sustainability are becoming increasingly important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large-scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances when compared with the alternative of increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices and reduce maintenance costs. We develop sparse optimization methods for detecting a small change in the demand for electricity of a customer in response to a price change or signal from the utility
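
The sparse detection idea can be illustrated with the soft-thresholding operator at the heart of L1 (lasso-style) estimators: deviations from a baseline forecast are kept only where they exceed a noise threshold. This is a simplified sketch of the general technique, not the dissertation's method; the baseline data and threshold are invented.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm: shrink small entries to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Hypothetical hourly data: baseline forecast vs. observed load (kW).
baseline = np.array([5.0, 5.2, 5.1, 5.3, 5.2, 5.1])
observed = np.array([5.1, 5.1, 4.2, 4.1, 5.3, 5.0])

deviation = soft_threshold(observed - baseline, lam=0.3)
print(deviation)  # nonzero only in hours 2-3, flagging a plausible DR load reduction
```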

  13. The effects of different footprint sizes and cloud algorithms on the top-of-atmosphere radiative flux calculation from the Clouds and Earth's Radiant Energy System (CERES) instrument on Suomi National Polar-orbiting Partnership (NPP)

    Directory of Open Access Journals (Sweden)

    W. Su

    2017-10-01

    Full Text Available Only one Clouds and Earth's Radiant Energy System (CERES) instrument is onboard the Suomi National Polar-orbiting Partnership (NPP) and it has been placed in cross-track mode since launch; it is thus not possible to construct a set of angular distribution models (ADMs) specific to CERES on NPP. Edition 4 Aqua ADMs are used for flux inversions for NPP CERES measurements. However, the footprint size of NPP CERES is greater than that of Aqua CERES, as the altitude of the NPP orbit is higher than that of the Aqua orbit. Furthermore, cloud retrievals from the Visible Infrared Imaging Radiometer Suite (VIIRS) and the Moderate Resolution Imaging Spectroradiometer (MODIS), which are the imagers sharing the spacecraft with NPP CERES and Aqua CERES, are also different. To quantify the flux uncertainties due to the footprint size difference between Aqua CERES and NPP CERES, and due to both the footprint size difference and the cloud property difference, a simulation is designed using the MODIS pixel-level data, which are convolved with the Aqua CERES and NPP CERES point spread functions (PSFs) into their respective footprints. The simulation is designed to isolate the effects of footprint size and cloud property differences on flux uncertainty from calibration and orbital differences between NPP CERES and Aqua CERES. The footprint size difference between Aqua CERES and NPP CERES introduces instantaneous flux uncertainties in monthly gridded NPP CERES measurements of less than 4.0 W m⁻² for SW (shortwave) and less than 1.0 W m⁻² for both daytime and nighttime LW (longwave). The global monthly mean instantaneous SW flux from simulated NPP CERES has a low bias of 0.4 W m⁻² when compared to simulated Aqua CERES, and the root-mean-square (RMS) error between them is 2.2 W m⁻²; the biases of daytime and nighttime LW flux are close to zero, with RMS errors of 0.8 and 0.2 W m⁻². These uncertainties are within the uncertainties of CERES ADMs

  14. Towards Monitoring-as-a-service for Scientific Computing Cloud applications using the ElasticSearch ecosystem

    CERN Document Server

    Bagnasco, S; Guarise, A; Lusso, S; Masera, M; Vallero, S

    2015-01-01

    The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure flexibility to choose a different monit...

  15. Micro-grid platform based on NODE.JS architecture, implemented in electrical network instrumentation

    Science.gov (United States)

    Duque, M.; Cando, E.; Aguinaga, A.; Llulluna, F.; Jara, N.; Moreno, T.

    2016-05-01

    In this document, I propose a theory about the impact of micro-grid-based systems in non-industrialized countries that aim to improve energy exploitation through alternative methods of clean and renewable energy generation, together with an app, built on the NodeJS, Django and IOJS technologies, to manage the behavior of the micro-grids. Micro-grids allow an optimal way to manage energy flow by injecting electricity directly into small urban cells of the electric network, in a low-cost and readily available way. Unlike conventional systems, micro-grids can communicate with each other to carry energy to places with higher demand at the right moments. This system does not require energy storage, so costs are lower than for conventional systems like fuel cells or solar panels; even though micro-grids are independent systems, they are not isolated. The impact of this analysis is an improvement of the electrical network without requiring more control than an intelligent network (SMART-GRID); this points to a roughly 20% increase in energy use in a given network, which suggests there are other sources of energy generation. For today's needs, we need to standardize methods that remain in place to support all future technologies, and the best options are Smart Grids and Micro-Grids.

  16. Implementation of grid-connected to/from off-grid transference for micro-grid inverters

    OpenAIRE

    Heredero Peris, Daniel; Chillón Antón, Cristian; Pages Gimenez, Marc; Gross, Gabriel Igor; Montesinos Miracle, Daniel

    2013-01-01

    This paper presents the transfer of a microgrid converter between grid-connected and off-grid operation when the converter is working in two different modes. In the first presented transfer method, the converter operates as a Current Source Inverter (CSI) when on-grid and as a Voltage Source Inverter (VSI) when off-grid. In the second transfer method, the converter is operated as a VSI both when on-grid and when off-grid. The two methods are implemented successfully in a real pla...

  17. Sahara Dust Cloud

    Science.gov (United States)

    2005-01-01

    [Figure: Dust Particles -- Quicktime movie, 7/15-7/24] A continent-sized cloud of hot air and dust originating from the Sahara Desert crossed the Atlantic Ocean and headed towards Florida and the Caribbean. A Saharan Air Layer, or SAL, forms when dry air and dust rise from Africa's west coast and ride the trade winds above the Atlantic Ocean. These dust clouds are not uncommon, especially during the months of July and August. They start when weather patterns called tropical waves pick up dust from the desert in North Africa, carry it a couple of miles into the atmosphere and drift westward. In a sequence of images created from data acquired by the Earth-orbiting Atmospheric Infrared Sounder from July 15 through July 24, we see the distribution of the cloud in the atmosphere as it swirls off of Africa and heads across the ocean to the west. Using the unique silicate spectral signatures of dust in the thermal infrared, AIRS can detect the presence of dust in the atmosphere day or night. This detection works best if there are no clouds present on top of the dust; when clouds are present, they can interfere with the signal, making it much harder to detect dust, as in the case of July 24, 2005. In the Quicktime movie, the scale at the bottom of the images shows +1 for dust definitely detected, ranging down to -1 for no dust detected. The plots are averaged over a number of AIRS observations falling within grid boxes, so it is possible to obtain fractional numbers. [Figure: Total Water Vapor in the Atmosphere Around the Dust Cloud] The dust cloud is contained within a dry adiabatic layer which originates over the Sahara Desert. This Saharan Air Layer (SAL) advances westward over the Atlantic Ocean, overriding the cool, moist air nearer the surface. This burst of very dry air is visible in the AIRS retrieved total water

  18. Sensitivities of simulated satellite views of clouds to subgrid-scale overlap and condensate heterogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Hillman, Benjamin R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Marchand, Roger T. [Univ. of Washington, Seattle, WA (United States); Ackerman, Thomas P. [Univ. of Washington, Seattle, WA (United States)

    2017-08-01

    Satellite simulators are often used to account for limitations in satellite retrievals of cloud properties in comparisons between models and satellite observations. The purpose of the simulator framework is to enable more robust evaluation of model cloud properties, so that differences between models and observations can more confidently be attributed to model errors. However, these simulators are subject to uncertainties themselves. A fundamental uncertainty exists in connecting the spatial scales at which cloud properties are retrieved with those at which clouds are simulated in global models. In this study, we create a series of sensitivity tests using 4 km global model output from the Multiscale Modeling Framework to evaluate the sensitivity of simulated satellite retrievals when applied to climate models whose grid spacing is many tens to hundreds of kilometers. In particular, we examine the impact of cloud and precipitation overlap and of condensate spatial variability. We find the simulated retrievals are sensitive to these assumptions. Specifically, using maximum-random overlap with homogeneous cloud and precipitation condensate, which is often used in global climate models, leads to large errors in MISR- and ISCCP-simulated cloud cover and in CloudSat-simulated radar reflectivity. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.
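
The maximum-random overlap assumption mentioned above has a compact closed form (often attributed to Geleyn and Hollingsworth) for the total cloud cover of a column; the sketch below implements that standard formulation, which may differ in detail from the exact treatment used in the paper's simulators.

```python
def total_cover_max_random(layer_cover):
    """Total cloud cover under maximum-random overlap.

    Adjacent cloudy layers overlap maximally; layers separated by clear air
    combine randomly. `layer_cover` lists fractions in [0, 1), top to bottom.
    """
    clear, prev = 1.0, 0.0
    for c in layer_cover:
        clear *= (1.0 - max(c, prev)) / (1.0 - prev)
        prev = c
    return 1.0 - clear

# Two adjacent 0.3 layers overlap maximally (combined 0.3); a separated 0.2
# layer then combines randomly: 1 - 0.7 * 0.8 = 0.44.
print(total_cover_max_random([0.3, 0.3, 0.0, 0.2]))
```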

  19. Cloud Tolerance of Remote-Sensing Technologies to Measure Land Surface Temperature

    Science.gov (United States)

    Holmes, Thomas R. H.; Hain, Christopher R.; Anderson, Martha C.; Crow, Wade T.

    2016-01-01

    Conventional methods to estimate land surface temperature (LST) from space rely on the thermal infrared (TIR) spectral window and are limited to cloud-free scenes. To also provide LST estimates during periods with clouds, a new method was developed to estimate LST based on passive microwave (MW) observations. The MW-LST product is informed by six polar-orbiting satellites to create a global record with up to eight observations per day for each 0.25° resolution grid box. For days with sufficient observations, a continuous diurnal temperature cycle (DTC) was fitted. The main characteristics of the DTC were scaled to match those of a geostationary TIR-LST product. This paper tests the cloud tolerance of the MW-LST product. In particular, we demonstrate its stable performance with respect to flux tower observation sites (four in Europe and nine in the United States) over a range of cloudiness conditions, up to heavily overcast skies. The results show that TIR-based LST has slightly better performance than MW-LST for clear-sky observations but suffers an increasing negative bias as cloud cover increases. This negative bias is caused by incomplete masking of cloud-covered areas within the TIR scene, which affects many applications of TIR-LST. In contrast, for MW-LST we find no direct impact of clouds on its accuracy and bias. MW-LST can therefore be used to improve TIR cloud screening. Moreover, the ability to provide LST estimates for cloud-covered surfaces can help expand current clear-sky-only satellite retrieval products to all-weather applications.
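
Fitting a diurnal temperature cycle to a handful of overpass times, as described above, can be done with a simple harmonic least-squares fit; the sketch below uses a single 24-hour harmonic and invented sample data, whereas the actual MW-LST DTC model is more elaborate.

```python
import numpy as np

# Invented observation times (h) and MW-LST samples (K) for one grid box/day.
t = np.array([1.5, 4.0, 7.5, 10.0, 13.5, 16.0, 19.5, 22.0])
lst = np.array([278.0, 276.5, 279.0, 285.0, 291.0, 289.5, 283.0, 280.0])

# Fit T(t) = mean + a*cos(wt) + b*sin(wt) with one 24-hour harmonic.
w = 2.0 * np.pi / 24.0
A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
(mean, a, b), *_ = np.linalg.lstsq(A, lst, rcond=None)

amplitude = np.hypot(a, b)                 # half peak-to-trough range
t_peak = (np.arctan2(b, a) / w) % 24.0     # local hour of maximum temperature
print(f"mean={mean:.1f} K, amplitude={amplitude:.1f} K, peak at {t_peak:.1f} h")
```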

  20. SALVAGE Report D2.1 Description of existing and extended smart grid component models for use in the intrusion detection system

    DEFF Research Database (Denmark)

    Kosek, Anna Magdalena; Heussen, Kai

    2015-01-01

    The purpose of the SALVAGE project is to develop better support for managing and designing a secure future smart grid. This approach includes cyber security technologies dedicated to power grid operation as well as support for the migration to future smart grid solutions, including the legacy...... of ICT that necessarily will be part of it. The objective is further to develop cyber security technology and methodology optimized for the particular needs and context of the power industry, something that is to a large extent lacking in general cyber security best practices and technologies today......

  1. Formation of Massive Molecular Cloud Cores by Cloud-cloud Collision

    OpenAIRE

    Inoue, Tsuyoshi; Fukui, Yasuo

    2013-01-01

    Recent observations of molecular clouds around rich massive star clusters including NGC3603, Westerlund 2, and M20 revealed that the formation of massive stars could be triggered by a cloud-cloud collision. By using three-dimensional, isothermal, magnetohydrodynamics simulations with the effect of self-gravity, we demonstrate that massive, gravitationally unstable, molecular cloud cores are formed behind the strong shock waves induced by the cloud-cloud collision. We find that the massive mol...

  2. Sonora: A New Generation Model Atmosphere Grid for Brown Dwarfs and Young Extrasolar Giant Planets

    Science.gov (United States)

    Marley, Mark S.; Saumon, Didier; Fortney, Jonathan J.; Morley, Caroline; Lupu, Roxana Elena; Freedman, Richard; Visscher, Channon

    2017-01-01

    Brown dwarf and giant planet atmospheric structure and composition have been studied both by forward models and, increasingly so, by retrieval methods. While indisputably informative, retrieval methods are of greatest value when judged in the context of grid model predictions. Meanwhile, retrieval models can test the assumptions inherent in the forward modeling procedure. In order to provide a new, systematic survey of brown dwarf atmospheric structure, emergent spectra, and evolution, we have constructed a new grid of brown dwarf model atmospheres. We ultimately aim for our grid to span substantial ranges of atmospheric metallicity, C/O ratio, cloud properties, atmospheric mixing, and other parameters. Spectra predicted by our modeling grid can be compared to both observations and retrieval results to aid in the interpretation and planning of future telescopic observations. We thus present Sonora, a new generation of substellar atmosphere models, appropriate for application to studies of L-, T-, and Y-type brown dwarfs and young extrasolar giant planets. The models describe the expected temperature-pressure profile and emergent spectra of an atmosphere in radiative-convective equilibrium for ranges of effective temperature and gravity encompassing 200 K ≤ Teff ≤ 2400 K and 2.5 ≤ log g ≤ 5.5. In our poster we briefly describe our modeling methodology, enumerate various updates since our group's previous models, and present our initial tranche of models for cloudless, solar metallicity, solar carbon-to-oxygen ratio, chemical equilibrium atmospheres. These models will be available online and will be updated as opacities and cloud modeling methods continue to improve.

  3. Uncertainty Estimate of Surface Irradiances Computed with MODIS-, CALIPSO-, and CloudSat-Derived Cloud and Aerosol Properties

    Science.gov (United States)

    Kato, Seiji; Loeb, Norman G.; Rutan, David A.; Rose, Fred G.; Sun-Mack, Sunny; Miller, Walter F.; Chen, Yan

    2012-07-01

    Differences of modeled surface upward and downward longwave and shortwave irradiances are calculated using modeled irradiance computed with active sensor-derived and passive sensor-derived cloud and aerosol properties. The irradiance differences are calculated for various temporal and spatial scales: monthly gridded, monthly zonal, monthly global, and annual global. Using the irradiance differences, the uncertainty of surface irradiances is estimated. The uncertainty (1σ) of the annual global surface downward longwave and shortwave is, respectively, 7 W m-2 (out of 345 W m-2) and 4 W m-2 (out of 192 W m-2), after known bias errors are removed. Similarly, the uncertainty of the annual global surface upward longwave and shortwave is, respectively, 3 W m-2 (out of 398 W m-2) and 3 W m-2 (out of 23 W m-2). The uncertainty applies to modeled irradiances computed using cloud properties derived from imagers on a sun-synchronous orbit that covers the globe every day (e.g., the Moderate Resolution Imaging Spectroradiometer) or to modeled irradiances computed for nadir-view-only active sensors on a sun-synchronous orbit, such as the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation and CloudSat. If we assume that longwave and shortwave uncertainties are independent of each other, but that up- and downward components are correlated with each other, the uncertainty in the global annual mean net surface irradiance is 12 W m-2. One-sigma uncertainty bounds of the satellite-based net surface irradiance are 106 W m-2 and 130 W m-2.
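
    Under the stated assumptions (longwave and shortwave errors independent; up- and downward components fully correlated), the quoted net-irradiance uncertainty follows by adding the correlated components linearly and combining the independent ones in quadrature. A worked check of the abstract's numbers, not part of the original text:

    $$\sigma_{\mathrm{net}} = \sqrt{(\sigma_{LW\downarrow}+\sigma_{LW\uparrow})^2 + (\sigma_{SW\downarrow}+\sigma_{SW\uparrow})^2} = \sqrt{(7+3)^2 + (4+3)^2} = \sqrt{149} \approx 12\ \mathrm{W\,m^{-2}}$$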

  4. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei; Wonka, Peter; Nan, Liangliang

    2016-01-01

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  5. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei

    2016-09-16

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  6. Advanced Cloud Forecasting for Solar Energy’s Impact on Grid Modernization

    Energy Technology Data Exchange (ETDEWEB)

    Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Nichols, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Solar energy production is subject to variability in the solar resource – clouds and aerosols will reduce the available solar irradiance and inhibit power production. The fact that solar irradiance can vary by large amounts at small timescales and in an unpredictable way means that power utilities are reluctant to assign to their solar plants a large portion of future energy demand – the needed power might be unavailable, forcing the utility to make costly adjustments to its daily portfolio. The availability and predictability of solar radiation therefore represent important research topics for increasing the power produced by renewable sources.

  7. Smart grid

    International Nuclear Information System (INIS)

    Choi, Dong Bae

    2001-11-01

    This book describes the smart grid from basics to recent trends. It is divided into ten chapters, which deal with: the smart grid as a green revolution in energy, with an introduction, history, fields, applications and the techniques needed for the smart grid; trends of the smart grid abroad, such as overseas smart grid pilot businesses and smart grid policy in the U.S.A.; domestic trends of the smart grid, including international smart grid standards and the strategy and road map; the smart power grid as infrastructure for smart business, with EMS development, SAS, SCADA, DAS and PQMS; the smart grid for smart consumers; smart renewables such as the Desertec project; convergence of IT with networks and PLC; applications for electric cars; smart electricity services for real-time electricity pricing; and the arrangement of the smart grid.

  8. GridCom, Grid Commander: graphical interface for Grid jobs and data management

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    2011-01-01

    GridCom is a software package that automates access to the resources (jobs and data) of the distributed Grid system. The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  9. Effectiveness and limitations of parameter tuning in reducing biases of top-of-atmosphere radiation and clouds in MIROC version 5

    Science.gov (United States)

    Ogura, Tomoo; Shiogama, Hideo; Watanabe, Masahiro; Yoshimori, Masakazu; Yokohata, Tokuta; Annan, James D.; Hargreaves, Julia C.; Ushigami, Naoto; Hirota, Kazuya; Someya, Yu; Kamae, Youichi; Tatebe, Hiroaki; Kimoto, Masahide

    2017-12-01

    This study discusses how much of the biases in top-of-atmosphere (TOA) radiation and clouds can be removed by parameter tuning in the present-day simulation of a climate model in the Coupled Model Inter-comparison Project phase 5 (CMIP5) generation. We used output of a perturbed parameter ensemble (PPE) experiment conducted with an atmosphere-ocean general circulation model (AOGCM) without flux adjustment. The Model for Interdisciplinary Research on Climate version 5 (MIROC5) was used for the PPE experiment. Output of the PPE was compared with satellite observation data to evaluate the model biases and the parametric uncertainty of the biases with respect to TOA radiation and clouds. The results indicate that removing or changing the sign of the biases by parameter tuning alone is difficult. In particular, the cooling bias of the shortwave cloud radiative effect at low latitudes could not be removed, neither in the zonal mean nor at each latitude-longitude grid point. The bias was related to the overestimation of both cloud amount and cloud optical thickness, which could not be removed by the parameter tuning either. However, they could be alleviated by tuning parameters such as the maximum cumulus updraft velocity at the cloud base. On the other hand, the bias of the shortwave cloud radiative effect in the Arctic was sensitive to parameter tuning. It could be removed by tuning such parameters as albedo of ice and snow both in the zonal mean and at each grid point. The obtained results illustrate the benefit of PPE experiments which provide useful information regarding effectiveness and limitations of parameter tuning. Implementing a shallow convection parameterization is suggested as a potential measure to alleviate the biases in radiation and clouds.

  10. Effectiveness and limitations of parameter tuning in reducing biases of top-of-atmosphere radiation and clouds in MIROC version 5

    Directory of Open Access Journals (Sweden)

    T. Ogura

    2017-12-01

    Full Text Available This study discusses how much of the biases in top-of-atmosphere (TOA) radiation and clouds can be removed by parameter tuning in the present-day simulation of a climate model in the Coupled Model Inter-comparison Project phase 5 (CMIP5) generation. We used output of a perturbed parameter ensemble (PPE) experiment conducted with an atmosphere–ocean general circulation model (AOGCM) without flux adjustment. The Model for Interdisciplinary Research on Climate version 5 (MIROC5) was used for the PPE experiment. Output of the PPE was compared with satellite observation data to evaluate the model biases and the parametric uncertainty of the biases with respect to TOA radiation and clouds. The results indicate that removing or changing the sign of the biases by parameter tuning alone is difficult. In particular, the cooling bias of the shortwave cloud radiative effect at low latitudes could not be removed, neither in the zonal mean nor at each latitude–longitude grid point. The bias was related to the overestimation of both cloud amount and cloud optical thickness, which could not be removed by the parameter tuning either. However, they could be alleviated by tuning parameters such as the maximum cumulus updraft velocity at the cloud base. On the other hand, the bias of the shortwave cloud radiative effect in the Arctic was sensitive to parameter tuning. It could be removed by tuning such parameters as albedo of ice and snow both in the zonal mean and at each grid point. The obtained results illustrate the benefit of PPE experiments which provide useful information regarding effectiveness and limitations of parameter tuning. Implementing a shallow convection parameterization is suggested as a potential measure to alleviate the biases in radiation and clouds.

  11. Collaborative Research: Cloudiness transitions within shallow marine clouds near the Azores

    Energy Technology Data Exchange (ETDEWEB)

    Mechem, David B. [Univ. of Kansas, Lawrence, KS (United States). Atmospheric Science Program. Dept. of Geography and Atmospheric Science; de Szoeke, Simon P. [Oregon State Univ., Corvallis, OR (United States). College of Earth, Ocean, and Atmospheric Sciences; Yuter, Sandra E. [North Carolina State Univ., Raleigh, NC (United States). Dept. of Marine, Earth, and Atmospheric Sciences

    2017-01-15

    Marine stratocumulus clouds are low, persistent, liquid phase clouds that cover large areas and play a significant role in moderating the climate by reflecting large quantities of incoming solar radiation. The deficiencies in simulating these clouds in global climate models are widely recognized. Much of the uncertainty arises from sub-grid scale variability in the cloud albedo that is not accurately parameterized in climate models. The Clouds, Aerosol and Precipitation in the Marine Boundary Layer (CAP–MBL) observational campaign and the ongoing ARM site measurements on Graciosa Island in the Azores aim to sample the Northeast Atlantic low cloud regime. These data represent the longest continuous research-quality cloud radar/lidar/radiometer/aerosol data set of open-ocean shallow marine clouds in existence. Data coverage from CAP–MBL and the series of cruises to the southeast Pacific culminating in VOCALS will both be of sufficient length to contrast the two low cloud regimes and explore the joint variability of clouds in response to several environmental factors implicated in cloudiness transitions. Our research seeks to better understand cloud system processes in an underexplored but climatologically important maritime region. Our primary goal is an improved physical understanding of low marine clouds on temporal scales of hours to days. It is well understood that aerosols, synoptic-scale forcing, surface fluxes, mesoscale dynamics, and cloud microphysics all play a role in cloudiness transitions. However, the relative importance of each mechanism as a function of different environmental conditions is unknown. To better understand cloud forcing and response, we are documenting the joint variability of observed environmental factors and associated cloud characteristics. In order to narrow the realm of likely parameter ranges, we assess the relative importance of parameter conditions based primarily on two criteria: how often the condition occurs (frequency

  12. An Offload NIC for NASA, NLR, and Grid Computing

    Science.gov (United States)

    Awrach, James

    2013-01-01

    This work addresses distributed data management and dynamically configurable high-speed access to data distributed and shared over wide-area high-speed network environments. An offload engine NIC (network interface card) is proposed that scales in n×10-Gbps increments through 100-Gbps full duplex. The Globus de facto standard was used in projects requiring secure, robust, high-speed bulk data transport. Novel extension mechanisms were derived that will combine these technologies for use by GridFTP, bandwidth management resources, and host CPU (central processing unit) acceleration. The result will be wire-rate encrypted Globus grid data transactions through offload for splintering, encryption, and compression. As the need for greater network bandwidth increases, there is an inherent need for faster CPUs. The best way to accelerate CPUs is through a network acceleration engine. Grid computing data transfers for the Globus tool set did not have wire-rate encryption or compression. Existing technology cannot keep pace with the greater bandwidths of backplane and network connections. Present offload engines with ports to Ethernet reach 32 to 40 Gbps full duplex at best. The best of the ultra-high-speed offload engines use expensive ASICs (application specific integrated circuits) or NPUs (network processing units). The present state of the art also includes bonding and the use of multiple NICs, which are also in the planning stages for future portability to ASICs and software to accommodate data rates at 100 Gbps. The remaining industry solutions are for carrier-grade equipment manufacturers, with costly line cards having multiples of 10-Gbps ports, or 100-Gbps ports such as CFP modules that interface to costly ASICs and related circuitry. All of the existing solutions vary in configuration based on the requirements of the host, motherboard, or carrier-grade equipment. The purpose of the innovation is to eliminate data bottlenecks within cluster, grid, and cloud computing systems.

  13. Improvement of Systematic Bias of mean state and the intraseasonal variability of CFSv2 through superparameterization and revised cloud-convection-radiation parameterization

    Science.gov (United States)

    Mukhopadhyay, P.; Phani Murali Krishna, R.; Goswami, Bidyut B.; Abhik, S.; Ganai, Malay; Mahakur, M.; Khairoutdinov, Marat; Dudhia, Jimmy

    2016-05-01

    In spite of significant improvements in numerical model physics, resolution and numerics, general circulation models (GCMs) find it difficult to simulate realistic seasonal and intraseasonal variabilities over the global tropics and particularly over the Indian summer monsoon (ISM) region. The bias is mainly attributed to the improper representation of physical processes. Among all the processes, the cloud and convective processes appear to play a major role in modulating model bias. In recent times, the NCEP CFSv2 model has been adopted under the Monsoon Mission for dynamical monsoon forecasts over the Indian region. Analyses of CFSv2 climate free runs at two resolutions, T126 and T382, show largely similar biases in simulating seasonal rainfall, in capturing the intraseasonal variability at different scales over the global tropics and also in capturing tropical waves. Thus, the biases of CFSv2 indicate a deficiency in the model's parameterization of cloud and convective processes. Against this background, and to improve the model fidelity, two approaches have been adopted. Firstly, in the superparameterization, 32 cloud resolving models, each with a horizontal resolution of 4 km, are embedded in each GCM (CFSv2) grid and the conventional sub-grid scale convective parameterization is deactivated. This is done to demonstrate the role of resolving cloud processes which otherwise remain unresolved. The superparameterized CFSv2 (SP-CFS) is developed on a coarser version at T62 resolution. The model is integrated for six and a half years in climate free run mode, initialised from 16 May 2008. The analyses reveal that SP-CFS simulates a significantly improved mean state as compared to the default CFS. The systematic biases of too little rainfall over the Indian land mass and a colder troposphere are substantially improved. Most importantly, the convectively coupled equatorial waves and the eastward propagating MJO are found to be simulated with more fidelity in SP-CFS. The reason of

  14. STRUCTURE LINE DETECTION FROM LIDAR POINT CLOUDS USING TOPOLOGICAL ELEVATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Y. Lo

    2012-07-01

    Full Text Available Airborne LIDAR point clouds, which have considerable points on object surfaces, are essential to building modeling. In the last two decades, studies have developed different approaches to identify structure lines using two main approaches, data-driven and model-driven. These studies have shown that automatic modeling processes depend on certain considerations, such as used thresholds, initial values, designed formulas, and predefined cues. Following the development of laser scanning systems, scanning rates have increased and can provide point clouds with higher point density. Therefore, this study proposes using topological elevation analysis (TEA) to detect structure lines instead of threshold-dependent concepts and predefined constraints. This analysis contains two parts: data pre-processing and structure line detection. To preserve the original elevation information, a pseudo-grid for generating digital surface models is produced during the first part. The highest point in each grid is set as the elevation value, and its original three-dimensional position is preserved. In the second part, using TEA, the structure lines are identified based on the topology of local elevation changes in two directions. Because structure lines can contain certain geometric properties, their locations have small reliefs in the radial direction and steep elevation changes in the circular direction. Following the proposed approach, TEA can be used to determine 3D line information without selecting thresholds. For validation, the TEA results are compared with those of the region growing approach. The results indicate that the proposed method can produce structure lines using dense point clouds.

  15. Low Voltage Ride-Through Capability of a Single-Stage Single-Phase Photovoltaic System Connected to the Low-Voltage Grid

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2013-01-01

    The progressive growth of single-phase photovoltaic (PV) systems is making the Distribution System Operators (DSOs) update or revise the existing grid codes in order to guarantee the availability, quality and reliability of the electrical system. It is expected that the future PV systems connected...... to the low-voltage grid will be more active, with functionalities of low voltage ride-through (LVRT) and grid support capability, which is not the case today. In this paper, the operation principle is demonstrated for a single-phase grid-connected PV system in low voltage ride-through operation in order...... to map future challenges. The system is verified by simulations and experiments. Test results show that the proposed power control method is effective and that single-phase PV inverters connected to low-voltage networks are ready to provide grid support and voltage-fault ride-through capability......

  16. THE MAGELLANIC QUASARS SURVEY. III. SPECTROSCOPIC CONFIRMATION OF 758 ACTIVE GALACTIC NUCLEI BEHIND THE MAGELLANIC CLOUDS

    International Nuclear Information System (INIS)

    Kozłowski, Szymon; Udalski, Andrzej; Szymański, M. K.; Kubiak, M.; Pietrzyński, G.; Soszyński, I.; Wyrzykowski, Ł.; Ulaczyk, K.; Poleski, R.; Pietrukowicz, P.; Skowron, J.; Onken, Christopher A.; Kochanek, Christopher S.; Meixner, M.; Bonanos, A. Z.

    2013-01-01

    The Magellanic Quasars Survey (MQS) has now increased the number of quasars known behind the Magellanic Clouds by almost an order of magnitude. All survey fields in the Large Magellanic Cloud (LMC) and 70% of those in the Small Magellanic Cloud (SMC) have been observed. The targets were selected from the third phase of the Optical Gravitational Lensing Experiment (OGLE-III) based on their optical variability, mid-IR, and/or X-ray properties. We spectroscopically confirmed 758 quasars (565 in the LMC and 193 in the SMC) behind the clouds, of which 94% (527 in the LMC and 186 in the SMC) are newly identified. The MQS quasars have long-term (12 yr and growing for OGLE), high-cadence light curves, enabling unprecedented variability studies of quasars. The MQS quasars also provide a dense reference grid for measuring both the internal and bulk proper motions of the clouds, and 50 quasars are bright enough (I ∼< 18 mag) for absorption studies of the interstellar/intergalactic medium of the clouds

  17. HammerCloud: A Stress Testing System for Distributed Analysis

    International Nuclear Information System (INIS)

    Ster, Daniel C van der; García, Mario Úbeda; Paladin, Massimo; Elmsheuser, Johannes

    2011-01-01

    Distributed analysis of LHC data is an I/O-intensive activity which places large demands on the internal network, storage, and local disks at remote computing facilities. Commissioning and maintaining a site to provide an efficient distributed analysis service is therefore a challenge which can be aided by tools that help evaluate a variety of infrastructure designs and configurations. HammerCloud is one such tool; it is a stress testing service which is used by central operations teams, regional coordinators, and local site admins to (a) submit an arbitrary number of analysis jobs to a number of sites, (b) maintain at a steady state a predefined number of jobs running at the sites under test, (c) produce web-based reports summarizing the efficiency and performance of the sites under test, and (d) present a web interface for historical test results to both evaluate progress and compare sites. HammerCloud was built around the distributed analysis framework Ganga, exploiting its API for grid job management. HammerCloud has been employed by the ATLAS experiment for continuous testing of many sites worldwide, and also during large scale computing challenges such as STEP'09 and UAT'09, where the scale of the tests exceeded 10,000 concurrently running jobs and 1,000,000 total jobs over multi-day periods. In addition, HammerCloud is being adopted by the CMS experiment; the plugin structure of HammerCloud allows the execution of CMS jobs using their official tool (CRAB).

  18. Relationship between cloud radiative forcing, cloud fraction and cloud albedo, and new surface-based approach for determining cloud albedo

    OpenAIRE

    Y. Liu; W. Wu; M. P. Jensen; T. Toto

    2011-01-01

    This paper focuses on three interconnected topics: (1) quantitative relationship between surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo; (2) surfaced-based approach for measuring cloud albedo; (3) multiscale (diurnal, annual and inter-annual) variations and covariations of surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo. An analytical expression is first derived to quantify the relationship between cloud radiative forcing, cloud fractio...

  19. Mucura: your personal file repository in the cloud

    Science.gov (United States)

    Hernandez, F.; Wu, W.; Du, R.; Li, S.; Kan, W.

    2012-12-01

    Large-scale distributed data processing platforms for scientific research, such as the LHC computing grid, include services for transporting, storing and processing massive amounts of data. They often address the data processing needs of a virtual organization but lack the convenience and flexibility required by individual users for their personal data storage needs. This paper presents the motivation, design and implementation status of Mucura, an open source software system for operating multi-tenant cloud-based storage services. The system is specifically intended for building file repositories for individual users, such as those of the scientific research communities, who use distributed computing infrastructures for processing and sharing data. It exposes an Amazon S3-compatible interface, supports both interactive and batch usage and is compatible with the X.509 certificate-based authentication mechanism used by grid infrastructures. The system builds on top of distributed persistent key-value stores for storing users' data.
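
    Because the service exposes an S3-compatible interface, a standard S3 client should suffice for basic use. A minimal sketch (the endpoint URL, credentials, bucket and key names are hypothetical):

```python
import boto3

# Hypothetical S3-compatible endpoint of a Mucura deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://mucura.example.org",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("analysis.root", "my-bucket", "results/analysis.root")      # store a file
s3.download_file("my-bucket", "results/analysis.root", "local_copy.root")  # fetch it back
```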

  20. Mucura: your personal file repository in the cloud

    International Nuclear Information System (INIS)

    Hernandez, F; Wu, W; Du, R; Li, S; Kan, W

    2012-01-01

    Large-scale distributed data processing platforms for scientific research, such as the LHC computing grid, include services for transporting, storing and processing massive amounts of data. They often address the data processing needs of a virtual organization but lack the convenience and flexibility required by individual users for their personal data storage needs. This paper presents the motivation, design and implementation status of Mucura, an open source software system for operating multi-tenant cloud-based storage services. The system is specifically intended for building file repositories for individual users, such as those of the scientific research communities, who use distributed computing infrastructures for processing and sharing data. It exposes an Amazon S3-compatible interface, supports both interactive and batch usage and is compatible with the X.509 certificate-based authentication mechanism used by grid infrastructures. The system builds on top of distributed persistent key-value stores for storing users' data.

  1. Above-Campus Services: Shaping the Promise of Cloud Computing for Higher Education

    Science.gov (United States)

    Wheeler, Brad; Waggener, Shelton

    2009-01-01

    The concept of today's cloud computing may date back to 1961, when John McCarthy, retired Stanford professor and Turing Award winner, delivered a speech at MIT's Centennial. In that speech, he predicted that in the future, computing would become a "public utility." Yet for colleges and universities, the recent growth of pervasive, very high speed…

  2. Parallel Processing of Big Point Clouds Using Z-Order Partitioning

    Science.gov (United States)

    Alis, C.; Boehm, J.; Liu, K.

    2016-06-01

    As laser scanning technology improves and costs are coming down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not only limited to point cloud data. Voluminous amounts of high-dimensionality and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit on the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representation of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest neighbour algorithm
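
    A minimal sketch of the bit-interleaving described above (function and variable names are illustrative, not the paper's Spark implementation):

```python
def morton2d(x: int, y: int, bits: int) -> int:
    """Z-order (Morton) code: interleave the bits of grid coordinates (x, y)."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return code

# The example from the abstract: x = 1 = 01₂, y = 3 = 11₂ -> 1011₂ = 11
assert morton2d(1, 3, bits=2) == 11
```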

  3. A subgrid parameterization scheme for precipitation

    Directory of Open Access Journals (Sweden)

    S. Turner

    2012-04-01

    Full Text Available With increasing computing power, the horizontal resolution of numerical weather prediction (NWP) models is improving and today reaches 1 to 5 km. Nevertheless, clouds and precipitation formation are still subgrid scale processes for most cloud types, such as cumulus and stratocumulus. Subgrid scale parameterizations for water vapor condensation have been in use for many years and are based on a prescribed probability density function (PDF) of relative humidity spatial variability within the model grid box, thus providing a diagnosis of the cloud fraction. A similar scheme is developed and tested here. It is based on a prescribed PDF of cloud water variability and a threshold value of liquid water content for droplet collection to derive a rain fraction within the model grid. Precipitation of rainwater raises additional concerns relative to the overlap of cloud and rain fractions, however. The scheme is developed following an analysis of data collected during field campaigns in stratocumulus (DYCOMS-II) and fair weather cumulus (RICO) and tested in a 1-D framework against large eddy simulations of these observed cases. The new parameterization is then implemented in a 3-D NWP model with a horizontal resolution of 2.5 km to simulate real cases of precipitating cloud systems over France.
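
    The rain-fraction diagnosis described above amounts to evaluating the probability that sub-grid cloud water exceeds the collection threshold. A minimal sketch assuming a lognormal PDF of cloud water (the PDF choice and parameter names are illustrative, not the paper's exact scheme):

```python
import numpy as np

def rain_fraction(qc_mean, qc_std, qc_crit, n=100_000, seed=0):
    """Fraction of the grid box where sub-grid cloud water exceeds the
    droplet-collection threshold qc_crit, for an assumed lognormal PDF."""
    rng = np.random.default_rng(seed)
    sigma2 = np.log(1.0 + (qc_std / qc_mean) ** 2)  # match mean and variance
    mu = np.log(qc_mean) - 0.5 * sigma2
    qc = rng.lognormal(mu, np.sqrt(sigma2), n)
    return float(np.mean(qc > qc_crit))

# e.g. mean cloud water 0.3 g/kg, spread 0.2 g/kg, collection threshold 0.5 g/kg
print(rain_fraction(0.3, 0.2, 0.5))
```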

  4. Philosophical Approach to Engineering Education Under the Introduction of the Smart Grid Concept in Russia

    Directory of Open Access Journals (Sweden)

    Makienko Marina A.

    2015-01-01

    Full Text Available The development of the power industry in the world today is driven by two main trends: the search for and use of renewable energy sources, and energy efficiency, both of which require the development of smart grids. This paper brings up the issue of staff training for professional development of Smart Grid technology and for the use of its elements by customers in households. The problem of consumer readiness for the use of smart meters was studied. It was revealed that a considerable part of the respondents was not familiar with the definition of Smart Grid. That requires the development of communication skills and social activity in energy engineering students. The reasons mentioned make the following elements of engineering education relevant: social responsibility, stress resistance, and the ability to forecast the future.

  5. Enabling Campus Grids with Open Science Grid Technology

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Pordes, Ruth; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  6. Screening of biosurfactants from cloud microorganisms

    Science.gov (United States)

    Sancelme, Martine; Canet, Isabelle; Traikia, Mounir; Uhliarikova, Yveta; Capek, Peter; Matulova, Maria; Delort, Anne-Marie; Amato, Pierre

    2015-04-01

    The formation of cloud droplets from aerosol particles in the atmosphere is still not well understood and is a main source of uncertainty in the climate budget today. One of the principal parameters in these processes is the surface tension of atmospheric particles, which can be strongly affected by trace compounds called surfactants. Within a project devoted to bringing information on atmospheric surfactants and their effects on cloud droplet formation, we focused on surfactants produced by microorganisms present in atmospheric waters. From our unique collection of microorganisms, isolated from cloud water collected at the Puy-de-Dôme (France) [1], we undertook a screening of this bank for biosurfactant producers. After extraction of the supernatants of the pure cultures, the surface tension of crude extracts was determined by the hanging drop technique. Results showed that a wide variety of microorganisms are able to produce biosurfactants, some of them exhibiting strong surfactant properties, as the resulting surface tension decreases to values less than 35 mN m-1. Preliminary analytical characterization of biosurfactants, obtained after isolation from overproducing cultures of Rhodococcus sp. and Pseudomonas sp., allowed us to identify them as belonging to two main classes, namely glycolipids and glycopeptides. [1] Vaïtilingom, M.; Attard, E.; Gaiani, N.; Sancelme, M.; Deguillaume, L.; Flossmann, A. I.; Amato, P.; Delort, A. M. Long-term features of cloud microbiology at the puy de Dôme (France). Atmos. Environ. 2012, 56, 88-100. Acknowledgements: This work is supported by the French-USA ANR SONATA program and the French-Slovakia programs Stefanik and CNRS exchange.

  7. Evaluation of cloud resolving model simulations of midlatitude cirrus with ARM and A-Train observations

    Science.gov (United States)

    Muehlbauer, A. D.; Ackerman, T. P.; Lawson, P.; Xie, S.; Zhang, Y.

    2015-12-01

    This paper evaluates cloud resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration (NASA) A-Train satellites. Vertical profiles of temperature, relative humidity and wind speed are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speed and relative humidity, which can be mitigated by nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in GCMs and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. However, there still is considerable

  8. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    Science.gov (United States)

    Swales, Dustin J.; Pincus, Robert; Bodas-Salcedo, Alejandro

    2018-01-01

    The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP) gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  9. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  10. OCRA radiometric cloud fractions for GOME-2 on MetOp-A/B

    Science.gov (United States)

    Lutz, Ronny; Loyola, Diego; Gimeno García, Sebastián; Romahn, Fabian

    2016-05-01

    This paper describes an approach for cloud parameter retrieval (radiometric cloud-fraction estimation) using the polarization measurements of the Global Ozone Monitoring Experiment-2 (GOME-2) onboard the MetOp-A/B satellites. The core component of the Optical Cloud Recognition Algorithm (OCRA) is the calculation of monthly cloud-free reflectances for a global grid (resolution of 0.2° in longitude and 0.2° in latitude) to derive radiometric cloud fractions. These cloud fractions will serve as a priori information for the retrieval of cloud-top height (CTH), cloud-top pressure (CTP), cloud-top albedo (CTA) and cloud optical thickness (COT) with the Retrieval Of Cloud Information using Neural Networks (ROCINN) algorithm. This approach is already being implemented operationally for the GOME/ERS-2 and SCIAMACHY/ENVISAT sensors and here we present version 3.0 of the OCRA algorithm applied to the GOME-2 sensors. Based on more than five years of GOME-2A data (April 2008 to June 2013), reflectances are calculated for ≈ 35 000 orbits. For each measurement a degradation correction as well as a viewing-angle-dependent and latitude-dependent correction is applied. In addition, an empirical correction scheme is introduced in order to remove the effect of oceanic sun glint. A comparison of the GOME-2A/B OCRA cloud fractions with colocated AVHRR (Advanced Very High Resolution Radiometer) geometrical cloud fractions shows a generally good agreement with a mean difference of -0.15 ± 0.20. From an operational point of view, an advantage of the OCRA algorithm is its very fast computational time and its straightforward transferability to similar sensors like OMI (Ozone Monitoring Instrument), TROPOMI (TROPOspheric Monitoring Instrument) on Sentinel 5 Precursor, as well as Sentinel 4 and Sentinel 5. In conclusion, it is shown that a robust, accurate and fast radiometric cloud-fraction estimation for GOME-2 can be achieved with OCRA using polarization measurement devices (PMDs).

  11. Verifying Operational and Developmental Air Force Weather Cloud Analysis and Forecast Products Using Lidar Data from Department of Energy Atmospheric Radiation Measurement (ARM) Sites

    Science.gov (United States)

    Hildebrand, E. P.

    2017-12-01

    Air Force Weather has developed various cloud analysis and forecast products designed to support global Department of Defense (DoD) missions. A World-Wide Merged Cloud Analysis (WWMCA) and short term Advected Cloud (ADVCLD) forecast is generated hourly using data from 16 geostationary and polar-orbiting satellites. Additionally, WWMCA and Numerical Weather Prediction (NWP) data are used in a statistical long-term (out to five days) cloud forecast model known as the Diagnostic Cloud Forecast (DCF). The WWMCA and ADVCLD are generated on the same polar stereographic 24 km grid for each hemisphere, whereas the DCF is generated on the same grid as its parent NWP model. When verifying the cloud forecast models, the goal is to understand not only the ability to detect cloud, but also the ability to assign it to the correct vertical layer. ADVCLD and DCF forecasts traditionally have been verified using WWMCA data as truth, but this might over-inflate the performance of those models because WWMCA also is a primary input dataset for those models. Because of this, in recent years, a WWMCA Reanalysis product has been developed, but this too is not a fully independent dataset. This year, work has been done to incorporate data from external, independent sources to verify not only the cloud forecast products, but the WWMCA data itself. One such dataset that has been useful for examining the 3-D performance of the cloud analysis and forecast models is Atmospheric Radiation Measurement (ARM) data from various sites around the globe. This presentation will focus on the use of the Department of Energy (DoE) ARM data to verify Air Force Weather cloud analysis and forecast products. Results will be presented to show relative strengths and weaknesses of the analyses and forecasts.

  12. Enabling campus grids with open science grid technology

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Derek [Nebraska U.; Bockelman, Brian [Nebraska U.; Swanson, David [Nebraska U.; Fraser, Dan [Argonne; Pordes, Ruth [Fermilab

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  13. Cloud archiving and data mining of High-Resolution Rapid Refresh forecast model output

    Science.gov (United States)

    Blaylock, Brian K.; Horel, John D.; Liston, Samuel T.

    2017-12-01

    Weather-related research often requires synthesizing vast amounts of data that need archival solutions that are both economical and viable during and past the lifetime of the project. Public cloud computing services (e.g., from Amazon, Microsoft, or Google) or private clouds managed by research institutions are providing object data storage systems potentially appropriate for long-term archives of such large geophysical data sets. We illustrate the use of a private cloud object store developed by the Center for High Performance Computing (CHPC) at the University of Utah. Since early 2015, we have been archiving thousands of two-dimensional gridded fields (each one containing over 1.9 million values over the contiguous United States) from the High-Resolution Rapid Refresh (HRRR) data assimilation and forecast modeling system. The archive is being used for retrospective analyses of meteorological conditions during high-impact weather events, assessing the accuracy of the HRRR forecasts, and providing initial and boundary conditions for research simulations. The archive is accessible interactively and through automated download procedures for researchers at other institutions that can be tailored by the user to extract individual two-dimensional grids from within the highly compressed files. Characteristics of the CHPC object storage system are summarized relative to network file system storage or tape storage solutions. The CHPC storage system is proving to be a scalable, reliable, extensible, affordable, and usable archive solution for our research.
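
    Automated retrieval of a single two-dimensional grid from such an archive is often done with HTTP byte-range requests against the compressed GRIB2 files. A minimal sketch (the URL and byte offsets are hypothetical; in practice the offsets would come from a companion .idx index file):

```python
import requests

# Hypothetical archive URL for one HRRR surface forecast file.
url = "https://archive.example.edu/hrrr/20170101/hrrr.t00z.wrfsfcf00.grib2"
start, end = 123456, 234567  # byte range of one field, read from the .idx file

resp = requests.get(url, headers={"Range": f"bytes={start}-{end}"})
resp.raise_for_status()
with open("single_field.grib2", "wb") as f:
    f.write(resp.content)  # a standalone GRIB2 message, decodable with e.g. pygrib
```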

  14. Grid scale energy storage in salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Crotogino, Fritz; Donadei, Sabine [KBB Underground Technologies GmbH, Hannover (Germany)

    2009-07-01

    Fossil energy sources require some 20% of the annual consumption to be stored to secure emergency cover, peak shaving, seasonal balancing, etc. Today the electric power industry benefits from the extremely high energy density of fossil fuels. This is one important reason why the German utilities are able to provide highly reliable grid operation with an electric power storage capacity at their pumped hydro power stations of less than 1 hour (40 GWh) relative to the total load in the grid - i.e. only 0.06% relative to natural gas. Along with the changeover to renewable wind-based electricity production, this ''outsourcing'' of storage services to fossil fuels will decline. One important way out will be grid-scale energy storage. The present discussion on balancing short-term wind and solar power fluctuations focuses primarily on the installation of Compressed Air Energy Storage (CAES) plants in addition to existing pumped hydro plants. Because of their small energy density, these storage options are, however, generally not suitable for balancing longer-term fluctuations in the case of larger amounts of excess wind power, or even seasonal fluctuations. Underground hydrogen storages, however, provide a much higher energy density because of the chemical energy bond - standard practice for many years. The first part of the article describes the present status and performance of grid-scale energy storages in geological formations, mainly salt caverns. It is followed by a compilation of generally suitable locations in Europe and particularly Germany. The second part deals with first results of preliminary investigations into the possibilities and limits of offshore CAES power stations. (orig.)
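
    As a rough consistency check of the "less than 1 hour" figure (assuming an average German grid load of roughly 70 GW, a number not given in the abstract):

    $$\frac{40\ \mathrm{GWh}}{\approx 70\ \mathrm{GW}} \approx 0.6\ \mathrm{h} < 1\ \mathrm{h}$$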

  15. Ten Years of Cloud Properties from MODIS: Global Statistics and Use in Climate Model Evaluation

    Science.gov (United States)

    Platnick, Steven E.

    2011-01-01

    The NASA Moderate Resolution Imaging Spectroradiometer (MODIS), launched onboard the Terra and Aqua spacecraft, began Earth observations on February 24, 2000 and June 24, 2002, respectively. Among the algorithms developed and applied to this sensor, a suite of cloud products includes cloud masking/detection, cloud-top properties (temperature, pressure), and optical properties (optical thickness, effective particle radius, water path, and thermodynamic phase). All cloud algorithms underwent numerous changes and enhancements for the latest Collection 5 production version; this process continues with the current Collection 6 development. We will show example MODIS Collection 5 cloud climatologies derived from global spatial and temporal aggregations provided in the archived gridded Level-3 MODIS atmosphere team product (product names MOD08 and MYD08 for MODIS Terra and Aqua, respectively). Data sets in this Level-3 product include scalar statistics as well as 1- and 2-D histograms of many cloud properties, allowing for higher order information and correlation studies. In addition to these statistics, we will show trends and statistical significance in annual and seasonal means for a variety of the MODIS cloud properties, as well as the time required for detection given assumed trends. To assist in climate model evaluation, we have developed a MODIS cloud simulator with an accompanying netCDF file containing subsetted monthly Level-3 statistical data sets that correspond to the simulator output. Correlations of cloud properties with ENSO offer the potential to evaluate model cloud sensitivity; initial results will be discussed.

  16. Self-consistent atmosphere modeling with cloud formation for low-mass stars and exoplanets

    Science.gov (United States)

    Juncher, Diana; Jørgensen, Uffe G.; Helling, Christiane

    2017-12-01

    Context. Low-mass stars and extrasolar planets have ultra-cool atmospheres where a rich chemistry occurs and clouds form. The increasing amount of spectroscopic observations for extrasolar planets requires self-consistent model atmosphere simulations to consistently include the formation processes that determine cloud formation and their feedback onto the atmosphere. Aims: Our aim is to complement the MARCS model atmosphere suite with simulations applicable to low-mass stars and exoplanets in preparation for E-ELT, JWST, PLATO and other upcoming facilities. Methods: The MARCS code calculates stellar atmosphere models, providing self-consistent solutions of the radiative transfer and the atmospheric structure and chemistry. We combine MARCS with a kinetic model that describes cloud formation in ultra-cool atmospheres (seed formation, growth/evaporation, gravitational settling, convective mixing, element depletion). Results: We present a small grid of self-consistently calculated atmosphere models for Teff = 2000-3000 K with solar initial abundances and log (g) = 4.5. Cloud formation in stellar and sub-stellar atmospheres appears for Teff day-night energy transport and no temperature inversion.

  17. An Evaluation of Marine Boundary Layer Cloud Property Simulations in the Community Atmosphere Model Using Satellite Observations: Conventional Subgrid Parameterization versus CLUBB

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hua [Joint Center for Earth Systems Technology, University of Maryland, Baltimore County, Baltimore, Maryland; Zhang, Zhibo [Joint Center for Earth Systems Technology, and Physics Department, University of Maryland, Baltimore County, Baltimore, Maryland; Ma, Po-Lun [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Ghan, Steven J. [Atmospheric Sciences and Global Change Division, Pacific Northwest National Laboratory, Richland, Washington; Wang, Minghuai [Institute for Climate and Global Change Research, and School of Atmospheric Sciences, Nanjing University, Nanjing, China

    2018-03-01

    This paper presents a two-step evaluation of the marine boundary layer (MBL) cloud properties from two Community Atmospheric Model (version 5.3, CAM5) simulations, one based on the CAM5 standard parameterization schemes (CAM5-Base), and the other on the Cloud Layers Unified By Binormals (CLUBB) scheme (CAM5-CLUBB). In the first step, we compare the cloud properties directly from model outputs between the two simulations. We find that the CAM5-CLUBB run produces more MBL clouds in the tropical and subtropical large-scale descending regions. Moreover, the stratocumulus (Sc) to cumulus (Cu) cloud regime transition is much smoother in CAM5-CLUBB than in CAM5-Base. In addition, in CAM5-Base we find some grid cells with very small low cloud fraction (<20%) to have very high in-cloud water content (mixing ratio up to 400 mg/kg). We find no such grid cells in the CAM5-CLUBB run. However, we also note that both simulations, especially CAM5-CLUBB, produce a significant amount of “empty” low cloud cells with significant cloud fraction (up to 70%) and near-zero in-cloud water content. In the second step, we use satellite observations from CERES, MODIS and CloudSat to evaluate the simulated MBL cloud properties by employing the COSP satellite simulators. We note that a feature of the COSP-MODIS simulator to mimic the minimum detection threshold of MODIS cloud masking removes much more low clouds from CAM5-CLUBB than it does from CAM5-Base. This leads to a surprising result — in the large-scale descending regions CAM5-CLUBB has a smaller COSP-MODIS cloud fraction and weaker shortwave cloud radiative forcing than CAM5-Base. A sensitivity study suggests that this is because CAM5-CLUBB suffers more from the above-mentioned “empty” clouds issue than CAM5-Base. The COSP-MODIS cloud droplet effective radius in CAM5-CLUBB shows a spatial increase from coastal St toward Cu, which is in qualitative agreement with MODIS observations. In contrast, COSP-MODIS cloud droplet

  18. mPano: cloud-based mobile panorama view from single picture

    Science.gov (United States)

    Li, Hongzhi; Zhu, Wenwu

    2013-09-01

    Panorama view provides people an informative and natural user experience for representing a whole scene. Advances in mobile augmented reality, mobile-cloud computing, and the mobile internet enable panorama views on mobile phones with new functionalities, such as anytime, anywhere queries of where a landmark picture was taken and what the whole scene looks like. Generating and exploring panorama views on mobile devices faces significant challenges due to the limitations in computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of the mobile Internet connection. To address these challenges, this paper presents a novel cloud-based mobile panorama view system, mPano, that can generate and view a panorama view on mobile devices from a single picture. In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to get spatially adjacent images, using both tag and content information from the single picture. Second, we propose a cloud-based parallel panorama synthesis approach that generates the panorama view in the cloud, in contrast to today's local-client synthesis, which is all but impossible on mobile phones. Third, we propose a predictive-cache solution to reduce the latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrate the effectiveness of our system and the proposed key component technologies, especially for landmark images.

  19. Advances in Grid and Pervasive Computing: 5th International Conference, GPC 2010, Hualien, Taiwan, May 10-13, 2010: Proceedings

    NARCIS (Netherlands)

    Bellavista, P.; Chang, R.-S.; Chao, H.-C.; Lin, S.-F.; Sloot, P.M.A.

    2010-01-01

    This book constitutes the proceedings of the 5th international conference, GPC 2010, held in Hualien, Taiwan in May 2010. The 67 full papers were selected from 184 submissions and focus on topics such as cloud and Grid computing, peer-to-peer and pervasive computing, sensor and mobile networks,

  20. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increasing model resolution in the development of comprehensive Earth System Models is rapidly leading to very large volumes of climate simulation output that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations, as well as scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  1. Impact of Aerosols on Convective Clouds and Precipitation

    Science.gov (United States)

    Tao, Wei-Kuo; Chen, Jen-Ping; Li, Zhanqing; Wang, Chien; Zhang, Chidong; Li, Xiaowen

    2012-01-01

    Aerosols are a critical factor in the atmospheric hydrological cycle and radiation budget. As a major agent for clouds to form and a significant attenuator of solar radiation, aerosols affect climate in several ways. Current research suggests that aerosols have a major impact on the dynamics, microphysics, and electrification properties of continental mixed-phase convective clouds. In addition, high aerosol concentrations in urban environments could affect precipitation variability by providing a significant source of cloud condensation nuclei (CCN). Such pollution effects on precipitation potentially have enormous climatic consequences, both in terms of feedbacks involving the land surface via rainfall and the surface energy budget, and in terms of changes in latent heat input to the atmosphere. Basically, aerosol concentrations can influence cloud droplet size distributions, the warm-rain process, the cold-rain process, cloud-top heights, the depth of the mixed-phase region, and the occurrence of lightning. Recently, many cloud-resolving models (CRMs) have been used to examine the role of aerosols in mixed-phase convective clouds. These modeling studies have many differences in terms of model configuration (two- or three-dimensional), domain size, grid spacing (150-3000 m), microphysics (two-moment bulk, simple or sophisticated spectral-bin), turbulence (1st or 1.5 order turbulent kinetic energy (TKE)), radiation, lateral boundary conditions (i.e., closed, radiative open or cyclic), cases (isolated convection, tropical or midlatitude squall lines) and model integration time (e.g., 2.5 to 48 hours). Among these modeling studies, the most striking difference is that cumulative precipitation can either increase or decrease in response to higher concentrations of CCN. In this presentation, we review past efforts and summarize our current understanding of the effect of aerosols on convective precipitation processes. Specifically, this paper addresses the following topics

  2. Cloud-Top Entrainment in Stratocumulus Clouds

    Science.gov (United States)

    Mellado, Juan Pedro

    2017-01-01

    Cloud entrainment, the mixing between cloudy and clear air at the boundary of clouds, constitutes one paradigm for the relevance of small scales in the Earth system: By regulating cloud lifetimes, meter- and submeter-scale processes at cloud boundaries can influence planetary-scale properties. Understanding cloud entrainment is difficult given the complexity and diversity of the associated phenomena, which include turbulence entrainment within a stratified medium, convective instabilities driven by radiative and evaporative cooling, shear instabilities, and cloud microphysics. Obtaining accurate data at the required small scales is also challenging, for both simulations and measurements. During the past few decades, however, high-resolution simulations and measurements have greatly advanced our understanding of the main mechanisms controlling cloud entrainment. This article reviews some of these advances, focusing on stratocumulus clouds, and indicates remaining challenges.

  3. Cloud type comparisons of AIRS, CloudSat, and CALIPSO cloud height and amount

    Directory of Open Access Journals (Sweden)

    B. H. Kahn

    2008-03-01

    Full Text Available The precision of the two-layer cloud height fields derived from the Atmospheric Infrared Sounder (AIRS) is explored and quantified for a five-day set of observations. Coincident profiles of vertical cloud structure from CloudSat, a 94 GHz profiling radar, and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) are compared to AIRS for a wide range of cloud types. Bias and variability in cloud height differences are shown to depend on cloud type, height, and amount, as well as on whether CloudSat or CALIPSO is used as the comparison standard. The CloudSat-AIRS biases and variability range from −4.3 to 0.5±1.2–3.6 km for all cloud types. Likewise, the CALIPSO-AIRS biases range from 0.6–3.0±1.2–3.6 km for clouds ≥7 km (−5.8 to −0.2±0.5–2.7 km for clouds <7 km). The upper layer of AIRS has the greatest sensitivity to Altocumulus, Altostratus, Cirrus, Cumulonimbus, and Nimbostratus, whereas the lower layer has the greatest sensitivity to Cumulus and Stratocumulus. Although the bias and variability generally decrease with increasing cloud amount, the ability of AIRS to constrain cloud occurrence, height, and amount is demonstrated across all cloud types for many geophysical conditions. In particular, skill is demonstrated for thin Cirrus, as well as for some Cumulus and Stratocumulus, cloud types that infrared sounders typically struggle to quantify. Furthermore, some improvements in the AIRS Version 5 operational retrieval algorithm are demonstrated. However, limitations in AIRS cloud retrievals are also revealed, including the existence of spurious Cirrus near the tropopause and low cloud layers within Cumulonimbus and Nimbostratus clouds. Likely causes of spurious clouds are identified and the potential for further improvement is discussed.
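
    The per-cloud-type bias and variability statistics quoted above are straightforward to compute once retrievals are matched in space and time; here is a minimal sketch assuming a hypothetical matchup table (the column names and values are invented for illustration):

```python
import pandas as pd

# Hypothetical matchup table: one row per coincident AIRS/CloudSat retrieval.
matchups = pd.DataFrame({
    "cloud_type": ["Cirrus", "Cirrus", "Stratocumulus", "Cumulus"],
    "airs_height_km": [11.2, 10.5, 1.4, 2.0],
    "cloudsat_height_km": [12.0, 11.8, 1.6, 2.9],
})

# Bias and variability of the height differences, stratified by cloud type.
matchups["diff_km"] = matchups["cloudsat_height_km"] - matchups["airs_height_km"]
stats = matchups.groupby("cloud_type")["diff_km"].agg(["mean", "std", "count"])
print(stats)  # mean ~ bias, std ~ variability, per cloud type
```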

  4. How to keep the Grid full and working with ATLAS production and physics jobs

    Science.gov (United States)

    Pacheco Pagés, A.; Barreiro Megino, F. H.; Cameron, D.; Fassi, F.; Filipcic, A.; Di Girolamo, A.; González de la Hoz, S.; Glushkov, I.; Maeno, T.; Walker, R.; Yang, W.; ATLAS Collaboration

    2017-10-01

    The ATLAS production system provides the infrastructure to process millions of events collected during the LHC Run 1 and the first two years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system for optimal performance and to achieve the highest efficiency of available resources from an operational perspective. We focus on the recent developments.

  5. How to keep the Grid full and working with ATLAS production and physics jobs

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00221495; The ATLAS collaboration; Barreiro Megino, Fernando Harald; Cameron, David; Fassi, Farida; Filipcic, Andrej; Di Girolamo, Alessandro; Gonzalez de la Hoz, Santiago; Glushkov, Ivan; Maeno, Tadashi; Walker, Rodney; Yang, Wei

    2017-01-01

    The ATLAS production system provides the infrastructure to process millions of events collected during the LHC Run 1 and the first two years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address the strategies and improvements that have been implemented in the production system for optimal performance and to achieve the highest efficiency of available resources from an operational perspective. We focus on the recent developments.

  6. A comparison of shock-cloud and wind-cloud interactions: effect of increased cloud density contrast on cloud evolution

    Science.gov (United States)

    Goldsmith, K. J. A.; Pittard, J. M.

    2018-05-01

    The similarities, or otherwise, of a shock or wind interacting with a cloud of density contrast χ = 10 were explored in a previous paper. Here, we investigate such interactions with clouds of higher density contrast. We compare the adiabatic hydrodynamic interaction of a Mach 10 shock with a spherical cloud of χ = 10³ with that of a cloud embedded in a wind with identical parameters to the post-shock flow. We find that initially there are only minor morphological differences between the shock-cloud and wind-cloud interactions, compared to when χ = 10. However, once the transmitted shock exits the cloud, the development of a turbulent wake and the fragmentation of the cloud differ between the two simulations. On increasing the wind Mach number, we note the development of a thin, smooth tail of cloud material, which is then disrupted by the fragmentation of the cloud core and subsequent 'mass-loading' of the flow. We find that the normalized cloud mixing time (t_mix) is shorter at higher χ. However, a strong Mach number dependence of t_mix and the normalized cloud drag time, t′_drag, is not observed. Mach-number-dependent values of t_mix and t′_drag from comparable shock-cloud interactions converge towards the Mach-number-independent time-scales of the wind-cloud simulations. We find that high-χ clouds can be accelerated up to 80-90 per cent of the wind velocity and travel large distances before being significantly mixed. However, complete mixing is not achieved in our simulations and at late times the flow remains perturbed.
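
    For context, time-scales in shock-cloud studies are conventionally normalized by the cloud-crushing time; a standard definition from the shock-cloud literature (e.g. Klein et al. 1994), not a formula quoted from this abstract, is:

```latex
% chi : cloud-to-ambient density contrast (10^3 here), r_c : initial cloud radius,
% v_s : shock velocity in the ambient medium; t_mix and t'_drag are then
% expressed in units of t_cc.
t_{cc} = \frac{\chi^{1/2}\, r_c}{v_s}
```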

  7. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and what a SWOT analysis of the cloud shows. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  8. GridCom, Grid Commander: graphical interface for Grid jobs and data management; GridCom, Grid Commander: graficheskij interfejs dlya raboty s zadachami i dannymi v gride

    Energy Technology Data Exchange (ETDEWEB)

    Galaktionov, V V

    2011-07-01

    GridCom is a software package that automates access to the resources (jobs and data) of the distributed Grid system. The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and performs the Grid operations

  9. Grid computing and collaboration technology in support of fusion energy sciences

    International Nuclear Information System (INIS)

    Schissel, D.P.

    2005-01-01

    Science research in general and magnetic fusion research in particular continue to grow in size and complexity, resulting in a concurrent growth in collaborations between experimental sites and laboratories worldwide. The simultaneous increase in wide-area network speeds has made it practical to envision distributed working environments that are as productive as traditionally collocated work. In computing power, it has become reasonable to decouple production and consumption, enabling the construction of computing grids in a manner similar to the electrical power grid. Grid computing, the secure integration of computer systems over high-speed networks to provide on-demand access to data analysis capabilities and related functions, is being deployed as an alternative to traditional resource sharing among institutions. For human interaction, advanced collaborative environments are being researched and deployed to make distributed group work as productive as traditional meetings. The DOE Scientific Discovery through Advanced Computing initiative has sponsored several collaboratory projects, including the National Fusion Collaboratory Project, to utilize recent advances in grid computing and advanced collaborative environments to further research in several specific scientific domains. For fusion, the collaborative technology being deployed is being used in present-day research and is also scalable to future research, in particular to the International Thermonuclear Experimental Reactor experiment that will require extensive collaboration capability worldwide. This paper briefly reviews the concepts of grid computing and advanced collaborative environments and gives specific examples of how these technologies are being used in fusion research today

  10. PARALLEL PROCESSING OF BIG POINT CLOUDS USING Z-ORDER-BASED PARTITIONING

    Directory of Open Access Journals (Sweden)

    C. Alis

    2016-06-01

    Full Text Available As laser scanning technology improves and costs come down, the amount of point cloud data being generated can be prohibitively difficult and expensive to process on a single machine. This data explosion is not limited to point cloud data. Voluminous amounts of high-dimensional and quickly accumulating data, collectively known as Big Data, such as those generated by social media, Internet of Things devices and commercial transactions, are becoming more prevalent as well. New computing paradigms and frameworks are being developed to efficiently handle the processing of Big Data, many of which utilize a compute cluster composed of several commodity-grade machines to process chunks of data in parallel. A central concept in many of these frameworks is data locality. By its nature, Big Data is large enough that the entire dataset would not fit in the memory and hard drives of a single node, hence replicating the entire dataset to each worker node is impractical. The data must then be partitioned across worker nodes in a manner that minimises data transfer across the network. This is a challenge for point cloud data because there exist different ways to partition data and they may require data transfer. We propose a partitioning based on Z-order, which is a form of locality-sensitive hashing. The Z-order or Morton code is computed by dividing each dimension to form a grid and then interleaving the binary representations of each dimension. For example, the Z-order code for the grid square with coordinates (x = 1 = 01₂, y = 3 = 11₂) is 1011₂ = 11. The number of points in each partition is controlled by the number of bits per dimension: the more bits, the fewer the points. The number of bits per dimension also controls the level of detail, with more bits yielding finer partitioning. We present this partitioning method by implementing it on Apache Spark and investigating how different parameters affect the accuracy and running time of the k nearest
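
    The bit-interleaving step described above is compact in code. The following Python sketch reproduces the abstract's worked example; it illustrates the 2-D Morton code itself, not the paper's Apache Spark implementation:

```python
def morton2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y (y bit first) into a Z-order code."""
    code = 0
    for i in range(bits - 1, -1, -1):
        code = (code << 1) | ((y >> i) & 1)  # y bit
        code = (code << 1) | ((x >> i) & 1)  # x bit
    return code

# Example from the abstract: x = 1 (01b), y = 3 (11b) -> 1011b = 11
assert morton2d(1, 3, bits=2) == 11

# Fewer bits per dimension -> coarser grid -> more points per partition.
print(morton2d(1, 3, bits=2))  # 11
```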

  11. Smart grid strategy - the future intelligent energy system. [Denmark]; Smart grid-strategi - fremtidens intelligente energisystem

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-04-15

    The Government's Smart Grid Strategy brings Danish consumers a big step closer to managing their own energy consumption. The strategy combines electricity meters read on an hourly basis with variable tariffs and a data hub. It will make it possible for consumers to use power when it is least expensive. ''Today we set the course for developing a smart energy network that will reduce the cost of converting to sustainable energy, cut electricity bills and create brand new products consumers will welcome,'' says Minister of Climate, Energy and Building Martin Lidegaard. Encouraging consumers to use energy more efficiently is a key aspect of the strategy. The remote-read electricity meters are crucial if consumers are to play a role in optimising the flexible energy network. (LN)

  12. ATLAS WORLD-cloud and networking in PanDA

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Di Girolamo, A.; Maeno, T.; Walker, R.; ATLAS Collaboration

    2017-10-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centres, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased by O(1000) and the difference in functionality between Tier 1s and Tier 2s has shrunk. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts: nuclei sites are the Tier 1s and large Tier 2s, where tasks are assigned and the output aggregated, and satellites are the sites that execute the jobs and send the output to their nucleus. PanDA dynamically pairs nuclei and satellite sites for each task based on the input data availability, capability matching, site load and network connectivity. This contribution introduces the conceptual changes of World-cloud, the development necessary in PanDA, an insight into the network model and the first half-year of operational experience.
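
    The pairing logic can be pictured as ranking candidate nucleus-satellite pairs by a weighted score over data locality, load and network. The sketch below is purely hypothetical: the inputs, weights and site names are invented, and the real PanDA brokerage is considerably more involved.

```python
# Hypothetical scoring sketch for nucleus-satellite pairing; illustrates only
# the idea of ranking pairs by data locality, load headroom, and connectivity.
def pair_score(input_data_fraction, free_slots, total_slots, network_mbps):
    locality = input_data_fraction             # fraction of task input already at the nucleus
    headroom = free_slots / max(total_slots, 1)
    network = min(network_mbps / 1000.0, 1.0)  # saturate at ~1 Gbps
    return 0.5 * locality + 0.3 * headroom + 0.2 * network  # arbitrary weights

candidates = {
    ("TIER1-A", "SITE-X"): pair_score(0.9, 200, 1000, 800),
    ("TIER1-B", "SITE-Y"): pair_score(0.4, 800, 1000, 2000),
}
best = max(candidates, key=candidates.get)
print(best)  # ('TIER1-A', 'SITE-X'): locality wins here
```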

  13. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  14. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  15. Large-Scale, Parallel, Multi-Sensor Data Fusion in the Cloud

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.

    2012-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over periods of years to decades. However, moving from predominantly single-instrument studies to a multi-sensor, measurement-based model for long-duration analysis of important climate variables presents serious challenges for large-scale data mining and data fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another instrument (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over years of AIRS data. To perform such an analysis, one must discover & access multiple datasets from remote sites, find the space/time "matchups" between instrument swaths and model grids, understand the quality flags and uncertainties for retrieved physical variables, assemble merged datasets, and compute fused products for further scientific and statistical analysis. To efficiently assemble such decade-scale datasets in a timely manner, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. "SciReduce" is a Hadoop-like parallel analysis system, programmed in parallel python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, in which simple tuples (keys & values) are passed between the map and reduce functions, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Thus, SciReduce uses the native datatypes (geolocated grids, swaths, and points) that geo-scientists are familiar with. We are deploying within Sci
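
    The map/reduce-over-named-arrays idea attributed to SciReduce above can be illustrated with plain Python; the granule structure, variable name, and averaging task below are invented for illustration and are not SciReduce's API:

```python
import numpy as np
from functools import reduce

def map_granule(granule):
    """Map: reduce one granule's swath to partial sums for a later global mean."""
    temp = granule["airs_temp"]  # hypothetical named array inside the bundle
    return {"sum": np.nansum(temp), "count": np.sum(~np.isnan(temp))}

def combine(a, b):
    """Reduce: merge two partial bundles of named numeric values."""
    return {"sum": a["sum"] + b["sum"], "count": a["count"] + b["count"]}

# Four synthetic "granules", each a bundle holding one named numeric array.
granules = [{"airs_temp": np.random.default_rng(i).normal(250, 5, (90, 135))}
            for i in range(4)]

partial = map(map_granule, granules)          # embarrassingly parallel step
total = reduce(combine, partial)              # associative merge step
print(total["sum"] / total["count"])          # domain-mean temperature, ~250
```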

  16. Effective Grid Utilization: A Technical Assessment and Application Guide; April 2011 - September 2012

    Energy Technology Data Exchange (ETDEWEB)

    Balser, S.; Sankar, S.; Miller, R.; Rawlins, A.; Israel, M.; Curry, T.; Mason, T.

    2012-09-01

    In order to more fully integrate renewable resources, such as wind and solar, into the transmission system, additional capacity must be realized in the short term using the installed transmission capacity that exists today. The U.S. Department of Energy (DOE) and the National Renewable Energy Laboratory Transmission and Grid Integration Group supported this study to assemble the history of regulations and status of transmission technology to expand existing grid capacity. This report compiles data on various transmission technology methods and upgrades for increased capacity utilization of the existing transmission system and transmission corridors. The report discusses the technical merit of each method and explains how the method could be applied within the current regulatory structure to increase existing transmission conductor and/or corridor capacity. The history and current state of alternatives to new construction is presented for regulators, legislators, and other policy makers wrestling with issues surrounding integration of variable generation. Current regulations are assessed for opportunities to change them to promote grid expansion. To support consideration of these alternatives for expanding grid capacity, the report lists relevant rules, standards, and policy changes.

  17. The Cloud Feedback Model Intercomparison Project Observational Simulator Package: Version 2

    Directory of Open Access Journals (Sweden)

    D. J. Swales

    2018-01-01

    Full Text Available The Cloud Feedback Model Intercomparison Project Observational Simulator Package (COSP gathers together a collection of observation proxies or satellite simulators that translate model-simulated cloud properties to synthetic observations as would be obtained by a range of satellite observing systems. This paper introduces COSP2, an evolution focusing on more explicit and consistent separation between host model, coupling infrastructure, and individual observing proxies. Revisions also enhance flexibility by allowing for model-specific representation of sub-grid-scale cloudiness, provide greater clarity by clearly separating tasks, support greater use of shared code and data including shared inputs across simulators, and follow more uniform software standards to simplify implementation across a wide range of platforms. The complete package including a testing suite is freely available.

  18. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    Full Text Available An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate sub-arrays in a microarray image and find the co-ordinates of spots within each sub-array. For accurate identification of spots, most of the proposed gridding methods require human intervention. In this paper a fully automatic gridding method is used, which enhances spot intensity in the preprocessing step with a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared based on spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method's flexibility and accuracy in finding the spot co-ordinates for different database images.
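
    As a toy illustration of profile-based gridding, the sketch below locates candidate grid lines as local minima of a smoothed intensity profile. The smoothing window and synthetic image are illustrative only; in the paper's pipeline, the refinement step would then correct misplaced lines.

```python
import numpy as np

def grid_lines(image: np.ndarray, axis: int) -> np.ndarray:
    """Candidate grid-line positions: local minima of a smoothed intensity profile."""
    profile = image.sum(axis=axis)                        # axis=1 -> one value per row
    profile = np.convolve(profile, np.ones(5) / 5, mode="same")
    i = np.arange(1, len(profile) - 1)
    is_min = (profile[i] < profile[i - 1]) & (profile[i] <= profile[i + 1])
    return i[is_min]

rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[8::16, :] *= 0.1          # fake dark gaps between spot rows
# Noisy toy image: expect minima near rows 8, 24, 40, 56 plus spurious ones
# that a refinement step would prune.
print(grid_lines(img, axis=1))
```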

  19. +Cloud: An Agent-Based Cloud Computing Platform

    OpenAIRE

    González, Roberto; Hernández de la Iglesia, Daniel; de la Prieta Pintado, Fernando; Gil González, Ana Belén

    2017-01-01

    Cloud computing is revolutionizing the services provided through the Internet, and is continually adapting itself in order to maintain the quality of its services. This study presents the platform +Cloud, which proposes a cloud environment for storing information and files by following the cloud paradigm. This study also presents Warehouse 3.0, a cloud-based application that has been developed to validate the services provided by +Cloud.

  20. Cloud Computing and Its Applications in GIS

    Science.gov (United States)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibility of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems, such as a lower barrier to entry, are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes them incompatible with the distributed nature
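
    As a serial point of reference (not the distributed cloud algorithm developed in the second article), a Euclidean distance raster can be computed with SciPy's distance transform; the source mask here is hypothetical:

```python
import numpy as np
from scipy import ndimage

# Hypothetical 5x5 raster: True marks source cells (e.g., roads, wells).
sources = np.zeros((5, 5), dtype=bool)
sources[2, 2] = True

# distance_transform_edt measures the distance to the nearest zero element,
# so invert the mask: non-source cells get the distance to the nearest source.
dist = ndimage.distance_transform_edt(~sources)
print(np.round(dist, 2))
```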

  1. Grid3: An Application Grid Laboratory for Science

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    level services required by the participating experiments. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, work loads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. The Grid3 infrastructure was deployed from grid level services provided by groups and applications within the collaboration. The services were organized into four distinct "grid level services" including: Grid3 Packaging, Monitoring and Information systems, User Authentication and the iGOC Grid Operatio...

  2. Outcrop-scale fracture trace identification using surface roughness derived from a high-density point cloud

    Science.gov (United States)

    Okyay, U.; Glennie, C. L.; Khan, S.

    2017-12-01

    Owing to the advent of terrestrial laser scanners (TLS), high-density point cloud data has become increasingly available to the geoscience research community. Research groups have started producing their own point clouds for various applications, gradually shifting their emphasis from obtaining the data towards extracting more meaningful information from the point clouds. Extracting fracture properties from three-dimensional data in a (semi-)automated manner has been an active area of research in the geosciences. Several studies have developed processing algorithms for extracting planar surfaces only. In comparison, the (semi-)automated identification of fracture traces at the outcrop scale, which could be used for mapping fracture distribution, has not been investigated as frequently. Understanding the spatial distribution and configuration of natural fractures is of particular importance, as they directly influence fluid flow through the host rock. Surface roughness, typically defined as the deviation of a natural surface from a reference datum, has become an important metric in geoscience research, especially with the increasing density and accuracy of point clouds. In the study presented herein, a surface roughness model was employed to identify fracture traces and their distribution on an ophiolite outcrop in Oman. Surface roughness calculations were performed using orthogonal distance regression over various grid intervals. The results demonstrated that surface roughness can identify outcrop-scale fracture traces from which fracture distribution and density maps can be generated. However, considering outcrop conditions and properties and the purpose of the application, the definition of an adequate grid interval for the surface roughness model and the selection of threshold values for distribution maps are not straightforward and require user intervention and interpretation.
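
    For a plane, orthogonal distance regression reduces to a total-least-squares fit, so a per-grid-cell roughness of the kind described above can be sketched as the RMS orthogonal residual about the best-fit plane. The grid-cell contents and noise level below are synthetic, and this is an illustration rather than the authors' implementation.

```python
import numpy as np

def cell_roughness(points: np.ndarray) -> float:
    """RMS orthogonal distance of points to their total-least-squares plane.

    points: (N, 3) array of x, y, z coordinates within one grid cell.
    """
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane
    # normal; residuals are the projections of the points onto that normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    residuals = centered @ vt[-1]
    return float(np.sqrt(np.mean(residuals**2)))

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(500, 2))
z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + rng.normal(0, 0.02, 500)  # rough plane
print(cell_roughness(np.column_stack([xy, z])))  # ~0.02
```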

  3. Grid-connected to/from off-grid transference for micro-grid inverters

    OpenAIRE

    Heredero Peris, Daniel; Chillón Antón, Cristian; Pages Gimenez, Marc; Gross, Gabriel Igor; Montesinos Miracle, Daniel

    2013-01-01

    This paper compares two methods for controlling the on-line transfer from grid-connected to stand-alone mode and vice versa in converters for micro-grids. The first proposes a method where the converter changes from a CSI (Current Source Inverter) in grid-connected mode to a VSI (Voltage Source Inverter) in off-grid mode. In the second method, the inverter always works as a non-ideal voltage source, acting as a VSI and using an AC droop control strategy.
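
    For reference, a conventional AC droop law of the kind mentioned above ties the inverter's frequency and voltage set-points to its active and reactive power output. This is the textbook form, not an equation quoted from the paper; m_p and n_q are the droop gains.

```latex
% P-f and Q-V droop for a voltage-source inverter:
% f*, V* : nominal frequency and voltage; P*, Q* : power set-points.
f = f^{*} - m_{p}\,(P - P^{*}) \\
V = V^{*} - n_{q}\,(Q - Q^{*})
```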

  4. Acoustic 2D full waveform inversion to solve gas cloud challenges

    Directory of Open Access Journals (Sweden)

    Srichand Prajapati

    2015-09-01

    Full Text Available The existing conventional inversion algorithm does not provide satisfactory results due to the complexity of the wavefield propagated through the gas cloud. Acoustic full waveform inversion has been developed and applied to a realistic synthetic offshore shallow gas cloud feature with a Student's t approach, with and without simultaneous-source encoding. As the modeling operator, we implemented a grid-based finite-difference method in the frequency domain using the second-order elastic wave equation. The Jacobian operator and its adjoint provide the necessary platform for solving the full waveform inversion problem with a reduced Hessian matrix. We invert the gas cloud model in 5 frequency bands selected from 1 to 12 Hz, each containing 3 frequencies. The inversion results are highly sensitive to the misfit. The model allows better convergence and recovery of amplitude losses. This approach gives better resolution than the existing least-squares approach. In this paper, we implement full waveform inversion for a low-frequency model with a minimum number of iterations, providing a better resolution of the inversion results.
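
    The Student's t misfit mentioned above is, in its standard robust-FWI form, a heavy-tailed alternative to least squares; this is the generic expression with ν degrees of freedom and scale σ, not a formula quoted from the paper.

```latex
% Student's t misfit: heavy tails downweight outlier residuals compared with
% the least-squares misfit \phi_{LS}(m) = \tfrac{1}{2}\sum_i r_i^2
\phi_{t}(m) = \frac{\nu + 1}{2} \sum_{i} \log\!\left(1 + \frac{r_i^2}{\nu\,\sigma^2}\right),
\qquad r_i = d_i^{\mathrm{obs}} - d_i^{\mathrm{syn}}(m)
```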

  5. The GridSite Web/Grid security system

    International Nuclear Information System (INIS)

    McNab, Andrew; Li Yibiao

    2010-01-01

    We present an overview of the current status of the GridSite toolkit, describing the security model for interactive and programmatic uses introduced in the last year. We discuss our experiences of implementing these internal changes and how they and previous rounds of improvements have been prompted by requirements from users and wider security trends in Grids (such as CSRF). Finally, we explain how these have improved the user experience of GridSite-based websites, and wider implications for portals and similar web/grid sites.

  6. The Taverna workflow suite: designing and executing workflows of Web Services on the desktop, web or in the cloud

    NARCIS (Netherlands)

    Wolstencroft, K.; Haines, R.; Fellows, D.; Williams, A.; Withers, D.; Owen, S.; Soiland-Reyes, S.; Dunlop, I.; Nenadic, A.; Fisher, P.; Bhagat, J.; Belhajjame, K.; Bacall, F.; Hardisty, A.; Nieva de la Hidalga, A.; Balcazar Vargas, M.P.; Sufi, S.; Goble, C.

    2013-01-01

    The Taverna workflow tool suite (http://www.taverna.org.uk) is designed to combine distributed Web Services and/or local tools into complex analysis pipelines. These pipelines can be executed on local desktop machines or through larger infrastructure (such as supercomputers, Grids or cloud

  7. Silicon Photonics Cloud (SiCloud)

    DEFF Research Database (Denmark)

    DeVore, P. T. S.; Jiang, Y.; Lynch, M.

    2015-01-01

    Silicon Photonics Cloud (SiCloud.org) is the first silicon photonics interactive web tool. Here we report new features of this tool, including mode propagation parameters and mode distribution galleries for user-specified waveguide dimensions and wavelengths.

  8. Joint classification and contour extraction of large 3D point clouds

    Science.gov (United States)

    Hackel, Timo; Wegner, Jan D.; Schindler, Konrad

    2017-08-01

    We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows us both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems: point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10⁹ points.
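
    A common way to realize the multi-scale, per-point neighborhoods described above is through eigenvalues of the local covariance matrix. The sketch below is one such construction; the radii, feature definitions, and brute-force loop are illustrative, not the authors' optimized implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points: np.ndarray, radii=(0.25, 0.5, 1.0)) -> np.ndarray:
    """Per-point eigenvalue features (linearity, planarity, scattering) per scale."""
    tree = cKDTree(points)
    feats = []
    for r in radii:
        for p in points:
            nbrs = points[tree.query_ball_point(p, r)]
            if len(nbrs) < 3:                     # too few neighbours at this scale
                feats.append((0.0, 0.0, 0.0))
                continue
            evals = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]   # l1 >= l2 >= l3
            l1, l2, l3 = np.maximum(evals, 1e-12)
            feats.append(((l1 - l2) / l1, (l2 - l3) / l1, l3 / l1))
    return np.asarray(feats).reshape(len(radii), len(points), 3)

pts = np.random.default_rng(3).uniform(0, 2, size=(200, 3))
print(covariance_features(pts).shape)  # (3, 200, 3): scales x points x features
```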

  9. Advanced Cloud Forecasting for Solar Energy's Impact on Grid Modernization

    International Nuclear Information System (INIS)

    Werth, D.; Nichols, R.

    2017-01-01

    Solar energy production is subject to variability in the solar resource - clouds and aerosols will reduce the available solar irradiance and inhibit power production. The fact that solar irradiance can vary by large amounts at small timescales and in an unpredictable way means that power utilities are reluctant to assign to their solar plants a large portion of future energy demand - the needed power might be unavailable, forcing the utility to make costly adjustments to its daily portfolio. The availability and predictability of solar radiation therefore represent important research topics for increasing the power produced by renewable sources.

  10. How to keep the Grid full and working with ATLAS production and physics jobs

    CERN Document Server

    Pacheco Pages, Andres; The ATLAS collaboration; Di Girolamo, Alessandro; Walker, Rodney; Filip\\v{c}i\\v{c}, Andrej; Cameron, David; Yang, Wei; Fassi, Farida; Glushkov, Ivan

    2016-01-01

    The ATLAS production system has provided the infrastructure to process tens of thousands of events during LHC Run 1 and the first years of Run 2 using grid, cloud and high-performance computing resources. In this contribution we address several strategies and improvements added to the production system to optimize its performance and extract the maximum efficiency from the available resources from an operational perspective, focusing in detail on the recent developments

  11. Cloud Processed CCN Suppress Stratus Cloud Drizzle

    Science.gov (United States)

    Hudson, J. G.; Noble, S. R., Jr.

    2017-12-01

    Conversion of sulfur dioxide to sulfate within cloud droplets increases the sizes and decreases the critical supersaturation, Sc, of the cloud residual particles that had nucleated the droplets. Since other particles remain at the same sizes and Sc, a size and Sc gap is often observed. Hudson et al. (2015) showed higher cloud droplet concentrations (Nc) in stratus clouds associated with bimodal high-resolution CCN spectra from the DRI CCN spectrometer compared to clouds associated with unimodal CCN spectra (not cloud processed). Here we show that CCN spectral shape (bimodal or unimodal) affects all aspects of stratus cloud microphysics and drizzle. Panel A shows mean differential cloud droplet spectra divided according to the traditional slopes, k, of the 131 measured CCN spectra in the Marine Stratus/Stratocumulus Experiment (MASE) off the Central California coast. k is generally high within the supersaturation, S, range of stratus clouds (<0.5%). Because cloud processing decreases the Sc of some particles, it reduces k. Panel A shows higher concentrations of small cloud droplets in clouds apparently grown on lower-k CCN than in clouds grown on higher-k CCN. At small droplet sizes the concentrations follow the k order of the legend: black, red, green, blue (lowest to highest k). Above 13 µm diameter the lines cross and the hierarchy reverses, so that blue (highest k) has the highest concentrations, followed by green, red and black (lowest k). This reversed hierarchy continues into the drizzle size range (panel B), where the most drizzle drops, Nd, occur in clouds grown on the least cloud-processed CCN (blue), while clouds grown on the most processed CCN (black) have the lowest Nd. Suppression of stratus cloud drizzle by cloud processing is an additional 2nd indirect aerosol effect (IAE), which, along with the enhancement of the 1st IAE by higher Nc (panel A), goes above and beyond the original IAE. However, further similar analysis is needed in other cloud regimes to determine if MASE was

  12. The magnitude and causes of uncertainty in global model simulations of cloud condensation nuclei

    Directory of Open Access Journals (Sweden)

    L. A. Lee

    2013-09-01

    Full Text Available Aerosol–cloud interaction effects are a major source of uncertainty in climate models, so it is important to quantify the sources of uncertainty and thereby direct research efforts. However, the computational expense of global aerosol models has prevented a full statistical analysis of their outputs. Here we perform a variance-based analysis of a global 3-D aerosol microphysics model to quantify the magnitude and leading causes of parametric uncertainty in model-estimated present-day concentrations of cloud condensation nuclei (CCN). Twenty-eight model parameters covering essentially all important aerosol processes, emissions and representation of aerosol size distributions were defined based on expert elicitation. An uncertainty analysis was then performed based on a Monte Carlo-type sampling of an emulator built for each model grid cell. The standard deviation around the mean CCN varies globally between about ±30% over some marine regions and ±40–100% over most land areas and high latitudes, implying that aerosol processes and emissions are likely to be a significant source of uncertainty in model simulations of aerosol–cloud effects on climate. Among the most important contributors to CCN uncertainty are the sizes of emitted primary particles, including carbonaceous combustion particles from wildfires, biomass burning and fossil fuel use, as well as sulfate particles formed on sub-grid scales. Emissions of carbonaceous combustion particles affect CCN uncertainty more than sulfur emissions. Aerosol emission-related parameters dominate the uncertainty close to sources, while uncertainty in aerosol microphysical processes becomes increasingly important in remote regions, being dominated by deposition and aerosol sulfate formation during cloud processing. The results lead to several recommendations for research that would result in improved modelling of cloud-active aerosol on a global scale.
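
    The emulator-based Monte Carlo step described above can be sketched schematically. In the sketch below, a stand-in polynomial replaces the study's per-grid-cell emulators, and the three sampled parameters are placeholders for the 28 elicited ones.

```python
import numpy as np

rng = np.random.default_rng(4)

def emulator(theta: np.ndarray) -> np.ndarray:
    """Stand-in emulator for one grid cell: normalized parameters -> CCN.

    A real application would use an emulator (e.g. a Gaussian process)
    trained on a designed set of full-model runs."""
    return (250.0 + 80.0 * theta[:, 0] + 40.0 * theta[:, 1] ** 2
            - 25.0 * theta[:, 0] * theta[:, 2])

# Monte Carlo sample the (here: 3) uncertain parameters on [-1, 1].
theta = rng.uniform(-1.0, 1.0, size=(100_000, 3))
ccn = emulator(theta)

mean, std = ccn.mean(), ccn.std()
print(f"CCN mean = {mean:.0f} cm^-3, relative std = {std / mean:.1%}")
```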

  13. Monte Carlo-based subgrid parameterization of vertical velocity and stratiform cloud microphysics in ECHAM5.5-HAM2

    Directory of Open Access Journals (Sweden)

    J. Tonttila

    2013-08-01

    Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.

  14. OGC and Grid Interoperability in enviroGRIDS Project

    Science.gov (United States)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science applications, while Grid-oriented technology is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), the need for an infrastructure capable of managing all those problems becomes an important aspect. The Grid promotes and facilitates the secure interoperation of heterogeneous, distributed geospatial data within a distributed environment, the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyses and discusses the most significant use cases for enabling OGC Web services interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between computational grid and

  15. Facade Reconstruction with Generalized 2.5d Grids

    Directory of Open Access Journals (Sweden)

    J. Demantke

    2013-10-01

    Full Text Available Reconstructing fine facade geometry from MMS lidar data remains a challenge: in addition to being inherently sparse, the point cloud provided by a single street point of view is necessarily incomplete. We propose a simple framework to estimate the facade surface with a deformable 2.5d grid. Computations are performed in a "sensor-oriented" coordinate system that maximizes consistency with the data. The algorithm retrieves the facade geometry without a priori knowledge. It can thus be automatically applied to a large amount of data in spite of the variability of encountered architectural forms. The 2.5d image structure of the output makes it compatible with the storage and real-time constraints of immersive navigation.

  16. Hybrid cloud and cluster computing paradigms for life science applications.

    Science.gov (United States)

    Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey

    2010-12-21

    Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing, especially for parallel data-intensive applications. However, they have limited applicability to some areas such as data mining, because MapReduce performs poorly on problems with the iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI, leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open-source Iterative MapReduce system, Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability in several important non-iterative cases. These are linked to MPI applications for the final stages of the data analysis. Furthermore, we have released the open-source Twister Iterative MapReduce system and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment, while Twister promises a uniform programming environment for many life sciences applications. We used the commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data-intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.

  17. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and minimizing the downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  18. Vehicle-to-grid power implementation: From stabilizing the grid to supporting large-scale renewable energy

    Science.gov (United States)

    Kempton, Willett; Tomić, Jasna

    Vehicle-to-grid power (V2G) uses electric-drive vehicles (battery, fuel cell, or hybrid) to provide power for specific electric markets. This article examines the systems and processes needed to tap energy in vehicles and implement V2G. It quantitatively compares today's light vehicle fleet with the electric power system. The vehicle fleet has 20 times the power capacity, less than one-tenth the utilization, and one-tenth the capital cost per prime mover kW. Conversely, utility generators have 10-50 times longer operating life and lower operating costs per kWh. To tap V2G is to synergistically use these complementary strengths and to reconcile the complementary needs of the driver and grid manager. This article suggests strategies and business models for doing so, and the steps necessary for the implementation of V2G. After the initial high-value, V2G markets saturate and production costs drop, V2G can provide storage for renewable energy generation. Our calculations suggest that V2G could stabilize large-scale (one-half of US electricity) wind power with 3% of the fleet dedicated to regulation for wind, plus 8-38% of the fleet providing operating reserves or storage for wind. Jurisdictions more likely to take the lead in adopting V2G are identified.

  19. Lessons learned from the ATLAS performance studies of the Iberian Cloud for the first LHC running period.

    CERN Document Server

    Sánchez-Martínez, V; The ATLAS collaboration; Borrego, C; del Peso, J; Delfino, M; Gomes, J; González de la Hoz, S; Pacheco Pages, A; Salt, J; Sedov, A; Villaplana, M; Wolters, H

    2013-01-01

    In this contribution we describe the performance of the Iberian (Spain and Portugal) ATLAS cloud during the first LHC running period (March 2010-January 2013) in the context of the GRID Computing and Data Distribution Model. The evolution of the resources for CPU, disk and tape in the Iberian Tier-1 and Tier-2s is summarized. The data distribution over all ATLAS destinations is shown, focusing on the number of files transferred and the size of the data. The status and distribution of simulation and analysis jobs within the cloud are discussed. The Distributed Analysis tools used to perform physics analysis are explained as well. Cloud performance in terms of the availability and reliability of its sites is discussed. The effect of the changes in the ATLAS Computing Model on the cloud is analyzed. Finally, the readiness of the Iberian Cloud towards the first Long Shutdown (LS1) is evaluated and an outline of the foreseen actions to take in the coming years is given. The shutdown will be a good opportunity to...

  20. caGrid 1.0: a Grid enterprise architecture for cancer research.

    Science.gov (United States)

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2007-10-11

    caGrid is the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. The current release, caGrid version 1.0, is developed as the production Grid software infrastructure of caBIG. Based on feedback from adopters of the previous version (caGrid 0.5), it has been significantly enhanced with new features and improvements to existing components. This paper presents an overview of caGrid 1.0, its main components, and enhancements over caGrid 0.5.

  1. Cloud vertical profiles derived from CALIPSO and CloudSat and a comparison with MODIS derived clouds

    Science.gov (United States)

    Kato, S.; Sun-Mack, S.; Miller, W. F.; Rose, F. G.; Minnis, P.; Wielicki, B. A.; Winker, D. M.; Stephens, G. L.; Charlock, T. P.; Collins, W. D.; Loeb, N. G.; Stackhouse, P. W.; Xu, K.

    2008-05-01

    CALIPSO and CloudSat, flying in the A-train, provide detailed information on the vertical distribution of clouds and aerosols. The vertical distribution of cloud occurrence is derived from one month of CALIPSO and CloudSat data as part of the effort to merge CALIPSO, CloudSat and MODIS with CERES data. This newly derived cloud profile is compared with the distribution of cloud-top height derived from MODIS on Aqua by the cloud algorithms used in the CERES project. The cloud base from MODIS is also estimated using an empirical formula based on the cloud-top height and optical thickness, which is used in CERES processing. While MODIS detects mid- and low-level clouds over the Arctic in April fairly well when they are the topmost cloud layer, it underestimates high-level clouds. In addition, because the CERES-MODIS cloud algorithm is not able to detect multi-layer clouds and the empirical formula significantly underestimates the depth of high clouds, the occurrence of mid- and low-level clouds is underestimated. This comparison does not account for differences in sensitivity to thin clouds, but we will impose an optical thickness threshold on CALIPSO-derived clouds for a further comparison. The effect of such differences in the cloud profile on flux computations will also be discussed. In addition, the effect of cloud cover on the top-of-atmosphere flux over the Arctic, using CERES SSF and FLASHFLUX products, will be discussed.
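
    The empirical cloud-base estimate mentioned above lends itself to a compact sketch: geometric thickness is taken to grow with optical thickness, and the base is the top minus that thickness. The functional form and coefficient below are purely illustrative; the actual CERES formula is not given in this record.

```python
# Hypothetical cloud-base estimate from cloud-top height and optical thickness.
import math

def cloud_base_km(cloud_top_km: float, optical_thickness: float) -> float:
    # Illustrative fit: geometric thickness grows logarithmically with optical depth.
    thickness_km = 0.5 * math.log1p(optical_thickness)
    return max(cloud_top_km - thickness_km, 0.0)

print(round(cloud_base_km(cloud_top_km=9.0, optical_thickness=4.0), 2))  # 8.2
```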

  2. Study of clumping in the Cepheus OB 3 molecular cloud

    International Nuclear Information System (INIS)

    Carr, J.S.

    1987-01-01

    A portion of the Cep OB 3 molecular cloud has been mapped in the (C-13)O (1-0) line on a completely sampled grid with a 1.5-arcmin spacing. A total of 45 individual clouds, or clumps, have been identified in the map, with masses from 3 to 300 solar masses, sizes of 3 pc or smaller, and mean densities of a few hundred per cu cm. Power-law correlations are found among the clump properties, namely M ∝ R^2.5 and Δv ∝ R^0.24. These exponents differ somewhat from those found in similar correlations for molecular clouds in previous studies. Determination of the virial masses for the clumps shows that the clumps are not gravitationally bound and must be expanding on a time scale of about 1 Myr. Measurements of the (C-13)O (2-1) line give volume densities of 2000-5000 per cu cm. Comparisons of these densities with the mean volume densities from the (C-13)O (1-0) data suggest that the gas is clumped on a small scale with a volume filling factor of 0.04-0.10. 31 references

  3. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica; Sciacca, Francesco Giovanni; Mancinelli, Valentina

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well organized web interface for the local site administrators so that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimiz...
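
    The auto-exclusion logic sketched below is an illustration of the idea described in this record, not HammerCloud's actual implementation: a site is blacklisted when its functional-test failure rate crosses a threshold, while sites with lower but non-negligible failure rates are flagged for the site administrators. The thresholds and data layout are assumptions.

```python
# Illustrative site-status classification from functional-test outcomes.
from typing import Dict, List

EXCLUDE_ABOVE = 0.50  # assumed blacklisting threshold
WARN_ABOVE = 0.10     # assumed "minor issue" threshold

def classify_sites(results: Dict[str, List[bool]]) -> Dict[str, str]:
    status = {}
    for site, outcomes in results.items():
        failure_rate = outcomes.count(False) / len(outcomes)
        if failure_rate > EXCLUDE_ABOVE:
            status[site] = "blacklisted"
        elif failure_rate > WARN_ABOVE:
            status[site] = "warning"   # visible to admins, not auto-excluded
        else:
            status[site] = "ok"
    return status

print(classify_sites({"SITE_A": [True] * 8 + [False] * 2,
                      "SITE_B": [False] * 6 + [True] * 4}))
```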

  4. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS, CMS, and LHCb experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well organized web interface for the local site administrators so that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This contribution summarizes the different developm...

  5. A comprehensive WSN-based approach to efficiently manage a Smart Grid.

    Science.gov (United States)

    Martinez-Sandoval, Ruben; Garcia-Sanchez, Antonio-Javier; Garcia-Sanchez, Felipe; Garcia-Haro, Joan; Flynn, David

    2014-10-10

    The Smart Grid (SG) is conceived as the evolution of the current electrical grid, representing a big leap in efficiency, reliability and flexibility compared to today's electrical network. To achieve this goal, Wireless Sensor Networks (WSNs) are considered by the scientific/engineering community to be one of the most suitable technologies for the SG, owing to their low-cost, collaborative and long-standing nature. However, the SG has posed significant challenges to utility operators, mainly very harsh radio propagation conditions and the lack of appropriate systems to power WSN devices, making most widespread commercial solutions inadequate. In this context, and as a main contribution, we have designed a comprehensive ad-hoc WSN-based solution for the Smart Grid (SENSED-SG) that focuses on specific implementations of the MAC, network and application layers to attain maximum performance and to successfully deal with any arising hurdles. Our approach has been exhaustively evaluated by computer simulations and mathematical analysis, as well as validated in real test-beds deployed in controlled environments. In particular, these test-beds cover two of the main scenarios found in a SG: on the one hand, an indoor electrical substation environment, implemented in a High Voltage AC/DC laboratory; on the other hand, an outdoor case, deployed in the Transmission and Distribution segment of a power grid. The results obtained show that SENSED-SG performs better and is more suitable for the Smart Grid than the popular ZigBee WSN approach.

  6. The MODIS cloud optical and microphysical products: Collection 6 updates and examples from Terra and Aqua

    Science.gov (United States)

    Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas; Zhang, Zhibo; Hubanks, Paul A.; Holz, Robert E.; Yang, Ping; Ridgway, William L.; Riedi, Jérôme

    2018-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases–daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel’s retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant. PMID:29657349
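
    The lookup-table (LUT) retrieval approach that Collection 6 extends can be summarized in a few lines: choose the (optical thickness, effective radius) grid point whose forward-modelled reflectances best match the observed ones, and keep the residual as a diagnostic. The sketch below is schematic, with fabricated LUT values rather than MODIS coefficients.

```python
# Schematic nearest-node LUT retrieval; the LUT contents are fabricated.
import numpy as np

taus = np.array([1.0, 4.0, 16.0, 64.0])   # optical-thickness grid
radii = np.array([6.0, 12.0, 24.0])       # effective-radius grid (micrometers)
# lut[i, j] = (non-absorbing band, absorbing band) reflectances, fabricated:
lut = np.random.default_rng(0).uniform(0.1, 0.9, size=(4, 3, 2))

def retrieve(obs: np.ndarray):
    cost = ((lut - obs) ** 2).sum(axis=-1)          # residual at each node
    i, j = np.unravel_index(cost.argmin(), cost.shape)
    return taus[i], radii[j], cost[i, j]            # residual acts as a failure metric

tau, r_e, resid = retrieve(np.array([0.55, 0.35]))
print(f"tau={tau}, r_e={r_e} um, residual={resid:.3f}")
```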

  7. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For several years now, the LHC experiments at CERN have been successfully using Grid computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies such as Cloud Computing, virtualization and high-performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  8. Evaluation of cloud-resolving model simulations of midlatitude cirrus with ARM and A-train observations

    Science.gov (United States)

    Muhlbauer, A.; Ackerman, T. P.; Lawson, R. P.; Xie, S.; Zhang, Y.

    2015-07-01

    Cirrus clouds are ubiquitous in the upper troposphere and still constitute one of the largest uncertainties in climate predictions. This paper evaluates cloud-resolving model (CRM) and cloud-system-resolving model (CSRM) simulations of a midlatitude cirrus case against comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and spaceborne observations from the National Aeronautics and Space Administration A-train satellites. The CRM simulations are driven with periodic boundary conditions and ARM forcing data, whereas the CSRM simulations are driven by the ERA-Interim product. Vertical profiles of temperature, relative humidity, and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds, and relative humidity, which can be mitigated by nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in general circulation models and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward the cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating...

  9. Comparison of Cloud Cover Restituted by POLDER and MODIS

    Science.gov (United States)

    Zeng, S.; Parol, F.; Riedi, J.; Cornet, C.; Thieuxleux, F.

    2009-04-01

    PARASOL and Aqua are two sun-synchronous satellites in the A-Train constellation that observe the Earth within a few minutes of each other. Aboard these two platforms, POLDER and MODIS provide coincident observations of the cloud cover with very different characteristics. This gives us a good opportunity to study cloud systems and to evaluate the strengths and weaknesses of each dataset, in order to provide an accurate representation of global cloud cover properties. This description is of utmost importance for quantifying and understanding the effect of clouds on the global radiation budget of the earth-atmosphere system and their influence on climate change. We have developed a joint dataset containing both POLDER and MODIS Level-2 cloud products, collocated and reprojected on a common sinusoidal grid, in order to make the data comparison feasible and reliable. Our initial work focuses on the comparison of both the spatial distribution and the temporal variation of the global cloud cover. This simple yet critical cloud parameter needs to be clearly understood before the other cloud parameters can be compared. From our study, we demonstrate that on average these two sensors both detect clouds fairly well. They provide similar spatial distributions and temporal variations: both sensors see high values of cloud amount associated with deep convection in the ITCZ, over Indonesia, and in the west-central Pacific Ocean warm pool region; they also provide similar high cloud cover associated with mid-latitude storm tracks, the Indian monsoon, and the stratocumulus along the west coasts of continents; on the other hand, the small cloud amounts typically present over subtropical oceans and deserts in subsidence areas are well identified by both POLDER and MODIS. Each sensor has its advantages and drawbacks for the detection of particular cloud types. With its higher spatial resolution, MODIS can better detect fractional clouds, thus explaining in part...

  10. Fast Cloud Adjustment to Increasing CO2 in a Superparameterized Climate Model

    Directory of Open Access Journals (Sweden)

    Marat Khairoutdinov

    2012-05-01

    Two-year simulation experiments with a superparameterized climate model, SP-CAM, are performed to understand the fast tropical (30S-30N) cloud response to an instantaneous quadrupling of CO2 concentration with SST held fixed at present-day values. The greenhouse effect of the CO2 perturbation quickly warms the tropical land surfaces by an average of 0.5 K. This shifts rising motion, surface precipitation, and cloud cover at all levels from the ocean to the land, with only small net tropical-mean cloud changes. There is a widespread average reduction of about 80 m in the depth of the trade inversion capping the marine boundary layer (MBL) over the cooler subtropical oceans. One apparent contributing factor is CO2-enhanced downwelling longwave radiation, which reduces boundary-layer radiative cooling, a primary driver of turbulent entrainment through the trade inversion. A second contributor is a slight CO2-induced heating of the free troposphere above the MBL, which strengthens the trade inversion and also inhibits entrainment. There is a corresponding downward displacement of MBL clouds with a very slight decrease in mean cloud cover and albedo. Two-dimensional cloud-resolving model (CRM) simulations of this MBL response are run to steady state using composite SP-CAM simulated thermodynamic and wind profiles from a representative cool subtropical ocean regime, for the control and 4xCO2 cases. Simulations with a CRM grid resolution equal to that of SP-CAM are compared with much finer resolution simulations. The coarse-resolution simulations maintain a cloud fraction and albedo comparable to SP-CAM, but the fine-resolution simulations have a much smaller cloud fraction. Nevertheless, both CRM configurations simulate a reduction in inversion height comparable to SP-CAM. The changes in low cloud cover and albedo in the CRM simulations are small, but both simulations predict a slight reduction in low cloud albedo as in SP-CAM.

  11. A transport layer protocol for the future high speed grid computing: SCTP versus fast tcp multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on today's global Internet. One of its strong points is its flow control algorithm, which allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for every type of application, for example bulk data transfer over high-speed, long-distance networks. TCP served well in the era of low-capacity, short-delay networks; however, for a number of reasons it cannot efficiently cope with today's emerging technologies (such as wide-area Grid computing and optical-fiber networks). This work surveys the congestion control mechanisms of transport protocols and addresses the different issues involved in transferring huge volumes of data over future high-speed Grid computing and optical-fiber networks. It also presents simulations comparing the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high-speed networks. The simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP provides many benefits, such as redundancy, load-sharing and policy-based routing, which substantially improve overall network performance and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)
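
    For readers unfamiliar with the congestion control being surveyed, the toy sketch below shows the classic additive-increase/multiplicative-decrease (AIMD) window behaviour that loss-based TCP uses, and which delay-based designs such as FAST TCP aim to improve on over long, fat pipes. Real stacks are far more elaborate; units here are segments per RTT.

```python
# Toy AIMD congestion-window trace: +1 segment per RTT, halved on loss.
def aimd(rtts: int, loss_rtts: set, cwnd: float = 1.0) -> list:
    trace = []
    for t in range(rtts):
        cwnd = cwnd / 2 if t in loss_rtts else cwnd + 1
        trace.append(cwnd)
    return trace

# On a high bandwidth-delay-product path, each loss costs many RTTs of
# linear recovery, the scaling problem motivating FAST TCP.
print(aimd(rtts=10, loss_rtts={5}))
```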

  12. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    CERN Document Server

    Kompaniets, Mikhail; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-01-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, the cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, the ALICE Analysis Facility (AAF), and possible usage by local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage with an integrated XRootD-based storage element (SE). One of the key features of the solution is that Ceph is used as a backend for the OpenStack Cinder Block Storage service and, at the same time, as a storage backend for XRootD, with the redundancy and availability of data ensured by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution was applied, which is ba...
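
    As a small usage illustration of the storage element described here, the sketch below fetches a file from an XRootD endpoint by shelling out to the standard xrdcp client; the redirector host and file path are hypothetical, and the xrdcp binary is assumed to be installed.

```python
# Fetch a file from a (hypothetical) XRootD storage element with xrdcp.
import subprocess

def xrootd_fetch(lfn: str, dest: str,
                 redirector: str = "root://se.tier3.example.org") -> None:
    # xrdcp copies root://<host>/<path> to a local destination.
    subprocess.run(["xrdcp", f"{redirector}/{lfn}", dest], check=True)

xrootd_fetch("/alice/data/2015/run000001/sample.root", "/tmp/sample.root")
```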

  13. Gridded Species Distribution, Version 1: Global Amphibians Presence Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Amphibians Presence Grids of the Gridded Species Distribution, Version 1 is a reclassified version of the original grids of amphibian species distribution...

  14. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence, including the enhancement of the droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and the coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve the scales of turbulent energy-containing eddies, while the effects of the turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes fully resolve the dissipation-range flow scales but only partially the inertial-subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our ongoing efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  15. Chimera Grid Tools

    Science.gov (United States)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  16. Analysis of the Metal Oxide Space Clouds (MOSC) HF Propagation Environment

    Science.gov (United States)

    Jackson-Booth, N.; Selzer, L.

    2015-12-01

    Artificial Ionospheric Modification (AIM) attempts to modify the ionosphere in order to alter the high-frequency (HF) propagation environment. It can be achieved through injections of aerosols, chemicals or radio-frequency (RF) signals into the ionosphere. The Metal Oxide Space Clouds (MOSC) experiment was undertaken in April/May 2013 to investigate chemical AIM. Two sounding rockets were launched from the Kwajalein Atoll (part of the Marshall Islands) and each released a cloud of vaporized samarium (Sm). The samarium created a localized plasma cloud with increased electron density, which formed an additional ionospheric layer. The ionospheric effects were measured by a wide range of ground-based instrumentation, which included a network of HF sounders. Chirp transmissions were made from three atolls and received at five sites within the Marshall Islands. One of the receive sites consisted of an 18-antenna phased array, which was used for direction finding. The ionograms have shown that, as well as generating a new layer, the clouds created anomalous RF propagation paths that interact with both the cloud and the F-layer, resulting in 'ghost traces'. To fully understand the propagation environment, a 3D numerical ray trace has been undertaken, using a variety of background ionospheric and cloud models, to find the paths through the electron density grid for a given fan of elevation and azimuth firing angles. Synthetic ionograms were then produced using the ratio of ray path length to the speed of light as an estimate of the delay between transmission and observation for a given radio-wave frequency. This paper reports on the latest analysis of the MOSC propagation environment, comparing theory with observations, to further the understanding of AIM.
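
    The synthetic-ionogram step described above reduces, per sounding frequency, to dividing the ray-traced path length by the speed of light. A minimal sketch with made-up ray-trace output:

```python
# Group delay from (hypothetical) ray-traced path lengths.
C_KM_PER_S = 299_792.458  # speed of light

def delay_ms(path_length_km: float) -> float:
    return path_length_km / C_KM_PER_S * 1e3

# Hypothetical ray-trace output: frequency (MHz) -> total path length (km).
for freq_mhz, path_km in {4.0: 620.0, 6.0: 710.0, 8.0: 905.0}.items():
    print(f"{freq_mhz} MHz: {delay_ms(path_km):.2f} ms")
```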

  17. Green cloud environment by using robust planning algorithm

    Directory of Open Access Journals (Sweden)

    Jyoti Thaman

    2017-11-01

    Cloud computing provides a framework for seamless access to resources through the network. Access to resources is quantified through SLAs between service providers and users. Service providers try to exploit their resources as fully as possible and to reduce their idle time. Growing energy concerns further complicate matters for service providers. Users' requests are served by allocating their tasks to resources in Cloud and Grid environments through scheduling and planning algorithms. With only a few planning algorithms in existence, planning algorithms are rarely differentiated from scheduling algorithms. This paper proposes a robust hybrid planning algorithm, Robust Heterogeneous-Earliest-Finish-Time (RHEFT), for binding tasks to VMs. The allocation of tasks to VMs is based on a novel task matching algorithm called Interior Scheduling. The performance of the proposed RHEFT algorithm is compared with Heterogeneous-Earliest-Finish-Time (HEFT) and Distributed HEFT (DHEFT) for various parameters such as utilization ratio, makespan, speed-up and energy consumption. RHEFT's consistent performance against HEFT and DHEFT, established through rigorous simulations, demonstrates the robustness of the hybrid planning algorithm.
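
    For context, the classic HEFT heuristic that RHEFT builds on can be sketched compactly: order tasks by upward rank, then map each in turn to the VM giving the earliest finish time. The DAG, costs and insertion policy below are simplified toys (communication cost is charged even on the same VM), and RHEFT's Interior Scheduling step is not reproduced.

```python
# Compact toy HEFT: upward-rank ordering + earliest-finish-time mapping.
succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}    # toy task DAG
cost = {"A": [4, 6], "B": [3, 5], "C": [5, 4], "D": [2, 2]}  # per-VM runtimes
COMM = 1.0                                                   # uniform edge cost

def upward_rank(t: str) -> float:
    avg = sum(cost[t]) / len(cost[t])
    return avg + max((COMM + upward_rank(s) for s in succ[t]), default=0.0)

vm_free = [0.0, 0.0]
finish = {}
for t in sorted(succ, key=upward_rank, reverse=True):
    ready = max((finish[p] + COMM for p in succ if t in succ[p]), default=0.0)
    # pick the VM minimizing the earliest finish time (EFT)
    eft, vm = min((max(vm_free[v], ready) + cost[t][v], v) for v in range(2))
    vm_free[vm] = finish[t] = eft
print(finish)  # {'A': 4.0, 'C': 9.0, 'B': 8.0, 'D': 12.0}
```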

  18. ATLAS World-cloud and networking in PanDA

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration; De, Kaushik; Di Girolamo, Alessandro; Walker, Rodney

    2016-01-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centers, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased by a factor of O(1000) and the difference in functionality between Tier 1s and Tier 2s has shrunk. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. Nuclei and satellite sites are dynamically paired by PanDA for each task based on the input data availability, capability matching, site load and...

  19. ATLAS WORLD-cloud and networking in PanDA

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Di Girolamo, Alessandro; Maeno, Tadashi; Walker, Rodney

    2017-01-01

    The ATLAS computing model was originally designed as static clouds (usually national or geographical groupings of sites) around the Tier 1 centres, which confined tasks and most of the data traffic. Since those early days, the sites' network bandwidth has increased by a factor of O(1000) and the difference in functionality between Tier 1s and Tier 2s has shrunk. After years of manual, intermediate solutions, we have now ramped up to full usage of World-cloud, the latest step in the PanDA Workload Management System to increase resource utilization on the ATLAS Grid, for all workflows (MC production, data (re)processing, etc.). We have based the development on two new site concepts. Nuclei sites are the Tier 1s and large Tier 2s, where tasks will be assigned and the output aggregated, and satellites are the sites that will execute the jobs and send the output to their nucleus. PanDA dynamically pairs nuclei and satellite sites for each task based on the input data availability, capability matching, site load and network...
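
    The nucleus-satellite pairing described in these two records can be illustrated with a toy brokerage function (this is not PanDA's actual code): candidate nuclei are scored on input-data availability and penalized for current load, with assumed weights.

```python
# Toy nucleus selection: trade off data availability against site load.
def pick_nucleus(task_inputs: set, nuclei: dict) -> str:
    def score(name: str) -> float:
        info = nuclei[name]
        data_frac = len(task_inputs & info["datasets"]) / max(len(task_inputs), 1)
        return data_frac - 0.8 * info["load"]   # assumed weighting
    return max(nuclei, key=score)

nuclei = {
    "NUCLEUS_1": {"datasets": {"d1", "d2"}, "load": 0.9},
    "NUCLEUS_2": {"datasets": {"d1"},       "load": 0.2},
}
# NUCLEUS_2 wins: its lighter load outweighs holding only part of the input.
print(pick_nucleus({"d1", "d2"}, nuclei))
```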

  20. Grid: From EGEE to EGI and from INFN-Grid to IGI

    International Nuclear Information System (INIS)

    Giselli, A.; Mazzuccato, M.

    2009-01-01

    In the last fifteen years the computational Grid approach has changed the way computing resources are used. Grid computing has raised interest worldwide in academia, industry and government, with fast development cycles. Great efforts, substantial funding and resources have been made available through national, regional and international initiatives aimed at providing Grid infrastructures, Grid core technologies, Grid middleware and Grid applications. The Grid software layers reflect the architecture of the services developed so far by the most important European and international projects. In this paper the story of the Grid e-Infrastructure is told, detailing European, Italian and international projects such as EGEE, INFN-Grid and NAREGI. In addition, the long-term sustainability issue is discussed, presenting the plans of the European and Italian communities for EGI and IGI.