WorldWideScience

Sample records for grid today clouds

  1. Grid today, clouds on the horizon

    Science.gov (United States)

    Shiers, Jamie

    2009-04-01

    By the time of CCP 2008, the largest scientific machine in the world - the Large Hadron Collider - had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" - that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219-223]. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC'08) - aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change - as always - is on the horizon. The current funding model for Grids - which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, including South America - is evolving towards a long-term, sustainable e-infrastructure, like the European Grid Initiative (EGI) [The European Grid Initiative Design Study, website at http://web.eu-egi.eu/]. At the same time, potentially new paradigms, such as that of "Cloud Computing", are emerging. This paper summarizes the results of CCRC'08 and discusses the potential impact of future Grid funding on both regional and international application communities. It contrasts Grid and Cloud computing models from both technical and sociological points of view. Finally, it discusses the requirements of production application communities in terms of stability and continuity in the medium to long term.

  2. Grid today, clouds on the horizon

    CERN Document Server

    Shiers, Jamie

    2009-01-01

    By the time of CCP 2008, the largest scientific machine in the world – the Large Hadron Collider – had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5+5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7+7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our “Higgs in one basket” – that of Grid computing [The Worldwide LHC Computing Grid (WLCG), in: Proceedings of the Conference on Computational Physics 2006 (CCP 2006), vol. 177, 2007, pp. 219–223]. After many years of preparation, 2008 saw a final “Common Computing Readiness Challenge” (CCRC'08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which...

  3. Grids Today, Clouds on the Horizon

    CERN Document Server

    Shiers, J

    2008-01-01

    By the time of CCP 2008, the world’s largest scientific machine – the Large Hadron Collider – should have been cooled down to its operational temperature of below 2 K and injection tests should have started. Collisions of proton beams at 5 + 5 TeV are expected within one to two months of the initial tests, with data taking at design energy (7 + 7 TeV) now foreseen for 2009. In order to process the data from this world machine, we have put our “Higgs in one basket” – that of Grid computing. After many years of preparation, 2008 has seen a final “Common Computing Readiness Challenge” (CCRC’08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relies on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which in Europe has been through 3 generations of EGEE projects, together with related projects in other part...

  4. Grids Today, Clouds on the Horizon

    CERN Document Server

    Shiers, J

    2008-01-01

    By the time of CCP 2008, the largest scientific machine in the world – the Large Hadron Collider – had been cooled down as scheduled to its operational temperature of below 2 K and injection tests were starting. Collisions of proton beams at 5 + 5 TeV were expected within one to two months of the initial tests, with data taking at design energy (7 + 7 TeV) foreseen for 2009. In order to process the data from this world machine, we have put our "Higgs in one basket" – that of Grid computing. After many years of preparation, 2008 saw a final "Common Computing Readiness Challenge" (CCRC’08) – aimed at demonstrating full readiness for 2008 data taking, processing and analysis. By definition, this relied on a world-wide production Grid infrastructure. But change – as always – is on the horizon. The current funding model for Grids – which in Europe has been through 3 generations of EGEE projects, together with related projects in other parts of the world, inc...

  5. Grids, Clouds, and Virtualization

    Science.gov (United States)

    Cafaro, Massimo; Aloisio, Giovanni

    This chapter introduces and puts in context Grids, Clouds, and Virtualization. Grids promised to deliver computing power on demand. However, despite a decade of active research, no viable commercial grid computing provider has emerged. On the other hand, it is widely believed - especially in the Business World - that HPC will eventually become a commodity. Just as some commercial consumers of electricity have mission requirements that necessitate they generate their own power, some consumers of computational resources will continue to need to provision their own supercomputers. Clouds are a recent business-oriented development with the potential to render this eventually as rare as organizations that generate their own electricity today, even among institutions who currently consider themselves the unassailable elite of the HPC business. Finally, Virtualization is one of the key technologies enabling many different Clouds. We begin with a brief history in order to put them in context, and recall the basic principles and concepts underlying and clearly differentiating them. A thorough overview and survey of existing technologies provides the basis to delve into details as the reader progresses through the book.

  6. Grids, Clouds and Virtualization

    CERN Document Server

    Cafaro, Massimo

    2011-01-01

    Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference int

  7. Can Clouds replace Grids? Will Clouds replace Grids?

    International Nuclear Information System (INIS)

    Shiers, J D

    2010-01-01

    The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared 'open' and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently 'Cloud Computing' - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  8. Can Clouds replace Grids? Will Clouds replace Grids?

    Energy Technology Data Exchange (ETDEWEB)

    Shiers, J D, E-mail: Jamie.Shiers@cern.c [CERN, 1211 Geneva 23 (Switzerland)

    2010-04-01

    The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared 'open' and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently 'Cloud Computing' - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  9. Can Clouds replace Grids? Will Clouds replace Grids?

    Science.gov (United States)

    Shiers, J. D.

    2010-04-01

    The world's largest scientific machine - comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground - currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared "open" and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability - as seen by the experiments, as opposed to that measured by the official tools - still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently "Cloud Computing" - in terms of pay-per-use fabric provisioning - has emerged as a potentially viable alternative but with rather different strengths and no doubt weaknesses too. Based on the concrete needs of the LHC experiments - where the total data volume that will be acquired over the full lifetime of the project, including the additional data copies that are required by the Computing Models of the experiments, approaches 1 Exabyte - we analyze the pros and cons of Grids versus Clouds. This analysis covers not only technical issues - such as those related to demanding database and data management needs - but also sociological aspects, which cannot be ignored, neither in terms of funding nor in the wider context of the essential but often overlooked role of science in society, education and economy.

  10. Cloud Computing and Smart Grids

    Directory of Open Access Journals (Sweden)

    Janina POPEANGĂ

    2012-10-01

    Increasing concern about energy consumption is leading to infrastructure that supports real-time, two-way communication between utilities and consumers, and allows software systems at both ends to control and manage power use. To manage communications to millions of endpoints in a secure, scalable and highly-available environment and to achieve these twin goals of ‘energy conservation’ and ‘demand response’, utilities must extend the same communication network management processes and tools used in the data center to the field. This paper proposes that cloud computing technology, because of its low cost, flexible and redundant architecture and fast response time, has the functionality needed to provide the security, interoperability and performance required for large-scale smart grid applications.

  11. Cloud and Grid: more connected than you might think?

    CERN Multimedia

    Stephanie McClellan

    2013-01-01

    You may perceive the grid and the cloud to be two separate technologies: the grid as physical hardware and the cloud as virtual hardware simulated by running software. So how are the grid and the cloud being integrated at CERN? (Image: CERN Computer Centre.) The LHC generates a large amount of data that needs to be stored, distributed and analysed. Grid technology is used for the mass physical data processing needed for the LHC, supported by many data centres around the world as part of the Worldwide LHC Computing Grid. Beyond the technology itself, the Grid represents a collaboration of all these centres working towards a common goal. Cloud technology uses virtualisation techniques, which allow one physical machine to represent many virtual machines. This technology is being used today to develop and deploy a range of IT services (such as ServiceNow, a cloud-hosted service), allowing for a great deal of operational flexibility. Such services are available at CERN through OpenStack.

  12. Can Clouds Replace Grids? Will Clouds Replace Grids?

    CERN Document Server

    Shiers, J

    2010-01-01

    The world’s largest scientific machine – comprising dual 27km circular proton accelerators cooled to 1.9 K and located some 100m underground – currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared “open” and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability – as seen by the experiments, as opposed to that measured by the official tools – still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently “Cloud Computing” – in terms of pay-per-use fabric provisioning – has emerged as a potentially viable al...

  13. Cloud feedback studies with a physics grid

    Energy Technology Data Exchange (ETDEWEB)

    Dipankar, Anurag [Max Planck Institute for Meteorology Hamburg; Stevens, Bjorn [Max Planck Institute for Meteorology Hamburg

    2013-02-07

    During this project the investigators implemented a fully parallel version of the dual-grid approach in the main ICON code, implemented a fully conservative first-order interpolation scheme for horizontal remapping, integrated the UCLA-LES micro-scale model into ICON to run in parallel in selected columns, and performed cloud feedback studies in an aqua-planet setup to evaluate the classical parameterization on a small domain. The micro-scale model may be run in parallel with the classical parameterization, or it may be run on a "physics grid" independent of the dynamics grid.
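    The first-order conservative remapping mentioned here has a compact core: each destination cell receives the area-weighted average of the source cells overlapping it, so the grid-integrated field is preserved. Below is a minimal numpy sketch of that idea (not the ICON implementation; it assumes the cell-overlap areas are already known):

```python
import numpy as np

def conservative_remap(src_values, overlap_areas, dst_areas):
    """First-order conservative remap: destination cell i gets the
    area-weighted average of the source cells overlapping it."""
    # Conservation holds because overlap_areas[i].sum() == dst_areas[i].
    return overlap_areas @ src_values / dst_areas

# Toy case: two destination cells covering three source cells.
src = np.array([1.0, 2.0, 4.0])
overlap = np.array([[1.0, 0.5, 0.0],
                    [0.0, 0.5, 1.0]])
dst_area = overlap.sum(axis=1)
print(conservative_remap(src, overlap, dst_area))  # [1.33..., 3.33...]
```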

  14. Grids, virtualization, and clouds at Fermilab

    International Nuclear Information System (INIS)

    Timm, S; Chadwick, K; Garzoglio, G; Noh, S

    2014-01-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  15. Grids, virtualization, and clouds at Fermilab

    Science.gov (United States)

    Timm, S.; Chadwick, K.; Garzoglio, G.; Noh, S.

    2014-06-01

    Fermilab supports a scientific program that includes experiments and scientists located across the globe. To better serve this community, in 2004, the (then) Computing Division undertook the strategy of placing all of the High Throughput Computing (HTC) resources in a Campus Grid known as FermiGrid, supported by common shared services. In 2007, the FermiGrid Services group deployed a service infrastructure that utilized Xen virtualization, LVS network routing and MySQL circular replication to deliver highly available services that offered significant performance, reliability and serviceability improvements. This deployment was further enhanced through a distributed redundant network core architecture and the physical distribution of the systems that host the virtual machines across multiple buildings on the Fermilab Campus. In 2010, building on the experience pioneered by FermiGrid in delivering production services in a virtual infrastructure, the Computing Sector commissioned the FermiCloud, General Physics Computing Facility and Virtual Services projects to serve as platforms for support of scientific computing (FermiCloud & GPCF) and core computing (Virtual Services). This work will present the evolution of the Fermilab Campus Grid, Virtualization and Cloud Computing infrastructure together with plans for the future.

  16. A Cloud Associated Smart Grid Admin Dashboard

    Directory of Open Access Journals (Sweden)

    P. Naveen

    2018-02-01

    An intelligent smart grid system meets electricity demand in a sustainable, reliable, economical and environmentally friendly manner. As the smart grid evolves, it has the responsibility of meeting changing consumer needs on a day-to-day basis. Modern energy consumers want to regulate their consumption patterns more competently and intelligently than currently provided means allow. To fulfill consumers' needs, smart meters and sensors make the grid infrastructure more efficient and resilient in energy data collection and management, even with ever-changing renewable power generation. Though the cloud acts as an outlet for energy consumers to retrieve energy data from the grid, the information systems available are technically constrained and not user-friendly. Hence, a simple, technology-enabled utility-consumer interactive information system in the form of a dashboard is presented to cater to electricity consumers' needs.

  17. The HEPiX Virtualisation Working Group: Towards a Grid of Clouds

    International Nuclear Information System (INIS)

    Cass, Tony

    2012-01-01

    The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
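    The core of the trusted-image idea is that a site only instantiates images whose integrity it can verify against an endorsed list. As a hedged illustration (the real HEPiX image lists are signed documents with richer metadata; the list format and names below are invented), a site-side integrity check might reduce to:

```python
import hashlib

# Hypothetical endorsed-image list: image name -> trusted SHA-256 digest.
# A real deployment would also verify the signature on the list itself.
TRUSTED_IMAGES = {
    "sl6-worker-v42.img": "9f86d081884c7d659a2feaa0c55ad015"
                          "a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_trusted(image_path, image_name):
    """Recompute the image digest and compare with the endorsed value."""
    digest = hashlib.sha256()
    with open(image_path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == TRUSTED_IMAGES.get(image_name)
```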

  18. Virtual Machine Lifecycle Management in Grid and Cloud Computing

    OpenAIRE

    Schwarzkopf, Roland

    2015-01-01

    Virtualization is the foundation for two important technologies: Virtualized Grid and Cloud Computing. Virtualized Grid Computing is an extension of the Grid Computing concept introduced to satisfy the security and isolation requirements of commercial Grid users. Applications are confined in virtual machines to isolate them from each other and the data they process from other users. Apart from these important requirements, Virtual...

  19. The International Symposium on Grids and Clouds

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2012 will be held at Academia Sinica in Taipei from 26 February to 2 March 2012, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). 2012 is the decennium anniversary of the ISGC, which over the last decade has tracked the convergence, collaboration and innovation of individual researchers across the Asia Pacific region into a coherent community. With the continuous support and dedication of the delegates, ISGC has provided the primary international distributed computing platform where distinguished researchers and collaboration partners from around the world share their knowledge and experiences. The last decade has seen the wide-scale emergence of e-Infrastructure as a critical asset for the modern e-Scientist. The emergence of large-scale research infrastructures and instruments has produced a torrent of electronic data that is forcing a generational change in the scientific process and the mechanisms used to analyse the resulting data deluge. No longer can the processing of these vast amounts of data and production of relevant scientific results be undertaken by a single scientist. Virtual Research Communities that span organisations around the world, through an integrated digital infrastructure that connects the trust and administrative domains of multiple resource providers, have become critical in supporting these analyses. Topics covered in ISGC 2012 include: High Energy Physics, Biomedicine & Life Sciences, Earth Science, Environmental Changes and Natural Disaster Mitigation, Humanities & Social Sciences, Operations & Management, Middleware & Interoperability, Security and Networking, Infrastructure Clouds & Virtualisation, Business Models & Sustainability, Data Management, Distributed Volunteer & Desktop Grid Computing, High Throughput Computing, and High Performance, Manycore & GPU Computing.

  20. Grid and Cloud for Developing Countries

    Science.gov (United States)

    Petitdidier, Monique

    2014-05-01

    The European Grid e-infrastructure has shown the capacity to connect geographically distributed heterogeneous compute resources in a secure way, taking advantage of a robust and fast REN (Research and Education Network). In many countries, as in Africa, the first step has been to implement a REN, with regional organizations like Ubuntunet, WACREN or ASREN coordinating the development and improvement of the network and its interconnection. Internet connectivity in those countries is still growing rapidly. The second step has been to meet the compute needs of the scientists. Even though many of them have their own laptops, multi-core or not, for more and more applications this is not enough, because they have to face intensive computing due to the large amount of data to be processed and/or complex codes. So far one solution has been to go abroad, to Europe or America, to run large applications, or not to participate in international communities. The Grid is very attractive for connecting geographically distributed heterogeneous resources, aggregating new ones and creating new sites on the REN with secure access. All users have the same services even if they have no resources in their institute. With faster and more robust internet they will be able to take advantage of the European Grid. There are different initiatives to provide resources and training, like the UNESCO/HP Brain Gain initiative and EUMEDGrid. Nowadays Clouds are becoming very attractive and are starting to be developed in some countries. In this talk, the challenges for those countries in implementing such e-infrastructures and in developing, in parallel, scientific and technical research and education in the new technologies will be presented and illustrated by examples.

  1. Dynamic federation of grid and cloud storage

    Science.gov (United States)

    Furano, Fabrizio; Keeble, Oliver; Field, Laurence

    2016-09-01

    The Dynamic Federations project ("Dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol-agnostic, we have focused our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of the ATLAS and LHCb data, and supports geography-aware replica selection. The work done exploits the federation potential of HTTP to build systems that offer uniform, scalable, catalogue-less access to the storage and metadata ensemble, and the possibility of seamless integration of other compatible resources such as those from cloud providers. Dynafed can exploit the potential of the S3 delegation scheme, effectively federating on the fly any number of S3 buckets from different providers and applying a uniform authorization to them. This feature has been used to deploy in production the BOINC Data Bridge, which uses the Uniform Generic Redirector with S3 buckets to harmonize the BOINC authorization scheme with the Grid/X.509 one. The Data Bridge has been deployed in production with good results. We believe that the features of a loosely coupled federation of open-protocol-based storage elements open many possibilities for smoothly evolving the current computing models and for supporting new scientific computing projects that rely on massive distribution of data and that would appreciate systems that can more easily be interfaced with commercial providers and can work natively with Web browsers and clients.
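    The catalogue-less access pattern described here rests on plain HTTP redirection: a client asks the federation endpoint for a file and is sent to a concrete replica (Grid storage or an S3 bucket). A minimal client-side sketch with a hypothetical federation URL (Dynafed itself needs no special client):

```python
import requests

# Hypothetical Dynafed endpoint; the redirector picks a replica for us,
# e.g. based on client geography, and answers with an HTTP redirect.
FEDERATION_URL = "https://federation.example.org/fed/atlas/data/file.root"

resp = requests.get(FEDERATION_URL, allow_redirects=False, timeout=30)
if resp.is_redirect:
    replica = resp.headers["Location"]
    print("redirected to replica:", replica)
    data = requests.get(replica, timeout=300).content
else:
    data = resp.content  # endpoint served the file directly
```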

  2. Automated Grid Monitoring for LHCb through HammerCloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The HammerCloud system is used by CERN IT to monitor the status of the Worldwide LHC Computing Grid (WLCG). HammerCloud automatically submits jobs to WLCG computing resources, closely replicating the workflow of Grid users (e.g. physicists analyzing data). This allows computation nodes and storage resources to be monitored, software to be tested (somewhat like continuous integration), and new sites to be stress tested with a heavy job load before commissioning. The HammerCloud system has been in use for ATLAS and CMS experiments for about five years. This summer's work involved porting the HammerCloud suite of tools to the LHCb experiment. The HammerCloud software runs functional tests and provides data visualizations. HammerCloud's LHCb variant is written in Python, using the Django web framework and Ganga/DIRAC for job management.
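    As a rough illustration of what such a functional test looks like from the job-management side, here is a hedged sketch of a probe job expressed through Ganga's job interface (run inside the Ganga shell, where Job and the Dirac backend are predefined; the probe script name is invented and the real HammerCloud payloads are far richer):

```python
# Executed as `ganga probe.py`: Job and Dirac come from Ganga's GPI.
j = Job(name="hc-functional-probe")
j.application.exe = "./probe.sh"   # small analysis-like payload (invented)
j.backend = Dirac()                # route the job through DIRAC, as LHCb does
j.submit()
# HammerCloud would then poll the job status and turn it into site metrics.
```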

  3. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    Kennedy, J; Walker, R; Olszewski, A; Nderitu, S; Serfon, C; Duckeck, G

    2010-01-01

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated to the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management, involving data replication, deletion and consistency checks, Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore we have defined good channels of communication between ATLAS, the T1 and the T2s, and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  4. Use of VMware for providing cloud infrastructure for the Grid

    International Nuclear Information System (INIS)

    Long, Robin; Storey, Matthew

    2014-01-01

    The need to maximise computing resources whilst maintaining versatile setups leads to the need for flexible, on-demand facilities through the use of cloud computing. GridPP is currently investigating the role that Cloud Computing, in the form of Virtual Machines, can play in supporting Particle Physics analyses. As part of this research we look at the ability of VMware's ESXi hypervisors [6] to provide such an infrastructure through the use of Virtual Machines (VMs), the advantages of such systems, and their potential performance compared to physical environments.

  5. An Authentication Gateway for Integrated Grid and Cloud Access

    International Nuclear Information System (INIS)

    Ciaschini, V; Salomoni, D

    2011-01-01

    The WNoDeS architecture, providing distributed, integrated access to both Cloud and Grid resources through virtualization technologies, makes use of an Authentication Gateway to support diverse authentication mechanisms. Three main use cases are foreseen, covering access via X.509 digital certificates, federated services like Shibboleth or Kerberos, and credit-based access. In this paper, we describe the structure of the WNoDeS authentication gateway.

  6. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially the Climate community. These paradigms are modifying the way climate applications are being executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the FP7 of the European Commission (grant agreement no. 312979); the European Regional Development Fund - ERDF and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  7. Grid site testing for ATLAS with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, J; Hönig, F; Legger, F; LLamas, R Medrano; Sciacca, F G; Ster, D van der

    2014-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling virtual organisations (VO) and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test workflows. These new workflows comprise e.g. tests of the ATLAS nightly build system, ATLAS Monte Carlo production system, XRootD federation (FAX) and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  8. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2014-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows comprise e.g. tests of the ATLAS nightly build system, ATLAS MC production system, XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  9. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2013-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows comprise e.g. tests of the ATLAS nightly build system, ATLAS MC production system, XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  10. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

    The smart grid is an emerging energy system in which information technology, tools and techniques are applied to make the grid run more efficiently. It possesses demand response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To focus on these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid.
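    To make "economic power dispatch" concrete: the classic formulation minimizes total quadratic generation cost subject to meeting demand within unit limits. A small illustrative instance (not the paper's model; all coefficients are made up), solvable with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

# Cost of unit i: a*P^2 + b*P + c ($/h); coefficients are invented.
a = np.array([0.008, 0.009, 0.007])
b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0])
p_min = np.array([100.0, 100.0, 50.0])   # MW limits per unit
p_max = np.array([400.0, 350.0, 200.0])
demand = 650.0                           # MW to be served

cost = lambda p: np.sum(a * p**2 + b * p + c)
res = minimize(cost, x0=(p_min + p_max) / 2, method="SLSQP",
               bounds=list(zip(p_min, p_max)),
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - demand}])
print(res.x, cost(res.x))  # optimal MW split and hourly cost
```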

  11. Integration of cloud, grid and local cluster resources with DIRAC

    International Nuclear Information System (INIS)

    Fifield, Tom; Sevior, Martin; Carmona, Ana; Casajús, Adrián; Graciani, Ricardo

    2011-01-01

    Grid computing was developed to provide users with uniform access to large-scale distributed resources. This has worked well; however, there are significant resources available to the scientific community that do not follow this paradigm - those offered by cloud infrastructure providers, HPC supercomputers or local clusters. DIRAC (Distributed Infrastructure with Remote Agent Control) was originally designed to support direct submission to the Local Resource Management Systems (LRMS) of such clusters for LHCb, matured to support grid workflows and has recently been updated to support Amazon's Elastic Compute Cloud. This raises a number of new possibilities - by opening avenues to new resources, virtual organisations can change their resources with usage patterns and use these dedicated facilities for a given time. For example, user communities such as High Energy Physics experiments have computing tasks with a wide variety of requirements in terms of CPU, data access or memory consumption, and their usage profile is never constant throughout the year. Having the possibility to transparently absorb peaks in the demand for these kinds of tasks using Cloud resources could allow a reduction in the overall cost of the system. This paper investigates interoperability by following a recent large-scale production exercise utilising resources from these three different paradigms, during the 2010 Belle Monte Carlo run. Through this, it discusses the challenges and opportunities of such a model.
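    DIRAC exposes a single job-submission interface regardless of whether the job ends up on a grid site, a cloud VM or a local cluster. A minimal sketch using DIRAC's public Python API (the executable and job name are placeholders; a configured DIRAC client environment and valid credentials are assumed):

```python
from DIRAC.Core.Base.Script import parseCommandLine
parseCommandLine()  # initialise DIRAC from the local configuration

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("mc-demo")                       # placeholder name
job.setExecutable("/bin/echo", arguments="simulating events")
job.setCPUTime(3600)

result = Dirac().submitJob(job)              # DIRAC picks the resource
print(result)  # {'OK': True, 'Value': <job id>} on success
```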

  12. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  13. Computing on the grid and in the cloud

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    "The results today are only possible because of the extraordinary performance of the accelerators, including the infrastructure, the experiments, and the Grid computing." These were the words of the CERN Director General Rolf Heuer when the observation of a new particle consistent with a Higgs Boson was revealed to the world on the 4th July 2012. The end result of the all investments made to build and operate the LHC is the data that are recorded and the knowledge that can be extracted. It is the role of the global computing infrastructure to unlock the value that is encapsulated in the data. This lecture provides a detailed overview of the Worldwide LHC Computing Grid, an international collaboration to distribute and analyse the LHC data.

  14. International Symposium on Grids and Clouds (ISGC) 2016

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2016 will be held at Academia Sinica in Taipei, Taiwan from 13-18 March 2016, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). The theme of ISGC 2016 focuses on “Ubiquitous e-infrastructures and Applications”. Contemporary research is impossible without a strong IT component - researchers rely on the existence of stable and widely available e-infrastructures and their higher level functions and properties. As a result of these expectations, e-Infrastructures are becoming ubiquitous, providing an environment that supports large scale collaborations that deal with global challenges as well as smaller and temporal research communities focusing on particular scientific problems. To support those diversified communities and their needs, the e-Infrastructures themselves are becoming more layered and multifaceted, supporting larger groups of applications. Following the call for the last year conference, ISGC 2016 continues its aim to bring together users and application developers with those responsible for the development and operation of multi-purpose ubiquitous e-Infrastructures. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities, Arts, and Social Sciences (HASS) Applications, Virtual Research Environment (including Middleware, tools, services, workflow, etc.), Data Management, Big Data, Networking & Security, Infrastructure & Operations, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC), etc.

  15. International Symposium on Grids and Clouds (ISGC) 2014

    Science.gov (United States)

    The International Symposium on Grids and Clouds (ISGC) 2014 will be held at Academia Sinica in Taipei, Taiwan from 23-28 March 2014, with co-located events and workshops. The conference is hosted by the Academia Sinica Grid Computing Centre (ASGC). “Bringing the data scientist to global e-Infrastructures” is the theme of ISGC 2014. The last decade has seen the phenomenal growth in the production of data in all forms by all research communities to produce a deluge of data from which information and knowledge need to be extracted. Key to this success will be the data scientist - educated to use advanced algorithms, applications and infrastructures - collaborating internationally to tackle society’s challenges. ISGC 2014 will bring together researchers working in all aspects of data science from different disciplines around the world to collaborate and educate themselves in the latest achievements and techniques being used to tackle the data deluge. In addition to the regular workshops, technical presentations and plenary keynotes, ISGC this year will focus on how to grow the data science community by considering the educational foundation needed for tomorrow’s data scientist. Topics of discussion include Physics (including HEP) and Engineering Applications, Biomedicine & Life Sciences Applications, Earth & Environmental Sciences & Biodiversity Applications, Humanities & Social Sciences Application, Virtual Research Environment (including Middleware, tools, services, workflow, ... etc.), Data Management, Big Data, Infrastructure & Operations Management, Infrastructure Clouds and Virtualisation, Interoperability, Business Models & Sustainability, Highly Distributed Computing Systems, and High Performance & Technical Computing (HPTC).

  16. An Analysis of Security and Privacy Issues in Smart Grid Software Architectures on Clouds

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Kumbhare, Alok; Cao, Baohua; Prasanna, Viktor K.

    2011-07-09

    Power utilities globally are increasingly upgrading to Smart Grids that use bi-directional communication with the consumer to enable an information-driven approach to distributed energy management. Clouds offer features well suited to Smart Grid software platforms and applications, such as elastic resources and shared services. However, the security and privacy concerns inherent in an information-rich Smart Grid environment are further exacerbated by their deployment on Clouds. Here, we present an analysis of security and privacy issues in a Smart Grid software architecture operating on different Cloud environments, in the form of a taxonomy. We use the Los Angeles Smart Grid Project that is underway in the largest U.S. municipal utility to drive this analysis, which will benefit both Cloud practitioners targeting Smart Grid applications and Cloud researchers investigating security and privacy.

  17. Simulation modeling of cloud computing for smart grid using CloudSim

    Directory of Open Access Journals (Sweden)

    Sandeep Mehmi

    2017-05-01

    In this paper a smart grid cloud has been simulated using CloudSim. Various parameters, like the number of virtual machines (VMs), VM image size, VM RAM, VM bandwidth and cloudlet length, and their effect on cost and cloudlet completion time under the time-shared and space-shared resource allocation policies, have been studied. As the number of cloudlets increased from 68 to 178, a greater number of cloudlets completed their execution, with higher cloudlet completion times under the time-shared allocation policy than under the space-shared allocation policy. A similar trend has been observed when VM bandwidth is increased from 1 Gbps to 10 Gbps and VM RAM is increased from 512 MB to 5120 MB. The cost of processing increased linearly with the number of VMs, VM image size and cloudlet length.
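    The time-shared versus space-shared contrast reported here follows directly from how the two policies schedule cloudlets on a VM's core. CloudSim itself is Java; the back-of-envelope Python below (not CloudSim code) reproduces the effect for one single-core VM:

```python
def space_shared(lengths_mi, mips):
    """Cloudlets run one after another: completion = cumulative runtime."""
    t, done = 0.0, []
    for length in sorted(lengths_mi):
        t += length / mips
        done.append(t)
    return done

def time_shared(lengths_mi, mips):
    """All unfinished cloudlets share the core equally (processor sharing)."""
    remaining, t, done = sorted(lengths_mi), 0.0, []
    while remaining:
        n, shortest = len(remaining), remaining[0]
        t += shortest * n / mips          # every job advances by `shortest` MI
        remaining = [r - shortest for r in remaining[1:]]
        done.append(t)
    return done

# Two cloudlets of 1000 and 2000 MI on a 1000-MIPS core:
print(space_shared([1000, 2000], 1000))  # [1.0, 3.0] s
print(time_shared([1000, 2000], 1000))   # [2.0, 3.0] s -> later completions
```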

  18. The International Symposium on Grids and Clouds and the Open Grid Forum

    Science.gov (United States)

    The International Symposium on Grids and Clouds 2011 was held at Academia Sinica in Taipei, Taiwan on 19th to 25th March 2011. A series of workshops and tutorials preceded the symposium. The aim of ISGC is to promote the use of grid and cloud computing in the Asia Pacific region. Over the 9 years that ISGC has been running, the programme has evolved to become more user-community focused, with subjects reaching out to a larger population. Research communities are making widespread use of distributed computing facilities. Linking together data centers, production grids, desktop systems or public clouds, many researchers are able to do more research and produce results more quickly. They could do much more if the computing infrastructures they use worked together more effectively. Changes in the way we approach distributed computing, and new services from commercial providers, mean that boundaries are starting to blur. This opens the way for hybrid solutions that make it easier for researchers to get their job done. Consequently the theme for ISGC 2011 was the opportunities that better integrated computing infrastructures can bring, and the steps needed to achieve the vision of a seamless global research infrastructure. 2011 is a year of firsts for ISGC. First the title - while the acronym remains the same, its meaning has changed to reflect the evolution of computing: The International Symposium on Grids and Clouds. Secondly the programming - ISGC 2011 has always included topical workshops and tutorials. But 2011 is the first year that ISGC has been held in conjunction with the Open Grid Forum, which held its 31st meeting with a series of working group sessions. The ISGC plenary session included keynote speakers from OGF who highlighted the relevance of standards for the research community. ISGC, with its focus on applications and operational aspects, complemented well OGF's focus on standards development. ISGC brought to OGF real-life use cases and needs to be

  19. Generating Free-Form Grid Truss Structures from 3D Scanned Point Clouds

    Directory of Open Access Journals (Sweden)

    Hui Ding

    2017-01-01

    Reconstruction according to physical shape is a novel way to generate free-form grid truss structures. 3D scanning is an effective means of acquiring physical form information, and it generates dense point clouds on the surfaces of objects. However, generating grid truss structures from point clouds is still a challenge. Based on the advancing front technique (AFT), which is widely used in the Finite Element Method (FEM), a scheme for generating grid truss structures from 3D scanned point clouds is proposed in this paper. Based on the characteristics of point cloud data, a search box is adopted to reduce the search space in grid generation. A front-advancing procedure suited to point clouds is established. The Delaunay and Laplacian methods are used to improve the quality of the generated grids, and an adjustment strategy that locates grid nodes at appointed places is proposed. Several examples of generating grid truss structures from 3D scanned point clouds of seashells are carried out to verify the proposed scheme. Physical models of the grid truss structures generated in the examples were manufactured by 3D printing, which confirms the feasibility of the scheme.
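    The advancing front technique itself is involved, but the grid-improvement steps the abstract mentions are easy to sketch. Below, a projected point-cloud slice is triangulated with Delaunay and the node positions relaxed by Laplacian smoothing (a simplified stand-in for the paper's scheme; note that boundary nodes are not pinned here):

```python
import numpy as np
from scipy.spatial import Delaunay

pts = np.random.rand(200, 2)          # stand-in for a projected point cloud
tri = Delaunay(pts)

# Node adjacency from the triangle list.
neighbors = [set() for _ in pts]
for a, b, c in tri.simplices:
    neighbors[a] |= {b, c}
    neighbors[b] |= {a, c}
    neighbors[c] |= {a, b}

# Laplacian smoothing: move each node towards the mean of its neighbours.
smoothed = pts.copy()
for _ in range(10):
    smoothed = np.array([smoothed[list(nb)].mean(axis=0) if nb else p
                         for p, nb in zip(smoothed, neighbors)])
```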

  20. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    This paper proposes a cloud computing framework in a smart grid environment, creating a small integrated energy hub that supports real-time computing for handling huge volumes of data. A stochastic programming model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
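    The flavour of such a DSM optimization can be shown with a much smaller deterministic stand-in (the paper's stochastic model, GUI and Gurobi solver are not reproduced): schedule a flexible load into cheap hours under a per-hour cap.

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([0.30, 0.28, 0.15, 0.10, 0.12, 0.25])  # $/kWh, made up
total_energy = 10.0                                       # kWh to deliver
max_per_hour = 3.0                                        # kWh cap per slot

res = linprog(c=price,
              A_eq=np.ones((1, 6)), b_eq=[total_energy],
              bounds=[(0.0, max_per_hour)] * 6)
print(res.x)  # demand packed into the cheapest hours, respecting the cap
```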

  1. Automated Grid Monitoring for the LHCb Experiment Through HammerCloud

    CERN Document Server

    Dice, Bradley

    2015-01-01

    The HammerCloud system is used by CERN IT to monitor the status of the Worldwide LHC Computing Grid (WLCG). HammerCloud automatically submits jobs to WLCG computing resources, closely replicating the workflow of Grid users (e.g. physicists analyzing data). This allows computation nodes and storage resources to be monitored, software to be tested (somewhat like continuous integration), and new sites to be stress tested with a heavy job load before commissioning. The HammerCloud system has been in use for ATLAS and CMS experiments for about five years. This summer's work involved porting the HammerCloud suite of tools to the LHCb experiment. The HammerCloud software runs functional tests and provides data visualizations. HammerCloud's LHCb variant is written in Python, using the Django web framework and Ganga/DIRAC for job management.

  2. International Symposium on Grids and Clouds (ISGC) 2017

    Science.gov (United States)

    2017-03-01

    The International Symposium on Grids and Clouds (ISGC) 2017 will be held at Academia Sinica in Taipei, Taiwan from 5-10 March 2017, with co-located events and workshops. The main theme of ISGC 2017 is "Global Challenges: From Open Data to Open Science". The unprecedented progress in ICT has transformed the way education is conducted and research is carried out. The emerging global e-Infrastructure, championed by global science communities such as High Energy Physics, Astronomy, and Biomedicine, must permeate into other sciences. Many areas, such as climate change, disaster mitigation, and human sustainability and well-being, represent global challenges where collaboration over e-Infrastructure will presumably help resolve the common problems of the people who are impacted. Access to global e-Infrastructure helps also the less globally organized, long-tail sciences, with their own collaboration challenges. Open data are not only a political phenomenon serving government transparency; they also create an opportunity to eliminate access barriers to all scientific data, specifically data from global sciences and regional data that concern natural phenomena and people. In this regard, the purpose of open data is to improve sciences, accelerating specifically those that may benefit people. Nevertheless, to eliminate barriers to open data is itself a daunting task and the barriers to individuals, institutions and big collaborations are manifold. Open science is a step beyond open data, where the tools and understanding of scientific data must be made available to whoever is interested to participate in such scientific research. The promotion of open science may change the academic tradition practiced over the past few hundred years. This change of dynamics may contribute to the resolution of common challenges of human sustainability where the current pace of scientific progress is not sufficiently fast. ISGC 2017 created a face-to-face venue where individual

  3. Cloud Computing for Pharmacometrics: Using AWS, NONMEM, PsN, Grid Engine, and Sonic.

    Science.gov (United States)

    Sanduja, S; Jewell, P; Aron, E; Pharai, N

    2015-09-01

    Cloud computing allows pharmacometricians to access advanced hardware, network, and security resources available to expedite analysis and reporting. Cloud-based computing environments are available at a fraction of the time and effort when compared to traditional local datacenter-based solutions. This tutorial explains how to get started with building your own personal cloud computer cluster using Amazon Web Services (AWS), NONMEM, PsN, Grid Engine, and Sonic.
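    The cluster-building steps in such a tutorial start from provisioning EC2 nodes. A hedged sketch with boto3 (the AMI, key pair and security group IDs are placeholders; the image is assumed to be pre-baked with NONMEM, PsN and Grid Engine):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: NONMEM+PsN+SGE image
    InstanceType="c4.4xlarge",         # compute-optimised execution host
    MinCount=1, MaxCount=4,            # scale out to up to four nodes
    KeyName="pmx-cluster-key",         # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print([i["InstanceId"] for i in resp["Instances"]])
```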

  4. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud lies at precise coordinates within one of the layers, each point can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
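    The speed-up comes from replacing per-point diffraction with one FFT pass per depth layer. A compact numpy sketch of that layer-based idea (illustrative only; the parameters are invented and the paper's exact pipeline, including the depth-camera capture, is not reproduced):

```python
import numpy as np

N, pitch, wavelength = 512, 8e-6, 532e-9
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)
kz = 2j * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0))

def propagate(field, z):
    """Angular spectrum propagation of a complex field over distance z."""
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz * z))

rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0.05], [N, N, 0.10], size=(100, 3))  # x, y, z
edges = np.linspace(0.05, 0.10, 6)        # bin the cloud into 5 depth layers

hologram = np.zeros((N, N), dtype=complex)
for z0, z1 in zip(edges[:-1], edges[1:]):
    layer = np.zeros((N, N), dtype=complex)
    sel = (points[:, 2] >= z0) & (points[:, 2] < z1)
    ix, iy = points[sel, 0].astype(int), points[sel, 1].astype(int)
    layer[iy, ix] = 1.0                   # point sources gridded to the layer
    hologram += propagate(layer, (z0 + z1) / 2)   # one FFT pass per layer
```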

  5. Cloud vector mapping using MODIS 09 Climate Modeling Grid (CMG) for the year 2010 and 2011

    International Nuclear Information System (INIS)

    Jah, Asjad Asif; Farrukh, Yousaf Bin; Ali, Rao Muhammad Saeed

    2013-01-01

    An alternate use for MODIS images was sought by mapping cloud movement directions and dissipation time during the 2010 and 2011 floods. MODIS Level-02 daily CMG (Climate Modelling Grid) land-cover images were downloaded and subsequently rectified and clipped to the study area. These images were then put together to observe the direction of cloud movement and vectorize the observed paths. Initial findings suggest that cloud cover usually does not persist over the northern humid region of the country and dissipates within less than 24 hours. Additionally, this led to the development of a robust methodology for cloud motion analysis using FOSS and market-leading GIS utilities.
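    A toy version of deriving a motion vector from two consecutive daily cloud masks (illustrative only; the study vectorised observed paths in a GIS rather than computing centroids):

```python
import numpy as np

def motion_vector(mask_day1, mask_day2):
    """Displacement (rows, cols) of the cloud-mask centroid between days."""
    c1 = np.argwhere(mask_day1).mean(axis=0)
    c2 = np.argwhere(mask_day2).mean(axis=0)
    return c2 - c1   # for a north-up raster, +rows = southward drift

day1 = np.zeros((10, 10), bool); day1[2:5, 2:5] = True
day2 = np.zeros((10, 10), bool); day2[4:7, 5:8] = True
print(motion_vector(day1, day2))  # [2. 3.] -> south-eastward movement
```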

  6. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Rinaldi, L; Ciocca, C; Jha, M K; Annovi, A; Antonelli, M; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Barberis, S; Carminati, L; Campana, S; Di Girolamo, A; Capone, V; Carlino, G; Doria, A; Esposito, R; Merola, L; De Salvo, A; Luminari, L

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to the evolution of the ATLAS Computing Model.

  7. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

    P2P, Grid, Cloud and Internet computing technologies have quickly become established as breakthrough paradigms for solving complex problems, enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large-scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  8. ATLAS operations in the GridKa T1/T2 Cloud

    International Nuclear Information System (INIS)

    Duckeck, G; Serfon, C; Walker, R; Harenberg, T; Kalinin, S; Schultes, J; Kawamura, G; Leffhalm, K; Meyer, J; Nderitu, S; Olszewski, A; Petzold, A; Sundermann, J E

    2011-01-01

    The ATLAS GridKa cloud consists of the GridKa Tier-1 centre and 12 Tier-2 sites from five countries associated with it. Over the last few years a well-defined and tested operations model has evolved. Several core cloud services need to be operated and closely monitored: distributed data management, involving data replication, deletion and consistency checks; support for ATLAS production activities, including Monte Carlo simulation, reprocessing and pilot factory operation; continuous checks of data availability and performance for user analysis; and software installation and database setup. Good communication between the sites, the operations team and ATLAS, as well as efficient cloud-level monitoring tools, are of crucial importance. The paper gives an overview of the operations model and ATLAS services within the cloud.

  9. Edgeware Security Risk Management: A Three Essay Thesis on Cloud, Virtualization and Wireless Grid Vulnerabilities

    Science.gov (United States)

    Brooks, Tyson T.

    2013-01-01

    This thesis comprises three essays which contribute to the foundational understanding of the vulnerabilities and risks involved in implementing wireless grid Edgeware technology in a virtualized cloud environment. Since communication networks and devices are subject to becoming the target of exploitation by hackers (e.g. individuals who…

  10. Multidimensional Environmental Data Resource Brokering on Computational Grids and Scientific Clouds

    Science.gov (United States)

    Montella, Raffaele; Giunta, Giulio; Laccetti, Giuliano

    Grid computing has evolved widely over the past years, and its capabilities have found their way even into business products, no longer being relegated to scientific applications. Today, grid computing technology is not restricted to a set of specific grid open-source or industrial products; rather, it comprises a set of capabilities virtually within any kind of software, used to create shared and highly collaborative production environments. These environments are focused on computational (workload) capabilities and the integration of information (data) into those computational capabilities. An active application field of grid computing is the full virtualization of scientific instruments in order to increase their availability and decrease operational and maintenance costs. Computational and information grids make it possible to manage real-world objects in a service-oriented way using widespread industrial standards.

  11. A Survey on Cloud Security Issues and Techniques

    OpenAIRE

    Sharma, Shubhanjali; Gupta, Garima; Laxmi, P. R.

    2014-01-01

    Today, cloud computing is an emerging way of computing in computer science. Cloud computing is a set of resources and services offered over the network or internet. Cloud computing extends various computing techniques such as grid computing and distributed computing. Today cloud computing is used in both industry and academia. The cloud facilitates its users by providing virtual resources via the internet. As the field of cloud computing spreads, new techniques are developing. ...

  12. Smart grids clouds, communications, open source, and automation

    CERN Document Server

    Bakken, David

    2014-01-01

    The utilization of sensors, communications, and computer technologies to create greater efficiency in the generation, transmission, distribution, and consumption of electricity will enable better management of the electric power system. As the use of smart grid technologies grows, utilities will be able to automate meter reading and billing, and consumers will be more aware of their energy usage and the associated costs. The results will require utilities and their suppliers to develop new business models, strategies, and processes. With an emphasis on reducing costs and improving return on investment...

  13. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    Science.gov (United States)

    Chakravarthy, Srinivas R.; Rumyantsev, Alexander

    2018-03-01

    Cloud computing continues to prove its flexibility and versatility in helping industries, businesses and academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow the idle computer resources of an enterprise or community to be utilized by means of a distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and, at the same time, to decrease the expected latency for users. The crucial parameter for optimization, both in cloud computing and in desktop grids, is the level of redundancy (replication) for service requests/workunits. In this paper we study optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For service times we consider phase-type distributions as well as shifted exponential and Weibull distributions. We use both analytical and simulation approaches in our analysis and report some interesting qualitative results.
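
    The replication trade-off the paper analyzes can be illustrated with a toy Monte Carlo model. The sketch below simplifies heavily: Poisson arrivals stand in for the versatile (Markovian) point process, service times are shifted exponentials, and a request forked to r servers completes when its first replica finishes, cancelling the rest. All parameter values are invented.

```python
# Toy cancel-on-completion fork-join: replicate each request to r of c
# servers; latency is driven by the fastest replica.
import random

def mean_latency(lam=0.7, c=4, r=2, shift=0.2, rate=1.0, n_jobs=200_000):
    free_at = [0.0] * c                  # time at which each server frees up
    t = total = 0.0
    for _ in range(n_jobs):
        t += random.expovariate(lam)     # next Poisson arrival
        chosen = sorted(range(c), key=free_at.__getitem__)[:r]
        done = min(max(t, free_at[i]) + shift + random.expovariate(rate)
                   for i in chosen)
        for i in chosen:                 # winner finishes, losers are cancelled
            free_at[i] = done
        total += done - t
    return total / n_jobs

random.seed(42)
for r in (1, 2, 3):
    print(f"replicas r={r}: mean latency {mean_latency(r=r):.3f}")
```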

  14. Efficient Redundancy Techniques in Cloud and Desktop Grid Systems using MAP/G/c-type Queues

    Directory of Open Access Journals (Sweden)

    Chakravarthy Srinivas R.

    2018-03-01

    Full Text Available Cloud computing is continuing to prove its flexibility and versatility in helping industries and businesses as well as academia as a way of providing needed computing capacity. As an important alternative to cloud computing, desktop grids allow to utilize the idle computer resources of an enterprise/community by means of distributed computing system, providing a more secure and controllable environment with lower operational expenses. Further, both cloud computing and desktop grids are meant to optimize limited resources and at the same time to decrease the expected latency for users. The crucial parameter for optimization both in cloud computing and in desktop grids is the level of redundancy (replication for service requests/workunits. In this paper we study the optimal replication policies by considering three variations of Fork-Join systems in the context of a multi-server queueing system with a versatile point process for the arrivals. For services we consider phase type distributions as well as shifted exponential and Weibull. We use both analytical and simulation approach in our analysis and report some interesting qualitative results.

  15. On the influence of cloud fraction diurnal cycle and sub-grid cloud optical thickness variability on all-sky direct aerosol radiative forcing

    International Nuclear Information System (INIS)

    Min, Min; Zhang, Zhibo

    2014-01-01

    The objective of this study is to understand how the cloud fraction diurnal cycle and sub-grid cloud optical thickness variability influence the all-sky direct aerosol radiative forcing (DARF). We focus on the southeast Atlantic region, where transported smoke is often observed above low-level water clouds during burning seasons. We use CALIOP observations to derive the optical properties of aerosols. We developed two diurnal cloud fraction variation models. One is based on sinusoidal fitting of MODIS observations from the Terra and Aqua satellites. The other is based on high-temporal-frequency diurnal cloud fraction observations from SEVIRI on board a geostationary satellite. Both models indicate a strong cloud fraction diurnal cycle over the southeast Atlantic region. Sensitivity studies indicate that using a constant cloud fraction corresponding to the Aqua local equatorial crossing time (1:30 PM) generally leads to an underestimated (less positive) diurnal-mean DARF, even when solar diurnal variation is considered. Using the cloud fraction corresponding to the Terra local equatorial crossing time (10:30 AM) generally leads to overestimation. The biases are typically around 10–20%, but can be more than 50%. The influence of sub-grid cloud optical thickness variability on DARF is studied utilizing the cloud optical thickness histogram available in the MODIS Level-3 daily data. Similar to previous studies, we found that the above-cloud smoke in the southeast Atlantic region has a strong warming effect at the top of the atmosphere. However, because of the plane-parallel albedo bias, the warming effect of above-cloud smoke could be significantly overestimated if the grid-mean cloud optical thickness, instead of the full histogram, is used in the computation. This bias generally increases with increasing above-cloud aerosol optical thickness and sub-grid cloud optical thickness inhomogeneity. Our results suggest that the cloud diurnal cycle and sub-grid cloud variability are important factors.
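
    The sampling issue can be made concrete with a deliberately crude numerical toy (all numbers below are invented, not taken from the paper): a diurnal-mean forcing computed with a time-resolved cloud fraction differs from one computed with a single fixed-overpass snapshot.

```python
# Invented-numbers illustration of diurnal-mean DARF vs. a fixed-overpass CF.
import numpy as np

hours = np.arange(0.0, 24.0, 0.5)
solar = np.maximum(0.0, np.cos((hours - 12.0) / 24.0 * 2 * np.pi))  # toy insolation shape
cf = 0.6 + 0.2 * np.sin((hours - 2.0) / 24.0 * 2 * np.pi)           # toy CF diurnal cycle

F_CLOUDY, F_CLEAR = 40.0, -10.0   # invented forcing efficiencies (W m^-2)

def darf(cloud_fraction):
    """Toy all-sky DARF: absorbing smoke warms over cloud, cools over clear sky."""
    return solar * (cloud_fraction * F_CLOUDY + (1.0 - cloud_fraction) * F_CLEAR)

cf_aqua = cf[hours == 13.5].item()      # snapshot CF near the 1:30 PM overpass
print(f"time-resolved CF: {darf(cf).mean():6.2f} W/m^2")
print(f"fixed Aqua CF:    {darf(cf_aqua).mean():6.2f} W/m^2")
```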

  16. The GridEcon Platform: A Business Scenario Testbed for Commercial Cloud Services

    Science.gov (United States)

    Risch, Marcel; Altmann, Jörn; Guo, Li; Fleming, Alan; Courcoubetis, Costas

    Within this paper, we present the GridEcon Platform, a testbed for designing and evaluating economics-aware services in a commercial Cloud computing setting. The Platform is based on the idea that the exact working of such services is difficult to predict in the context of a market, and that an environment for evaluating their behavior in an emulated market is therefore needed. To identify the components of the GridEcon Platform, a number of economics-aware services and their interactions have been envisioned. The two most important components of the platform are the Marketplace and the Workflow Engine. The Workflow Engine allows the simple composition of a market environment by describing the service interactions between economics-aware services. The Marketplace allows goods to be traded using different market mechanisms. The capabilities of these components of the GridEcon Platform, in conjunction with the economics-aware services, are described in this paper in detail. The validation of an implemented market mechanism and of a capacity planning service using the GridEcon Platform also demonstrated the platform's usefulness.

  17. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  18. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...

  19. Techniques and environments for big data analysis parallel, cloud, and grid computing

    CERN Document Server

    Dehuri, Satchidananda; Kim, Euiwhan; Wang, Gi-Name

    2016-01-01

    This volume is aimed at a wide range of readers and researchers in the area of Big Data, presenting the recent advances in the field of Big Data analysis as well as the techniques and tools used to analyze it. The book includes 10 distinct chapters providing a concise introduction to Big Data analysis and to recent techniques and environments for Big Data analysis. It gives insight into how the expensive fitness evaluation of evolutionary learning can play a vital role in Big Data analysis by adopting parallel, grid, and cloud computing environments.

  20. Intelligent battery energy management and control for vehicle-to-grid via cloud computing network

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Javadi, Bahman; Goscinski, Andrzej; Stojcevski, Alex; Bab-Hadiashar, Alireza

    2013-01-01

    Highlights: • The intelligent battery energy management substantially reduces the interactions of PEVs with parking lots. • The intelligent battery energy management improves energy efficiency. • The intelligent battery energy management predicts the road load demand for vehicles. - Abstract: Plug-in Electric Vehicles (PEVs) provide new opportunities to reduce fuel consumption and exhaust emissions. PEVs need to draw and store energy from an electrical grid to supply the propulsive energy for the vehicle. As a result, it is important to know when PEV batteries are available for charging and discharging. Furthermore, battery energy management and control are imperative for PEVs, as the vehicle operation and even the safety of the passengers depend on the battery system. Thus, scheduling grid electricity with parking lots is needed for efficient charging and discharging of PEV batteries. This paper proposes a new intelligent battery energy management and control scheduling service for charging that utilizes Cloud computing networks. The proposed intelligent vehicle-to-grid scheduling service offers the computational scalability required to make the decisions necessary to allow PEV battery energy management systems to operate efficiently when the number of PEVs and charging devices is large. Experimental analyses of the proposed scheduling service, as compared to a traditional scheduling service, are conducted through simulations. The results show that the proposed intelligent battery energy management scheduling service substantially reduces the required number of interactions of PEVs with parking lots and the grid, as well as predicting the load demand in advance with regard to their limitations. It also shows that the intelligent scheduling service for charging using a Cloud computing network is more efficient than the traditional scheduling service network for battery energy management and control.

  1. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Document Server

    Van der Ster, D; Medrano Llamas, R; Legger, F; Sciabà, A; Sciacca, G; Úbeda García, M

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion p...

  2. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion ...

  3. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    International Nuclear Information System (INIS)

    Elmsheuser, Johannes; Legger, Federica; Llamas, Ramón Medrano; Sciabà, Andrea; García, Mario Úbeda; Ster, Daniel van der; Sciacca, Gianfranco

    2012-01-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  4. Experience in Grid Site Testing for ATLAS, CMS and LHCb with HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Medrano Llamas, Ramón; Legger, Federica; Sciabà, Andrea; Sciacca, Gianfranco; Úbeda García, Mario; van der Ster, Daniel

    2012-12-01

    Frequent validation and stress testing of the network, storage and CPU resources of a grid site is essential to achieve high performance and reliability. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run such tests in an automated or on-demand manner. The ATLAS, CMS and LHCb experiments have all developed VO plugins for the service and have successfully integrated it into their grid operations infrastructures. This work will present the experience in running HammerCloud at full scale for more than 3 years and present solutions to the scalability issues faced by the service. First, we will show the particular challenges faced when integrating with CMS and LHCb offline computing, including customized dashboards to show site validation reports for the VOs and a new API to tightly integrate with the LHCbDIRAC Resource Status System. Next, a study of the automatic site exclusion component used by ATLAS will be presented along with results for tuning the exclusion policies. A study of the historical test results for ATLAS, CMS and LHCb will be presented, including comparisons between the experiments’ grid availabilities and a search for site-based or temporal failure correlations. Finally, we will look to future plans that will allow users to gain new insights into the test results; these include developments to allow increased testing concurrency, increased scale in the number of metrics recorded per test job (up to hundreds), and increased scale in the historical job information (up to many millions of jobs per VO).

  5. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele

    2012-01-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on “bare metal” nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.
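
    The client-scaling behaviour mentioned at the end of the abstract is the kind of effect a very simple harness can expose. The sketch below is generic, not the FermiCloud test suite: it measures aggregate read throughput while growing the number of concurrent reader processes, and the file path is a placeholder for a large file on the storage under test.

```python
# Minimal aggregate-read-throughput probe: vary the number of concurrent
# client processes and watch how total MB/s scales. PATH is a placeholder.
import time
from multiprocessing import Pool

PATH = "/mnt/teststorage/big.dat"     # placeholder: large file on the storage under test
CHUNK = 4 * 1024 * 1024               # 4 MB reads

def reader(offset):
    n = 0
    with open(PATH, "rb") as f:
        f.seek(offset)
        for _ in range(64):           # up to 256 MB per client
            buf = f.read(CHUNK)
            if not buf:
                break
            n += len(buf)
    return n

if __name__ == "__main__":
    for nclients in (1, 2, 4, 8, 16):
        with Pool(nclients) as pool:
            t0 = time.time()
            nbytes = sum(pool.map(reader, [i * 64 * CHUNK for i in range(nclients)]))
            dt = time.time() - t0
        print(f"{nclients:2d} clients: {nbytes / dt / 1e6:7.0f} MB/s aggregate")
```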

  6. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Andreazza, A; Annovi, A; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Campana, S; Girolamo, A Di; Carlino, G; Doria, A; Merola, L; Musto, E; Ciocca, C; Jha, M K; Cobal, M; Pascolo, F; Salvo, A De; Luminari, L; Sanctis, U De; Galeazzi, F

    2011-01-01

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performances.

  7. The Determination of Jurisdiction in Grid and Cloud Service Level Agreements

    Science.gov (United States)

    Parrilli, Davide Maria

    Service Level Agreements in Grid and Cloud scenarios can be a source of disputes particularly in case of breach of the obligations arising under them. It is then important to determine where parties can litigate in relation with such agreements. The paper deals with this question in the peculiar context of the European Union, and so taking into consideration Regulation 44/2001. According to the rules on jurisdiction provided by the Regulation, two general distinctions are drawn in order to determine which (European) courts are competent to adjudicate disputes arising out of a Service Level Agreement. The former is between B2B and B2C transactions, and the latter regards contracts which provide a jurisdiction clause and contracts which do not.

  8. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    International Nuclear Information System (INIS)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-01-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.

  9. WNoDeS, a tool for integrated Grid and Cloud access and computing farm virtualization

    Science.gov (United States)

    Salomoni, Davide; Italiano, Alessandro; Ronchieri, Elisabetta

    2011-12-01

    INFN CNAF is the National Computing Center, located in Bologna, Italy, of the Italian National Institute for Nuclear Physics (INFN). INFN CNAF, also called the INFN Tier-1, provides computing and storage facilities to the International High-Energy Physics community and to several multi-disciplinary experiments. Currently, the INFN Tier-1 supports more than twenty different collaborations; in this context, optimization of the usage of computing resources is essential. This is one of the main drivers behind the development of a software called WNoDeS (Worker Nodes on Demand Service). WNoDeS, developed at INFN CNAF and deployed on the INFN Tier-1 production infrastructure, is a solution to virtualize computing resources and to make them available through local, Grid or Cloud interfaces. It is designed to be fully integrated with a Local Resource Management System; it is therefore inherently scalable and permits full integration with existing scheduling, policing, monitoring, accounting and security workflows. WNoDeS dynamically instantiates Virtual Machines (VMs) on-demand, i.e. only when the need arises; these VMs can be tailored and used for purposes like batch job execution, interactive analysis or service instantiation. WNoDeS supports interaction with user requests through traditional batch or Grid jobs and also via the Open Cloud Computing Interface standard, making it possible to allocate compute, storage and network resources on a pay-as-you-go basis. User authentication is supported via several authentication methods, while authorization policies are handled via gLite Argus. WNoDeS is an ambitious solution aimed at virtualizing cluster resources in medium or large scale computing centers, with up to several thousands of Virtual Machines up and running at any given time. In this paper, we describe the WNoDeS architecture.
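
    The on-demand idea at the heart of WNoDeS can be caricatured in a few lines. The sketch below is purely schematic, with no real hypervisor or batch-system calls: a reconciliation loop instantiates virtual worker nodes only when queue pressure demands them and retires them when it subsides.

```python
# Schematic worker-nodes-on-demand loop; "booting" and "destroying" VMs are
# stubbed out with list operations.
class VMFarm:
    def __init__(self, max_vms):
        self.max_vms, self.running = max_vms, []

    def reconcile(self, pending_jobs):
        """Match the number of virtual worker nodes to current queue pressure."""
        want = min(pending_jobs, self.max_vms)
        while len(self.running) < want:
            self.running.append(f"vm-{len(self.running):03d}")  # would boot a VM here
        while len(self.running) > want:
            self.running.pop()                                  # would destroy a VM here
        return list(self.running)

farm = VMFarm(max_vms=8)
for queue_depth in (3, 12, 1):
    print(f"queue={queue_depth:2d} -> {len(farm.reconcile(queue_depth))} worker VMs")
```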

  10. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    International Nuclear Information System (INIS)

    Limosani, Antonio; Boland, Lucien; Crosby, Sean; Huang, Joanna; Sevior, Martin; Coddington, Paul; Zhang, Shunde; Wilson, Ross

    2014-01-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
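
    The VM-launch step that the facility scripts around OpenStack can be hedged-sketched with the openstacksdk client. Every identifier below (cloud name, image, flavor, and network UUIDs) is a placeholder; the actual site used custom scripts plus the Torque and Puppet integration described above.

```python
# Hypothetical openstacksdk sketch (pip install openstacksdk) of booting one
# worker VM; all IDs are placeholders, not the facility's real configuration.
import openstack

conn = openstack.connect(cloud="research-cloud")  # placeholder clouds.yaml entry

server = conn.compute.create_server(
    name="worker-001",
    image_id="IMAGE_UUID",            # placeholder: Scientific Linux base image
    flavor_id="FLAVOR_UUID",          # placeholder flavor
    networks=[{"uuid": "NET_UUID"}],  # placeholder tenant network
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
# In the setup described above, the booted node would then be configured by
# Puppet and joined to the dynamic Torque cluster.
```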

  11. Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud

    Science.gov (United States)

    Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde

    2014-06-01

    The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.

  12. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    Science.gov (United States)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing for the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and the input data. In our empirical experiment, we demonstrate the significant acceleration achieved by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
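
    Of the three approaches compared, the message-passing style is the easiest to caricature on a single machine. The sketch below parallelizes gridding over blocks of output rows with Python's multiprocessing, and substitutes simple inverse-distance weighting for kriging to stay short; data and grid dimensions are synthetic.

```python
# Parallel point-cloud gridding toy: one worker per block of DEM rows.
# Inverse-distance weighting stands in for the paper's kriging interpolation.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(5000, 3))   # synthetic x, y, elevation points

NX = NY = 200
xs = np.linspace(0, 100, NX)
ys = np.linspace(0, 100, NY)

def grid_rows(rows):
    out = np.empty((len(rows), NX))
    for j, iy in enumerate(rows):
        d2 = (pts[:, 0][None, :] - xs[:, None])**2 + (pts[:, 1] - ys[iy])**2
        w = 1.0 / (d2 + 1e-9)               # inverse-distance weights
        out[j] = (w @ pts[:, 2]) / w.sum(axis=1)
    return out

if __name__ == "__main__":
    with Pool(4) as pool:
        dem = np.vstack(pool.map(grid_rows, np.array_split(np.arange(NY), 4)))
    print(dem.shape)                        # (200, 200) digital elevation model
```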

  13. Improving ATLAS grid site reliability with functional tests using HammerCloud

    Science.gov (United States)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performances. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thus preventing user or production jobs from being sent to problematic sites.
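
    The exclusion mechanism in the last sentence amounts to a sliding-window success-rate policy. A toy version, with thresholds and data invented rather than taken from HammerCloud, might look like this:

```python
# Toy automatic site exclusion: drop a site from brokerage when its recent
# functional-test success rate falls below a threshold. Numbers are invented.
from collections import deque

WINDOW, THRESHOLD = 20, 0.8    # last 20 test jobs, require 80% success

class SiteMonitor:
    def __init__(self):
        self.results = {}      # site name -> deque of pass/fail booleans

    def record(self, site, ok):
        self.results.setdefault(site, deque(maxlen=WINDOW)).append(ok)

    def online_sites(self):
        return [s for s, r in self.results.items()
                if len(r) < WINDOW or sum(r) / len(r) >= THRESHOLD]

mon = SiteMonitor()
for i in range(100):
    mon.record("SITE_A", True)           # healthy site
    mon.record("SITE_B", i % 3 != 0)     # ~67% success -> excluded
print(mon.online_sites())                # ['SITE_A']
```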

  14. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment.

    Science.gov (United States)

    Oh, Jeongsu; Choi, Chi-Hwan; Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo

    2016-01-01

    High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology, a distributed data structure that stores all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability beyond those of its ancestor, CLUSTOM, while maintaining high accuracy. The clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using a small laboratory cluster (10 nodes) and under Amazon EC2 cloud-computing environments. Under the laboratory environment, it required only ~3 hours to process a dataset of 200 K reads, regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. The comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in JAVA
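
    Stripped of the data-grid machinery, the hierarchical OTU-clustering step reduces to cutting a dendrogram built from pairwise sequence distances. The sketch below is a single-machine toy with a crude mismatch distance and four invented reads, not CLUSTOM-CLOUD's algorithm:

```python
# Minimal OTU clustering toy: average-linkage dendrogram over pairwise
# mismatch distances, cut at a 97%-identity (0.03 distance) threshold.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

base = "ACGT" * 10                       # 40 bp toy read
reads = [base,
         base[:-1] + "A",                # 1 mismatch -> clusters with base
         "TGCA" * 10,                    # very different -> its own OTU
         ("TGCA" * 10)[:-1] + "G"]

def dist(a, b):                          # crude per-site mismatch fraction
    return sum(x != y for x, y in zip(a, b)) / len(a)

n = len(reads)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = dist(reads[i], reads[j])

Z = linkage(squareform(D), method="average")
print(fcluster(Z, t=0.03, criterion="distance"))   # OTU label per read
```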

  15. Large Scale Monte Carlo Simulation of Neutrino Interactions Using the Open Science Grid and Commercial Clouds

    International Nuclear Information System (INIS)

    Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.

    2015-01-01

    Modern long-baseline neutrino experiments like the NOvA experiment at Fermilab require large-scale, compute-intensive simulations of their neutrino beam fluxes and of the backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)

  16. Can Clouds Replace Grids? A Real-Life Exabyte-Scale Test-Case

    CERN Document Server

    Shiers, J

    2008-01-01

    The world’s largest scientific machine – comprising dual 27 km circular proton accelerators cooled to 1.9 K and located some 100 m underground – currently relies on major production Grid infrastructures for the offline computing needs of the 4 main experiments that will take data at this facility. After many years of sometimes difficult preparation the computing service has been declared “open” and ready to meet the challenges that will come shortly when the machine restarts in 2009. But the service is not without its problems: reliability – as seen by the experiments, as opposed to that measured by the official tools – still needs to be significantly improved. Prolonged downtimes or degradations of major services or even complete sites are still too common and the operational and coordination effort to keep the overall service running is probably not sustainable at this level. Recently “Cloud Computing” – in terms of pay-per-use fabric provisioning – has...

  17. Heat grids today and after the German Renewable Energies Act (EEG). A business segment for the agriculture?

    International Nuclear Information System (INIS)

    Clemens, Dietrich; Billerbeck, Hagen

    2016-01-01

    The development of a centralised and sustainable heat supply through the construction of heat grids offers consumers numerous advantages compared to a decentralised energy supply of residential and commercial properties. Where the migration to centralised heat supply relegates fossil fuels through the long-term incorporation of sustainable renewable energy sources, the projects make an important contribution towards meeting the government's climate protection goals. Heat generation and heat sales from renewable energy sources should be ensured in the long term. In the countryside, biogas plant operators are frequently the initiators of heat grid investments, or they take on the role of supplier for the provision of low-cost CHP heat from cogeneration units. In view of the limited remuneration period under the terms of the German Renewable Energy Act, the clock is ticking for the establishment of a centralised heat supply. This paper presents the advantages and disadvantages of a centralised, sustainable heat supply and additionally considers the flexibilisation of biogas plants in view of the construction of the heat grid and the associated infrastructure. A focus is placed on the security of supply for customers after the discontinuation of remuneration under the German Renewable Energy Act and on how a competitive heat price from alternative energy sources can continue to be ensured.

  18. Reaching for the cloud: on the lessons learned from grid computing technology transfer process to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Dickmann, Frank; Sax, Ulrich; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which led to the creation of the Grid. The inter-domain transfer process of this technology has hitherto been an intuitive process without in-depth analysis. Some difficulties facing the life science community in this transfer can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies that have achieved a certain stability. Grid and Cloud solutions are technologies which are still in flux. We show how Grid computing creates new difficulties in the transfer process that are not considered in Bozeman's model. We show why the success of healthgrids should be measured by the qualified scientific human capital and the opportunities created, and not primarily by the market impact. We conclude with recommendations that can help improve the adoption of Grid and Cloud solutions in the biomedical community. These results give a more concise explanation of the difficulties many life-science IT projects are facing in the late funding periods, and show leveraging steps that can help in overcoming the "vale of tears".

  19. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment.

    Directory of Open Access Journals (Sweden)

    Jeongsu Oh

    Full Text Available High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology, a distributed data structure to store all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability better than its ancestor, CLUSTOM, while maintaining high accuracy. Clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using the small laboratory cluster (10 nodes) and under the Amazon EC2 cloud-computing environments. Under the laboratory environment, it required only ~3 hours to process dataset of size 200 K reads regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. The comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD

  20. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment

    Science.gov (United States)

    Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo

    2016-01-01

    High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in the environmental samples. Typically, analysis of microbial diversity in bioinformatics starts from pre-processing followed by clustering 16S rRNA reads into relatively fewer operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis time. However, existing hierarchical clustering algorithms that are generally more accurate than greedy heuristic algorithms struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, which is the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology–a distributed data structure to store all data in the main memory of multiple computing nodes. The IMDG technology helps CLUSTOM-CLOUD to enhance both its capability of handling larger datasets and its computational scalability better than its ancestor, CLUSTOM, while maintaining high accuracy. Clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using the small laboratory cluster (10 nodes) and under the Amazon EC2 cloud-computing environments. Under the laboratory environment, it required only ~3 hours to process dataset of size 200 K reads regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. The comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in

  1. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    Panitkin, Sergey; Bejar, Jose Caballero; Hover, John; Zaytsev, Alexander; Megino, Fernando Barreiro; Girolamo, Alessandro Di; Kucharczyk, Katarzyna; Llamas, Ramon Medrano; Benjamin, Doug; Gable, Ian; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Hendrix, Val; Love, Peter; Ohman, Henrik; Walker, Rodney

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  2. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  3. CERES Monthly Gridded Single Satellite Fluxes and Clouds (FSW) in HDF (CER_FSW_TRMM-PFM-VIRS_Beta1)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator)

    The Monthly Gridded Radiative Fluxes and Clouds (FSW) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The FSW is also produced for combinations of scanner instruments. All instantaneous fluxes from the CERES CRS product for a month are sorted by 1-degree spatial regions and by the Universal Time (UT) hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the FSW along with other flux statistics and scene information. The mean adjusted fluxes at the four atmospheric levels defined by CRS are also included for both clear-sky and total-sky scenes. In addition, four cloud height categories are defined by dividing the atmosphere into four intervals with boundaries at the surface, 700-, 500-, 300-hPa, and the Top-of-the-Atmosphere (TOA). The cloud layers from CRS are put into one of the cloud height categories and averaged over the region. The cloud properties are also column averaged and included on the FSW. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].
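
    The space/time averaging scheme described above is, at its core, region-hour binning. A hedged NumPy sketch with synthetic footprints follows; the variable names are illustrative only, not CERES processing code.

```python
# Bin instantaneous flux footprints into 1-degree x 1-UT-hour bins and take
# the bin means, as the FSW averaging scheme does. Inputs are synthetic.
import numpy as np

rng = np.random.default_rng(1)
lat = rng.uniform(-90, 90, 10_000)
lon = rng.uniform(-180, 180, 10_000)
ut_hour = rng.integers(0, 24, 10_000)
flux = rng.normal(240, 40, 10_000)               # toy TOA fluxes (W m^-2)

ilat = np.clip((lat + 90).astype(int), 0, 179)   # 1-degree region indices
ilon = np.clip((lon + 180).astype(int), 0, 359)

sums = np.zeros((180, 360, 24))
counts = np.zeros((180, 360, 24))
np.add.at(sums, (ilat, ilon, ut_hour), flux)
np.add.at(counts, (ilat, ilon, ut_hour), 1)

with np.errstate(invalid="ignore"):
    mean_flux = sums / counts                    # NaN where a region-hour bin is empty
print(f"filled bins: {np.isfinite(mean_flux).mean():.1%}")
```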

  4. Grid heterogeneity in in-silico experiments: an exploration of drug screening using DOCK on cloud environments.

    Science.gov (United States)

    Yim, Wen-Wai; Chien, Shu; Kusumoto, Yasuyuki; Date, Susumu; Haga, Jason

    2010-01-01

    Large-scale in-silico screening is a necessary part of drug discovery, and Grid computing is one answer to this demand. A disadvantage of using Grid computing is the heterogeneous computational environment characteristic of a Grid. In our study, we have found that, for the molecular docking simulation program DOCK, different clusters within a Grid organization can yield inconsistent results. Because DOCK in-silico virtual screening (VS) is currently used to help select chemical compounds to test with in-vitro experiments, such differences have little effect on the validity of using virtual screening before subsequent steps in the drug discovery process. However, it is difficult to predict whether the accumulation of these discrepancies over sequentially repeated VS experiments will significantly alter the results if VS is used as the primary means for identifying potential drugs. Moreover, such discrepancies may be unacceptable for other applications requiring more stringent thresholds. This highlights the need for establishing a more complete solution to provide the best scientific accuracy when executing an application across Grids. One possible solution to platform heterogeneity in DOCK performance explored in our study involved the use of virtual machines as a layer of abstraction. This study investigated the feasibility and practicality of using virtual machine and recent cloud computing technologies in a biological research application. We examined the differences and variations of DOCK VS variables across a Grid environment composed of different clusters, with and without virtualization. The uniform computing environment provided by virtual machines eliminated the inconsistent DOCK VS results caused by heterogeneous clusters; however, the execution time for the DOCK VS increased. In our particular experiments, overhead costs were found to average 41% and 2% of execution time for two different clusters, while the actual magnitudes of the execution time

  5. Editorial for special section of grid computing journal on “Cloud Computing and Services Science”

    NARCIS (Netherlands)

    van Sinderen, Marten J.; Ivanov, Ivan I.

    This editorial briefly discusses characteristics, technology developments and challenges of cloud computing. It then introduces the papers included in the special issue on "Cloud Computing and Services Science" and positions the work reported in these papers with respect to the previously mentioned

  6. How to deal with petabytes of data: the LHC Grid project

    International Nuclear Information System (INIS)

    Britton, D; Lloyd, S L

    2014-01-01

    We review the Grid computing system developed by the international community to deal with the petabytes of data coming from the Large Hadron Collider at CERN in Geneva, with particular emphasis on the ATLAS experiment and the UK Grid project, GridPP. Although these developments were started over a decade ago, this article explains their continued relevance as part of the ‘Big Data’ problem and how the Grid has been a forerunner of today's cloud computing. (review article)

  7. Integrating Flexible Sensor and Virtual Self-Organizing DC Grid Model With Cloud Computing for Blood Leakage Detection During Hemodialysis.

    Science.gov (United States)

    Huang, Ping-Tzan; Jong, Tai-Lang; Li, Chien-Ming; Chen, Wei-Ling; Lin, Chia-Hung

    2017-08-01

    Blood leakage and blood loss are serious complications during hemodialysis. Hemodialysis survey reports show that these life-threatening events continue to occur, drawing the attention of nephrology nurses and of patients themselves. When the venous needle and blood line are disconnected, it takes only a few minutes for an adult patient to lose over 40% of his or her blood, which is enough blood loss to cause death. Therefore, we propose integrating a flexible sensor and a self-organizing algorithm to design a cloud computing-based warning device for blood leakage detection. The flexible sensor is fabricated via a screen-printing technique using metallic materials on a soft substrate in an array configuration. The self-organizing algorithm constructs a virtual direct current grid-based alarm unit in an embedded system. This warning device is employed to identify blood leakage levels via a wireless network and cloud computing. It has been validated experimentally, and the experimental results suggest specifications for its commercial designs. The proposed model can also be implemented in an embedded system.

  8. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For several years now, the LHC experiments at CERN have been successfully using Grid computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies like Cloud Computing, virtualization and high performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  9. Cooperative Strategy for Optimal Management of Smart Grids by Wavelet RNNs and Cloud Computing.

    Science.gov (United States)

    Napoli, Christian; Pappalardo, Giuseppe; Tina, Giuseppe Marco; Tramontana, Emiliano

    2016-08-01

    Advanced smart grids have several power sources that contribute their own irregular dynamics to the power production, while load nodes have yet another dynamic. Several factors have to be considered when using the owned power sources to satisfy the demand, e.g., production rate, battery charge and status, the variable cost of externally bought energy, and so on. The objective of this paper is to develop appropriate neural network architectures that automatically and continuously govern power production and dispatch, in order to maximize the overall benefit over a long time. Such a control will improve the fundamental work of a smart grid. For this, status data of several components have to be gathered, and then an estimate of future power production and demand is needed. Hence, neural network-driven forecasts are used in this paper for renewable, nonprogrammable energy sources. The produced energy, as well as the stored energy, can then be supplied to consumers inside a smart grid by means of digital technology. Among the sought benefits, reduced costs and increased reliability and transparency are paramount.
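
    The control loop the authors describe (forecast, then govern production and dispatch) can be caricatured in a few lines. The hedged sketch below substitutes a trivial greedy rule for the paper's wavelet recurrent neural networks; every name, number, and the policy itself are illustrative assumptions:

        def dispatch(forecast_production, forecast_demand, battery, capacity):
            """Greedy plan: store surplus, discharge on deficit, buy the rest."""
            plan = []
            for prod, load in zip(forecast_production, forecast_demand):
                surplus = prod - load
                if surplus >= 0:
                    charge = min(surplus, capacity - battery)  # store what fits
                    battery += charge
                    plan.append({"charge": charge, "buy": 0.0})
                else:
                    discharge = min(-surplus, battery)         # cover deficit from storage
                    battery -= discharge
                    plan.append({"charge": -discharge, "buy": -surplus - discharge})
            return plan, battery

        plan, soc = dispatch([5.0, 1.0], [3.0, 4.0], battery=0.0, capacity=2.0)
        print(plan)  # [{'charge': 2.0, 'buy': 0.0}, {'charge': -2.0, 'buy': 1.0}]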

  10. Grid Computing

    Indian Academy of Sciences (India)

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers ...

  11. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  12. Advanced Cloud Forecasting for Solar Energy's Impact on Grid Modernization

    International Nuclear Information System (INIS)

    Werth, D.; Nichols, R.

    2017-01-01

    Solar energy production is subject to variability in the solar resource - clouds and aerosols will reduce the available solar irradiance and inhibit power production. The fact that solar irradiance can vary by large amounts at small timescales and in an unpredictable way means that power utilities are reluctant to assign to their solar plants a large portion of future energy demand - the needed power might be unavailable, forcing the utility to make costly adjustments to its daily portfolio. The availability and predictability of solar radiation therefore represent important research topics for increasing the power produced by renewable sources.
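
    The variability at issue is usually quantified through ramp rates of measured irradiance. The hedged Python sketch below (the sample values and the 1-minute spacing are invented for the example) shows how sharply a passing cloud can move global horizontal irradiance:

        def ramp_rates(ghi_w_per_m2, step_minutes=1):
            """Per-step changes in global horizontal irradiance (W/m^2 per minute)."""
            return [(b - a) / step_minutes
                    for a, b in zip(ghi_w_per_m2, ghi_w_per_m2[1:])]

        ghi = [820, 790, 310, 760, 805]    # a passing cloud cuts GHI sharply
        rates = ramp_rates(ghi)
        print(rates)                        # [-30.0, -480.0, 450.0, 45.0]
        print(max(abs(r) for r in rates))   # 480.0: the kind of ramp that worries utilities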

  13. Advanced Cloud Forecasting for Solar Energy’s Impact on Grid Modernization

    Energy Technology Data Exchange (ETDEWEB)

    Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Nichols, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Solar energy production is subject to variability in the solar resource – clouds and aerosols will reduce the available solar irradiance and inhibit power production. The fact that solar irradiance can vary by large amounts at small timescales and in an unpredictable way means that power utilities are reluctant to assign to their solar plants a large portion of future energy demand – the needed power might be unavailable, forcing the utility to make costly adjustments to its daily portfolio. The availability and predictability of solar radiation therefore represent important research topics for increasing the power produced by renewable sources.

  14. Using Cloud-to-Ground Lightning Climatologies to Initialize Gridded Lightning Threat Forecasts for East Central Florida

    Science.gov (United States)

    Lambert, Winnie; Sharp, David; Spratt, Scott; Volkmer, Matthew

    2005-01-01

    Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC (0700 AM EST) each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve the efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to increase consistency between forecasters while enabling them to focus on
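
    A hedged sketch of the kind of first-guess product requested from the AMU follows: a gridded CG strike-density climatology stratified by flow regime, mapped onto threat levels. The grid size, regime label, and thresholds are illustrative assumptions, not the AMU's actual values:

        import numpy as np

        GRID = (40, 40)  # ~5 km x 5 km cells covering a notional CWA

        def climatology_by_regime(strike_records):
            """strike_records: iterable of (regime, i, j) CG strike grid cells."""
            clim = {}
            for regime, i, j in strike_records:
                clim.setdefault(regime, np.zeros(GRID))[i, j] += 1
            return clim  # raw counts; normalize by days per regime in practice

        def first_guess_threat(density, thresholds=(0.5, 2.0, 5.0, 10.0)):
            """Map strike density onto five levels, Very Low (0) .. Extreme (4)."""
            return np.digitize(density, thresholds)

        clim = climatology_by_regime([("SW-flow", 10, 12)])
        print(first_guess_threat(clim["SW-flow"])[10, 12])  # 1 for one strike in the cell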

  15. Heat grids today and after the German Renewable Energies Act (EEG). A business segment for agriculture?; Waermenetze heute und nach dem EEG. Ein Betriebszweig fuer die Landwirtschaft?

    Energy Technology Data Exchange (ETDEWEB)

    Clemens, Dietrich; Billerbeck, Hagen [Treurat und Partner Unternehmensberatungsgesellschaft mbH, Lueneburg (Germany). Abt. ''Climate and Energy'']

    2016-08-01

    The development of a centralised and sustainable heat supply through the construction of heat grids offers consumers numerous advantages compared to a decentralised energy supply of residential and commercial properties. Where the migration to centralised heat supply displaces fossil fuels through the long-term incorporation of sustainable renewable energy sources, the projects make an important contribution towards meeting the government's climate protection goals. Heat generation and heat sales from renewable energy sources should be ensured in the long term. In the countryside, biogas plant operators are frequently the initiators of heat grid investments, or they take on the role of supplier for the provision of low-cost CHP heat from cogeneration units. In view of the limited remuneration period under the terms of the German Renewable Energy Act, the clock is ticking for the establishment of a centralised heat supply. This paper presents the advantages and disadvantages of a centralised, sustainable heat supply and additionally considers the flexibilisation of biogas plants in view of the construction of the heat grid and the associated infrastructure. A focus is placed on the security of supply for customers after the discontinuation of remuneration under the German Renewable Energy Act and on how a competitive heat price from alternative energy sources can continue to be ensured.

  16. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  17. CERES Monthly Gridded Single Satellite Fluxes and Clouds (FSW) in HDF (CER_FSW_Terra-FM1-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator)

    The Monthly Gridded Radiative Fluxes and Clouds (FSW) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The FSW is also produced for combinations of scanner instruments. All instantaneous fluxes from the CERES CRS product for a month are sorted by 1-degree spatial regions and by the Universal Time (UT) hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the FSW along with other flux statistics and scene information. The mean adjusted fluxes at the four atmospheric levels defined by CRS are also included for both clear-sky and total-sky scenes. In addition, four cloud height categories are defined by dividing the atmosphere into four intervals with boundaries at the surface, 700-, 500-, 300-hPa, and the Top-of-the-Atmosphere (TOA). The cloud layers from CRS are put into one of the cloud height categories and averaged over the region. The cloud properties are also column averaged and included on the FSW. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  18. CERES Monthly Gridded Single Satellite Fluxes and Clouds (FSW) in HDF (CER_FSW_Terra-FM2-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator)

    The Monthly Gridded Radiative Fluxes and Clouds (FSW) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The FSW is also produced for combinations of scanner instruments. All instantaneous fluxes from the CERES CRS product for a month are sorted by 1-degree spatial regions and by the Universal Time (UT) hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the FSW along with other flux statistics and scene information. The mean adjusted fluxes at the four atmospheric levels defined by CRS are also included for both clear-sky and total-sky scenes. In addition, four cloud height categories are defined by dividing the atmosphere into four intervals with boundaries at the surface, 700-, 500-, 300-hPa, and the Top-of-the-Atmosphere (TOA). The cloud layers from CRS are put into one of the cloud height categories and averaged over the region. The cloud properties are also column averaged and included on the FSW. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2001-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 month; Temporal_Resolution_Range=Monthly - < Annual].

  19. Horizontal Variability of Water and Its Relationship to Cloud Fraction near the Tropical Tropopause: Using Aircraft Observations of Water Vapor to Improve the Representation of Grid-scale Cloud Formation in GEOS-5

    Science.gov (United States)

    Selkirk, Henry B.; Molod, Andrea M.

    2014-01-01

    Large-scale models such as GEOS-5 typically calculate grid-scale fractional cloudiness through a PDF parameterization of the sub-gridscale distribution of specific humidity. The GEOS-5 moisture routine uses a simple rectangular PDF varying in height that follows a tanh profile. While below 10 km this profile is informed by moisture information from the AIRS instrument, there is relatively little empirical basis for the profile above that level. ATTREX provides an opportunity to refine the profile using estimates of the horizontal variability of measurements of water vapor, total water and ice particles from the Global Hawk aircraft at or near the tropopause. These measurements will be compared with estimates of large-scale cloud fraction from CALIPSO and lidar retrievals from the CPL on the aircraft. We will use the variability measurements to perform studies of the sensitivity of the GEOS-5 cloud-fraction to various modifications to the PDF shape and to its vertical profile.
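
    For readers unfamiliar with rectangular-PDF cloud schemes, the hedged sketch below shows the basic calculation: grid-scale cloud fraction is the portion of a uniform (rectangular) sub-grid total-water distribution that exceeds saturation, with a relative half-width that follows a tanh profile in height. The coefficients and variable names are illustrative assumptions, not GEOS-5 values:

        import math

        def half_width(z_km, w_low=0.20, w_high=0.02, z0_km=10.0, scale_km=2.0):
            """Relative PDF half-width vs height: wide below ~10 km, narrow above."""
            return w_low + 0.5 * (w_high - w_low) * (1.0 + math.tanh((z_km - z0_km) / scale_km))

        def cloud_fraction(q_mean, q_sat, z_km):
            """Fraction of a rectangular PDF of width 2*delta exceeding saturation."""
            delta = half_width(z_km) * q_mean
            if delta == 0.0:
                return 1.0 if q_mean >= q_sat else 0.0
            return min(1.0, max(0.0, (q_mean + delta - q_sat) / (2.0 * delta)))

        # Mean humidity 5% below saturation still yields partial cloud at 8 km,
        # because part of the sub-grid distribution exceeds saturation.
        print(round(cloud_fraction(q_mean=0.95, q_sat=1.0, z_km=8.0), 3))  # 0.353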

  20. Grid Security

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The aim of Grid computing is to enable the easy and open sharing of resources between large and highly distributed communities of scientists and institutes across many independent administrative domains. Convincing site security officers and computer centre managers to allow this to happen in view of today's ever-increasing Internet security problems is a major challenge. Convincing users and application developers to take security seriously is equally difficult. This paper will describe the main Grid security issues, both in terms of technology and policy, that have been tackled over recent years in LCG and related Grid projects. Achievements to date will be described and opportunities for future improvements will be addressed.

  1. Cloud Computing Benefits for Educational Institutions

    OpenAIRE

    Lakshminarayanan, Ramkumar; Kumar, Binod; Raju, M.

    2013-01-01

    Education today is becoming closely tied to information technology for content delivery, communication, and collaboration. The demand for servers, storage, and software is high in universities, colleges, and schools. Cloud Computing is Internet-based computing whereby shared resources, software, and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service...

  2. The ATLAS Software Installation System v2: a highly available system to install and validate Grid and Cloud sites via Panda

    Science.gov (United States)

    De Salvo, A.; Kataoka, M.; Sanchez Pineda, A.; Smirnov, Y.

    2015-12-01

    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original Workload Management Service (WMS) and the new PanDA modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been optimized to allow a full High-Availability (HA) solution over a Wide Area Network. The servlets, running on each frontend, have also been decoupled from local settings, to allow easy scalability of the system, including the possibility of an HA system spanning multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation Database is used as a source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system has been in production for ATLAS since 2013, with the INFN Roma Tier 2 and the CERN Agile Infrastructure as its main HA sites. The Light Job Submission Framework for Installation (LJSFi) v2 engine interfaces directly with PanDA for job management, the ATLAS Grid Information System (AGIS) for the site parameter configurations, and CVMFS for both core components and the installation of the software itself. LJSFi2 is also able to use other plugins, and is essentially Virtual Organization (VO) agnostic, so it can be directly used and extended to cope with the requirements of any Grid- or Cloud-enabled VO. In this work we present the architecture, performance, status and possible evolutions of the system for the LHC Run2 and beyond.

  3. Fermilab Today

    Science.gov (United States)

    In Brief: the Fermilab Women's Initiative presents "Guiltless: Work-Life Balance" (Cowperthwaite-O'Hagan), Thursday, Aug. 13, at 3 p.m. in One West; registration is due today.

  4. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Terra-FM1-MODIS_Edition2B)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2003-10-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  5. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Terra-FM2-MODIS_Edition2A)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2003-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  6. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Aqua-FM3-MODIS_Edition2A)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  7. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_Terra-FM2-MODIS_Edition2C)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2005-12-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  8. CERES Monthly Gridded Single Satellite TOA and Surfaces/Clouds (SFC) data in HDF (CER_SFC_TRMM-PFM-VIRS_Beta4)

    Science.gov (United States)

    Wielicki, Bruce A. (Principal Investigator)

    The Monthly Gridded TOA/Surface Fluxes and Clouds (SFC) product contains a month of space and time averaged Clouds and the Earth's Radiant Energy System (CERES) data for a single scanner instrument. The SFC is also produced for combinations of scanner instruments. All instantaneous shortwave, longwave, and window fluxes at the Top-of-the-Atmosphere (TOA) and surface from the CERES SSF product for a month are sorted by 1-degree spatial regions and by the local hour of observation. The mean of the instantaneous fluxes for a given region-hour bin is determined and recorded on the SFC along with other flux statistics and scene information. These average fluxes are given for both clear-sky and total-sky scenes. The regional cloud properties are column averaged and are included on the SFC. [Location=GLOBAL] [Temporal_Coverage: Start_Date=1998-01-01; Stop_Date=2000-03-31] [Spatial_Coverage: Southernmost_Latitude=-90; Northernmost_Latitude=90; Westernmost_Longitude=-180; Easternmost_Longitude=180] [Data_Resolution: Latitude_Resolution=1 degree; Longitude_Resolution=1 degree; Horizontal_Resolution_Range=100 km - < 250 km or approximately 1 degree - < 2.5 degrees; Temporal_Resolution=1 hour; Temporal_Resolution_Range=Hourly - < Daily].

  9. Grid interoperability: joining grid information systems

    International Nuclear Information System (INIS)

    Flechl, M; Field, L

    2008-01-01

    A grid is defined as being 'coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations'. Over recent years a number of grid projects, many of which have a strong regional presence, have emerged to help coordinate institutions and enable grids. Today, we face a situation where a number of grid projects exist, most of which are using slightly different middleware. Grid interoperation is trying to bridge these differences and enable Virtual Organizations to access resources at the institutions independent of their grid project affiliation. Grid interoperation is usually a bilateral activity between two grid infrastructures. Recently, within the Open Grid Forum, the Grid Interoperability Now (GIN) Community Group is trying to build upon these bilateral activities. The GIN group is a focal point where all the infrastructures can come together to share ideas and experiences on grid interoperation. It is hoped that each bilateral activity will bring us one step closer to the overall goal of a uniform grid landscape. A fundamental aspect of a grid is the information system, which is used to find available grid services. As different grids use different information systems, interoperation between these systems is crucial for grid interoperability. This paper describes the work carried out to overcome these differences between a number of grid projects and the experiences gained. It focuses on the different techniques used and highlights the important areas for future standardization.
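
    At its core, joining two information systems means translating resource descriptions from one schema to another. The minimal, hedged Python sketch below illustrates the idea; the attribute names are invented for the example and are not taken from GLUE or any specific project's schema:

        FIELD_MAP = {                 # source attribute -> target attribute
            "SiteName": "site_id",
            "CEEndpoint": "compute_endpoint",
            "TotalCPUs": "cpu_count",
        }

        def translate(record, field_map=FIELD_MAP):
            """Rename known attributes; keep the rest so nothing is silently lost."""
            out = {field_map.get(k, k): v for k, v in record.items()}
            out["cpu_count"] = int(out.get("cpu_count", 0))  # normalize types too
            return out

        print(translate({"SiteName": "EXAMPLE-T2",
                         "CEEndpoint": "ce.example.org:8443",
                         "TotalCPUs": "1024"}))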

  10. Computing networks from cluster to cloud computing

    CERN Document Server

    Vicat-Blanc, Pascale; Guillier, Romaric; Soudan, Sebastien

    2013-01-01

    "Computing Networks" explores the core of the new distributed computing infrastructures we are using today:  the networking systems of clusters, grids and clouds. It helps network designers and distributed-application developers and users to better understand the technologies, specificities, constraints and benefits of these different infrastructures' communication systems. Cloud Computing will give the possibility for millions of users to process data anytime, anywhere, while being eco-friendly. In order to deliver this emerging traffic in a timely, cost-efficient, energy-efficient, and

  11. Einstein today

    International Nuclear Information System (INIS)

    Aspect, A.; Grangier, Ph.; Bouchet, F.R.; Brunet, E.; Derrida, B.; Cohen-Tannoudji, C.; Dalibard, J.; Laloe, F.; Damour, Th.; Darrigol, O.; Pocholle, J.P.

    2005-01-01

    The most important contributions of Einstein involve 5 fields of physics: the existence of quanta (light quanta, stimulated radiation emission and Bose-Einstein condensation), relativity, fluctuations (Brownian motion and thermodynamical fluctuations), the basis of quantum physics, and cosmology (the cosmological constant and the expansion of the universe). Diverse and renowned physicists have traced the development of modern physics from Einstein's ideas to today's knowledge. This collective book gathers their work under 7 chapters: 1) 1905, a new beginning; 2) from the Einstein, Podolsky and Rosen article to quantum information (cryptography and quantum computers); 3) Bose-Einstein condensation in gases; 4) from stimulated emission to today's lasers; 5) Brownian motion and the fluctuation-dissipation theorem; 6) general relativity; and 7) cosmology. (A.C.)

  12. Essentials of cloud computing

    CERN Document Server

    Chandrasekaran, K

    2014-01-01

    Foreword; Preface; Computing Paradigms: Learning Objectives; Preamble; High-Performance Computing; Parallel Computing; Distributed Computing; Cluster Computing; Grid Computing; Cloud Computing; Biocomputing; Mobile Computing; Quantum Computing; Optical Computing; Nanocomputing; Network Computing; Summary; Review Points; Review Questions; Further Reading; Cloud Computing Fundamentals: Learning Objectives; Preamble; Motivation for Cloud Computing; The Need for Cloud Computing; Defining Cloud Computing; NIST Definition of Cloud Computing; Cloud Computing Is a Service; Cloud Computing Is a Platform; 5-4-3 Principles of Cloud Computing; Five Essential Charact

  13. Cloud Computing

    CERN Document Server

    Antonopoulos, Nick

    2010-01-01

    Cloud computing has recently emerged as a subject of substantial industrial and academic interest, though its meaning and scope are hotly debated. For some researchers, clouds are a natural evolution towards the full commercialisation of grid systems, while others dismiss the term as a mere re-branding of existing pay-per-use technologies. From either perspective, 'cloud' is now the label of choice for accountable pay-per-use access to third party applications and computational resources on a massive scale. Clouds support patterns of less predictable resource use for applications and services a

  14. Radiochemistry - today

    International Nuclear Information System (INIS)

    Drawe, H.

    1980-01-01

    After a long introductory period, many radiation techniques have become established in practice. Today, radiation processes are standard components of chemistry, biology, medicine, and technology in the broadest sense. This paper presents the current state of radiation chemistry, with the emphasis on possible practical applications, so as to reach mainly practitioners in the laboratory and in industry. Physicians, pharmacists and chemical engineers should also be informed about the possible applications of high-energy radiation. Because radiation chemistry has also enriched related subjects, for example physical, organic and inorganic chemistry, this paper will also be of interest to experts in these disciplines. (orig.) [de

  15. Neutrinos today

    International Nuclear Information System (INIS)

    Pontecorvo, B.; Bilen'kij, S.

    1987-01-01

    After the famous 1983 discovery of the intermediate W and Z0 bosons, it may be stated with certainty that the W and Z0 are entirely responsible for the production of neutrinos and for their interactions. Neutrino physics notions are presented from this point of view in the first four introductory, quite elementary, paragraphs of the paper. The following seven paragraphs are more sophisticated. They are devoted to the question of neutrino mass and neutrino mixing, the most topical problem in today's neutrino physics. Vacuum neutrino oscillations, matter neutrino oscillations and neutrinoless double beta decay are considered. Solar neutrino physics is discussed in some detail from the point of view of vacuum and matter neutrino oscillations. The role played by neutrinos in the Universe is briefly considered. The last paragraph discusses the probable observation by different groups of neutrinos connected with Supernova 1987A: the first observation of a gravitational star collapse (or at least the general rehearsal of such an observation) opens up a new era in astronomy. A summary of today's experimental physics and astrophysics is presented at the end of the paper in the form of a table.

  16. Psychoanalysis today

    Science.gov (United States)

    FONAGY, PETER

    2003-01-01

    The paper discusses the precarious position of psychoanalysis, a therapeutic approach which historically has defined itself by freedom from constraint and counted treatment length not in terms of number of sessions but in terms of years, in today's era of empirically validated treatments and brief structured interventions. The evidence that exists for the effectiveness of psychoanalysis as a treatment for psychological disorder is reviewed. The evidence base is significant and growing, but less than might meet criteria for an empirically based therapy. The author goes on to argue that the absence of evidence may be symptomatic of the epistemic difficulties that psychoanalysis faces in the context of 21st century psychiatry, and examines some of the philosophical problems faced by psychoanalysis as a model of the mind. Finally some changes necessary in order to ensure a future for psychoanalysis and psychoanalytic therapies within psychiatry are suggested. PMID:16946899

  17. The Benefits of Grid Networks

    Science.gov (United States)

    Tennant, Roy

    2005-01-01

    In the article, the author talks about the benefits of grid networks. In speaking of grid networks the author is referring to both networks of computers and networks of humans connected together in a grid topology. Examples are provided of how grid networks are beneficial today and the ways in which they have been used.

  18. Evolutionary Hierarchical Multi-Criteria Metaheuristics for Scheduling in Large-Scale Grid Systems

    CERN Document Server

    Kołodziej, Joanna

    2012-01-01

    One of the most challenging issues in modelling today's large-scale computational systems is to effectively manage highly parametrised distributed environments such as computational grids, clouds, ad hoc networks and P2P networks. Next-generation computational grids must provide a wide range of services and high performance computing infrastructures. Various types of information and data processed in the large-scale dynamic grid environment may be incomplete, imprecise, and fragmented, which complicates the specification of proper evaluation criteria and which affects both the availability of resources and the final collective decisions of users. The complexity of grid architectures and grid management may also contribute towards higher energy consumption. All of these issues necessitate the development of intelligent resource management techniques, which are capable of capturing all of this complexity and optimising meaningful metrics for a wide range of grid applications.   This book covers hot topics in t...

  19. Challenges facing production grids

    Energy Technology Data Exchange (ETDEWEB)

    Pordes, Ruth (Fermilab)

    2007-06-01

    Today's global communities of users expect a quality of service from distributed Grid systems equivalent to that of their local data centers. This must be coupled with ubiquitous access to the ensemble of processing and storage resources across multiple Grid infrastructures. We are still facing significant challenges in meeting these expectations, especially in the underlying security, a sustainable and successful economic model, and smoothing the boundaries between administrative and technical domains. Using the Open Science Grid as an example, I examine the status and challenges of Grids operating in production today.

  20. Meet the Grid

    CERN Multimedia

    Yurkewicz, Katie

    2005-01-01

    Today's cutting-edge scientific projects are larger, more complex, and more expensive than ever. Grid computing provides the resources that allow researchers to share knowledge, data, and computer processing power across boundaries

  1. Cloud Computing

    CERN Document Server

    Baun, Christian; Nimis, Jens; Tai, Stefan

    2011-01-01

    Cloud computing is a buzz-word in today's information technology (IT) that nobody can escape. But what is really behind it? There are many interpretations of this term, but no standardized or even uniform definition. Instead, as a result of the multi-faceted viewpoints and the diverse interests expressed by the various stakeholders, cloud computing is perceived as a rather fuzzy concept. With this book, the authors deliver an overview of cloud computing architecture, services, and applications. Their aim is to bring readers up to date on this technology and thus to provide a common basis for d

  2. Large scale and cloud-based multi-model analytics experiments on climate change data in the Earth System Grid Federation

    Science.gov (United States)

    Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni

    2017-04-01

    In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate model intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of a large amount of data (multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to (i) multi-model data analysis intercomparison challenges, (ii) addressed on CMIP5 data, and (iii) made available through the IS-ENES/ESGF infrastructure. The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes the end-user setup lightweight; (iv) it fosters re-usability (of data, final

  3. The ATLAS Software Installation System v2: a highly available system to install and validate Grid and Cloud sites via Panda

    CERN Document Server

    De Salvo, Alessandro; The ATLAS collaboration; Sanchez, Arturo; Smirnov, Yuri

    2015-01-01

    The ATLAS Installation System v2 is the evolution of the original system, used since 2003. The original tool has been completely re-designed in terms of database backend and components, adding support for submission to multiple backends, including the original WMS and the new Panda modules. The database engine has been changed from plain MySQL to Galera/Percona and the table structure has been optimized to allow a full High-Availability (HA) solution over WAN. The servlets, running on each frontend, have been also decoupled from local settings, to allow an easy scalability of the system, including the possibility of an HA system with multiple sites. The clients can also be run in multiple copies and in different geographical locations, and take care of sending the installation and validation jobs to the target Grid or Cloud sites. Moreover, the Installation DB is used as source of parameters by the automatic agents running in CVMFS, in order to install the software and distribute it to the sites. The system i...

  4. Moving HammerCloud to CERN's private cloud

    CERN Document Server

    Barrand, Quentin

    2013-01-01

    HammerCloud is a testing framework for the Worldwide LHC Computing Grid. Currently deployed on about 20 hand-managed machines, it was desirable to move it to the Agile Infrastructure, CERN's OpenStack-based private cloud.

  5. Research on cloud computing solutions

    OpenAIRE

    Liudvikas Kaklauskas; Vaida Zdanytė

    2015-01-01

    Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, ...

  6. Efficient Resource Management in Cloud Computing

    OpenAIRE

    Rushikesh Shingade; Amit Patil; Shivam Suryawanshi; M. Venkatesan

    2015-01-01

    Cloud computing is one of the widely used technologies for providing cloud services to users, who are charged for the services they receive. With a maximum number of resources, evaluating and efficiently optimizing the performance of Cloud resource management policies is difficult. There are different simulation toolkits available for simulating and modelling the Cloud computing environment, like GridSim, CloudAnalyst, CloudSim, GreenCloud, CloudAuction, etc. In the proposed Efficient Resource Manage...

  7. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  8. Large-scale, high-performance and cloud-enabled multi-model analytics experiments in the context of the Earth System Grid Federation

    Science.gov (United States)

    Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.

    2017-12-01

    The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the

  9. Toward low-cloud-permitting cloud superparameterization with explicit boundary layer turbulence

    Science.gov (United States)

    Parishani, Hossein; Pritchard, Michael S.; Bretherton, Christopher S.; Wyant, Matthew C.; Khairoutdinov, Marat

    2017-07-01

    Systematic biases in the representation of boundary layer (BL) clouds are a leading source of uncertainty in climate projections. A variation on superparameterization (SP) called "ultraparameterization" (UP) is developed, in which the grid spacing of the cloud-resolving models (CRMs) is fine enough (250 × 20 m) to explicitly capture the BL turbulence, associated clouds, and entrainment in a global climate model capable of multiyear simulations. UP is implemented within the Community Atmosphere Model using 2° resolution (~14,000 embedded CRMs) with one-moment microphysics. By using a small domain and mean-state acceleration, UP is computationally feasible today and promising for exascale computers. Short-duration global UP hindcasts are compared with SP and satellite observations of top-of-atmosphere radiation and cloud vertical structure. The most encouraging improvement is a deeper BL and more realistic vertical structure of subtropical stratocumulus (Sc) clouds, due to stronger vertical eddy motions that promote entrainment. Results from 90 day integrations show climatological errors that are competitive with SP, with a significant improvement in the diurnal cycle of offshore Sc liquid water. Ongoing concerns with the current UP implementation include a dim bias for near-coastal Sc that also occurs less prominently in SP and a bright bias over tropical continental deep convection zones. Nevertheless, UP makes global eddy-permitting simulation a feasible and interesting alternative to conventionally parameterized GCMs or SP-GCMs with turbulence parameterizations for studying BL cloud-climate and cloud-aerosol feedback.

  10. THE MASS-LOSS RETURN FROM EVOLVED STARS TO THE LARGE MAGELLANIC CLOUD. IV. CONSTRUCTION AND VALIDATION OF A GRID OF MODELS FOR OXYGEN-RICH AGB STARS, RED SUPERGIANTS, AND EXTREME AGB STARS

    International Nuclear Information System (INIS)

    Sargent, Benjamin A.; Meixner, M.; Srinivasan, S.

    2011-01-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the 'Grid of Red supergiant and Asymptotic giant branch star ModelS' (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10^3 to 10^6 L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10^-4 to 26. From an initial grid of ∼1200 2Dust models, we create a larger grid of ∼69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.
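
    The expansion from ∼1200 computed models to ∼69,000 rests on a standard scaling argument: with the shell geometry and optical depth held fixed in stellar units, a radiative-transfer model computed at one luminosity can be rescaled to others, the emergent fluxes scaling linearly with luminosity. A hedged one-function sketch (the reference luminosity and flux values below are invented for the example):

        def scale_sed(flux_jy, l_ref=1.0e4, l_new=1.0e5):
            """Rescale a model SED from luminosity l_ref to l_new (both in L_sun)."""
            factor = l_new / l_ref
            return [f * factor for f in flux_jy]

        base_sed = [0.12, 0.45, 0.30]  # flux densities at a few wavelengths (Jy)
        print(scale_sed(base_sed))     # 10x brighter for a 10x more luminous star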

  11. Satin: A high-level and efficient grid programming model

    NARCIS (Netherlands)

    van Nieuwpoort, R.V.; Wrzesinska, G.; Jacobs, C.J.H.; Bal, H.E.

    2010-01-01

    Computational grids have an enormous potential to provide compute power. However, this power remains largely unexploited today for most applications, except trivially parallel programs. Developing parallel grid applications simply is too difficult. Grids introduce several problems not encountered

  12. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing was, and will continue to be, a new way of providing Internet services and computing. This computing approach is based on many existing services, such as the Internet, grid computing and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in terms of price and infrastructure. It is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper is meant to describe the quality of cloud computing services, analyzing the advantages and characteristics offered by it. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  13. Smart grid security

    Energy Technology Data Exchange (ETDEWEB)

    Cuellar, Jorge (ed.) [Siemens AG, Muenchen (Germany). Corporate Technology

    2013-11-01

    The engineering, deployment and security of the future smart grid will be an enormous project requiring the consensus of many stakeholders with different views on the security and privacy requirements, not to mention methods and solutions. The fragmentation of research agendas and proposed approaches or solutions for securing the future smart grid becomes apparent when observing the results from different projects, standards, committees, etc., in different countries. The different approaches and views of the papers in this collection also witness this fragmentation. This book contains the following papers: 1. IT Security Architecture Approaches for Smart Metering and Smart Grid. 2. Smart Grid Information Exchange - Securing the Smart Grid from the Ground. 3. A Tool Set for the Evaluation of Security and Reliability in Smart Grids. 4. A Holistic View of Security and Privacy Issues in Smart Grids. 5. Hardware Security for Device Authentication in the Smart Grid. 6. Maintaining Privacy in Data Rich Demand Response Applications. 7. Data Protection in a Cloud-Enabled Smart Grid. 8. Formal Analysis of a Privacy-Preserving Billing Protocol. 9. Privacy in Smart Metering Ecosystems. 10. Energy rate at home: Leveraging ZigBee to Enable Smart Grid in Residential Environment.

  14. Security in cloud computing and virtual environments

    OpenAIRE

    Aarseth, Raymond

    2015-01-01

    Cloud computing is a big buzzword today. Just watch the commercials on TV and I can promise that you will hear the words cloud service at least once. With cloud technology steadily growing, and everything from cellphones to cars connected to the cloud, how secure is cloud technology? What are the caveats of using cloud technology? And how does it all work? This thesis will discuss cloud security and the underlying technology called Virtualization to ...

  15. Cloud Computing for radiologists.

    Science.gov (United States)

    Kharat, Amit T; Safvi, Amjad; Thind, Ss; Singh, Amarjit

    2012-07-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It frees radiology from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  16. Cloud Computing for radiologists

    International Nuclear Information System (INIS)

    Kharat, Amit T; Safvi, Amjad; Thind, SS; Singh, Amarjit

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future

  17. Cloud computing for radiologists

    Directory of Open Access Journals (Sweden)

    Amit T Kharat

    2012-01-01

    Full Text Available Cloud computing is a concept wherein a computer grid is created using the Internet with the sole purpose of utilizing shared resources such as computer software and hardware on a pay-per-use model. Using Cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as applications, client, infrastructure, storage, services, and processing power, Cloud computing can help imaging units rapidly scale and descale operations and avoid huge spending on maintenance of costly applications and storage. Cloud computing allows flexibility in imaging. It sets radiology free from the confines of a hospital and creates a virtual mobile office. The downsides to Cloud computing involve security and privacy issues which need to be addressed to ensure the success of Cloud computing in the future.

  18. Cloud Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?" by identifying and analyzing its characteristics, models and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization and Utility Computing will be discussed and analyzed.

  19. Cloud Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Cloud computing, the recent years' buzzword for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?" by identifying and analyzing its characteristics, models and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization and Utility Computing will be discussed and analyzed.

  20. Cloud Bursting with GlideinWMS: Means to satisfy ever increasing computing needs for Scientific Workflows

    International Nuclear Information System (INIS)

    Mhashilkar, Parag; Tiradani, Anthony; Holzman, Burt; Larson, Krista; Sfiligoi, Igor; Rynge, Mats

    2014-01-01

    Scientific communities have been at the forefront of adopting new computing technologies and methodologies. Scientific computing has influenced how science is done today, achieving breakthroughs that were impossible several decades ago. For the past decade, several such communities in the Open Science Grid (OSG) and the European Grid Infrastructure (EGI) have been using GlideinWMS to run complex application workflows and effectively share computational resources over the grid. GlideinWMS is a pilot-based workload management system (WMS) that creates, on demand, a dynamically sized overlay HTCondor batch system on grid resources. At present, the computational resources shared over the grid are just adequate to sustain the computing needs. We envision that the complexity of science driven by 'Big Data' will further push the need for computational resources. To fulfill their increasing demands and/or to run specialized workflows, some of the big communities like CMS are investigating the use of cloud computing as Infrastructure-as-a-Service (IaaS) with GlideinWMS as a potential alternative to fill the void. Similarly, communities with no previous access to computing resources can use GlideinWMS to set up a batch system on cloud infrastructure. To enable this, the architecture of GlideinWMS has been extended to support interfacing GlideinWMS with different scientific and commercial cloud providers like HLT, FutureGrid, FermiCloud and Amazon EC2. In this paper, we describe a solution for cloud bursting with GlideinWMS. The paper describes the approach, architectural changes and lessons learned while enabling support for cloud infrastructures in GlideinWMS.
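
    The elasticity decision at the heart of such cloud bursting can be pictured with a short sketch. This is a hypothetical illustration, not part of GlideinWMS: the names (PoolState, plan_cloud_burst) and the sizing parameters are invented, and a real factory would also consult queue history and provider quotas.

        # Hypothetical sketch of the cloud-bursting decision: lease just enough
        # cloud VMs to cover the demand the grid cannot absorb.
        from dataclasses import dataclass

        @dataclass
        class PoolState:
            pending_jobs: int       # jobs waiting in the overlay batch system
            idle_grid_slots: int    # grid slots currently available
            running_cloud_vms: int  # VMs already leased from the cloud

        def plan_cloud_burst(state: PoolState, jobs_per_vm: int = 8,
                             max_cloud_vms: int = 100) -> int:
            """Return how many additional cloud VMs to request (0 if none)."""
            shortfall = state.pending_jobs - state.idle_grid_slots
            if shortfall <= 0:
                return 0  # the grid alone can absorb the demand
            vms_needed = -(-shortfall // jobs_per_vm)  # ceiling division
            headroom = max_cloud_vms - state.running_cloud_vms
            return max(0, min(vms_needed, headroom))

        state = PoolState(pending_jobs=500, idle_grid_slots=120, running_cloud_vms=10)
        print(plan_cloud_burst(state))  # -> 48 VMs for the 380-job shortfall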

  1. Safe Grid

    Science.gov (United States)

    Chow, Edward T.; Stewart, Helen; Korsmeyer, David (Technical Monitor)

    2003-01-01

    The biggest users of GRID technologies come from the science and technology communities, which consist of government, industry and academia (national and international). The NASA GRID is moving to a higher technology readiness level (TRL) today; as a joint effort among these leaders within government, academia, and industry, the NASA GRID plans to extend availability to enable scientists and engineers across these geographical boundaries to collaborate on solving important problems facing the world in the 21st century. In order to enable NASA programs and missions to use IPG resources for program and mission design, the IPG capabilities need to be accessible from inside the NASA center networks. However, because different NASA centers maintain different security domains, GRID penetration across different firewalls is a concern for center security personnel. This is the reason why some IPG resources have been separated from the NASA center network. Also, because of center network security and ITAR concerns, the NASA IPG resource owner may not have full control over who can access remotely from outside the NASA center. In order to obtain organizational approval for secured remote access, the IPG infrastructure needs to be adapted to work with the NASA business process. Improvements need to be made before the IPG can be used for NASA program and mission development. The Secured Advanced Federated Environment (SAFE) technology is designed to provide federated security across NASA center and NASA partner security domains. Instead of one giant center firewall, which can be difficult to modify for different GRID applications, the SAFE "micro security domains" provide a large number of professionally managed "micro firewalls" that can allow NASA centers to accept remote IPG access without the worry of damaging other center resources. The SAFE policy-driven, capability-based federated security mechanism can enable joint organizational and resource owner approved remote

  2. OMI/Aura Effective Cloud Pressure and Fraction (Raman Scattering) Daily L2 Global 0.25 deg Lat/Lon Grid V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed Version-3 Aura OMI Level-2G Cloud data product OMCLDRRG has been made available (in April 2012) to the public from the NASA Goddard Earth Sciences...

  3. Power grid complex network evolutions for the smart grid

    NARCIS (Netherlands)

    Pagani, Giuliano Andrea; Aiello, Marco

    2014-01-01

    The shift towards an energy grid dominated by prosumers (consumers and producers of energy) will inevitably have repercussions on the electricity distribution infrastructure. Today the grid is a hierarchical one delivering energy from large scale facilities to end-users. Tomorrow it will be a

  4. OMI/Aura Cloud Pressure and Fraction (O2-O2 Absorption) Daily L2 Global 0.25 deg Lat/Lon Grid V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The reprocessed OMI/Aura Level-2G cloud data product OMCLDO2G, is now available (http://disc.gsfc.nasa.gov/Aura/OMI/omcldo2g_v003.shtml) from the NASA Goddard Earth...

  5. OMI/Aura NO2 Cloud-Screened Total and Tropospheric Column Daily L3 Global 0.25deg Lat/Lon Grid V003

    Data.gov (United States)

    National Aeronautics and Space Administration — The OMI/Aura Level-3 Global Gridded(0.25x0.25 deg) Nitrogen Dioxide Product "OMNO2d" is now released (Jan 10, 2013) to the public from the NASA Goddard Earth...

  6. Smart grid

    International Nuclear Information System (INIS)

    Choi, Dong Bae

    2001-11-01

    This book describes the smart grid from basics to recent trends. It is divided into ten chapters dealing with: the smart grid as a green revolution in energy, with an introduction, history, fields, applications and the techniques needed for the smart grid; trends of the smart grid abroad, such as foreign smart grid business models and smart grid policy in the U.S.A.; domestic trends of the smart grid, with international smart grid standards, strategy and road map; the smart power grid as infrastructure for smart business, with EMS development, SAS, SCADA, DAS and PQMS; the smart grid for the smart consumer; smart renewables such as the Desertec project; convergence with IT, networks and PLC; applications for the electric car; smart electricity services for a real-time electricity pricing system; and the arrangement of the smart grid.

  7. House of tomorrow today

    NARCIS (Netherlands)

    Lichtenberg, J.J.N.; Ham, M.; Hensen, J.L.M.

    2011-01-01

    The House of Tomorrow Today is a project focussing on a healthy, energy-producing dwelling to be realized with today's proven technology. The project aims at an energy-plus level based on the principles formulated in SmartBuilding (Slimbouwen) [1], ActiveHouse [2] and HoTT [3]. It can be seen as

  8. Toward Cloud Computing Evolution

    OpenAIRE

    Susanto, Heru; Almunawar, Mohammad Nabil; Kang, Chen Chin

    2012-01-01

    Information Technology (IT) has shaped the success of organizations, giving them a solid foundation that increases both their level of efficiency and their productivity. The computing industry is witnessing a paradigm shift in the way computing is performed worldwide. There is a growing awareness among consumers and enterprises of accessing their IT resources extensively through a "utility" model known as "cloud computing." Cloud computing was initially rooted in distributed grid-based computing. ...

  9. Privacy Protection in Cloud Using Rsa Algorithm

    OpenAIRE

    Amandeep Kaur; Manpreet Kaur

    2014-01-01

    The cloud computing architecture has been in high demand nowadays. The cloud has prevailed over grid and distributed environments due to its cost and high reliability, along with high security. However, research shows that cloud computing still has some security issues regarding privacy. Cloud brokers provide cloud services to the general public and ensure that data is protected; however, they sometimes lack security and privacy. Thus in this work...
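
    As a reminder of the mechanism such schemes build on, the following is textbook RSA on toy numbers. It is only a sketch of the arithmetic: real deployments use 2048+ bit primes, padding such as OAEP, and a vetted cryptography library.

        # Textbook RSA with toy parameters (do not use for real data).
        p, q = 61, 53                # two small primes, kept secret
        n = p * q                    # public modulus (3233)
        phi = (p - 1) * (q - 1)      # Euler's totient of n (3120)
        e = 17                       # public exponent, coprime to phi
        d = pow(e, -1, phi)          # private exponent: inverse of e mod phi

        def encrypt(m: int) -> int:  # anyone with the public key (n, e)
            return pow(m, e, n)

        def decrypt(c: int) -> int:  # only the private-key holder (n, d)
            return pow(c, d, n)

        message = 65                 # plaintext must be an integer < n
        cipher = encrypt(message)    # -> 2790: what the cloud would store
        assert decrypt(cipher) == message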

  10. An Overview of Cloud Computing in Distributed Systems

    Science.gov (United States)

    Divakarla, Usha; Kumari, Geetha

    2010-11-01

    Cloud computing is the emerging trend in the field of distributed computing. Cloud computing evolved from grid computing and distributed computing. The Cloud plays an important role in large organizations in maintaining huge amounts of data with limited resources. The Cloud also helps in resource sharing through specific virtual machines provided by the cloud service provider. This paper gives an overview of the cloud organization and some of the basic security issues pertaining to the cloud.

  11. GEWEX cloud assessment: A review

    Science.gov (United States)

    Stubenrauch, Claudia; Rossow, William B.; Kinne, Stefan; Ackerman, Steve; Cesana, Gregory; Chepfer, Hélène; Di Girolamo, Larry; Getzewich, Brian; Guignard, Anthony; Heidinger, Andy; Maddux, Brent; Menzel, Paul; Minnis, Patrick; Pearl, Cindy; Platnick, Steven; Poulsen, Caroline; Riedi, Jérôme; Sayer, Andrew; Sun-Mack, Sunny; Walther, Andi; Winker, Dave; Zeng, Shen; Zhao, Guangyu

    2013-05-01

    Clouds cover about 70% of the Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the entire globe and across the wide range of spatial and temporal scales that comprise weather and climate variability. Satellite cloud data records now exceed 25 years; however, climatologies compiled from different satellite datasets can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors. The Global Energy and Water cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel, provides the first coordinated intercomparison of publicly available, global cloud products (gridded, monthly statistics) retrieved from measurements of multi-spectral imagers (some with multi-angle view and polarization capabilities), IR sounders and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature or altitude), cloud radiative properties (optical depth or emissivity), cloud thermodynamic phase and bulk microphysical properties (effective particle size and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. The monthly, gridded database presented here facilitates further assessments, climate studies, and the evaluation of climate models.

  12. First Tuesday - CERN, The Grid gets real

    CERN Multimedia

    Robertson, Leslie

    2003-01-01

    A few years ago, "the Grid" was just a vision dreamt up by some computer scientists who wanted to share processor power and data storage capacity between computers around the world - in much the same way as today's Web shares information seamlessly between millions of computers. Today, Grid technology is a huge enterprise, involving hundreds of software engineers, and generating exciting opportunities for industry. "Computing on demand", "utility computing", "web services", and "virtualisation" are just a few of the buzzwords in the IT industry today that are intimately connected to the development of Grid technology. For this third First Tuesday @CERN, the panel will survey some of the latest major breakthroughs in building international computer Grids for science. It will also provide a snapshot of Grid-related industrial activities, with contributions from both major players in the IT sector as well as emerging Grid technology start-ups.

  13. Current Grid operation and future role of the Grid

    Science.gov (United States)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  14. Current Grid operation and future role of the Grid

    International Nuclear Information System (INIS)

    Smirnova, O

    2012-01-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of a similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more burden falls on operations, and the other way around. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider, and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts take place

  15. Security Audit Compliance for Cloud Computing

    OpenAIRE

    Doelitzscher, Frank

    2014-01-01

    Cloud computing has grown rapidly over the past three years and is widely popular in today's IT landscape. In a survey of 250 IT decision makers at UK companies, respondents said that they already use cloud services for 61% of their systems. Cloud vendors promise "infinite scalability and resources" combined with on-demand access from everywhere. This lets cloud users quickly forget that there is still a real IT infrastructure behind a cloud. Due to virtualization and multi-ten...

  16. MULTI TENANCY SECURITY IN CLOUD COMPUTING

    OpenAIRE

    Manjinder Singh*, Charanjit Singh

    2017-01-01

    The word Cloud is used as a metaphor for the Internet, based on the standardised use of a cloud-like shape to denote a network. Cloud computing is an advanced technology for resource sharing over a network at lower cost compared to other technologies. Cloud infrastructure supports various service models: IaaS, SaaS and PaaS. The term virtualization in cloud computing is very useful today. With the help of virtualization, more than one operating system is supported, with all resources, on a single piece of hardware. We can al...

  17. HP advances Grid Strategy for the adaptive enterprise

    CERN Multimedia

    2003-01-01

    "HP today announced plans to further enable its enterprise infrastructure technologies for grid computing. By leveraging open grid standards, HP plans to help customers simplify the use and management of distributed IT resources. The initiative will integrate industry grid standards, including the Globus Toolkit and Open Grid Services Architecture (OGSA), across HP's enterprise product lines" (1 page).

  18. Cloud@Home: A New Enhanced Computing Paradigm

    Science.gov (United States)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet Computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)) and Green computing (a new frontier of Ethical computing, starting from the assumption that in the near future energy costs will be related to environmental pollution).

  19. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and very promising technology. It has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  20. Smart Control of Energy Distribution Grids over Heterogeneous Communication Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Iov, Florin; Hägerling, Christian

    2014-01-01

    The expected growth in distributed generation will significantly affect the operation and control of today's distribution grids. Being confronted with short-time power variations of distributed generation, the assurance of a reliable service (grid stability, avoidance of energy losses...

  1. Characterization of Cloud Water-Content Distribution

    Science.gov (United States)

    Lee, Seungwon

    2010-01-01

    The development of realistic cloud parameterizations for climate models requires accurate characterizations of subgrid distributions of thermodynamic variables. To this end, a software tool was developed to characterize cloud water-content distributions at climate-model sub-grid scales. This software characterizes distributions of cloud water content with respect to cloud phase, cloud type, precipitation occurrence, and geo-location using CloudSat radar measurements. It uses a statistical method called maximum likelihood estimation to estimate the probability density function of the cloud water content.
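
    A minimal stand-in for the maximum likelihood step, assuming a lognormal form for the water-content distribution and synthetic samples in place of CloudSat retrievals (SciPy's fit() performs MLE; the real tool first stratifies profiles by phase, type and precipitation occurrence):

        # Fit a lognormal PDF to (synthetic) cloud water-content samples by MLE.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        # placeholder "water content" samples in g/m^3, standing in for CloudSat data
        samples = rng.lognormal(mean=-1.0, sigma=0.6, size=5000)

        # SciPy's fit() maximizes the likelihood; fixing loc=0 keeps the
        # two-parameter lognormal form
        shape, loc, scale = stats.lognorm.fit(samples, floc=0)
        print(f"sigma = {shape:.3f}, median = {scale:.3f} g/m^3")

        # the fitted PDF can then serve as the sub-grid distribution
        x = np.linspace(0.01, 2.0, 5)
        print(stats.lognorm.pdf(x, shape, loc, scale))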

  2. Academic librarianship today

    CERN Document Server

    2017-01-01

    Intended for use by both librarians and students in LIS programs, Academic Librarianship Today is the most current, comprehensive overview of the field available today. Key features include: Each chapter was commissioned specifically for this new book, and the authors are highly regarded academic librarians or library school faculty, or both. Cutting-edge topics such as open access, copyright, digital curation and preservation, emerging technologies, new roles for academic librarians, cooperative collection development and resource sharing, and patron-driven acquisitions are explored in depth. Each chapter ends with thought-provoking questions for discussion and carefully constructed assignments that faculty can assign or adapt for their courses. The book begins with Gilman's introduction, an overview that briefly synthesizes the contents of the contributors' chapters by highlighting major themes. The main part of the book is organized into three parts: The Academic Library Landscape Today, ...

  3. The Grid

    CERN Document Server

    Klotz, Wolf-Dieter

    2005-01-01

    Grid technology is widely emerging. Grid computing, most simply stated, is distributed computing taken to the next evolutionary level. The goal is to create the illusion of a simple, robust, yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources. This talk will give a short history of how, out of lessons learned from the Internet, the vision of Grids was born. Then the extensible anatomy of a Grid architecture will be discussed. The talk will end by presenting a selection of major Grid projects in Europe and the US and, if time permits, a short on-line demonstration.

  4. Smart grid applications and developments

    CERN Document Server

    Mah, Daphne; Li, Victor OK; Balme, Richard

    2014-01-01

    Meeting today's energy and climate challenges requires not only technological advancement but also a good understanding of stakeholders' perceptions, political sensitivity, well-informed policy analyses and innovative interdisciplinary solutions. This book will fill this gap. It is an interdisciplinary, informative book intended to provide a holistic and integrated understanding of the technology-stakeholder-policy interactions of smart grid technologies. The unique features of the book include the following: (a) interdisciplinary approach - by bringing in the policy dimensions to smart grid technologi

  5. Micro Grid: A Smart Technology

    OpenAIRE

    Naveenkumar, M; Ratnakar, N

    2012-01-01

    Distributed Generation (DG) is an approach that employs small-scale technologies to produce electricity close to the end users of power. Today's DG technologies often consist of renewable generators and offer a number of potential benefits. This paper presents a design of a micro grid, as part of smart grid technologies, with renewable energy resources like solar, wind and a diesel generator. The design of the microgrid with integration of renewable energy sources is done in PSCAD/EMTDC. This paper...

  6. Mathematics Teaching Today

    Science.gov (United States)

    Martin, Tami S.; Speer, William R.

    2009-01-01

    This article describes features, consistent messages, and new components of "Mathematics Teaching Today: Improving Practice, Improving Student Learning" (NCTM 2007), an updated edition of "Professional Standards for Teaching Mathematics" (NCTM 1991). The new book describes aspects of high-quality mathematics teaching; offers a model for observing,…

  7. The Alchemist of Today

    Science.gov (United States)

    Serret, Natasha

    2010-01-01

    Traditionally, alchemy has involved the power of transmuting base metals such as lead into gold or producing the "elixir of life" for those wealthy people who wanted to live forever. But what of today's developments? One hundred years ago, even breaking the four-minute mile would have been deemed "magic," which is what the alchemists of the past…

  8. Preface: Catalysis Today

    DEFF Research Database (Denmark)

    Li, Yongdan

    2016-01-01

    This special issue of Catalysis Today with the theme “Sustainable Energy” results from the great success of the session “Catalytic Technologies Accelerating the Establishment of Sustainable and Clean Energy”, one of the two sessions of the 1st International Symposium on Catalytic Science and Techn...

  9. Educational Entrepreneurship Today

    Science.gov (United States)

    Hess, Frederick M., Ed.; McShane, Michael Q., Ed.

    2016-01-01

    In "Educational Entrepreneurship Today", Frederick M. Hess and Michael Q. McShane assemble a diverse lineup of high-profile contributors to examine the contexts in which new initiatives in education are taking shape. They inquire into the impact of entrepreneurship on the larger field--including the development and deployment of new…

  10. Building Tomorrow's Business Today

    Science.gov (United States)

    Ryan, Jim

    2010-01-01

    Modern automobile maintenance, like most skilled-trades jobs, is more than simple nuts and bolts. Today, skilled-trades jobs might mean hydraulics, computerized monitoring equipment, electronic blueprints, even lasers. As chief executive officer of Grainger, a business-to-business maintenance, repair, and operating supplies company that…

  11. Cloud Computing and Security Issues

    OpenAIRE

    Rohan Jathanna; Dhanamma Jagli

    2017-01-01

    Cloud computing has become one of the most interesting topics in the IT world today. The cloud model of computing as a resource has changed the landscape of computing, as its promises of greater reliability, massive scalability, and decreased costs have attracted businesses and individuals alike. It adds capabilities to information technology. Over the last few years, cloud computing has grown considerably in information technology. As more and more information of individuals and compan...

  12. Using a New Event-Based Simulation Framework for Investigating Resource Provisioning in Clouds

    Directory of Open Access Journals (Sweden)

    Simon Ostermann

    2011-01-01

    Full Text Available Today, Cloud computing proposes an attractive alternative to building large-scale distributed computing environments, by which resources are no longer hosted by the scientists' computational facilities, but leased from specialised data centres only when and for how long they are needed. This new class of Cloud resources raises new and interesting research questions in the fields of resource management, scheduling, fault tolerance, or quality of service, requiring hundreds to thousands of experiments to find valid solutions. To enable such research, a scalable simulation framework is typically required for early prototyping, extensive testing and validation of results before the real deployment is performed. The scope of this paper is twofold. In the first part we present GroudSim, a Grid and Cloud simulation toolkit for scientific computing based on a scalable simulation-independent discrete-event engine. GroudSim provides a comprehensive set of features for complex simulation scenarios, from simple job executions on leased computing resources to file transfers, calculation of costs and background load on resources. Simulations can be parameterised and are easily extendable by probability distribution packages for failures which normally occur in complex distributed environments. Experimental results demonstrate the improved scalability of GroudSim compared to a related process-based simulation approach. In the second part, we show the use of the GroudSim simulator to analyse the problem of dynamic provisioning of Cloud resources to scientific workflows that do not benefit from sufficient Grid resources as required by their computational demands. We propose and study four strategies for provisioning and releasing Cloud resources that take into account the general leasing model encountered in today's commercial Cloud environments based on resource bulks, fuzzy descriptions and hourly payment intervals. We study the impact of our techniques on the
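
    To make the hourly-payment leasing model concrete, the following is a minimal discrete-event sketch, not GroudSim's actual API: a single leased VM runs arriving jobs and is released only at a paid-hour boundary, since a started hour is billed in full anyway. All event names and numbers are illustrative.

        # Tiny event loop: jobs arrive, extend the VM lease in whole paid hours,
        # release the VM when it is idle at an hour boundary.
        import heapq

        HOUR = 3600.0
        events = []  # min-heap of (time, kind, payload)

        def schedule(t, kind, payload=None):
            heapq.heappush(events, (t, kind, payload))

        busy_until = 0.0   # when the simulated VM becomes idle
        lease_end = None   # end of the currently paid hour (None = no lease)
        hours_paid = 0

        schedule(0.0, "job", 1500.0)     # (arrival time, runtime in seconds)
        schedule(1800.0, "job", 2000.0)
        schedule(9000.0, "job", 600.0)

        while events:
            t, kind, payload = heapq.heappop(events)
            if kind == "job":
                start = max(t, busy_until)
                busy_until = start + payload
                # extend the lease in whole hours until the job can finish
                while lease_end is None or lease_end < busy_until:
                    lease_end = (lease_end or start) + HOUR
                    hours_paid += 1
                    schedule(lease_end, "lease_check")
            elif kind == "lease_check" and t == lease_end and t >= busy_until:
                lease_end = None  # idle at the hour boundary: release the VM

        print("hours paid:", hours_paid)  # -> 3 for this arrival pattern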

  13. Packaging Printing Today

    OpenAIRE

    Stanislav Bolanča; Igor Majnarić; Kristijan Golubović

    2015-01-01

    Packaging printing today covers about 50% of all printing products. Among the printing products there are printing on labels, printing on flexible packaging, printing on folding boxes, printing on boxes of corrugated board, printing on glass packaging, synthetic and metal ones. The mentioned packaging types are printed in the flexo printing technique, offset printing technique, intaglio halftone process, silk-screen printing, ink ball printing, digital printing and hybrid printing processes. T...

  14. Security and privacy in smart grids

    CERN Document Server

    Xiao, Yang

    2013-01-01

    Presenting the work of prominent researchers working on smart grids and related fields around the world, Security and Privacy in Smart Grids identifies state-of-the-art approaches and novel technologies for smart grid communication and security. It investigates the fundamental aspects and applications of smart grid security and privacy and reports on the latest advances in the range of related areas-making it an ideal reference for students, researchers, and engineers in these fields. The book explains grid security development and deployment and introduces novel approaches for securing today'

  15. Progress in Grid Generation: From Chimera to DRAGON Grids

    Science.gov (United States)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach evolved. The chimera grid composes a set of overlapped structured grids which are independently generated and body-fitted, yielding a high-quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient for generating grids about complex geometries and has been demonstrated to deliver accurate aerodynamic predictions of complex flows. While its geometrical flexibility is attractive, interpolation of data in the overlapped regions - which in today's 3D practice is done in a nonconservative fashion - is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adopts the strengths of the unstructured grid while keeping its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow the governing equations, we use unstructured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation of the chimera thinking is coined the DRAGON grid. The unstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) it preserves the strengths of the chimera grid; (2) it eliminates difficulties sometimes encountered in the chimera scheme, such as orphan points and poor quality of interpolation stencils; and (3) it makes grid communication fully conservative and consistent insofar as the governing equations are concerned. To demonstrate its use, the governing equations are

  16. Big Data, indispensable today

    Directory of Open Access Journals (Sweden)

    Radu-Ioan ENACHE

    2015-10-01

    Full Text Available Big data is, and will increasingly be, used as a tool for everything that happens both online and offline. Of course, it is online that Big Data is mostly found, offering many advantages and being a real help for all consumers. In this paper we discuss Big Data as a plus in developing new applications, by gathering useful information about users and their behaviour. We also present the key aspects of real-time monitoring and the architecture principles of this technology. The most important benefit discussed in this paper is presented in the cloud section.

  17. Securing the Data in Clouds with Hyperelliptic Curve Cryptography

    OpenAIRE

    Mukhopadhyay, Debajyoti; Shirwadkar, Ashay; Gaikar, Pratik; Agrawal, Tanmay

    2014-01-01

    In today's world, Cloud computing has attracted research communities as it provides services at reduced cost by virtualizing all the necessary resources. Even modern business architecture depends upon Cloud computing. As an Internet-based utility which provides various services over a network, it is prone to network-based attacks. Hence security in clouds is of the utmost importance in cloud computing. Cloud security concerns make customers hesitant to fully rely on storing data in clouds. Th...

  18. Research on cloud computing solutions

    Directory of Open Access Journals (Sweden)

    Liudvikas Kaklauskas

    2015-07-01

    Full Text Available Cloud computing can be defined as a new style of computing in which dynamically scalable and often virtualized resources are provided as services over the Internet. Advantages of the cloud computing technology include cost savings, high availability, and easy scalability. Voas and Zhang adapted six phases of computing paradigms, from dummy terminals/mainframes, to PCs, to network computing, to grid and cloud computing. There are four types of cloud computing: public cloud, private cloud, hybrid cloud and community cloud. The most common and well-known deployment model is the public cloud. A private cloud is suited for sensitive data, where the customer is dependent on a certain degree of security. According to the different types of services offered, cloud computing can be considered to consist of three layers (service models): IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service). The main cloud computing solutions are web applications, data hosting, virtualization, database clusters and terminal services. The advantage of cloud computing is the ability to virtualize and share resources among different applications with the objective of better server utilization; without a clustering solution, a service may fail at the moment the server crashes. DOI: 10.15181/csat.v2i2.914

  19. The Model of the Software Running on a Computer Equipment Hardware Included in the Grid network

    Directory of Open Access Journals (Sweden)

    T. A. Mityushkina

    2012-12-01

    Full Text Available A new approach to building a cloud computing environment using Grid networks is proposed in this paper. The authors describe the functional capabilities, algorithm and model of the software running on computer hardware included in the Grid network, which will allow a cloud computing environment to be implemented using Grid technologies.

  20. Today's markets for superconductivity

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    The worldwide market for superconductive products may exceed $1 billion in 1987. These products are expanding the frontiers of science, revolutionizing the art of medical diagnosis, and developing the energy technology of the future. In general, today's customers for superconductive equipment want the highest possible performance, almost regardless of cost. The products operate within a few degrees of absolute zero, and virtually all are fabricated from niobium or niobium alloys; so far the high-temperature superconductors discovered in 1986 and 1987 have had no impact on these markets. The industry shows potential for profound societal impact, even without the new materials

  1. Man and electrotechnics today

    Energy Technology Data Exchange (ETDEWEB)

    1980-12-01

    Man and electrotechnics today - this topic was discussed by experts of the VDE (Society of German Electrotechnicians) during a podium discussion chaired by the TV journalist Ruediger Proske. The discussion centred on the popular questions of energy supply, electronics and technical communication systems. What influence do these technologies have on our society, how can the social consequences for workplaces be estimated, and what role is played here by technicians, industry and the economy? In the partly very heated debate, the participants expressed their concern that the negative attitude which society has been developing towards technology will cause big problems in the future.

  2. Man and electrotechnics today

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    Man and electrotechnics today - this topic was discussed by experts of the VDE (Society of German Electrotechnicians) during a podium discussion chaired by the TV journalist Ruediger Proske. The discussion centred on the popular questions of energy supply, electronics and technical communication systems. What influence do these technologies have on our society, how can the social consequences for workplaces be estimated, and what role is played here by technicians, industry and the economy? In the partly very heated debate, the participants expressed their concern that the negative attitude which society has been developing towards technology will cause big problems in the future. (orig.) [de

  3. Nuclear energy today

    International Nuclear Information System (INIS)

    2003-01-01

    Energy is the power of the world's economies, whose appetite for this commodity is increasing as the leading economies expand and developing economies grow. How to provide the energy demanded while protecting our environment and conserving natural resources is a vital question facing us today. Many parts of our society are debating how to power the future and whether nuclear energy should play a role. Nuclear energy is a complex technology with serious issues and a controversial past. Yet it also has the potential to provide considerable benefits. In pondering the future of this imposing technology, people want to know: How safe is nuclear energy? Is nuclear energy economically competitive? What role can nuclear energy play in meeting greenhouse gas reduction targets? What can be done with the radioactive waste it generates? Does its use increase the risk of proliferation of nuclear weapons? Are there sufficient and secure resources to permit its prolonged exploitation? Can tomorrow's nuclear energy be better than today's? This publication provides authoritative and factual replies to these questions. Written primarily to inform policy makers, it will also serve interested members of the public, academics, journalists and industry leaders. (author)

  4. Community Cloud Computing

    Science.gov (United States)

    Marinos, Alexandros; Briscoe, Gerard

    Cloud Computing is rising fast, with its data centres growing at an unprecedented rate. However, this has come with concerns over privacy, efficiency at the expense of resilience, and environmental sustainability, because of the dependence on Cloud vendors such as Google, Amazon and Microsoft. Our response is an alternative model for the Cloud conceptualisation, providing a paradigm for Clouds in the community, utilising networked personal computers for liberation from the centralised vendor model. Community Cloud Computing (C3) offers an alternative architecture, created by combining the Cloud with paradigms from Grid Computing, principles from Digital Ecosystems, and sustainability from Green Computing, while remaining true to the original vision of the Internet. It is more technically challenging than Cloud Computing, having to deal with distributed computing issues, including heterogeneous nodes, varying quality of service, and additional security constraints. However, these are not insurmountable challenges, and with the need to retain control over our digital lives and the potential environmental consequences, it is a challenge we must pursue.

  5. Women in engineering conference: capitalizing on today's challenges

    Energy Technology Data Exchange (ETDEWEB)

    Metz, S.S.; Martins, S.M. [eds.]

    1996-06-01

    This document contains the conference proceedings of the Women in Engineering Conference: Capitalizing on Today's Challenges, held June 1-4, 1996 in Denver, Colorado. Topics included engineering and science education, career paths, workplace issues, and affirmative action.

  6. Technical report writing today

    CERN Document Server

    Riordan, Daniel G

    2014-01-01

    "Technical Report Writing Today" provides thorough coverage of technical writing basics, techniques, and applications. Through a practical focus with varied examples and exercises, students internalize the skills necessary to produce clear and effective documents and reports. Project worksheets help students organize their thoughts and prepare for assignments, and focus boxes highlight key information and recent developments in technical communication. Extensive individual and collaborative exercises expose students to different kinds of technical writing problems and solutions. Annotated student examples - more than 100 in all - illustrate different writing styles and approaches to problems. Numerous short and long examples throughout the text demonstrate solutions for handling writing assignments in current career situations. The four-color artwork in the chapter on creating visuals keeps pace with contemporary workplace capabilities. The Tenth Edition offers the latest information on using electronic resum...

  7. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  8. First Tuesday@CERN - THE GRID GETS REAL !

    CERN Document Server

    2003-01-01

    A few years ago, "the Grid" was just a vision dreamt up by some computer scientists who wanted to share processor power and data storage capacity between computers around the world - in much the same way as today's Web shares information seamlessly between millions of computers. Today, Grid technology is a huge enterprise, involving hundreds of software engineers, and generating exciting opportunities for industry. "Computing on demand", "utility computing", "web services", and "virtualisation" are just a few of the buzzwords in the IT industry today that are intimately connected to the development of Grid technology. For this third First Tuesday @CERN, the panel will survey some of the latest major breakthroughs in building international computer Grids for science. It will also provide a snapshot of Grid-related industrial activities, with contributions from both major players in the IT sector as well as emerging Grid technology start-ups. Panel: - Les Robertson, Head of the LHC Computing Grid Project, IT ...

  9. Thundercloud: Domain specific information security training for the smart grid

    Science.gov (United States)

    Stites, Joseph

    In this paper, we describe a cloud-based virtual smart grid test bed: ThunderCloud, which is intended to be used for domain-specific security training applicable to the smart grid environment. The test bed consists of virtual machines connected using a virtual internal network. ThunderCloud is remotely accessible, allowing students to undergo educational exercises online. We also describe a series of practical exercises that we have developed for providing the domain-specific training using ThunderCloud. The training exercises and attacks are designed to be realistic and to reflect known vulnerabilities and attacks reported in the smart grid environment. We were able to use ThunderCloud to offer practical domain-specific security training for the smart grid environment to computer science students at little or no cost to the department and no risk to any real networks or systems.

  10. Grid Computing

    Indian Academy of Sciences (India)

    IAS Admin

    The emergence of supercomputers led to the use of computer simulation as an ... Scientific and engineering applications (e.g., TeraGrid secure gateway). Collaborative ... Encryption, privacy, protection from malicious software. Physical layer.

  11. Cloud Computing: Architecture and Services

    OpenAIRE

    Ms. Ravneet Kaur

    2018-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. It is a method for delivering information technology (IT) services where resources are retrieved from the Internet through web-based tools and applications, as opposed to a direct connection to a server. Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage makes it possib...

  12. Knee arthrography today

    International Nuclear Information System (INIS)

    Otto, H.; Kallenberger, R.

    1987-01-01

    The role of knee arthrography today is demonstrated and technical problems are discussed. Among many variants, the position of the patient and the choice of contrast media play a great part in the result of the examination. Mild complications occur in 0.25% of the examinations; severe and life-threatening complications are extremely rare. Diagnosis of meniscal lesions is the most important application of knee arthrography; arthroscopy and arthrography are complementary examinations and not mutually exclusive, and combined they achieve an accuracy of 97-98%. In the same way, arthrography is able to evaluate chondropathy of the femoro-tibial joint, whereas the accuracy of arthroscopy in the diagnosis of patellar chondropathy is much higher. Arthrography is highly reliable in the evaluation of lesions of the capsule, but its accuracy for lesions of the cruciate ligaments is low. Arthrography is very suitable for the evaluation of Baker's cysts, since indications of commonly associated internal derangement of the knee are even available. Knee arthrography is a complex and safe procedure with very little discomfort for the patient; it has a central position in the evaluation of lesions of the knee. (orig.) [de

  13. Packaging Printing Today

    Directory of Open Access Journals (Sweden)

    Stanislav Bolanča

    2015-12-01

    Full Text Available Packaging printing today covers about 50% of all printing products. Among the printing products there are printing on labels, printing on flexible packaging, printing on folding boxes, printing on boxes of corrugated board, printing on glass packaging, synthetic and metal ones. The mentioned packaging types are printed in the flexo printing technique, offset printing technique, intaglio halftone process, silk-screen printing, ink ball printing, digital printing and hybrid printing processes. The possibilities of particular printing techniques for the optimal production of given packaging were studied in this paper. The problem was viewed from the technological and economic aspects. The achievable printing quality and the time necessary to realize the print job were taken as key parameters. An important segment of production and the way of life is alocation value, and it has also found its place in this paper. Developments in the field of packaging printing around the whole world were analyzed. The trends in technique development and printing technology for packaging printing in the near future were also discussed.

  14. Grid computing

    CERN Multimedia

    2007-01-01

    "Some of today's large-scale scientific activities - modelling climate change, Earth observation, studying the human genome and particle physics experiments - involve handling millions of bytes of data very rapidly." (1 page)

  15. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    Harald Barreiro Megino, Fernando; Van der Ster, Daniel; Benjamin, Doug; De, Kaushik; Gable, Ian; Paterson, Michael; Taylor, Ryan; Hendrix, Val; Vitillo, Roberto A; Panitkin, Sergey; De Silva, Asoka; Walker, Rod

    2012-01-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R and D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.
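
    As one concrete illustration of leasing worker VMs through a cloud API of the kind described above, here is a hypothetical boto3 (EC2) sketch; the AMI ID, instance type and bootstrap script are placeholders, not ATLAS's actual cloud factory configuration.

        # Lease worker VMs via the EC2 API; a real pilot factory would boot an
        # image whose user-data starts the pilot that joins the experiment's
        # workload management system.
        import boto3

        def launch_workers(count: int):
            ec2 = boto3.resource("ec2", region_name="us-east-1")
            bootstrap = "#!/bin/bash\n/opt/pilot/start-pilot.sh\n"  # placeholder
            instances = ec2.create_instances(
                ImageId="ami-00000000000000000",  # placeholder worker image
                InstanceType="m5.large",
                MinCount=count,
                MaxCount=count,
                UserData=bootstrap,
                TagSpecifications=[{
                    "ResourceType": "instance",
                    "Tags": [{"Key": "role", "Value": "cloud-worker"}],
                }],
            )
            return [i.id for i in instances]

        print(launch_workers(2))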

  16. Monitoring the EGEE/WLCG grid services

    International Nuclear Information System (INIS)

    Duarte, A; Nyczyk, P; Retico, A; Vicinanza, D

    2008-01-01

    Grids have the potential to revolutionise computing by providing ubiquitous, on-demand access to computational services and resources. They promise to allow on-demand access to, and composition of, computational services provided by multiple independent sources. Grids can also provide unprecedented levels of parallelism for high-performance applications. On the other hand, grid characteristics, such as high heterogeneity, complexity and distribution, create many new technical challenges. Among these technical challenges, failure management is a key area that demands much progress. A recent survey revealed that fault diagnosis is still a major problem for grid users. When a failure appears on the user's screen, it becomes very difficult for the user to identify whether the problem is in the application, somewhere in the grid middleware, or even lower in the fabric that comprises the grid. In this paper we present a tool able to check whether a given grid service works as expected for a given set of users (Virtual Organisation) on the different resources available on a grid. Our solution deals with grid services as single components that should produce an expected output for a pre-defined input, which is quite similar to unit testing. The tool, called Service Availability Monitoring or SAM, is currently being used by several different Virtual Organisations to monitor more than 300 grid sites belonging to the largest grids available today. We also discuss how this tool is being used by some of those VOs and how it is helping in the operation of the EGEE/WLCG grid
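
    The probe-as-unit-test idea can be sketched as follows; the service endpoints and expected replies are hypothetical placeholders, not actual SAM tests.

        # Probe each service with a predefined request and compare the reply
        # against the expected output, publishing a simple status per service.
        import json
        import urllib.request

        PROBES = [
            # (service name, URL to probe, expected substring in the reply)
            ("storage-element", "https://se.example.org/ping", "OK"),
            ("compute-element", "https://ce.example.org/status", "READY"),
        ]

        def run_probe(name, url, expected, timeout=10):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    body = resp.read().decode("utf-8", errors="replace")
                status = "ok" if expected in body else "degraded"
            except OSError as exc:  # DNS failure, refused connection, timeout
                status = "down: " + str(exc)
            return {"service": name, "status": status}

        print(json.dumps([run_probe(*p) for p in PROBES], indent=2))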

  17. The Neighboring Column Approximation (NCA) – A fast approach for the calculation of 3D thermal heating rates in cloud resolving models

    International Nuclear Information System (INIS)

    Klinger, Carolin; Mayer, Bernhard

    2016-01-01

    Due to computational costs, radiation is usually neglected or solved in a plane-parallel 1D approximation in today's numerical weather forecast and cloud resolving models. We present a fast and accurate method to calculate 3D heating and cooling rates in the thermal spectral range that can be used in cloud resolving models. The parameterization considers net fluxes across horizontal box boundaries in addition to the top and bottom boundaries. Since the largest heating and cooling rates occur inside the cloud, close to the cloud edge, the method needs, to first approximation, only the information of whether a grid box is at the edge of a cloud or not. Therefore, in order to calculate the heating or cooling rates of a specific grid box, only the directly neighboring columns are used. Our so-called Neighboring Column Approximation (NCA) is an analytical treatment of cloud side effects, which can be considered a convolution of a 1D radiative transfer result with a kernel of radius 1 grid box (5-point stencil), and which usually does not break the parallelization of a cloud resolving model. The NCA can easily be applied to any cloud resolving model that includes a 1D radiation scheme. Due to the neglect of horizontal transport of radiation further away than one model column, the NCA works best for model resolutions of about 100 m or larger. In this paper we describe the method and show a set of applications to LES cloud field snapshots. Correction terms, gains and restrictions of the NCA are described. Comprehensive comparisons to the 3D Monte Carlo model MYSTIC and a 1D solution are shown. In realistic cloud fields, the full 3D simulation with MYSTIC shows cooling rates up to −150 K/d (100 m resolution) while the 1D solution shows maximum cooling of only −100 K/d. The NCA is capable of reproducing the larger 3D cooling rates. The spatial distribution of the heating and cooling is improved considerably. Computational costs are only a factor of 1.5–2 higher compared to a 1D radiative transfer calculation.
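
    The core idea, correcting each column's 1D result using only the cloud mask of its directly neighbouring columns (the 5-point stencil), can be sketched as follows. The enhancement factor below is an arbitrary placeholder, not the analytic correction terms derived in the paper.

    # Toy sketch of the neighbouring-column idea behind the NCA: a grid box gets
    # extra cooling when a horizontally adjacent column is cloud-free, i.e. when
    # it sits at a cloud edge. The 0.5 factor is an arbitrary placeholder, NOT
    # the analytic correction of the paper.
    import numpy as np

    def nca_like_correction(cooling_1d: np.ndarray, cloud_mask: np.ndarray,
                            edge_factor: float = 0.5) -> np.ndarray:
        """cooling_1d, cloud_mask: (nx, ny, nz) arrays; returns corrected rates."""
        # Count cloud-free horizontal neighbours (5-point stencil, periodic domain).
        clear = 1 - cloud_mask
        n_clear = (np.roll(clear, 1, axis=0) + np.roll(clear, -1, axis=0) +
                   np.roll(clear, 1, axis=1) + np.roll(clear, -1, axis=1))
        # Enhance in-cloud cooling in proportion to exposed cloud sides.
        enhancement = 1.0 + edge_factor * (n_clear / 4.0) * cloud_mask
        return cooling_1d * enhancement

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        mask = (rng.random((16, 16, 8)) > 0.7).astype(float)  # random cloud field
        cool = -100.0 * mask                                  # K/d, in-cloud only
        corrected = nca_like_correction(cool, mask)
        print("min 1D cooling:", cool.min(), "min corrected:", corrected.min())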

  18. Skateboarding injuries of today

    Science.gov (United States)

    Forsman, L; Eriksson, A

    2001-01-01

    Background—Skateboarding injuries have increased with the rise in popularity of the sport, and the injury pattern can be expected to have changed with the development of both skateboard tricks and the materials used for skateboard construction. Objective—To describe the injury pattern of today. Methods—The pattern of injuries, circumstances, and severity were investigated in a study of all 139 people injured in skateboarding accidents during the period 1995–1998 inclusive and admitted to the University Hospital of Umeå. This is the only hospital in the area, serving a population of 135 000. Results—Three of the 139 injured were pedestrians hit by a skateboard rider; the rest were riders. The age range was 7–47 years (mean 16). The severity of the injuries was minor (AIS 1) to moderate (AIS 2); fractures were classified as moderate. The annual number of injuries increased during the study period. Fractures were found in 29% of the casualties, and four children had concussion. The most common fractures were of the ankle and wrist. Older patients had less severe injuries, mainly sprains and soft tissue injuries. Most children were injured while skateboarding on ramps and at arenas; only 12 (9%) were injured while skateboarding on roads. Some 37% of the injuries occurred because of a loss of balance, and 26% because of a failed trick attempt. Falls caused by surface irregularities resulted in the highest proportion of the moderate injuries. Conclusions—Skateboarding should be restricted to supervised skateboard parks, and skateboarders should be required to wear protective gear. These measures would reduce the number of skateboarders injured in motor vehicle collisions, reduce the personal injuries among skateboarders, and reduce the number of pedestrians injured in collisions with skateboarders. Key Words: skateboard; injury; prevention PMID:11579065

  19. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has a strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and the BESIII elastic cloud, are also described briefly in the paper. (authors)

  20. International Symposium on Grids and Clouds 2013

    CERN Document Server

    2013-01-01

    ISGC 2013 will bring together, from the Asia-Pacific region and around the world, researchers who are developing applications to produce large-scale data sets and the data analytics tools to extract knowledge from the generated data, as well as the e-infrastructure providers that integrate the distributed computing, storage and network resources to support these multidisciplinary research collaborations. The meeting will feature workshops, tutorials, keynotes and technical sessions to further support the development of a global e-infrastructure for collaborative Simulation, Modelling and Data Analytics.

  1. Experience in using commercial clouds in CMS

    Energy Technology Data Exchange (ETDEWEB)

    Bauerdick, L. [Fermilab; Bockelman, B. [Nebraska U.; Dykstra, D. [Fermilab; Fuess, S. [Fermilab; Garzoglio, G. [Fermilab; Girone, M. [CERN; Gutsche, O. [Fermilab; Holzman, B. [Fermilab; Hufnagel, D. [Fermilab; Kim, H. [Fermilab; Kennedy, R. [Fermilab; Mason, D. [Fermilab; Spentzouris, P. [Fermilab; Timm, S. [Fermilab; Tiradani, A. [Fermilab; Vaandering, E. [Fermilab

    2017-10-03

    Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and compare cost and operational efficiency with our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain where large-scale resources can be scheduled at peak times.

  2. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources is discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  3. Academic Training Lecture Regular Programme: Cloud Computing

    CERN Multimedia

    2012-01-01

    Cloud Computing (1/2), by Belmiro Rodrigues Moreira (LIP Laboratorio de Instrumentacao e Fisica Experimental de Part).   Wednesday, May 30, 2012 from 11:00 to 12:00 (Europe/Zurich) at CERN ( 500-1-001 - Main Auditorium ) Cloud computing, the buzzword of recent years for distributed computing, continues to attract and keep the interest of both the computing and business worlds. These lectures aim at explaining "What is Cloud Computing?", identifying and analyzing its characteristics, models, and applications. The lectures will explore different "Cloud definitions" given by different authors and use them to introduce the particular concepts. The main cloud models (SaaS, PaaS, IaaS), cloud types (public, private, hybrid), cloud standards and security concerns will be presented. The borders between Cloud Computing and Grid Computing, Server Virtualization and Utility Computing will be discussed and analyzed.

  4. Cloud Computing for Technical and Online Organizations

    OpenAIRE

    Hagos Tesfahun Gebremichael; Vuda Sreenivasa Rao

    2016-01-01

    Cloud computing is a new computing model which builds on grid computing, distributed computing, parallel computing and virtualization technologies to define the shape of a new technology. It is the core technology of the next generation of network computing platforms, especially in the field of education and online services. Cloud computing is an exciting development from the perspective of educational institutes and online services, and cloud computing services are a growing necessity for business organizations as well ...

  5. Interoperable Resource Management for establishing Federated Clouds

    OpenAIRE

    Kecskeméti, Gábor; Kertész, Attila; Marosi, Attila; Kacsuk, Péter

    2012-01-01

    Cloud Computing builds on the latest achievements of diverse research areas, such as Grid Computing, Service-oriented computing, business process modeling and virtualization. As this new computing paradigm was mostly led by companies, several proprietary systems arose. Recently, alongside these commercial systems, several smaller-scale privately owned systems are maintained and developed. This chapter focuses on issues faced by users with an interest in Multi-Cloud use and by Cloud providers w...

  6. Distributed Optimization of Sustainable Power Dispatch and Flexible Consumer Loads for Resilient Power Grid Operations

    Science.gov (United States)

    Srikantha, Pirathayini

    Today's electric grid is rapidly evolving to provision for heterogeneous system components (e.g. intermittent generation, electric vehicles, storage devices, etc.) while catering to diverse consumer power demand patterns. In order to accommodate this changing landscape, the widespread integration of cyber communication with physical components can be witnessed in all facets of the modern power grid. This ubiquitous connectivity provides an elevated level of awareness and decision-making ability to system operators. Moreover, devices that were typically passive in the traditional grid are now `smarter' as these can respond to remote signals, learn about local conditions and even make their own actuation decisions if necessary. These advantages can be leveraged to reap unprecedented long-term benefits that include sustainable, efficient and economical power grid operations. Furthermore, challenges introduced by emerging trends in the grid, such as high penetration of distributed energy sources, rising power demands, deregulation and cyber-security concerns due to vulnerabilities in standard communication protocols, can be overcome by tapping into the active nature of modern power grid components. In this thesis, distributed constructs in optimization and game theory are utilized to design the seamless real-time integration of a large number of heterogeneous power components such as distributed energy sources with highly fluctuating generation capacities and flexible power consumers with varying demand patterns to achieve optimal operations across multiple levels of hierarchy in the power grid. Specifically, advanced data acquisition, cloud analytics (such as prediction), control and storage systems are leveraged to promote sustainable and economical grid operations while ensuring that physical network, generation and consumer comfort requirements are met. Moreover, privacy and security considerations are incorporated into the core of the proposed designs and these

  7. Transition to the Cloud

    DEFF Research Database (Denmark)

    Hedman, Jonas; Xiao, Xiao

    2016-01-01

    The rise of cloud computing has dramatically changed the way software companies provide and distribute their IT products and related services over the last decades. Today, most software is bought off-the-shelf and distributed over the Internet. This transition is greatly influencing how software companies operate. In this paper, we present a case study of an ERP vendor for SMBs (small and medium-size businesses) making a transition towards a cloud-based business model. Through the theoretical lens of ecosystems, we are able to analyze the evolution of the vendor and its business network as a whole, and find that the relationship between the vendor and its Value-added Resellers (VARs) is greatly affected. We conclude by presenting critical issues and challenges for managing such a cloud transition.

  8. The Future of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Anamaria Siclovan

    2011-12-01

    Cloud computing was, and will continue to be, a new way of providing Internet services and computing. This computing approach is based on many existing services, such as the Internet, grid computing and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in terms of price and infrastructure. It is exactly the transition from the computer as a product to a service offered to consumers online. This represents an advantage for organizations regarding both the cost and the opportunity for new business. This paper presents future perspectives in cloud computing and discusses some issues of the cloud computing paradigm. It is a theoretical paper. Keywords: Cloud Computing, Pay-per-use

  9. Information services today an introduction

    CERN Document Server

    Hirsh, Sandra

    2015-01-01

    This essential overview of what it means to be a library and information professional today provides a broad overview of the transformation of libraries as information organizations, why these organizations are more important today than ever before, the technological influence on how we provide information resources and services in today's digital and global environment, and the various career opportunities available for information professionals. The book begins with a historical overview of libraries and their transformation as information and technology

  10. Security Issues Model on Cloud Computing: A Case of Malaysia

    OpenAIRE

    Komeil Raisian; Jamaiah Yahaya

    2015-01-01

    With the development of cloud computing, the viewpoints of many people regarding infrastructure architectures, software distribution and improvement models have changed significantly. Cloud computing is associated with a pioneering deployment architecture, which could be achieved through grid computing, utility computing and autonomic computing. The fast transition towards it has increased worries regarding a critical issue for the effective transition of cloud computing. From the security v...

  11. Power grids

    International Nuclear Information System (INIS)

    Viterbo, J.

    2012-01-01

    The implementation of renewable energies represents new challenges for electrical systems. The objective: making power grids smarter so they can handle intermittent production. The advent of smart grids will allow flexible operations like distributing energy in a multidirectional manner instead of just one way, and it will make electrical systems capable of integrating actions by different users, consumers and producers in order to maintain efficient, sustainable, economical and secure power supplies. Practically speaking, they associate sensors, instrumentation and controls with information processing and communication systems in order to create massively automated networks. Smart grids require huge investments: for example, more than 7 billion dollars were invested in China and in the USA in 2010, and France ranks 9th worldwide with 265 million dollars invested. It is expected that smart grids will promote the development of new business models and a change in the value chain for energy. Decentralized production combined with the probable introduction of more or less flexible rates for sales or purchases and of new supplier-customer relationships will open the way to the creation of new businesses. (A.C.)

  12. Self-Awareness of Cloud Applications

    NARCIS (Netherlands)

    Iosup, Alexandru; Zhu, Xiaoyun; Merchant, Arif; Kalyvianaki, Eva; Maggio, Martina; Spinner, Simon; Abdelzaher, Tarek; Mengshoel, Ole; Bouchenak, Sara

    2016-01-01

    Cloud applications today deliver an increasingly larger portion of the Information and Communication Technology (ICT) services. To address the scale, growth, and reliability of cloud applications, self-aware management and scheduling are becoming commonplace. How are they used in practice? In this

  13. THE EXPANSION OF ACCOUNTING TO THE CLOUD

    OpenAIRE

    Otilia DIMITRIU; Marian MATEI

    2014-01-01

    The world today is witnessing an explosion of technologies that are remodelling our entire reality. The traditional way of thinking in the business field has shifted towards a new IT breakthrough: cloud computing. The cloud paradigm has emerged as a natural step in the evolution of the internet and has captivated everyone’s attention. The accounting profession itself has found a means to optimize its activity through cloud-based applications. By reviewing the latest and most relevant studies a...

  14. Cloud Governance

    DEFF Research Database (Denmark)

    Berthing, Hans Henrik

    This presentation describes the benefits and value of applying Cloud Computing. It also draws on the results of a series of international ISACA analyses of Cloud Computing.

  15. Grids in Europe - a computing infrastructure for science

    International Nuclear Information System (INIS)

    Kranzlmueller, D.

    2008-01-01

    Grids provide virtually unlimited computing power and access to a variety of resources to today's scientists. Moving from a research topic of computer science to a commodity tool for science and research in general, grid infrastructures are being built all around the world. This talk provides an overview of the development of grids in Europe, the status of the so-called national grid initiatives as well as the efforts towards an integrated European grid infrastructure. The latter, summarized under the title of the European Grid Initiative (EGI), promises a permanent and reliable grid infrastructure and its services in a way similar to research networks today. The talk describes the status of these efforts, the plans for the setup of this pan-European e-Infrastructure, and the benefits for the application communities. (author)

  16. Military clouds: utilization of cloud computing systems at the battlefield

    Science.gov (United States)

    Süleyman, Sarıkürk; Volkan, Karaca; İbrahim, Kocaman; Ahmet, Şirzai

    2012-05-01

    Cloud computing is a novel information technology (IT) concept which involves facilitated and rapid access to networks, servers, data storage media, applications and services via the Internet with minimum hardware requirements. Use of information systems and technologies on the battlefield is not new. Information superiority is a force multiplier and is crucial to mission success. Recent advances in information systems and technologies provide new means to decision makers and users in order to gain information superiority. These developments in information technologies lead to a new term, known as network centric capability. Similar to network centric capable systems, cloud computing systems are operational today. In the near future extensive use of military clouds on the battlefield is predicted. Integrating cloud computing logic into network centric applications will increase the flexibility, cost-effectiveness, efficiency and accessibility of network-centric capabilities. In this paper, cloud computing and network centric capability concepts are defined. Some commercial cloud computing products and applications are mentioned. Network centric capable applications are covered. Cloud computing supported battlefield applications are analyzed. The effects of cloud computing systems on network centric capability and on the information domain in future warfare are discussed. Battlefield opportunities and novelties which might be introduced to network centric capability by cloud computing systems are researched. The role of military clouds in future warfare is proposed in this paper. It is concluded that military clouds will be indispensable components of the future battlefield. Military clouds have the potential to improve network centric capabilities, increase situational awareness on the battlefield and facilitate the achievement of information superiority.

  17. Grid pulser

    International Nuclear Information System (INIS)

    Jansweijer, P.P.M.; Es, J.T. van.

    1990-01-01

    This report describes a fast pulse generator. The generator delivers a high-voltage pulse of at most 6000 V with a rise time of less than 50 ns, which corresponds to a slew rate of more than 120,000 volts per µs. The pulse generator is used to control the grid of the injector of the electron accelerator MEA. The capacitance of this grid is about 60 pF. In order to charge this capacitance to 6000 volts in 50 ns, a current of 8 ampere is needed. The maximum pulse length is 50 µs with a repetition frequency of 500 Hz. During these 50 µs the stability of the pulse amplitude is better than 0.1%. (author). 20 figs
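
    As a quick consistency check of the quoted figures (this arithmetic is ours, not the report's):

    \[
      \frac{dV}{dt} = \frac{6000\,\mathrm{V}}{50\,\mathrm{ns}} = 120\,000\ \mathrm{V}/\mu\mathrm{s},
      \qquad
      I = C\,\frac{dV}{dt} = 60\,\mathrm{pF} \times \frac{6000\,\mathrm{V}}{50\,\mathrm{ns}} \approx 7.2\,\mathrm{A},
    \]

    so the stated 8 A drive covers the required charging current with some margin.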

  18. The grid

    OpenAIRE

    Morrad, Annie; McArthur, Ian

    2018-01-01

    Project Anywhere Project title: The Grid   Artists: Annie Morrad: Artist/Senior Lecturer, University of Lincoln, School of Film and Media, Lincoln, UK   Dr Ian McArthur: Hybrid Practitioner/Senior Lecturer, UNSW Art & Design, UNSW Australia, Sydney, Australia   Annie Morrad is a London-based artist and musician and senior lecturer at the University of Lincoln, UK. Dr Ian McArthur is a Sydney-based hybrid practitione...

  19. The evolution of cloud computing how to plan for change

    CERN Document Server

    Longbottom, Clive

    2017-01-01

    Cloud computing has been positioned as today's ideal IT platform. This book looks at what cloud promises and how it's likely to evolve in the future. Readers will be able to ensure that decisions made now will hold them in good stead in the future and will gain an understanding of how cloud can deliver the best outcome for their organisations.

  20. A principled approach to grid middleware

    DEFF Research Database (Denmark)

    Berthold, Jost; Bardino, Jonas; Vinter, Brian

    2011-01-01

    This paper provides an overview of MiG, a Grid middleware for advanced job execution, data storage and group collaboration in an integrated, yet lightweight solution using standard software. In contrast to most other Grid middlewares, MiG is developed with a particular focus on usability and minimal system requirements, applying strict principles to keep the middleware free of legacy burdens and overly complicated design. We provide an overview of MiG and describe its features in view of the Grid vision and its relation to more recent cloud computing trends.

  1. A REVIEW ON SECURITY AND PRIVACY ISSUES IN CLOUD COMPUTING

    OpenAIRE

    Gulshan Kumar; Vijay Laxmi

    2017-01-01

    Cloud computing is an upcoming paradigm that offers tremendous economic advantages, such as reduced time to market, flexible computing capabilities, and limitless computing power. To use the full potential of cloud computing, data is transferred, processed and stored by external cloud providers. However, data owners are very skeptical about placing their data outside their own control sphere. Cloud computing is a new development of grid, parallel, and distributed computing with visual...

  2. Construction Management Meets Today's Realities.

    Science.gov (United States)

    Day, C. William

    1979-01-01

    Construction management--the control of cost and time from concept through construction--grew out of a need to meet the realities of today's economy. A checklist of services a construction manager provides is presented. (Author/MLF)

  3. "UK today" Tallinnas / Tuuli Oder

    Index Scriptorium Estoniae

    Oder, Tuuli, 1958-

    2001-01-01

    As part of the national English-language olympiad, the final round of the quiz "UK today" took place in Tallinn. Two-member teams from 22 schools took part. The quiz results by school and the questions with the correct answers are given.

  4. Deliverable 1.1 Smart grid scenario

    DEFF Research Database (Denmark)

    Korman, Matus; Ekstedt, Mathias; Gehrke, Oliver

    2015-01-01

    The purpose of the SALVAGE project is to develop better support for managing and designing a secure future smart grid. This approach includes cyber security technologies dedicated to power grid operation as well as support for the migration to future smart grid solutions, including the legacy of ICT that will necessarily be part of it. The objective is further to develop cyber security technology and methodology optimized for the particular needs and context of the power industry, something that is to a large extent lacking in general cyber security best practices and technologies today.

  5. Datacenter Changes vs. Employment Rates for Datacenter Managers In the Cloud Computing Era

    OpenAIRE

    Mirzoev, Timur; Benson, Bruce; Hillhouse, David; Lewis, Mickey

    2014-01-01

    Due to the evolving Cloud Computing paradigm, there is a prevailing concern that in the near future data center managers may be in short supply. Cloud computing, as a whole, is becoming more prevalent in today's computing world. In fact, cloud computing has become so popular that some are now referring to data centers as cloud centers. How does this interest in cloud computing translate into employment rates for data center managers? The popularity of the public and private cloud models are...

  6. Towards autonomous vehicular clouds

    Directory of Open Access Journals (Sweden)

    Stephan Olariu

    2011-09-01

    The dawn of the 21st century has seen a growing interest in vehicular networking and its myriad potential applications. The initial view of practitioners and researchers was that radio-equipped vehicles could keep drivers informed about potential safety risks and increase their awareness of road conditions. The view then expanded to include access to the Internet and associated services. This position paper proposes and promotes a novel and more comprehensive vision, namely that advances in vehicular networks, embedded devices and cloud computing will enable the formation of autonomous clouds of vehicular computing, communication, sensing, power and physical resources. Hence, we coin the term autonomous vehicular clouds (AVCs). A key feature distinguishing AVCs from conventional cloud computing is that mobile AVC resources can be pooled dynamically to serve authorized users and to enable autonomy in real-time service sharing and management on terrestrial, aerial, or aquatic pathways or theaters of operations. In addition to general-purpose AVCs, we also envision the emergence of specialized AVCs such as mobile analytics laboratories. Furthermore, we envision that the integration of AVCs with ubiquitous smart infrastructures, including intelligent transportation systems, smart cities and smart electric power grids, will have an enormous societal impact, enabling ubiquitous utility cyber-physical services at the right place, right time and with right-sized resources.

  7. Today's Higher Education IT Workforce

    Science.gov (United States)

    Bichsel, Jacqueline

    2014-01-01

    The professionals making up the current higher education IT workforce have been asked to adjust to a culture of increased IT consumerization, more sourcing options, broader interest in IT's transformative potential, and decreased resources. Disruptions that include the bring-your-own-everything era, cloud computing, new management practices,…

  8. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures: it is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  9. Re-thinking Grid Security Architecture

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Koeroo, O.; Groep, D.; van Engelen, R.; Govindaraju, M.; Cafaro, M.

    2008-01-01

    The security models used in Grid systems today strongly bear the marks of their diverse origin. Historically retrofitted to the distributed systems they are designed to protect and control, the security model is usually limited in scope and applicability, and its implementation tailored towards a

  10. Smart Grid Risk Management

    Science.gov (United States)

    Abad Lopez, Carlos Adrian

    Current electricity infrastructure is being stressed from several directions -- high demand, unreliable supply, extreme weather conditions, accidents, among others. Infrastructure planners have traditionally focused only on the cost of the system; today, resilience and sustainability are increasingly becoming more important. In this dissertation, we develop computational tools for efficiently managing electricity resources to help create a more reliable and sustainable electrical grid. The tools we present in this work will help electric utilities coordinate demand to allow the smooth and large-scale integration of renewable sources of energy into traditional grids, as well as provide infrastructure planners and operators in developing countries a framework for making informed planning and control decisions in the presence of uncertainty. Demand-side management is considered the most viable solution for maintaining grid stability as generation from intermittent renewable sources increases. Demand-side management, particularly demand response (DR) programs that attempt to alter the energy consumption of customers either by using price-based incentives or up-front power interruption contracts, is more cost-effective and sustainable in addressing short-term supply-demand imbalances when compared with the alternative that involves increasing fossil fuel-based fast spinning reserves. An essential step in compensating participating customers and benchmarking the effectiveness of DR programs is to be able to independently detect the load reduction from observed meter data. Electric utilities implementing automated DR programs through direct load control switches are also interested in detecting the reduction in demand to efficiently pinpoint non-functioning devices and reduce maintenance costs. We develop sparse optimization methods for detecting a small change in a customer's demand for electricity in response to a price change or signal from the utility.
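
    As a toy illustration of the detection problem just described (one customer, one sustained step change in a noisy meter series), the sketch below scans candidate change points and shrinks the estimated step with an L1 penalty. The thesis' sparse optimization methods are far more general; everything here, including the penalty value, is a simplifying assumption.

    # Toy change-point detector for demand-response load reduction: find the time
    # k and step size d minimizing squared error plus an L1 penalty on d. A
    # one-change simplification of the sparse-optimization idea, not the thesis'
    # actual method.
    import numpy as np

    def detect_reduction(load: np.ndarray, lam: float = 5.0):
        """Return (k, d): change time and shrunken step size, or (None, 0)."""
        n = len(load)
        best = (None, 0.0, np.var(load) * n)  # (k, d, cost) for "no change"
        for k in range(1, n):
            d = load[k:].mean() - load[:k].mean()             # raw step estimate
            d = np.sign(d) * max(abs(d) - lam / (n - k), 0)   # L1 shrinkage
            fit = np.concatenate([np.full(k, load[:k].mean()),
                                  np.full(n - k, load[:k].mean() + d)])
            cost = np.sum((load - fit) ** 2) + lam * abs(d)
            if cost < best[2]:
                best = (k, d, cost)
        return best[0], best[1]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        load = 10 + rng.normal(0, 0.3, 96)  # a day of 15-minute readings (kW)
        load[60:] -= 1.5                    # simulated DR event: 1.5 kW curtailment
        k, d = detect_reduction(load)
        print(f"change detected at sample {k}, estimated reduction {-d:.2f} kW")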

  11. GStat 2.0: Grid Information System Status Monitoring

    OpenAIRE

    Field, L; Huang, J; Tsai, M

    2009-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE info...

  12. Cloud Cover

    Science.gov (United States)

    Schaffhauser, Dian

    2012-01-01

    This article features a major statewide initiative in North Carolina that is showing how a consortium model can minimize risks for districts and help them exploit the advantages of cloud computing. Edgecombe County Public Schools in Tarboro, North Carolina, intends to exploit a major cloud initiative being refined in the state and involving every…

  13. Cloud Control

    Science.gov (United States)

    Ramaswami, Rama; Raths, David; Schaffhauser, Dian; Skelly, Jennifer

    2011-01-01

    For many IT shops, the cloud offers an opportunity not only to improve operations but also to align themselves more closely with their schools' strategic goals. The cloud is not a plug-and-play proposition, however--it is a complex, evolving landscape that demands one's full attention. Security, privacy, contracts, and contingency planning are all…

  14. Nuclear technology today and tomorrow

    International Nuclear Information System (INIS)

    Lombardi, C.

    2007-01-01

    Nuclear power has returned today to help contain the energy problem. It is useful to summarize its characteristics, its evolution over the past 50 years and its prospects. Italy can find its own way back by revitalizing a potential that has not fully disappeared. [it]

  15. School Counseling in China Today

    Science.gov (United States)

    Thomason, Timothy C.; Qiong, Xiao

    2008-01-01

    This article provides a brief overview of the development of psychological thinking in China and social influences on the practice of school counseling today. Common problems of students are described, including anxiety due to pressure to perform well on exams, loneliness and social discomfort, and video game addiction. Counseling approaches used…

  16. The MammoGrid Project Grids Architecture

    CERN Document Server

    McClatchey, Richard; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri; Buncic, Predrag; Clatchey, Richard Mc; Buncic, Predrag; Manset, David; Hauer, Tamas; Estrella, Florida; Saiz, Pablo; Rogulin, Dmitri

    2003-01-01

    The aim of the recently EU-funded MammoGrid project is, in the light of emerging Grid technology, to develop a European-wide database of mammograms that will be used to develop a set of important healthcare applications and investigate the potential of this Grid to support effective co-working between healthcare professionals throughout the EU. The MammoGrid consortium intends to use a Grid model to enable distributed computing that spans national borders. This Grid infrastructure will be used for deploying novel algorithms as software directly developed or enhanced within the project. Using the MammoGrid clinicians will be able to harness the use of massive amounts of medical image data to perform epidemiological studies, advanced image processing, radiographic education and ultimately, tele-diagnosis over communities of medical "virtual organisations". This is achieved through the use of Grid-compliant services [1] for managing (versions of) massively distributed files of mammograms, for handling the distri...

  17. Security and Cloud Outsourcing Framework for Economic Dispatch

    International Nuclear Information System (INIS)

    Sarker, Mushfiqur R.; Wang, Jianhui

    2017-01-01

    The computational complexity and problem sizes of power grid applications have increased significantly with the advent of renewable resources and smart grid technologies. The current paradigm for solving these issues consists of in-house high performance computing infrastructures, which have the drawbacks of high capital expenditure, maintenance, and limited scalability. Cloud computing is an ideal alternative due to its powerful computational capacity, rapid scalability, and high cost-effectiveness. A major challenge, however, remains in that the highly confidential grid data is susceptible to potential cyberattacks when outsourced to the cloud. In this work, a security and cloud outsourcing framework is developed for the Economic Dispatch (ED) linear programming application. The security framework transforms the ED linear program into a confidentiality-preserving linear program that masks both the data and the problem structure, thus enabling secure outsourcing to the cloud. Results show that for large grid test cases the performance gains and costs outperform the in-house infrastructure.
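
    The flavour of such a confidentiality-preserving transformation can be sketched with a standard affine masking of a linear program. This is an illustrative toy (random M, r and S, SciPy's linprog standing in for the cloud solver), not the exact construction developed in the paper.

    # Sketch of a well-known affine masking scheme for outsourcing the LP
    #   minimize c^T x  subject to  A x <= b
    # to an untrusted solver: substitute x = M y + r (secret invertible M, secret
    # shift r) and rescale rows by a secret positive diagonal S, so the cloud
    # only sees (c^T M, S A M, S(b - A r)).
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Client side: a toy confidential dispatch LP, bounded and feasible (|x_i| <= 1).
    A = np.vstack([rng.normal(size=(4, 3)), np.eye(3), -np.eye(3)])
    b = np.concatenate([rng.uniform(1.0, 2.0, size=4), np.ones(6)])
    c = rng.normal(size=3)

    # Secret transformation.
    M = rng.normal(size=(3, 3))                  # invertible with high probability
    r = rng.normal(size=3)
    S = np.diag(rng.uniform(0.5, 2.0, size=10))  # positive diagonal keeps <= direction

    # What the cloud receives (masked objective and constraints).
    c_t, A_t, b_t = c @ M, S @ A @ M, S @ (b - A @ r)

    # Cloud side: solve the masked LP with unbounded variables.
    res = linprog(c_t, A_ub=A_t, b_ub=b_t, bounds=[(None, None)] * 3)

    # Client side: recover the true solution and verify against a direct solve.
    x = M @ res.x + r
    direct = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 3)
    print("feasible:", np.all(A @ x <= b + 1e-8))
    print("objective (masked route):", c @ x, " (direct):", direct.fun)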

  18. MCloud: Secure Provenance for Mobile Cloud Users

    Science.gov (United States)

    2016-10-03

    Feasibility of Smartphone Clouds, 2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 04-MAY-15, Shenzhen, China. ... compromised kernel, with the highest privilege. MCloud context data gathered by smartphone sensors can now be relayed correctly and with integrity ... aspects of people's daily online and physical activities. Yet, in critical settings it is especially difficult to ascertain and assert an acceptable level

  19. Cloud Computing Fundamentals

    Science.gov (United States)

    Furht, Borko

    In the introductory chapter we define the concept of cloud computing and cloud services, and we introduce layers and types of cloud computing. We discuss the differences between cloud computing and cloud services. New technologies that enabled cloud computing are presented next. We also discuss cloud computing features, standards, and security issues. We introduce the key cloud computing platforms, their vendors, and their offerings. We discuss cloud computing challenges and the future of cloud computing.

  20. Smart grid communication-enabled intelligence for the electric power grid

    CERN Document Server

    Bush, Stephen F

    2014-01-01

    This book bridges the divide between the fields of power systems engineering and computer communication through the new field of power system information theory. Written by an expert with vast experience in the field, this book explores the smart grid from generation to consumption, both as it is planned today and how it will evolve tomorrow. The book focuses upon what differentiates the smart grid from the "traditional" power grid as it has been known for the last century. Furthermore, the author provides the reader with a fundamental understanding of both power systems and communication networks.

  1. Exploiting the Potential of Data Centers in the Smart Grid

    Science.gov (United States)

    Wang, Xiaoying; Zhang, Yu-An; Liu, Xiaojing; Cao, Tengfei

    As the number of cloud computing data centers has grown rapidly in recent years, from the perspective of the smart grid they are large and noticeable electric loads. In this paper, we focus on the important role and the potential of data centers as controllable loads in the smart grid. We review relevant research in the area of letting data centers participate in the ancillary services market and demand response programs of the grid, and further investigate the possibility of exploiting the impact of data center placement on the grid. Various opportunities and challenges are summarized, which could provide more chances for researchers to explore this field.

  2. Strategic planning: today's hot buttons.

    Science.gov (United States)

    Bohlmann, R C

    1998-01-01

    The first generation of mergers and managed care hasn't slowed down group practices' need for strategic planning. Even groups that already went through one merger are asking about new mergers or ownership possibilities, the future of managed care, performance standards and physician unhappiness. Strategic planning, including consideration of bench-marking, production of ancillary services and physician involvement, can help. Even if only a short, general look at the future, strategic planning shows the proactive leadership needed in today's environment.

  3. Hidden in the Clouds: New Ideas in Cloud Computing

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the fullest degree. Please bring questions and opinions, and be ready to share both!   Bio: S...

  4. A Development of Lightweight Grid Interface

    International Nuclear Information System (INIS)

    Iwai, G; Kawai, Y; Sasaki, T; Watase, Y

    2011-01-01

    In order to support rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures, based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionality. The UGAPI CLIs offer the typical functionality required by end users for job management and file access on the different distributed computing infrastructures as well as on local computing resources. We have also built a web interface for a particle therapy simulation and demonstrated large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
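
    The kind of backend-neutral job interface that SAGA-style layers provide can be sketched as follows. The JobService, LocalService and SSHService names are invented for illustration; they are not the real UGAPI or SAGA signatures.

    # Sketch of a backend-neutral job API: one submit interface, pluggable
    # execution backends. Invented names; not the actual UGAPI/SAGA API.
    import subprocess
    from abc import ABC, abstractmethod

    class JobService(ABC):
        """Uniform facade over heterogeneous infrastructures (grid, cloud, local)."""
        @abstractmethod
        def run(self, executable: str, args: list[str]) -> int: ...

    class LocalService(JobService):
        """'Local resource' backend: run the job as a subprocess."""
        def run(self, executable, args):
            return subprocess.run([executable, *args]).returncode

    class SSHService(JobService):
        """Remote backend: same interface, different transport (host is hypothetical)."""
        def __init__(self, host): self.host = host
        def run(self, executable, args):
            return subprocess.run(["ssh", self.host, executable, *args]).returncode

    def submit_anywhere(service: JobService, exe: str, args: list[str]) -> int:
        # Application code is written once against JobService and stays unchanged
        # when the underlying infrastructure is swapped.
        return service.run(exe, args)

    if __name__ == "__main__":
        print("exit code:", submit_anywhere(LocalService(), "echo", ["hello grid"]))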

  5. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    With technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating the IT scene. In line with the views presented by Nicolas Carr in 2003 (Carr, 2003), it is a popular assumption that cloud computing will be the next utility (like water, electricity and gas) (Buyya, Yeo, Venugopal, Broberg, & Brandic, 2009). However, this assumption disregards the fact that most IT production ..., for instance, in establishing and maintaining trust between the involved parties (Sabherwal, 1999). So far, research in cloud computing has neglected this perspective and focused entirely on aspects relating to technology, economy, security and legal questions. While the core technologies of cloud computing (e...

  6. Mobile Clouds

    DEFF Research Database (Denmark)

    Fitzek, Frank; Katz, Marcos

    A mobile cloud is a cooperative arrangement of dynamically connected communication nodes sharing opportunistic resources. In this book, the authors provide a comprehensive and motivating overview of this rapidly emerging technology. The book explores how distributed resources can be shared by mobile users in very different ways and for various purposes. The book provides many stimulating examples of resource-sharing applications. Enabling technologies for mobile clouds are also discussed, highlighting the key role of network coding. Mobile clouds have the potential to enhance communications performance, improve utilization of resources and create flexible platforms to share resources in very novel ways. Energy-efficient aspects of mobile clouds are discussed in detail, showing how being cooperative can bring mobile users significant energy savings. The book presents and discusses multiple...

  7. Evaluation of a stratiform cloud parameterization for general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States); McCaa, J. [Univ. of Washington, Seattle, WA (United States)

    1996-04-01

    To evaluate the relative importance of horizontal advection of cloud versus cloud formation within the grid cell of a single column model (SCM), we have performed a series of simulations with our SCM driven by a fixed vertical velocity and various rates of horizontal advection.

  8. Security and Privacy Issues in Cloud Computing

    OpenAIRE

    Sen, Jaydip

    2013-01-01

    Today, cloud computing is defined and talked about across the ICT industry under different contexts and with different definitions attached to it. It is a new paradigm in the evolution of Information Technology, as it is one of the biggest revolutions in this field to have taken place in recent times. According to the National Institute for Standards and Technology (NIST), “cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing ...

  9. New experiment to investigate cosmic connection to clouds

    CERN Multimedia

    United Kingdom. Particle Physics and Astronomy Research Council

    2006-01-01

    "A novel experiment, known as CLOUD (Cosmics Leaving OUtdoor Droplets), begins taking its first data today with a prototype detector in a prticle beam at CERN, the world's largest laboratory for particle physics." (1,5 page)

  10. Grid Integration Research | Wind | NREL

    Science.gov (United States)

    Researchers at NREL study the grid integration of wind power, developing capabilities that help electric power system operators more efficiently manage the integration of wind into the grid system.

  11. The Czech National Grid Infrastructure

    Science.gov (United States)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a network fast enough to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed TORQUE servers, providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided into the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics are given.

  12. Web publishing today and tomorrow

    CERN Document Server

    Lie, Hakon W

    1999-01-01

    The three lectures will give participants the grand tour of the Web as we know it today, as well as peeks into the past and the future. Many three-letter acronyms will be expanded, and an overview will be provided to see how the various specifications work together. Web publishing is the common theme throughout the lectures, and in the second lecture special emphasis will be given to data formats for publishing, including HTML, XML, MathML and SMIL. In the last lecture, automatic document manipulation and presentation will be discussed, including CSS, DOM and XSL.

  13. Digital Forensics in Cloud Computing

    Directory of Open Access Journals (Sweden)

    PATRASCU, A.

    2014-05-01

    Cloud Computing is a rather new technology which has the goal of efficient usage of datacenter resources, offering them to users on a pay-per-use model. In this equation we need to know exactly where and how a piece of information is stored or processed. In today's cloud deployments this task is becoming more and more of a necessity, because we need a way to monitor user activity and, furthermore, in case of legal action, we must be able to present digital evidence in a form in which it is accepted. In this paper we present a modular and distributed architecture that can be used to implement a cloud digital forensics framework on top of new or existing datacenters.

  14. Development and Usage of Software as a Service for a Cloud and Non-Cloud Based Environment- An Empirical Study

    OpenAIRE

    Pratiyush Guleria Guleria; Vikas Sharma; Manish Arora

    2012-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand. Cloud computing is a natural evolution of the widespread adoption of virtualization, service-oriented architecture and utility computing. The computer applications nowadays are becoming more and more complex; there is an ever increasing demand for computing resources. As this demand has risen, the concepts of cloud computing and grid computing...

  15. Cloud Based Educational Systems and Its Challenges and Opportunities and Issues

    Science.gov (United States)

    Paul, Prantosh Kr.; Lata Dangwal, Kiran

    2014-01-01

    Cloud Computing (CC) is actually a set of hardware, software, networks, storage, services and interfaces that combine to deliver aspects of computing as a service. Cloud Computing (CC) uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and…

  16. Dynamic virtual AliEn Grid sites on Nimbus with CernVM

    International Nuclear Information System (INIS)

    Harutyunyan, A; Buncic, P; Freeman, T; Keahey, K

    2010-01-01

    We describe the work on enabling one-click deployment of Grid sites of the AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of computing resources of the cloud with the resource pool of the AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker developed at Argonne National Laboratory and the University of Chicago, and CernVM - a baseline virtual software appliance for the LHC experiments developed at CERN. Two approaches to dynamic virtual AliEn Grid site deployment are presented.

  17. Parallel grid population

    Science.gov (United States)

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
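
    A minimal sketch of the two-phase method described above, on a toy 1-D grid with interval objects and equal-width portions; the geometry and helper names are illustrative assumptions, not the patent's implementation.

    # Phase 1: each worker takes a set of objects and determines which grid
    # portions bound each object. Phase 2: each worker populates one portion.
    from multiprocessing import Pool

    N = 4                                    # workers == grid portions
    EDGES = [i / N for i in range(N + 1)]    # portion i covers [EDGES[i], EDGES[i+1])

    def portions_for(obj):
        """Phase 1: indices of the grid portions an interval object overlaps."""
        lo, hi = obj
        return [i for i in range(N) if lo < EDGES[i + 1] and hi > EDGES[i]]

    def populate(args):
        """Phase 2: gather the objects previously mapped to this portion."""
        portion, mapping, objects = args
        return [objects[j] for j, ps in enumerate(mapping) if portion in ps]

    if __name__ == "__main__":
        objects = [(0.05, 0.30), (0.20, 0.80), (0.70, 0.72), (0.40, 0.55)]
        with Pool(N) as pool:
            mapping = pool.map(portions_for, objects)                   # phase 1
            grid = pool.map(populate,
                            [(i, mapping, objects) for i in range(N)])  # phase 2
        for i, cell in enumerate(grid):
            print(f"portion {i}: {cell}")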

  18. Simulation For Synchronization Of A Micro-Grid With Three-Phase Systems

    Directory of Open Access Journals (Sweden)

    Mohammad Jafari Far

    2015-08-01

    Today, due to their high reliability, micro-grids have developed significantly. They have two states of operation: the islanded state and connection to the main grid. Under certain circumstances the micro-grid is connected to or disconnected from the network. Synchronization of a micro-grid with the network must be done when its voltage is synchronized with the voltage of the main grid. Phase-locked loops are responsible for identifying the voltage phase of the micro-grid and the main grid, and when these two voltages are in phase they connect the micro-grid to the main grid. In this research, the connection of a micro-grid to the main grid in the two cases of synchronous and asynchronous voltage is simulated and investigated.
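
    A toy version of the synchronization check: estimate the fundamental phasor of each voltage from one cycle of samples (a crude stand-in for a phase-locked loop) and allow connection only when the magnitude and phase differences are within thresholds. All values below are illustrative assumptions, not the paper's simulation.

    # Toy breaker-closing check for micro-grid synchronization.
    import numpy as np

    F, FS = 50.0, 10_000.0                   # nominal frequency (Hz), sample rate
    t = np.arange(int(FS / F)) / FS          # exactly one cycle of samples

    def phasor(v: np.ndarray) -> complex:
        """Fundamental-frequency phasor via a single-bin DFT over one cycle."""
        return 2.0 / len(v) * np.sum(v * np.exp(-2j * np.pi * F * t))

    def may_connect(v_micro, v_main, dv=0.05, dphi=np.deg2rad(5)) -> bool:
        p1, p2 = phasor(v_micro), phasor(v_main)
        mag_ok = abs(abs(p1) - abs(p2)) / abs(p2) < dv
        ang = np.angle(p1 / p2)              # wrapped phase difference
        return mag_ok and abs(ang) < dphi

    if __name__ == "__main__":
        v_main = 325 * np.sin(2 * np.pi * F * t)                   # ~230 V rms
        v_sync = 325 * np.sin(2 * np.pi * F * t + np.deg2rad(2))   # nearly in phase
        v_off  = 325 * np.sin(2 * np.pi * F * t + np.deg2rad(40))  # out of phase
        print("in-phase case :", may_connect(v_sync, v_main))
        print("out-of-phase  :", may_connect(v_off, v_main))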

  19. Soft Clouding

    DEFF Research Database (Denmark)

    Søndergaard, Morten; Markussen, Thomas; Wetton, Barnabas

    2012-01-01

    Soft Clouding is a blended concept, which describes the aim of a collaborative and transdisciplinary project. The concept is a metaphor implying a blend of cognitive, embodied interaction and semantic web. Furthermore, it is a metaphor describing our attempt at curating a new semantics of sound...... archiving. The Soft Clouding Project is part of LARM - a major infrastructure combining research in and access to sound and radio archives in Denmark. In 2012 the LARM infrastructure will consist of more than 1 million hours of radio, combined with metadata which describe the content. The idea is to analyse...... the concepts of 'infrastructure' and 'interface' through a creative play with the fundamentals of LARM (and any sound archive situation combining many kinds and layers of data and sources). This paper will present and discuss the Soft Clouding project from the perspective of the three practices and competencies...

  20. Cloud Computing: Should It Be Integrated into the Curriculum?

    Science.gov (United States)

    Changchit, Chuleeporn

    2015-01-01

    Cloud computing has become increasingly popular among users and businesses around the world, and education is no exception. Cloud computing can bring an increased number of benefits to an educational setting, not only for its cost effectiveness, but also for the thirst for technology that college students have today, which allows learning and…

  1. THE EXPANSION OF ACCOUNTING TO THE CLOUD

    Directory of Open Access Journals (Sweden)

    Otilia DIMITRIU

    2014-06-01

    Full Text Available The world today is witnessing an explosion of technologies that are remodelling our entire reality. The traditional way of thinking in the business field has shifted towards a new IT breakthrough: cloud computing. The cloud paradigm has emerged as a natural step in the evolution of the internet and has captivated everyone's attention. The accounting profession itself has found a means to optimize its activity through cloud-based applications. By reviewing the latest and most relevant studies and practitioners' reports, this paper is focused on the implications of cloud accounting, the fusion between cloud technologies and accounting. We addressed this innovative topic through a business-oriented approach and brought forward a new accounting model that might revolutionize the economic landscape.

  2. RACORO Extended-Term Aircraft Observations of Boundary-Layer Clouds

    Science.gov (United States)

    Vogelmann, Andrew M.; McFarquhar, Greg M.; Ogren, John A.; Turner, David D.; Comstock, Jennifer M.; Feingold, Graham; Long, Charles N.; Jonsson, Haflidi H.; Bucholtz, Anthony; Collins, Don R.; hide

    2012-01-01

    Small boundary-layer clouds are ubiquitous over many parts of the globe and strongly influence the Earth's radiative energy balance. However, our understanding of these clouds is insufficient to solve pressing scientific problems. For example, cloud feedback represents the largest uncertainty amongst all climate feedbacks in general circulation models (GCMs). Several issues complicate understanding boundary-layer clouds and simulating them in GCMs. The high spatial variability of boundary-layer clouds poses an enormous computational challenge, since their horizontal dimensions and internal variability occur at spatial scales much finer than the computational grids used in GCMs. Aerosol-cloud interactions further complicate boundary-layer cloud measurement and simulation. Additionally, aerosols influence processes such as precipitation and cloud lifetime. An added complication is that at small scales (order meters to tens of meters) distinguishing cloud from aerosol is increasingly difficult, due to the effects of aerosol humidification, cloud fragments and photon scattering between clouds.

  3. Cloud Chamber

    DEFF Research Database (Denmark)

    Gfader, Verina

    Cloud Chamber takes its roots in a performance project, titled The Guests 做东, devised by Verina Gfader for the 11th Shanghai Biennale, ‘Why Not Ask Again: Arguments, Counter-arguments, and Stories’. Departing from the inclusion of the biennale audience to write a future folk tale, Cloud Chamber......: fiction and translation and translation through time; post literacy; world picturing-world typing; and cartographic entanglements and expressions of subjectivity; through the lens of a social imaginary of worlding or cosmological quest. Art at its core? Contributions by Nikos Papastergiadis, Rebecca Carson...

  4. Smart grid security

    CERN Document Server

    Goel, Sanjay; Papakonstantinou, Vagelis; Kloza, Dariusz

    2015-01-01

    This book on smart grid security is meant for a broad audience from managers to technical experts. It highlights security challenges that are faced in the smart grid as we widely deploy it across the landscape. It starts with a brief overview of the smart grid and then discusses some of the reported attacks on the grid. It covers network threats, cyber physical threats, smart metering threats, as well as privacy issues in the smart grid. Along with the threats the book discusses the means to improve smart grid security and the standards that are emerging in the field. The second part of the b

  5. Sahara Dust Cloud

    Science.gov (United States)

    2005-01-01

    [Figure removed for brevity, see original site: Dust Particles; Quicktime movie covering 7/15-7/24] A continent-sized cloud of hot air and dust originating from the Sahara Desert crossed the Atlantic Ocean and headed towards Florida and the Caribbean. A Saharan Air Layer, or SAL, forms when dry air and dust rise from Africa's west coast and ride the trade winds above the Atlantic Ocean. These dust clouds are not uncommon, especially during the months of July and August. They start when weather patterns called tropical waves pick up dust from the desert in North Africa, carry it a couple of miles into the atmosphere and drift westward. In a sequence of images created from data acquired by the Earth-orbiting Atmospheric Infrared Sounder ranging from July 15 through July 24, we see the distribution of the cloud in the atmosphere as it swirls off of Africa and heads across the ocean to the west. Using the unique silicate spectral signatures of dust in the thermal infrared, AIRS can detect the presence of dust in the atmosphere day or night. This detection works best if there are no clouds present on top of the dust; when clouds are present, they can interfere with the signal, making it much harder to detect dust, as in the case of July 24, 2005. In the movie, the scale at the bottom of the images shows +1 for dust definitely detected, and ranges down to -1 for no dust detected. The plots are averaged over a number of AIRS observations falling within grid boxes, and so it is possible to obtain fractional numbers. [Figure removed for brevity, see original site: Total Water Vapor in the Atmosphere Around the Dust Cloud; Quicktime movie] The dust cloud is contained within a dry adiabatic layer which originates over the Sahara Desert. This Saharan Air Layer (SAL) advances westward over the Atlantic Ocean, overriding the cool, moist air nearer the surface. This burst of very dry air is visible in the AIRS retrieved total water

  6. Cloud Computing Strategy

    Science.gov (United States)

    2012-07-01

    regardless of access point or the device being used across the Global Information Grid (GIG). These data centers will host existing applications...state. It illustrates that the DoD Enterprise Cloud is an integrated environment on the GIG, consisting of DoD Components, commercial entities...Operations and Maintenance (O&M) costs by leveraging economies of scale, and automate monitoring and provisioning to reduce the human cost of service

  7. When STAR meets the Clouds-Virtualization and Cloud Computing Experiences

    International Nuclear Information System (INIS)

    Lauret, J; Hajdu, L; Walker, M; Balewski, J; Goasguen, S; Stout, L; Fenn, M; Keahey, K

    2011-01-01

    In recent years, Cloud computing has become a very attractive paradigm and popular model for accessing distributed resources. The Cloud has emerged as the next big trend. The burst of platforms and projects providing Cloud resources and interfaces at the very same time that Grid projects are entering a production phase in their life cycle has, however, raised the question of the best approach to handling distributed resources. Especially, are Cloud resources scaling at the levels shown by Grids? Are they performing at the same level? What is their overhead on the IT teams and infrastructure? Rather than seeing the two as orthogonal, the STAR experiment has viewed them as complementary and has studied merging the best of the two worlds, with Grid middleware providing the aggregation of both Cloud and traditional resources. Since its first use of Cloud resources on Amazon EC2 in 2008/2009 using a Nimbus/EC2 interface, the STAR software team has tested and experimented with many novel approaches: from a traditional, native EC2 approach to the Virtual Organization Cluster (VOC) at Clemson University and Condor/VM on the GLOW resources at the University of Wisconsin. The STAR team is also planning to run as part of the DOE/Magellan project. In this paper, we will present an overview of our findings from using truly opportunistic resources and scaling out by two orders of magnitude in both tests and practical usage.

  8. Grid production with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2018-01-01

    ATLAS has developed and previously presented a new computing architecture, the Event Service, that allows real-time delivery of fine-grained workloads which process dispatched events (or event ranges) and immediately stream outputs. The principal aim was to profit from opportunistic resources such as commercial cloud, supercomputing, and volunteer computing, and otherwise unused cycles on clusters and grids. During the development and deployment phase, its utility for exploiting otherwise unused cycles on the grid and conventional clusters also became apparent. Here we describe our experience commissioning the Event Service on the grid in the ATLAS production system. We study the performance compared with standard simulation production. We describe the integration with the ATLAS data management system to ensure scalability and compatibility with object stores. Finally, we outline the remaining steps towards a fully commissioned system.

  9. The Grid is operational – it’s official!

    CERN Multimedia

    2008-01-01

    On Friday, 3 October, CERN and its many partners around the world officially marked the end of seven years of development and deployment of the Worldwide LHC Computing Grid (WLCG) and the beginning of continuous operations with an all-day Grid Fest. [Photo captions: Wolfgang von Rüden unveils the WLCG sculpture; Les Robertson speaking at the Grid Fest; at the LHC Grid Fest, Bob Jones highlights the far-reaching uses of grid computing.] Over 250 grid enthusiasts gathered in the Globe, including large delegations from the press and from industrial partners, as well as many of the people around the world who manage the distributed operations of the WLCG, which today comprises more than 140 computer centres in 33 countries. As befits a cutting-edge information technology, many participants joined virtually, by video, to mark the occasion. Unlike the start-up of the LHC, there was no single moment of high dram...

  10. Cloud computing.

    Science.gov (United States)

    Wink, Diane M

    2012-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. This article describes how cloud computing can be used in nursing education.

  11. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

    There are several types of services available on a cloud. We describe .... CPU speed has been doubling every 18 months at constant cost. Besides this ... Plain text (e.g., email) may be read by anyone who is able to access it.

  12. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2010-01-01

    This book is an introduction to structured and unstructured grid methods in scientific computing, addressing graduate students, scientists as well as practitioners. Basic local and integral grid quality measures are formulated and new approaches to mesh generation are reviewed. In addition to the content of the successful first edition, a more detailed and practice oriented description of monitor metrics in Beltrami and diffusion equations is given for generating adaptive numerical grids. Also, new techniques developed by the author are presented, in particular a technique based on the inverted form of Beltrami’s partial differential equations with respect to control metrics. This technique allows the generation of adaptive grids for a wide variety of computational physics problems, including grid clustering to given function values and gradients, grid alignment with given vector fields, and combinations thereof. Applications of geometric methods to the analysis of numerical grid behavior as well as grid ge...

  13. A performance analysis of EC2 cloud computing services for scientific computing

    NARCIS (Netherlands)

    Ostermann, S.; Iosup, A.; Yigitbasi, M.N.; Prodan, R.; Fahringer, T.; Epema, D.H.J.; Avresky, D.; Diaz, M.; Bode, A.; Bruno, C.; Dekel, E.

    2010-01-01

    Cloud Computing is emerging today as a commercial infrastructure that eliminates the need for maintaining expensive computing hardware. Through the use of virtualization, clouds promise to serve a large user base with different needs from the same shared set of physical resources. Thus, clouds

  14. Statistical thermodynamics and the size distributions of tropical convective clouds.

    Science.gov (United States)

    Garrett, T. J.; Glenn, I. B.; Krueger, S. K.; Ferlay, N.

    2017-12-01

    Parameterizations for sub-grid cloud dynamics are commonly developed by using fine-scale modeling or measurements to explicitly resolve the mechanistic details of clouds to the best extent possible, and then formulating these behaviors in terms of a cloud state for use within a coarser grid. A second approach is to invoke physical intuition and some very general theoretical principles from equilibrium statistical thermodynamics. This second approach is quite widely used elsewhere in the atmospheric sciences: for example to explain the heat capacity of air, blackbody radiation, or even the density profile of air in the atmosphere. Here we describe how entrainment and detrainment across cloud perimeters is limited by the amount of available air and the range of moist static energy in the atmosphere, and how that constrains cloud perimeter distributions to a power law with a -1 exponent along isentropes and to a Boltzmann distribution across isentropes. Further, the total cloud perimeter density in a cloud field is directly tied to the buoyancy frequency of the column. These simple results are shown to be reproduced within a complex dynamic simulation of a tropical convective cloud field and in passive satellite observations of cloud 3D structures. The implication is that equilibrium tropical cloud structures can be inferred from the bulk thermodynamic structure of the atmosphere without having to analyze computationally expensive dynamic simulations.
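
    The two constraints named in the abstract can be restated compactly. The notation below is an assumption for illustration (lambda for cloud perimeter, h for moist static energy, h_0 for its characteristic scale); the abstract itself does not fix the symbols.

    ```latex
    % Hedged restatement of the two distributional constraints in the abstract.
    % \lambda = cloud perimeter, h = moist static energy; symbols are assumptions.
    \begin{align*}
      n(\lambda)\big|_{\text{along isentropes}} &\propto \lambda^{-1}
        && \text{(power law with $-1$ exponent)}\\[4pt]
      n(h)\big|_{\text{across isentropes}} &\propto e^{-h/h_0}
        && \text{(Boltzmann distribution, scale $h_0$)}
    \end{align*}
    ```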

  15. Today's threat and tomorrow's reaction

    International Nuclear Information System (INIS)

    Moore, L.R.

    2002-01-01

    Full text: The events of September 11 have only confirmed our past nightmares and warnings to industries, agencies, and governments. The threat of even more significant catastrophic attacks, using nuclear materials, was just as real ten years ago, as it is today. In many cases, our vulnerability remains the same as years ago. There is a dire need for all organizations to agree upon threats and vulnerabilities, and to implement appropriate protections, for nuclear materials or other 'means' to achieve an event of mass destruction. All appropriate organizations (industries, agencies, and governments) should be able to define, assess, and recognize international threats and vulnerabilities in the same manner. In complementary fashion, the organizations should be able to implement safeguards against this consistent generic threat. On an international scale the same threats, and most vulnerabilities, pose high risks to all of these organizations and societies. Indeed, in today's world, the vulnerabilities of one nation may clearly pose great risk to another nation. Once threats and vulnerabilities are consistently recognized, we can begin to approach their mitigation in a more 'universal' fashion by the application of internationally recognized and accepted security measures. The path to recognition of these security measures will require agreement on many diverse issues. However, once there is general agreement, we can then proceed to the acquisition of diverse national and international resources with which to implement the security measures 'universally' to eliminate 'weak-links' in the chain of nuclear materials, on a truly international scale. I would like to discuss: developing an internationally acceptable 'generic' statement of threat, vulnerability assessment process, and security measure; proposing this international statement of threat, vulnerability assessment process, and appropriate security measures to organizations (industries, agencies, and governments

  16. Exploring the factors influencing the cloud computing adoption: a systematic study on cloud migration.

    Science.gov (United States)

    Rai, Rashmi; Sahoo, Gadadhar; Mehfuz, Shabana

    2015-01-01

    Today, most organizations rely on their age-old legacy applications to support their business-critical systems. However, there are several critical concerns, such as maintainability and scalability issues, associated with legacy systems. In this background, cloud services offer a more agile and cost-effective platform to support business applications and IT infrastructure. The adoption of cloud services has been increasing recently, and so has the academic research in cloud migration. However, there is a genuine need for secondary study to further strengthen this research. The primary objective of this paper is to scientifically and systematically identify, categorize and compare the existing research work in the area of legacy-to-cloud migration. The paper has also endeavored to consolidate the research on security issues, which are a prime factor hindering the adoption of the cloud, by classifying the studies on secure cloud migration. An SLR (Systematic Literature Review) of thirty selected papers, published from 2009 to 2014, was conducted to properly understand the nuances of the security framework. To categorize the selected studies, the authors have proposed a conceptual model for cloud migration which has resulted in a resource base of existing solutions for cloud migration. This study concludes that cloud migration research is in a seminal stage but is simultaneously evolving and maturing, with increasing participation from academics and industry alike. The paper also identifies the need for a secure migration model, which can fortify organizations' trust in cloud migration and facilitate the necessary tool support to automate the migration process.

  17. Chimera Grid Tools

    Science.gov (United States)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  18. Bayesian grid matching

    DEFF Research Database (Denmark)

    Hartelius, Karsten; Carstensen, Jens Michael

    2003-01-01

    A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which r...

  19. Smart grid in China

    DEFF Research Database (Denmark)

    Sommer, Simon; Ma, Zheng; Jørgensen, Bo Nørregaard

    2015-01-01

    China is planning to transform its traditional power grid in favour of a smart grid, since it allows a more economically efficient and a more environmentally friendly transmission and distribution of electricity. Thus, a nationwide smart grid is likely to save tremendous amounts of resources...

  20. Religious Renaissance in China Today

    Directory of Open Access Journals (Sweden)

    Richard Madsen

    2011-01-01

    Full Text Available Since the beginning of the Reform Era in 1979, there has been a rapid growth and development of religious belief and practice in China. A substantial new scholarly literature has been generated in the attempt to document and understand this. This essay identifies the most important contributions to that literature and discusses areas of agreement and controversy across the literature. Along with new data, new paradigms have developed to frame research on Chinese religions. The paradigm derived from C. K. Yang’s classic work in the 1960s came from structural functionalism, which served to unite research in the humanities and social sciences. However, structural functionalism has been abandoned by the new generation of scholars. In the humanities, the most popular paradigm derives from Michel Foucault, but there are also scholars who use neo-Durkheimian and neo-Weberian paradigms. In the social sciences, the dominant paradigms tend to focus on state-society relations. None of these paradigms fully captures the complexity of the transformations happening in China. We recommend greater dialogue between the humanities and social sciences in search of more adequate theoretical frameworks for understanding Chinese religions today.

  1. The Coolness of Capitalism Today

    Directory of Open Access Journals (Sweden)

    Jim McGuigan

    2012-05-01

    Full Text Available This paper is about the reconciliation of cultural analysis with political economy in Marxist-inspired research on communications. It traces how these two traditions became separated with the development of a one-dimensional and consumerist cultural studies, on the one hand, and a more classically Marxist political economy of communications, on the other hand, that was accused of holding a simplistic and erroneous concept of ideology. The paper defends a conception of ideology as distorted communication motivated by unequal power relations and sketches a multidimensional mode of cultural analysis that takes account of the moments of production, consumption and textual meaning in the circulation of communications and culture. In accordance with this framework of analysis, the cool-capitalism thesis is outlined and illustrated with reference to Apple, the ‘cool’ corporation. And the all-purpose mobile communication device is selected as a key and urgent focus of attention for research on commodity fetishism and labour exploitation on a global scale today.

  2. Tritium. Today's and tomorrow's developments

    International Nuclear Information System (INIS)

    Gazal, S.; Amiard, J.C.; Caussade, Bernard; Chenal, Christian; Hubert, Francoise; Sene, Monique

    2010-01-01

    A radioactive hydrogen isotope, tritium is one of the radionuclides most released into the environment during the normal operation of nuclear facilities. The increase of nuclear activities and the development of future generations of reactors, like the EPR and ITER, would lead to a significant increase of tritium effluents in the atmosphere and in natural waters, thus raising many worries and questions. Aware of the importance of this question, the national association of local information commissions (ANCLI) wished to take stock of existing knowledge concerning tritium and in 2008 organized a colloquium at Orsay (France) with an inquiring approach. The scientific committee of the ANCLI, renowned for its expertise, mobilized several nuclear specialists to carry out this work. This book represents a comprehensive synthesis of today's knowledge about tritium, about its management and about its impact on the environment and on human health. Based on recent scientific data and on precise examples, it addresses the overall questions raised by this radionuclide: 1 - tritium properties and different sources (natural and anthropic); 2 - the problem of tritiated waste management; 3 - the bio-availability and bio-kinetics of the different tritium species; 4 - the tritium labelling of environments; 5 - tritium measurement and modeling of its environmental circulation; 6 - tritium radio-toxicity and its biological and health impacts; 7 - the different French and/or international regulations concerning tritium. (J.S.)

  3. Gas market is today strategical

    International Nuclear Information System (INIS)

    Darricarrere, Y.L.

    2006-01-01

    The energy market, and in particular the gas market, is today seething with excitement. In France, in Europe and in the rest of the world, energy stakes are at the center of preoccupations. This article is an interview with Y.L. Darricarrere, general director of the gas and electricity division of the Total group, who gives his views on the opening of the European and French energy markets, presents the ambitions of the Total group on these markets, and comments on some recent events of the European energy scene: concentration between gas and electric utilities, the Suez and Gaz de France (GdF) merger project, the risks linked with the entry of national companies from producing countries, like Gazprom and Sonatrach, onto the European market, the restriction of access of foreign companies to hydrocarbon reserves in Russia and Latin America (the comeback of 'energy nationalism'), and Total's policy for anticipating the increase of world energy demand and the depletion of fossil fuel reserves. (J.S.)

  4. Artificial insemination in pigs today.

    Science.gov (United States)

    Knox, R V

    2016-01-01

    Use of artificial insemination (AI) for breeding pigs has been instrumental for facilitating global improvements in fertility, genetics, labor, and herd health. The establishment of AI centers for management of boars and production of semen has allowed for selection of boars for fertility and sperm production using in vitro and in vivo measures. Today, boars can be managed for production of 20 to 40 traditional AI doses containing 2.5 to 3.0 billion motile sperm in 75 to 100 mL of extender or 40 to 60 doses with 1.5 to 2.0 billion sperm in similar or reduced volumes for use in cervical or intrauterine AI. Regardless of the sperm dose, in liquid form, extenders are designed to sustain sperm fertility for 3 to 7 days. On farm, AI is the predominant form for commercial sow breeding and relies on manual detection of estrus with sows receiving two cervical or two intrauterine inseminations of the traditional or low sperm doses on each day detected in standing estrus. New approaches for increasing rates of genetic improvement through use of AI are aimed at methods to continue to lower the number of sperm in an AI dose and reducing the number of inseminations through use of a single, fixed-time AI after ovulation induction. Both approaches allow greater selection pressure for economically important swine traits in the sires and help extend the genetic advantages through AI on to more production farms. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. GridWise Standards Mapping Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bosquet, Mia L.

    2004-04-01

    'GridWise' is a concept of how advanced communications, information and controls technology can transform the nation's energy system - across the spectrum of large-scale central generation to common consumer appliances and equipment - into a collaborative network, rich in the exchange of decision-making information and an abundance of market-based opportunities (Widergren and Bosquet 2003), carrying the electric transmission and distribution system fully into the information and telecommunication age. This report summarizes a broad review of standards efforts related to GridWise - those which could ultimately contribute significantly to advancements toward the GridWise vision, or those which represent today's technological basis upon which this vision must build.

  6. Usage of Cloud Computing Simulators and Future Systems For Computational Research

    OpenAIRE

    Lakshminarayanan, Ramkumar; Ramalingam, Rajasekar

    2016-01-01

    Cloud Computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) are used as business models for Cloud Computing. Nowadays, the adoption and deployment of Cloud Computing is increasing in various domains, forcing researchers to conduct research in the area of Cloud Computing ...

  7. Cloud Based Educational Systems and Its Challenges and Opportunities and Issues

    OpenAIRE

    PAUL, Prantosh Kr.; DANGWAL, Kiran LATA

    2014-01-01

    Cloud Computing (CC) is actually a set of hardware, software, networks, storage, services and interfaces combined to deliver aspects of computing as a service. Cloud Computing (CC) actually uses central remote servers to maintain data and applications. Practically, Cloud Computing (CC) is an extension of Grid computing with independency and smarter tools and technological gradients. Healthy Cloud Computing helps in the sharing of software, hardware, applications and other packages with the help o...

  8. Nuclear Power Today and Tomorrow

    International Nuclear Information System (INIS)

    Bychkov, Alexander

    2013-01-01

    Worldwide, with 437 nuclear power reactors in operation and 68 new reactors under construction, nuclear power's global generating capacity reached 372.5 GW(e) at the end of 2012. Despite public scepticism, and in some cases fear, which arose following the March 2011 Fukushima Daiichi nuclear accident, two years later the demand for nuclear power continues to grow steadily, albeit at a slower pace. A significant number of countries are pressing ahead with plans to implement or expand their nuclear power programmes because the drivers toward nuclear power that were present before Fukushima have not changed. These drivers include climate change, limited fossil fuel supply, and concerns about energy security. Globally, nuclear power looks set to continue to grow steadily, although more slowly than was expected before the Fukushima Daiichi nuclear accident. The IAEA's latest projections show a steady rise in the number of nuclear power plants in the world in the next 20 years. They project a growth in nuclear power capacity by 23% by 2030 in the low projection and by 100% in the high projection. Most new nuclear power reactors planned or under construction are in Asia. In 2012 construction began on seven nuclear power plants: Fuqing 4, Shidaowan 1, Tianwan 3 and Yangjiang 4 in China; Shin Ulchin 1 in Korea; Baltiisk 1 in Russia; and Barakah 1 in the United Arab Emirates. This increase from the previous year's figures indicates an on-going interest and commitment to nuclear power and demonstrates that nuclear power is resilient. Countries are demanding new, innovative reactor designs from vendors to meet strict requirements for safety, national grid capacity, size and construction time, which is a sign that nuclear power is set to keep growing over the next few decades.

  9. Grid Architecture 2

    Energy Technology Data Exchange (ETDEWEB)

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-01-01

    The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture.

  10. Cloud management and security

    CERN Document Server

    Abbadi, Imad M

    2014-01-01

    Written by an expert with over 15 years' experience in the field, this book establishes the foundations of Cloud computing, building an in-depth and diverse understanding of the technologies behind Cloud computing. In this book, the author begins with an introduction to Cloud computing, presenting fundamental concepts such as analyzing Cloud definitions, Cloud evolution, Cloud services, Cloud deployment types and highlighting the main challenges. Following on from the introduction, the book is divided into three parts: Cloud management, Cloud security, and practical examples. Part one presents the main components constituting the Cloud and federated Cloud infrastructure(e.g., interactions and deployment), discusses management platforms (resources and services), identifies and analyzes the main properties of the Cloud infrastructure, and presents Cloud automated management services: virtual and application resource management services. Part two analyzes the problem of establishing trustworthy Cloud, discuss...

  11. Development of a cloud microphysical model and parameterizations to describe the effect of CCN on warm cloud

    Directory of Open Access Journals (Sweden)

    N. Kuba

    2006-01-01

    Full Text Available First, a hybrid cloud microphysical model was developed that incorporates both Lagrangian and Eulerian frameworks to study quantitatively the effect of cloud condensation nuclei (CCN) on the precipitation of warm clouds. A parcel model and a grid model comprise the cloud model. The condensation growth of CCN in each parcel is estimated in a Lagrangian framework. Changes in cloud droplet size distribution arising from condensation and coalescence are calculated on grid points using a two-moment bin method in a semi-Lagrangian framework. Sedimentation and advection are estimated in the Eulerian framework between grid points. Results from the cloud model show that an increase in the number of CCN affects both the amount and the area of precipitation. Additionally, results from the hybrid microphysical model and Kessler's parameterization were compared. Second, new parameterizations were developed that estimate the number and size distribution of cloud droplets given the updraft velocity and the number of CCN. The parameterizations were derived from the results of numerous numerical experiments that used the cloud microphysical parcel model. The only CCN input information required by these parameterizations is several values of the CCN spectrum (as given by a CCN counter, for example). This is more convenient than conventional parameterizations, which need quantities characterizing the CCN spectrum, such as C and k in the equation N = C S^k, or the breadth, total number and median radius, for example. The new parameterizations' predictions of the initial cloud droplet size distribution for the bin method were verified using the aforesaid hybrid microphysical model. The newly developed parameterizations will save computing time, and can effectively approximate components of cloud microphysics in a non-hydrostatic cloud model. The parameterizations are useful not only in the bin method in the regional cloud-resolving model but also both for a two-moment bulk microphysical model and
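
    The conventional activation spectrum mentioned above, N = C S^k (a Twomey-type relation), is easy to illustrate. In the sketch below the values of C and k are illustrative assumptions only; in practice they depend on the air mass.

    ```python
    # Sketch of the conventional CCN activation spectrum N = C * S**k referenced
    # in the abstract; C and k values here are illustrative assumptions.
    def activated_ccn(S_percent, C=100.0, k=0.7):
        """Number of activated CCN (cm^-3) at supersaturation S (%)."""
        return C * S_percent ** k

    for S in (0.1, 0.2, 0.5, 1.0):
        print(f"S = {S:.1f}%  ->  N = {activated_ccn(S):6.1f} cm^-3")
    ```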

  12. Cloud time

    CERN Document Server

    Lockwood, Dean

    2012-01-01

    The ‘Cloud’, hailed as a new digital commons, a utopia of collaborative expression and constant connection, actually constitutes a strategy of vitalist post-hegemonic power, which moves to dominate immanently and intensively, organizing our affective political involvements, instituting new modes of enclosure, and, crucially, colonizing the future through a new temporality of control. The virtual is often claimed as a realm of invention through which capitalism might be cracked, but it is precisely here that power now thrives. Cloud time, in service of security and profit, assumes all is knowable. We bear witness to the collapse of both past and future virtuals into a present dedicated to the exploitation of the spectres of both.

  13. The Status of Hitler Today

    Directory of Open Access Journals (Sweden)

    Ben Novak

    2009-12-01

    Full Text Available By the end of the twentieth century, it can be said, Hitler was more alive and prominent than at the height of his power a half-century before. How did Hitler become this way? Isn't he dead? How can he be so prominent more than half a century after his death? The issue of Hitler today poses several historical problems that are deeply moral problems as well. In this work, however, I intend to concentrate primarily on their historical dimension. In light of Hitler's astonishing rise in modern historical consciousness, this leads to the inevitable question: Have we not granted to Hitler a far greater power over us than he ever had when he was alive and commanding his Wehrmacht divisions at the farthest extent of his conquests?

  14. Fluctuations in a quasi-stationary shallow cumulus cloud ensemble

    Directory of Open Access Journals (Sweden)

    M. Sakradzija

    2015-01-01

    Full Text Available We propose an approach to stochastic parameterisation of shallow cumulus clouds to represent the convective variability and its dependence on the model resolution. To collect information about the individual cloud lifecycles and the cloud ensemble as a whole, we employ a large eddy simulation (LES model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level. In the case of a shallow cumulus ensemble, the cloud-base mass flux distribution is bimodal, due to the different shallow cloud subtypes, active and passive clouds. Each distribution mode can be approximated using a Weibull distribution, which is a generalisation of exponential distribution by accounting for the change in distribution shape due to the diversity of cloud lifecycles. The exponential distribution of cloud mass flux previously suggested for deep convection parameterisation is a special case of the Weibull distribution, which opens a way towards unification of the statistical convective ensemble formalism of shallow and deep cumulus clouds. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate a shallow convective cloud ensemble. It is formulated as a compound random process, with the number of convective elements drawn from a Poisson distribution, and the cloud mass flux sampled from a mixed Weibull distribution. Convective memory is accounted for through the explicit cloud lifecycles, making the model formulation consistent with the choice of the Weibull cloud mass flux distribution function. The memory of individual shallow clouds is required to capture the correct convective variability. The resulting distribution of the subgrid convective states in the considered shallow cumulus case is scale-adaptive – the smaller the grid size, the broader the distribution.
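
    The compound random process described here can be sketched in a few lines. The following is a minimal illustration under assumed parameter values: the Poisson mean, mixture weight, and Weibull shapes and scales are invented for the example, not the paper's fitted values.

    ```python
    # Minimal sketch of the compound random process described above: the number
    # of convective elements is Poisson-distributed, and each cloud-base mass
    # flux is drawn from a two-mode (active/passive) Weibull mixture.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_subgrid_mass_flux(mean_n_clouds=50, p_active=0.4,
                                 shape=(0.7, 1.2), scale=(0.5, 2.0)):
        """Return total cloud-base mass flux (arbitrary units) for one grid box."""
        n = rng.poisson(mean_n_clouds)              # number of clouds in the box
        active = rng.random(n) < p_active           # mixture mode per cloud
        k = np.where(active, shape[1], shape[0])    # Weibull shape per cloud
        lam = np.where(active, scale[1], scale[0])  # Weibull scale per cloud
        m = lam * rng.weibull(k)                    # per-cloud mass flux samples
        return m.sum()

    fluxes = [sample_subgrid_mass_flux() for _ in range(1000)]
    print(np.mean(fluxes), np.std(fluxes))
    ```

    Shrinking mean_n_clouds, as a proxy for a smaller grid box, broadens the relative spread of the sampled totals, which is the scale-adaptive behaviour the abstract describes.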

  15. Smart grid technologies in local electric grids

    Science.gov (United States)

    Lezhniuk, Petro D.; Pijarski, Paweł; Buslavets, Olga A.

    2017-08-01

    The research is devoted to the creation of favorable conditions for the integration of renewable sources of energy into electric grids, which were designed to be supplied from centralized generation at large electric power stations. The development of distributed generation in electric grids influences the conditions of their operation: a conflict of interests arises. The possibility of optimal functioning of electric grids and renewable sources of energy is shown, where the complex optimality criterion is the balance reliability of electric energy in the local electric system and the minimum losses of electric energy in it. A multilevel automated system for power flow control in electric grids by means of changing the distributed generation of power is developed. Optimization of power flows is performed by local systems of automatic control of small hydropower stations and, if possible, solar power plants.

  16. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, Florin; Hansen, A.D.; Sørensen, P.

    The present report is a part of the research project "Grid fault and design basis for wind turbine" supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate into the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines, which participate actively in the stabilisation of power systems. Practical experience shows that there is a need...... challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview over the frequency of grid faults and the grid connection requirements in different relevant countries is done in this report. The most relevant study cases for the quantification of the loads......

  17. Mapping of grid faults and grid codes

    DEFF Research Database (Denmark)

    Iov, F.; Hansen, Anca Daniela; Sørensen, Poul Ejnar

    The present report is a part of the research project "Grid fault and design basis for wind turbine" supported by Energinet.dk through the grant PSO F&U 6319. The objective of this project is to investigate into the consequences of the new grid connection requirements for the fatigue and extreme loads of wind turbines. The goal is also to clarify and define possible new directions in the certification process of power plant wind turbines, namely wind turbines, which participate actively in the stabilisation of power systems. Practical experience shows that there is a need...... challenges for the design of both the electrical system and the mechanical structure of wind turbines. An overview over the frequency of grid faults and the grid connection requirements in different relevant countries is done in this report. The most relevant study cases for the quantification of the loads......

  18. Evaluation results of the optimal estimation based, multi-sensor cloud property data sets derived from AVHRR heritage measurements in the Cloud_cci project.

    Science.gov (United States)

    Stapelberg, S.; Jerg, M.; Stengel, M.; Hollmann, R.

    2014-12-01

    In 2010 the ESA Climate Change Initiative (CCI) Cloud project was started with the objective of generating a long-term coherent data set of cloud properties. The cloud properties considered are cloud mask, cloud top estimates, cloud optical thickness, cloud effective radius and post-processed parameters such as cloud liquid and ice water path. During the first phase of the project, 3 years of data spanning 2007 to 2009 were produced on a global gridded daily and monthly mean basis. Alongside the processing, an extended evaluation study was started in order to gain a first understanding of the quality of the retrieved data. The critical discussion of the results of the evaluation holds a key role for the further development and improvement of the dataset's quality. The presentation will give a short overview of the evaluation study undertaken in the Cloud_cci project. The focus will be on the evaluation of gridded, monthly mean cloud fraction and cloud top data from the Cloud_cci AVHRR-heritage dataset against CLARA-A1, MODIS-Coll5, PATMOS-X and ISCCP data. Exemplary results will be shown. Strengths and shortcomings of the retrieval scheme as well as possible impacts of averaging approaches on the evaluation will be discussed. An overview of Cloud_cci Phase 2 will be given.

  19. Cloud Computing - A Unified Approach for Surveillance Issues

    Science.gov (United States)

    Rachana, C. R.; Banu, Reshma, Dr.; Ahammed, G. F. Ali, Dr.; Parameshachari, B. D., Dr.

    2017-08-01

    Cloud computing describes highly scalable resources provided as an external service via the Internet on a pay-per-use basis. From the economic point of view, the main attractiveness of cloud computing is that users only use what they need, and only pay for what they actually use. Resources are available for access from the cloud at any time, and from any location through networks. Cloud computing is gradually replacing the traditional Information Technology infrastructure. Securing data is one of the leading concerns and the biggest issue for cloud computing. Privacy of information is always a crucial point, especially when an individual's personal information or sensitive information is being stored in the organization. It is indeed true that today, cloud authorization systems are not robust enough. This paper presents a unified approach for analyzing the various security issues and techniques to overcome the challenges in the cloud environment.

  20. Eucalyptus Cloud to Remotely Provision e-Governance Applications

    Directory of Open Access Journals (Sweden)

    Sreerama Prabhu Chivukula

    2011-01-01

    Full Text Available Remote rural areas are constrained by lack of reliable power supply, essential for setting up advanced IT infrastructure as servers or storage; therefore, cloud computing comprising an Infrastructure-as-a-Service (IaaS is well suited to provide such IT infrastructure in remote rural areas. Additional cloud layers of Platform-as-a-Service (PaaS and Software-as-a-Service (SaaS can be added above IaaS. Cluster-based IaaS cloud can be set up by using open-source middleware Eucalyptus in data centres of NIC. Data centres of the central and state governments can be integrated with State Wide Area Networks and NICNET together to form the e-governance grid of India. Web service repositories at centre, state, and district level can be built over the national e-governance grid of India. Using Globus Toolkit, we can achieve stateful web services with speed and security. Adding the cloud layer over the e-governance grid will make a grid-cloud environment possible through Globus Nimbus. Service delivery can be in terms of web services delivery through heterogeneous client devices. Data mining using Weka4WS and DataMiningGrid can produce meaningful knowledge discovery from data. In this paper, a plan of action is provided for the implementation of the above proposed architecture.
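
    Since Eucalyptus exposes an EC2-compatible API, a cluster of the kind described can be provisioned with the classic boto pattern sketched below. This is a hedged illustration: the endpoint, credentials and image identifier are hypothetical placeholders, not values from the paper, and the call pattern assumes the legacy boto library of that era.

    ```python
    # Hedged sketch: provisioning a VM on an EC2-compatible Eucalyptus cloud
    # with legacy boto. Endpoint, keys and image id are hypothetical.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="cloud.nic.example.in")
    conn = boto.connect_ec2(aws_access_key_id="EXAMPLEKEY",
                            aws_secret_access_key="EXAMPLESECRET",
                            is_secure=False, region=region,
                            port=8773, path="/services/Eucalyptus")

    # Launch one instance of a registered e-governance appliance image.
    reservation = conn.run_instances(image_id="emi-12345678",
                                     instance_type="m1.small",
                                     key_name="egov-key")
    print(reservation.instances[0].id)
    ```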

  1. ATLAS Tier-2 monitoring system for the German cloud

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Joerg; Quadt, Arnulf; Weber, Pavel [II. Physikalisches Institut, Georg-August-Universitaet, Goettingen (Germany)

    2011-07-01

    The ATLAS tier centers in Germany provide their computing resources for the ATLAS experiment. The stable and sustainable operation of this so-called DE-cloud heavily relies on effective monitoring of the Tier-1 center GridKa and its associated Tier-2 centers. Central and local grid information services constantly collect and publish the status information from many computing resources and sites. The cloud monitoring system discussed in this presentation evaluates the information related to different cloud resources and provides a coherent and comprehensive view of the cloud. The main monitoring areas covered by the tool are data transfers, cloud software installation, site batch systems, Service Availability Monitoring (SAM). The cloud monitoring system consists of an Apache-based Python application, which retrieves the information and publishes it on the generated HTML web page. This results in an easy-to-use web interface for the limited number of sites in the cloud with fast and efficient access to the required information starting from a high level summary for the whole cloud to detailed diagnostics for the single site services. This approach provides the efficient identification of correlated site problems and simplifies the administration on both cloud and site level.
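
    The kind of summary page described (a Python application aggregating per-site service states into cloud-level HTML) might be sketched as follows; the site names, check names and statuses are invented for illustration and are not taken from the actual DE-cloud tool.

    ```python
    # Illustrative sketch: aggregate per-site service states into a cloud-level
    # HTML summary. Sites, checks and statuses are invented placeholders.
    SITES = {
        "Site-A": {"transfers": "OK",   "software": "OK",   "SAM": "OK"},
        "Site-B": {"transfers": "OK",   "software": "WARN", "SAM": "OK"},
        "Site-C": {"transfers": "FAIL", "software": "OK",   "SAM": "FAIL"},
    }

    def site_row(name, checks):
        cells = "".join(f"<td>{s}</td>" for s in checks.values())
        return f"<tr><td>{name}</td>{cells}</tr>"

    def cloud_summary(sites):
        # The cloud-level state is the worst state found at any site.
        worst = "FAIL" if any("FAIL" in s.values() for s in sites.values()) else "OK"
        rows = "\n".join(site_row(n, c) for n, c in sorted(sites.items()))
        return (f"<html><body><h1>Cloud status: {worst}</h1>"
                f"<table><tr><th>Site</th><th>Transfers</th><th>Software</th>"
                f"<th>SAM</th></tr>\n{rows}\n</table></body></html>")

    print(cloud_summary(SITES))
    ```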

  2. Security in Cloud Computing For Service Delivery Models: Challenges and Solutions

    OpenAIRE

    Preeti Barrow; Runni Kumari; Prof. Manjula R

    2016-01-01

    Cloud computing, undoubtedly, is a path to expand the limits or add powerful capabilities on demand with almost no investment in new infrastructure, training new staff, or licensing new software. Though today everyone is talking about the cloud, organizations are still in a dilemma over whether it is safe to deploy their business on the cloud. The reason behind this is nothing but security. No cloud service provider provides 100% security assurance to its customers and therefore, businesses are h...

  3. Cloud portability and interoperability issues and current trends

    CERN Document Server

    Di Martino, Beniamino; Esposito, Antonio

    2015-01-01

    This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks

  4. Cloud Computing, Tieto Cloud Server Model

    OpenAIRE

    Suikkanen, Saara

    2013-01-01

    The purpose of this study is to find out what cloud computing is. To be able to make wise decisions when moving to the cloud or considering it, companies need to understand what the cloud consists of: which model suits their company best, what should be taken into account before moving to the cloud, what the cloud broker's role is, and also a SWOT analysis of the cloud. To be able to answer customer requirements and business demands, IT companies should develop and produce new service models. IT house T...

  5. LHC computing grid

    International Nuclear Information System (INIS)

    Novaes, Sergio

    2011-01-01

    Full text: We give an overview of the grid computing initiatives in the Americas. High-Energy Physics has played a very important role in the development of grid computing in the world and in Latin America it has not been different. Lately, the grid concept has expanded its reach across all branches of e-Science, and we have witnessed the birth of the first nationwide infrastructures and its use in the private sector. (author)

  6. Urban micro-grids

    International Nuclear Information System (INIS)

    Faure, Maeva; Salmon, Martin; El Fadili, Safae; Payen, Luc; Kerlero, Guillaume; Banner, Arnaud; Ehinger, Andreas; Illouz, Sebastien; Picot, Roland; Jolivet, Veronique; Michon Savarit, Jeanne; Strang, Karl Axel

    2017-02-01

    ENEA Consulting published the results of a study on urban micro-grids conducted in partnership with the Group ADP, the Group Caisse des Depots, ENEDIS, Omexom, Total and the Tuck Foundation. This study offers a vision of the definition of an urban micro-grid, the value brought by a micro-grid in different contexts based on real case studies, and the upcoming challenges that micro-grid stakeholders will face (regulation, business models, technology). The electric production and distribution system, as the backbone of an increasingly urbanized and energy-dependent society, is urged to shift towards a more resilient, efficient and environment-friendly infrastructure. Decentralisation of electricity production into densely populated areas is a promising opportunity to achieve this transition. A micro-grid enhances local production through clustering electricity producers and consumers within a delimited electricity network; it has the ability to disconnect from the main grid for a limited period of time, offering an energy security service to its customers during grid outages, for example. However: the islanding capability is an inherent feature of the micro-grid concept that leads to a significant premium on electricity cost, especially in a system highly reliant on intermittent electricity production; in this case, a smart grid, with local energy production and no islanding capability, can be customized to meet relevant sustainability and cost-savings goals at lower costs; for industrials, urban micro-grids can be economically profitable in the presence of a high share of reliable energy production and thermal energy demand; micro-grids face strong regulatory challenges that should be overcome for further development. Whether islanding is or is not implemented into the system, end-user demand for greener, more local, cheaper and more reliable energy, as well as additional services to the grid, are strong drivers for local production and consumption. In some specific cases

  7. High density grids

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Aina E.; Baxter, Elizabeth L.

    2018-01-16

    An X-ray data collection grid device is provided that includes a magnetic base that is compatible with robotic sample mounting systems used at synchrotron beamlines, a grid element fixedly attached to the magnetic base, where the grid element includes at least one sealable sample window disposed through a planar synchrotron-compatible material, where the planar synchrotron-compatible material includes at least one automated X-ray positioning and fluid handling robot fiducial mark.

  8. Micro grids toward the smart grid

    International Nuclear Information System (INIS)

    Guerrero, J.

    2011-01-01

    Worldwide, electrical grids are expected to become smarter in the near future, and interest in microgrids is likely to grow. A microgrid can be defined as a part of the grid comprising prime energy movers, power electronics converters, distributed energy storage systems and local loads, which can operate autonomously but also interact with the main grid. Thus, the ability of intelligent microgrids to operate in island mode or connected to the grid will be a key point in coping with new functionalities and the integration of renewable energy resources. The functionalities expected of these small grids are: black-start operation, frequency and voltage stability, active and reactive power flow control, active power filter capabilities, and energy storage management. In this presentation, a review of the main concepts related to flexible microgrids is introduced, with examples of real microgrids. AC and DC microgrids for integrating renewable and distributed energy resources are also presented, as well as distributed energy storage systems and standardization issues for these microgrids. Finally, microgrid hierarchical control is analyzed at three different levels: i) a primary control based on the droop method, including an output-impedance virtual loop; ii) a secondary control, which restores any deviations produced by the primary control; and iii) a tertiary control to manage the power flow between the microgrid and the external electrical distribution system.
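
    The droop method cited above for primary control admits a compact illustration. Below is a minimal Python sketch of frequency/voltage droop setpoints; the droop gains and nominal values are illustrative assumptions, not figures from the presentation.

        # Minimal sketch of primary droop control for an AC microgrid inverter.
        # Frequency and voltage are lowered in proportion to delivered active
        # and reactive power, so parallel inverters share load without
        # communication. All numeric values are illustrative assumptions.

        F_NOM = 50.0    # nominal frequency (Hz)
        V_NOM = 230.0   # nominal voltage (V)
        M_P = 0.0005    # frequency droop gain (Hz per W)
        N_Q = 0.001     # voltage droop gain (V per var)

        def droop_setpoints(p_out, q_out):
            """Return (frequency, voltage) setpoints for measured P (W) and Q (var)."""
            return F_NOM - M_P * p_out, V_NOM - N_Q * q_out

        for p, q in [(0.0, 0.0), (5000.0, 1000.0), (10000.0, 2000.0)]:
            f, v = droop_setpoints(p, q)
            print(f"P={p:7.0f} W  Q={q:6.0f} var  ->  f={f:.3f} Hz  V={v:.1f} V")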

  9. Simulation For Synchronization Of A Micro-Grid With Three-Phase Systems

    OpenAIRE

    Mohammad Jafari Far

    2015-01-01

    Abstract: Today, due to their high reliability, micro-grids have developed significantly. They have two states of operation: the island state and connection to the main grid. Under certain circumstances the micro-grid is connected to or disconnected from the network. Synchronization of a micro-grid with the network must be done when its voltage is synchronized with the voltage in the main grid. Phase-locked loops are responsible for identifying the voltage phase of the micro-grid and the main...
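
    The phase-locked loop mentioned above can be sketched in a few lines. The following minimal Python example tracks the phase of a single-phase voltage using a multiply-type phase detector and a PI loop filter; the gains, time step and test phase are assumed values, and a residual double-frequency ripple remains that real three-phase (SRF) PLLs avoid.

        import math

        # Minimal sketch of a PLL locking onto the phase of a grid voltage,
        # as used before reconnecting a micro-grid. All constants are assumptions.
        DT = 1e-4                    # time step (s)
        KP, KI = 100.0, 2000.0       # PI loop-filter gains
        W_NOM = 2 * math.pi * 50.0   # nominal grid angular frequency (rad/s)
        PHASE = 0.7                  # true (unknown) grid phase offset (rad)

        theta_hat, integ, t = 0.0, 0.0, 0.0
        for _ in range(100000):                    # simulate 10 s
            v = math.sin(W_NOM * t + PHASE)        # measured grid voltage
            err = v * math.cos(theta_hat)          # phase detector: ~0.5*sin(phase error)
            integ += KI * err * DT
            theta_hat += (W_NOM + KP * err + integ) * DT
            t += DT

        print("phase error (rad):",
              math.remainder(theta_hat - (W_NOM * t + PHASE), 2 * math.pi))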

  10. Blue skies for CLOUD

    CERN Multimedia

    2006-01-01

    Through the recently approved CLOUD experiment, CERN will soon be contributing to climate research. Tests are being performed on the first prototype of CLOUD, an experiment designed to assess cosmic radiation influence on cloud formation.

  11. SECURITY AND PRIVACY ISSUES IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    Amina AIT OUAHMAN

    2014-10-01

    Full Text Available Today, cloud computing is defined and talked about across the ICT industry in different contexts and with different definitions attached to it. It is a new paradigm in the evolution of Information Technology, as it is one of the biggest revolutions in this field to have taken place in recent times. According to the National Institute of Standards and Technology (NIST), "cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [1]. The importance of cloud computing is increasing and it is receiving growing attention in the scientific and industrial communities. A study by Gartner [2] ranked cloud computing first among the top 10 most important technologies, with better prospects in successive years for companies and organizations. Clouds bring tremendous benefits for both individuals and enterprises. Clouds support economic savings, outsourcing mechanisms, resource sharing, anywhere/anytime accessibility, on-demand scalability, and service flexibility. Clouds minimize the need for user involvement by masking technical details such as software upgrades, licenses, and maintenance from customers. Clouds could also offer better security advantages over individual server deployments. Since a cloud aggregates resources, cloud providers charter expert security personnel, while typical companies could be limited to a network administrator who might not be well versed in cyber security issues. The new concepts introduced by the clouds, such as computation outsourcing, resource sharing, and external data warehousing, increase the security and privacy concerns and create new security challenges. Moreover, the large scale of the clouds, the proliferation of mobile access devices (e

  12. Digi-Clima Grid: image processing and distributed computing for recovering historical climate data

    Directory of Open Access Journals (Sweden)

    Sergio Nesmachnow

    2015-12-01

    Full Text Available This article describes the Digi-Clima Grid project, whose main goals are to design and implement semi-automatic techniques for digitizing and recovering historical climate records by applying parallel computing techniques over distributed computing infrastructures. The specific tool developed for image processing is described, and the implementation over grid and cloud infrastructures is reported. An experimental analysis over institutional and volunteer-based grid/cloud distributed systems demonstrates that the proposed approach is an efficient tool for recovering historical climate data. The parallel implementations allow the processing load to be distributed, achieving accurate speedup values.
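
    The distribution pattern described above (independent per-image processing fanned out over many workers) can be sketched with Python's standard library; process_scan below is a hypothetical stand-in for the project's actual image-processing tool.

        from multiprocessing import Pool

        # Minimal sketch of task-parallel image digitization: each scanned
        # chart is processed independently, so work maps cleanly onto a pool
        # of workers (or, at larger scale, onto grid/cloud worker nodes).

        def process_scan(path):
            """Extract climate records from one scanned chart (placeholder logic)."""
            # Real code would segment the chart trace and convert it to values.
            return path, sum(ord(c) for c in path) % 100  # dummy "recovered value"

        if __name__ == "__main__":
            scans = [f"scan_{i:04d}.png" for i in range(16)]  # hypothetical inputs
            with Pool(processes=4) as pool:
                for path, value in pool.map(process_scan, scans):
                    print(path, "->", value)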

  13. Ten Years of Cloud Optical and Microphysical Retrievals from MODIS

    Science.gov (United States)

    Platnick, Steven; King, Michael D.; Wind, Galina; Hubanks, Paul; Arnold, G. Thomas; Amarasinghe, Nandana

    2010-01-01

    The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) has undergone extensive improvements and enhancements since the launch of Terra. These changes have included: improvements in the cloud thermodynamic phase algorithm; substantial changes in the ice cloud light scattering look-up tables (LUTs); a clear-sky restoral algorithm for flagging heavy aerosol and sunglint; greatly improved spectral surface albedo maps, including the spectral albedo of snow by ecosystem; and inclusion of pixel-level uncertainty estimates for cloud optical thickness, effective radius, and water path, derived for three error sources, including the sensitivity of the retrievals to solar and viewing geometries. To improve overall retrieval quality, we have also implemented cloud edge removal and partly cloudy detection (using MOD35 cloud mask 250 m tests), added a supplementary cloud optical thickness and effective radius algorithm over snow and sea ice surfaces and over the ocean, which enables comparison with the "standard" 2.1 μm effective radius retrieval, and added a multi-layer cloud detection algorithm. We will discuss the status of the MOD06 algorithm and show examples of pixel-level (Level-2) cloud retrievals for selected data granules, as well as gridded (Level-3) statistics, notably monthly means and histograms (1D and 2D, with the latter giving correlations between cloud optical thickness and effective radius, and other cloud product pairs).
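
    As an illustration of the Level-2 to Level-3 aggregation mentioned above, the following minimal numpy sketch grids synthetic pixel retrievals into 1-degree means and a 2D joint histogram; it is a schematic of the gridding idea, not the operational MODIS code.

        import numpy as np

        # Synthetic pixel-level retrievals standing in for a month of Level-2 data.
        rng = np.random.default_rng(0)
        lat = rng.uniform(-90, 90, 100000)
        lon = rng.uniform(-180, 180, 100000)
        tau = rng.lognormal(2.0, 0.8, 100000)    # cloud optical thickness
        reff = rng.normal(15.0, 4.0, 100000)     # effective radius (micrometres)

        # 1-degree grid of mean optical thickness: weighted counts / plain counts.
        sum_tau, _, _ = np.histogram2d(lat, lon, bins=[180, 360],
                                       range=[[-90, 90], [-180, 180]], weights=tau)
        count, _, _ = np.histogram2d(lat, lon, bins=[180, 360],
                                     range=[[-90, 90], [-180, 180]])
        mean_tau = np.divide(sum_tau, count, out=np.full_like(sum_tau, np.nan),
                             where=count > 0)

        # 2D histogram linking optical thickness and effective radius.
        joint, tau_edges, reff_edges = np.histogram2d(
            tau, reff, bins=[20, 20], range=[[0, 100], [0, 30]])
        print(mean_tau.shape, joint.shape)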

  14. Moving towards Cloud Security

    OpenAIRE

    Edit Szilvia Rubóczki; Zoltán Rajnai

    2015-01-01

    Cloud computing hosts and delivers many different services via the Internet. There are many reasons why people opt to use cloud resources. Cloud development is increasing fast, while many related services lag behind, for example mass awareness of cloud security. The new generation uploads videos and pictures to cloud storage without hesitation, but only a few know about data privacy, data management and the ownership of data stored in the cloud. In an enterprise environment th...

  15. Women in Energy: Rinku Gupta - Argonne Today

    Science.gov (United States)

    Argonne Today, "Women in Energy: Rinku Gupta" (Apr 1, 2016): a profile of Rinku Gupta, whose work involves high-performance clusters and supercomputers. Asked what the best part of her job is, she answers: "The best part is working with..."

  16. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    Science.gov (United States)

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not modeled for handling large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team, transparently. This paper is a proof of concept of an experimental mini-grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework

  17. Cloud-Top Entrainment in Stratocumulus Clouds

    Science.gov (United States)

    Mellado, Juan Pedro

    2017-01-01

    Cloud entrainment, the mixing between cloudy and clear air at the boundary of clouds, constitutes one paradigm for the relevance of small scales in the Earth system: By regulating cloud lifetimes, meter- and submeter-scale processes at cloud boundaries can influence planetary-scale properties. Understanding cloud entrainment is difficult given the complexity and diversity of the associated phenomena, which include turbulence entrainment within a stratified medium, convective instabilities driven by radiative and evaporative cooling, shear instabilities, and cloud microphysics. Obtaining accurate data at the required small scales is also challenging, for both simulations and measurements. During the past few decades, however, high-resolution simulations and measurements have greatly advanced our understanding of the main mechanisms controlling cloud entrainment. This article reviews some of these advances, focusing on stratocumulus clouds, and indicates remaining challenges.

  18. Cloud Infrastructure & Applications - CloudIA

    Science.gov (United States)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, and to bring it to small- and medium-sized enterprises, Hochschule Furtwangen University established a new project called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and mentions our early experiences in building a private cloud using an existing infrastructure.

  19. Sharing lessons learned on developing and operating smart grid pilots with households

    NARCIS (Netherlands)

    Kobus, C.B.A.; Klaassen, E.A.M.; Kohlmann, J.; Knigge, J.D.; Boots, S.

    2013-01-01

    Today, technology is still leading Smart Grid development. Nevertheless, the awareness that it should be a multidisciplinary effort to foster public acceptance and even desirability of Smart Grids is increasing. This paper illustrates the added value of a multidisciplinary approach by sharing the

  20. Silicon Photonics Cloud (SiCloud)

    DEFF Research Database (Denmark)

    DeVore, P. T. S.; Jiang, Y.; Lynch, M.

    2015-01-01

    Silicon Photonics Cloud (SiCloud.org) is the first silicon photonics interactive web tool. Here we report new features of this tool, including mode propagation parameters and mode distribution galleries for user-specified waveguide dimensions and wavelengths.

  1. Scheduling strategies for cycle scavenging in multicluster grid systems

    NARCIS (Netherlands)

    Sonmez, O.O.; Grundeken, B.; Mohamed, H.H.; Iosup, A.; Epema, D.H.J.

    2009-01-01

    The use of today's multicluster grids exhibits periods of submission bursts alternating with periods of normal use and even idleness. To avoid resource contention, many users employ observational scheduling; that is, they postpone the submission of relatively low-priority jobs until a cluster becomes

  2. GridOrbit public display

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie; Tabard, Aurélien; Bardram, Jakob

    2010-01-01

    We introduce GridOrbit, a public awareness display that visualizes the activity of a community grid used in a biology laboratory. This community grid executes bioinformatics algorithms and relies on users to donate CPU cycles to the grid. The goal of GridOrbit is to create a shared awareness about...

  3. Security for grids

    Energy Technology Data Exchange (ETDEWEB)

    Humphrey, Marty; Thompson, Mary R.; Jackson, Keith R.

    2005-08-14

    Securing a Grid environment presents a distinctive set of challenges. This paper groups the activities that need to be secured into four categories: naming and authentication; secure communication; trust, policy, and authorization; and enforcement of access control. It examines the current state of the art in securing these processes and introduces new technologies that promise to meet the security requirements of Grids more completely.

  4. The LHCb Grid Simulation

    CERN Multimedia

    Baranov, Alexander

    2016-01-01

    The LHCb Grid access is based on the LHCbDirac system. It provides access to data and computational resources to researchers in different geographical locations. The Grid has a hierarchical topology with multiple sites distributed over the world. The sites differ from each other in their number of CPUs, amount of disk storage and connection bandwidth. These parameters are essential for the Grid's work. Moreover, job scheduling and the data distribution strategy have a great impact on grid performance. However, it is hard to choose appropriate algorithms and strategies, as they need a lot of time to be tested on the real grid. In this study, we describe the LHCb Grid simulator. The simulator reproduces the LHCb Grid structure with its sites and their number of CPUs, amount of disk storage and connection bandwidth. We demonstrate how well the simulator reproduces the grid's work, and show its advantages and limitations. We show how well the simulator reproduces job scheduling and network anomalies, consider methods ...
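
    To make the kind of model concrete, here is a minimal Python sketch of a discrete grid simulation in which sites differ only in CPU count and jobs go to the earliest free slot; site names and parameters are illustrative, and the real simulator additionally models disk storage, bandwidth and network anomalies.

        import heapq

        SITES = {"CERN": 8, "CNAF": 4, "GRIDKA": 2}   # hypothetical CPU counts

        def simulate(jobs):
            """jobs: list of run times (hours). Returns last finish time per site."""
            # One heap entry per CPU slot: (time the slot becomes free, site name).
            slots = [(0.0, site) for site, cpus in SITES.items() for _ in range(cpus)]
            heapq.heapify(slots)
            finish = {site: 0.0 for site in SITES}
            for runtime in jobs:
                free_at, site = heapq.heappop(slots)   # earliest available slot
                done = free_at + runtime
                finish[site] = max(finish[site], done)
                heapq.heappush(slots, (done, site))
            return finish

        print(simulate([3.0, 1.5, 2.0, 4.0, 0.5, 2.5, 1.0, 3.5]))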

  5. The play grid

    DEFF Research Database (Denmark)

    Fogh, Rune; Johansen, Asger

    2013-01-01

    In this paper we propose The Play Grid, a model for systemizing different play types. The approach is psychological by nature and the actual Play Grid is based, therefore, on two pairs of fundamental and widely acknowledged distinguishing characteristics of the ego, namely: extraversion vs. intro...

  6. Planning in Smart Grids

    NARCIS (Netherlands)

    Bosman, M.G.C.

    2012-01-01

    The electricity supply chain is changing, due to increasing awareness for sustainability and an improved energy efficiency. The traditional infrastructure where demand is supplied by centralized generation is subject to a transition towards a Smart Grid. In this Smart Grid, sustainable generation

  7. Gridded Species Distribution, Version 1: Global Amphibians Presence Grids

    Data.gov (United States)

    National Aeronautics and Space Administration — The Global Amphibians Presence Grids of the Gridded Species Distribution, Version 1 is a reclassified version of the original grids of amphibian species distribution...

  8. Early experience on using glideinWMS in the cloud

    International Nuclear Information System (INIS)

    Andrews, W; Dost, J; Martin, T; McCrea, A; Pi, H; Sfiligoi, I; Würthwein, F; Bockelman, B; Weitzel, D; Bradley, D; Frey, J; Livny, M; Tannenbaum, T; Evans, D; Fisk, I; Holzman, B; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Cloud computing is steadily gaining traction both in commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both what code changes were needed to make it work in the cloud world, as well as what architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  9. Early experience on using glidein WMS in the cloud

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, W. [UC, San Diego; Bockelman, B. [Nebraska U.; Bradley, D. [Wisconsin U., Madison; Dost, J. [UC, San Diego; Evans, D. [Fermilab; Fisk, I. [Fermilab; Frey, J. [Wisconsin U., Madison; Holzman, B. [Fermilab; Livny, M. [Wisconsin U., Madison; Martin, T. [UC, San Diego; McCrea, A. [UC, San Diego; Melo, A. [Vanderbilt U.; Metson, S. [Bristol U.; Pi, H. [UC, San Diego; Sfiligoi, I. [UC, San Diego; Sheldon, P. [Vanderbilt U.; Tannenbaum, T. [Wisconsin U., Madison; Tiradani, A. [Fermilab; Wurthwein, F. [UC, San Diego; Weitzel, D. [Nebraska U.

    2011-01-01

    Cloud computing is steadily gaining traction both in commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both what code changes were needed to make it work in the cloud world, as well as what architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included.

  10. Grid generation methods

    CERN Document Server

    Liseikin, Vladimir D

    2017-01-01

    This new edition provides a description of current developments relating to grid methods, grid codes, and their applications to actual problems. Grid generation methods are indispensable for the numerical solution of differential equations. Adaptive grid-mapping techniques, in particular, are the main focus and represent a promising tool to deal with systems with singularities. This 3rd edition includes three new chapters on numerical implementations (10), control of grid properties (11), and applications to mechanical, fluid, and plasma related problems (13). The other chapters have also been updated with new topics, such as curvatures of discrete surfaces (3). Concise descriptions of hybrid mesh generation, drag and sweeping methods, and parallel algorithms for mesh generation have been included too. This new edition addresses a broad range of readers: students, researchers, and practitioners in applied mathematics, mechanics, engineering, physics and other areas of applications.

  11. The GRID seminar

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

    The Grid infrastructure is a key part of the computing environment for the simulation, processing and analysis of data from the LHC experiments. These experiments depend on the availability of a worldwide Grid infrastructure in several aspects of their computing model. The Grid middleware will hide much of the complexity of this environment from the user, organizing all the resources into a coherent virtual computer center. The general description of the elements of the Grid, their interconnections and their use by the experiments will be presented in this talk. The computational and storage capability of the Grid is attracting other research communities beyond high-energy physics. Examples of these applications will also be presented.

  12. Integration of End-User Cloud Storage for CMS Analysis

    CERN Document Server

    Riahi, Hassen; Álvarez Ayllón, Alejandro; Balcas, Justas; Ciangottini, Diego; Hernández, José M; Keeble, Oliver; Magini, Nicolò; Manzi, Andrea; Mascetti, Luca; Mascheroni, Marco; Tanasijczuk, Andres Jorge; Vaandering, Eric Wayne

    2018-01-01

    End-user Cloud storage is increasing rapidly in popularity in research communities thanks to the collaboration capabilities it offers, namely synchronisation and sharing. CERN IT has implemented a model of such storage named CERNBox, integrated with the CERN AuthN and AuthZ services. To exploit the use of end-user Cloud storage for distributed data analysis activity, the CMS experiment has started the integration of CERNBox as a Grid resource. This will allow CMS users to make use of their own storage in the Cloud for their analysis activities as well as to benefit from synchronisation and sharing capabilities to achieve results faster and more effectively. It will provide an integration model of Cloud storage in the Grid, which is implemented and commissioned over the world's largest computing Grid infrastructure, the Worldwide LHC Computing Grid (WLCG). In this paper, we present the integration strategy and infrastructure changes needed in order to transparently integrate end-user Cloud storage with...

  13. Cloud Computing as Evolution of Distributed Computing – A Case Study for SlapOS Distributed Cloud Computing Platform

    Directory of Open Access Journals (Sweden)

    George SUCIU

    2013-01-01

    Full Text Available The cloud computing paradigm has been defined from several points of view, the main two directions being either as an evolution of the grid and distributed computing paradigm, or, on the contrary, as a disruptive revolution in the classical paradigms of operating systems, network layers and web applications. This paper presents a distributed cloud computing platform called SlapOS, which unifies technologies and communication protocols into a new technology model for offering any application as a service. Both cloud and distributed computing can be efficient methods for optimizing resources that are aggregated from a grid of standard PCs hosted in homes, offices and small data centers. The paper fills a gap in the existing distributed computing literature by providing a distributed cloud computing model which can be applied for deploying various applications.

  14. Interoperable Cloud Networking for intelligent power supply; Interoperables Cloud Networking fuer intelligente Energieversorgung

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Dave [Invensys Operations Management, Foxboro, MA (United States)

    2010-09-15

    Intelligent power supply by a so-called Smart Grid will make it possible to control consumption via market-based pricing and load-reduction signals. This requires that both energy prices and energy information be distributed reliably and in real time to automation systems in domestic and other buildings and in industrial plants, over a wide geographic range and across the most varied grid infrastructures. Effective communication at this level of complexity requires computing and network resources that are normally only available in the computer centers of large industries. The cloud computing technology described here in some detail has all the features needed to provide reliability, interoperability and efficiency for large-scale smart grid applications, at lower cost than traditional computer centers. (orig.)

  15. Connected minds technology and today's learners

    CERN Document Server

    Pedrò, Francesc

    2012-01-01

    In all OECD countries, digital media and connectedness are integral to the lives of today's learners. It is often claimed that these learners are "new millennium learners", or "digital natives", who have different expectations about education. This book contributes to the debate about the effects of technology attachment and connectedness on today's learners, and their expectations about teaching. The book sets out to answer the following questions: Can the claim that today's students are "new millennium learners" or "digital natives" be sustained empirically? Is there consistent research evidence demonstrating the effects of technology on cognitive development, social values, and learning expectations? What are the implications for educational policy and practice?

  16. Decentral Smart Grid Control

    Science.gov (United States)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest to collect consumer demand data, centrally evaluate them given current supply and send price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals.
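
    The control idea described above (price tied to the locally measured frequency, averaged over a sufficiently large time window) can be sketched in a few lines of Python; the base price, gain and window length below are illustrative assumptions, not values from the paper.

        from collections import deque

        F_NOM = 50.0    # nominal frequency (Hz)
        P_BASE = 30.0   # base price (currency per MWh), assumed
        GAIN = 200.0    # price sensitivity to frequency deviation, assumed

        class DecentralPricer:
            """Derive a local price from the locally measured grid frequency."""

            def __init__(self, window=600):
                self.samples = deque(maxlen=window)  # e.g. 600 one-second samples

            def update(self, freq_measurement):
                self.samples.append(freq_measurement)
                f_avg = sum(self.samples) / len(self.samples)
                # Below-nominal frequency signals scarce supply -> higher price.
                return P_BASE + GAIN * (F_NOM - f_avg)

        pricer = DecentralPricer()
        for f in (50.0, 49.98, 49.95, 49.97, 50.02):
            print(f"f={f:.2f} Hz -> price {pricer.update(f):6.2f}")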

  17. Decentral Smart Grid Control

    International Nuclear Information System (INIS)

    Schäfer, Benjamin; Matthiae, Moritz; Timme, Marc; Witthaut, Dirk

    2015-01-01

    Stable operation of complex flow and transportation networks requires balanced supply and demand. For the operation of electric power grids—due to their increasing fraction of renewable energy sources—a pressing challenge is to fit the fluctuations in decentralized supply to the distributed and temporally varying demands. To achieve this goal, common smart grid concepts suggest to collect consumer demand data, centrally evaluate them given current supply and send price information back to customers for them to decide about usage. Besides restrictions regarding cyber security, privacy protection and large required investments, it remains unclear how such central smart grid options guarantee overall stability. Here we propose a Decentral Smart Grid Control, where the price is directly linked to the local grid frequency at each customer. The grid frequency provides all necessary information about the current power balance such that it is sufficient to match supply and demand without the need for a centralized IT infrastructure. We analyze the performance and the dynamical stability of the power grid with such a control system. Our results suggest that the proposed Decentral Smart Grid Control is feasible independent of effective measurement delays, if frequencies are averaged over sufficiently large time intervals. (paper)

  18. Assessment of Global Cloud Datasets from Satellites: Project and Database Initiated by the GEWEX Radiation Panel

    Science.gov (United States)

    Stubenrauch, C. J.; Rossow, W. B.; Kinne, S.; Ackerman, S.; Cesana, G.; Chepfer, H.; Getzewich, B.; Di Girolamo, L.; Guignard, A.; Heidinger, A.; et al.

    2012-01-01

    Clouds cover about 70% of the Earth's surface and play a dominant role in the energy and water cycle of our planet. Only satellite observations provide a continuous survey of the state of the atmosphere over the whole globe and across the wide range of spatial and temporal scales that comprise weather and climate variability. Satellite cloud data records now exceed more than 25 years in length. However, climatologies compiled from different satellite datasets can exhibit systematic biases. Questions therefore arise as to the accuracy and limitations of the various sensors. The Global Energy and Water cycle Experiment (GEWEX) Cloud Assessment, initiated in 2005 by the GEWEX Radiation Panel, provided the first coordinated intercomparison of publicly available, standard global cloud products (gridded, monthly statistics) retrieved from measurements of multi-spectral imagers (some with multiangle view and polarization capabilities), IR sounders and lidar. Cloud properties under study include cloud amount, cloud height (in terms of pressure, temperature or altitude), cloud radiative properties (optical depth or emissivity), cloud thermodynamic phase and bulk microphysical properties (effective particle size and water path). Differences in average cloud properties, especially in the amount of high-level clouds, are mostly explained by the inherent instrument measurement capability for detecting and/or identifying optically thin cirrus, especially when overlying low-level clouds. The study of long-term variations with these datasets requires consideration of many factors. A monthly, gridded database, in common format, facilitates further assessments, climate studies and the evaluation of climate models.

  19. The open science grid

    International Nuclear Information System (INIS)

    Pordes, R.

    2004-01-01

    The U.S. LHC Tier-1 and Tier-2 laboratories and universities are developing production Grids to support LHC applications running across a worldwide Grid computing system. Together with partners in computer science, physics grid projects and active experiments, we will build a common national production grid infrastructure which is open in its architecture, implementation and use. The Open Science Grid (OSG) model builds upon the successful approach of last year's joint Grid2003 project. The Grid3 shared infrastructure has for over eight months provided significant computational resources and throughput to a range of applications, including ATLAS and CMS data challenges, SDSS, LIGO, and biology analyses, and computer science demonstrators and experiments. To move towards LHC-scale data management, access and analysis capabilities, we must increase the scale, services, and sustainability of the current infrastructure by an order of magnitude or more. Thus, we must achieve a significant upgrade in its functionalities and technologies. The initial OSG partners will build upon a fully usable, sustainable and robust grid. Initial partners include the US LHC collaborations, DOE and NSF Laboratories and Universities and Trillium Grid projects. The approach is to federate with other application communities in the U.S. to build a shared infrastructure open to other sciences and capable of being modified and improved to respond to needs of other applications, including CDF, D0, BaBar, and RHIC experiments. We describe the application-driven, engineered services of the OSG, short term plans and status, and the roadmap for a consortium, its partnerships and national focus

  20. Wood biomass gasification in the world today

    International Nuclear Information System (INIS)

    Nikolikj, Ognjen; Perishikj, Radovan; Mikulikj, Jurica

    1999-01-01

    Today, gasification technologies of various kinds represent an increasingly interesting option for energy production. The article describes a biomass (waste wood) gasification plant operated by Sydkraft in Vernamo, Sweden. (Author)

  1. Review of Huebert’s Libertarianism Today

    OpenAIRE

    Walter E. Block

    2010-01-01

    Libertarianism Today, by Jacob Huebert (Santa Barbara, CA: Praeger, 2010), is an excellent introduction to libertarianism. In contrast to many other recent books about libertarianism, a consistent non-compromising libertarianism is defended throughout this book.

  2. The evolving DOT enterprise : today toward tomorrow.

    Science.gov (United States)

    2013-04-01

    Departments of transportation (DOTs) today are being shaped by a wide range of factors, some of which are directly managed and controlled within the transportation industry, while others are external factors shaping the demand for transportatio...

  3. Desktop grid computing

    CERN Document Server

    Cerin, Christophe

    2012-01-01

    Desktop Grid Computing presents common techniques used in numerous models, algorithms, and tools developed during the last decade to implement desktop grid computing. These techniques enable the solution of many important sub-problems for middleware design, including scheduling, data management, security, load balancing, result certification, and fault tolerance. The book's first part covers the initial ideas and basic concepts of desktop grid computing. The second part explores challenging current and future problems. Each chapter presents the sub-problems, discusses theoretical and practical

  4. Transmission grid security

    CERN Document Server

    Haarla, Liisa; Hirvonen, Ritva; Labeau, Pierre-Etienne

    2011-01-01

    In response to the growing importance of power system security and reliability, "Transmission Grid Security" proposes a systematic and probabilistic approach for transmission grid security analysis. The analysis presented uses probabilistic safety assessment (PSA) and takes into account the power system dynamics after severe faults. In the method shown in this book the power system states (stable, not stable, system breakdown, etc.) are connected with the substation reliability model. In this way it is possible to: estimate the system-wide consequences of grid faults; identify a chain of eve

  5. Trends in life science grid: from computing grid to knowledge grid

    Directory of Open Access Journals (Sweden)

    Konagaya Akihiko

    2006-12-01

    Full Text Available Abstract Background: Grid computing has great potential to become a standard cyberinfrastructure for the life sciences, which often require high-performance computing and large-scale data handling that exceed the computing capacity of a single institution. Results: This survey reviews the latest grid technologies from the viewpoints of computing grid, data grid and knowledge grid. Computing grid technologies have matured enough to solve high-throughput real-world life science problems. Data grid technologies are strong candidates for realizing a "resourceome" for bioinformatics. Knowledge grids should be designed not only for sharing explicit knowledge on computers but also for community formation for sharing tacit knowledge among a community. Conclusion: Extending the concept of grid from computing grid to knowledge grid, it is possible to make use of a grid not only as sharable computing resources, but also as the time and place in which people work together, create knowledge, and share knowledge and experiences in a community.

  6. The CLOUD experiment

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    The Cosmics Leaving Outdoor Droplets (CLOUD) experiment as shown by Jasper Kirkby (spokesperson). Kirkby shows a sketch to illustrate the possible link between galactic cosmic rays and cloud formations. The CLOUD experiment uses beams from the PS accelerator at CERN to simulate the effect of cosmic rays on cloud formations in the Earth's atmosphere. It is thought that cosmic ray intensity is linked to the amount of low cloud cover due to the formation of aerosols, which induce condensation.

  7. BUSINESS INTELLIGENCE IN CLOUD

    OpenAIRE

    Celina M. Olszak

    2014-01-01

    The paper reviews and critiques current research on Business Intelligence (BI) in the cloud. This review highlights that organizations face various challenges when using BI in the cloud. The research objectives for this study are a conceptualization of the BI-in-the-cloud issue, as well as an investigation of some benefits and risks of BI in the cloud. The study was based mainly on a critical analysis of the literature and some reports on BI cloud usage. The results of this research can be used by IT and business leaders ...

  8. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. The cloud provides unlimited computation power, memory, storage and, especially, opportunities for collaboration. Cloud-enabled robots are divided into two categories: standalone and networked robots. This article surveys cloud robotic platforms and standalone and networked robotic work such as grasping, simultaneous localization and mapping (SLAM) and monitoring.

  9. Cloud Processed CCN Suppress Stratus Cloud Drizzle

    Science.gov (United States)

    Hudson, J. G.; Noble, S. R., Jr.

    2017-12-01

    Conversion of sulfur dioxide to sulfate within cloud droplets increases the sizes and decreases the critical supersaturation, Sc, of cloud residual particles that had nucleated the droplets. Since other particles remain at the same sizes and Sc, a size and Sc gap is often observed. Hudson et al. (2015) showed higher cloud droplet concentrations (Nc) in stratus clouds associated with bimodal high-resolution CCN spectra from the DRI CCN spectrometer, compared to clouds associated with unimodal CCN spectra (not cloud processed). Here we show that CCN spectral shape (bimodal or unimodal) affects all aspects of stratus cloud microphysics and drizzle. Panel A shows mean differential cloud droplet spectra that have been divided according to traditional slopes, k, of the 131 measured CCN spectra in the Marine Stratus/Stratocumulus Experiment (MASE) off the central California coast. k is generally high within the supersaturation, S, range of stratus clouds (< 0.5%). Because cloud processing decreases the Sc of some particles, it reduces k. Panel A shows higher concentrations of small cloud droplets in clouds apparently grown on lower-k CCN than in clouds grown on higher-k CCN. At small droplet sizes the concentrations follow the k order of the legend: black, red, green, blue (lowest to highest k). Above 13 µm diameter the lines cross and the hierarchy reverses, so that blue (highest k) has the highest concentrations, followed by green, red and black (lowest k). This reversed hierarchy continues into the drizzle size range (panel B), where the most drizzle drops, Nd, are found in clouds grown on the least cloud-processed CCN (blue), while clouds grown on the most processed CCN (black) have the lowest Nd. Suppression of stratus cloud drizzle by cloud processing is an additional second indirect aerosol effect (IAE) that, along with the enhancement of the first IAE by higher Nc (panel A), is above and beyond the original IAE. However, further similar analysis is needed in other cloud regimes to determine if MASE was
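
    For readers unfamiliar with the "slope" terminology above: CCN activation spectra are conventionally fitted with a power law (a standard convention in the CCN literature, not a formula quoted from this record),

        N(S) = C S^k,

    where N(S) is the cumulative concentration of CCN active at supersaturations up to S, and C and k are fit parameters. Cloud processing lowers the critical supersaturation Sc of processed particles, which flattens the spectrum and reduces k, consistent with the low-k/bimodal association described above.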

  10. Enterprise content management in the cloud

    Directory of Open Access Journals (Sweden)

    Jaroslava Klegová

    2013-01-01

    Full Text Available At present, the attention of many organizations is concentrated on Enterprise Content Management (ECM) systems. Unstructured content grows exponentially, and an Enterprise Content Management system helps to capture, store, manage, integrate and deliver all forms of content across the company. Today, decision makers have the possibility to move ECM systems to the cloud and take advantage of cloud computing. A cloud solution can provide a crucial competitive advantage; for example, it can reduce fixed IT department costs and ensure faster ECM implementation. To achieve the maximum level of benefit from the implementation of ECM in the cloud, it is important to understand all possibilities and actions during the implementation. In this paper, a general model of ECM implementation in the cloud is proposed and described. The risk may relate to all aspects of the implementation, such as cost, schedule or quality; this is the reason why the introduced model places emphasis on risk. The aim of the article is to identify the risks of ECM implementation in the cloud and quantify their impact. The article focuses on the Monte Carlo method, a technique that uses random numbers and probability to solve problems. Based on interviews with IT managers, an example of possible scenarios is created and the risk is evaluated using the Monte Carlo method.
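
    As an illustration of how such a Monte Carlo risk evaluation can work, the following minimal Python sketch samples hypothetical implementation risks (names, probabilities and cost ranges are invented for illustration) and estimates the distribution of the total cost overrun.

        import random

        RISKS = [
            # (name, probability of occurring, (min cost, max cost) in EUR)
            ("data migration failure", 0.20, (5000, 30000)),
            ("vendor lock-in rework", 0.10, (10000, 50000)),
            ("schedule slip", 0.35, (2000, 15000)),
        ]

        def one_trial():
            """Sample which risks occur and sum their sampled cost impacts."""
            total = 0.0
            for _, prob, (lo, hi) in RISKS:
                if random.random() < prob:
                    total += random.uniform(lo, hi)
            return total

        trials = sorted(one_trial() for _ in range(100000))
        mean = sum(trials) / len(trials)
        p90 = trials[int(0.9 * len(trials))]
        print(f"expected overrun: {mean:,.0f} EUR, 90th percentile: {p90:,.0f} EUR")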

  11. A Novel Market-Oriented Dynamic Collaborative Cloud Service Platform

    Science.gov (United States)

    Hassan, Mohammad Mehedi; Huh, Eui-Nam

    In today's world the emerging Cloud computing (Weiss, 2007) offers a new computing model where resources such as computing power, storage, online applications and networking infrastructures can be shared as "services" over the internet. Cloud providers (CPs) are incentivized by the profits to be made by charging consumers for access to these services. Consumers, such as enterprises, are attracted by the opportunity to reduce or eliminate costs associated with "in-house" provision of these services.

  12. Analisis Perbandingan Antara Cloud Computing Dengan Sistem Informasi Konvensional

    OpenAIRE

    Harsono, Bagoes

    2011-01-01

    In this era of globalization, nothing can be separated from technology. The development of advanced technologies makes things easier and cheaper. This is what is happening in the development of information technology today. The presence of a new paradigm keeps everyone interested in something new. Cloud computing has come to the community with several highlights to present. Although still a novelty, quite a few people have already benefited from this cloud. Still...

  13. An architecture based on SOA and virtual enterprise principles: OpenNebula for cloud deployment

    CSIR Research Space (South Africa)

    Mvelase, P

    2012-04-01

    Full Text Available Today, enterprises have to survive in a dynamically changing business environment. Cloud computing presents a new business model in which the Information Technology services supporting the business are provided by partners rather than in-house. The idea...

  14. Commercial trading of IaaS cloud resources

    CERN Multimedia

    CERN. Geneva; Dr. Watzl, Johannes

    2014-01-01

    Dr. Johannes Watzl is responsible for Product Management at Deutsche Börse Cloud Exchange. His work is focused on the specification and introduction of new tradable products and product features. Prior to his role at Deutsche Börse Cloud Exchange, Johannes was a researcher at Ludwig-Maximilians-Universität München, where he worked on European Commission funded projects in the field of distributed computing and standardisation in grid and cloud computing, and obtained his PhD. He started research on the...

  15. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services it introduces will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. The paper tries to integrate a cloud-computing-specific model into aircraft design. This work has achieved good results in sharing licenses for large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  16. Lincoln Laboratory Grid

    Data.gov (United States)

    Federal Laboratory Consortium — The Lincoln Laboratory Grid (LLGrid) is an interactive, on-demand parallel computing system that uses a large computing cluster to enable Laboratory researchers to...

  17. CMS computing on grid

    International Nuclear Information System (INIS)

    Guan Wen; Sun Gongxing

    2007-01-01

    CMS has adopted a distributed system of services which implements the CMS application view on top of Grid services. An overview of CMS services will be covered, with emphasis on CMS data management and workload management. (authors)

  18. Technology Roadmaps: Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    The development of Technology Roadmaps: Smart Grids -- which the IEA defines as an electricity network that uses digital and other advanced technologies to monitor and manage the transport of electricity from all generation sources to meet the varying electricity demands of end users -- is essential if the global community is to achieve shared goals for energy security, economic development and climate change mitigation. Unfortunately, existing misunderstandings of exactly what smart grids are and the physical and institutional complexity of electricity systems make it difficult to implement smart grids on the scale that is needed. This roadmap sets out specific steps needed over the coming years to achieve milestones that will allow smart grids to deliver a clean energy future.

  19. World Wide Grid

    CERN Multimedia

    Grätzel von Grätz, Philipp

    2007-01-01

    Whether for genetic risk analysis or 3D reconstruction of the cerebral vessels: modern medicine requires ever more computing power. With a grid infrastructure, this can be called upon from the network as needed. (4 pages)

  20. Spacer grid corner gusset

    International Nuclear Information System (INIS)

    Larson, J.G.

    1984-01-01

    There is provided a spacer grid for a bundle of longitudinally extending rods in spaced, generally parallel relationship, comprising spacing means for holding the rods in spaced, generally parallel relationship. The spacing means includes at least one exterior grid strip circumscribing the bundle of rods along its periphery, that grid strip having a first edge defining the boundary of the strip in one longitudinal direction, a second edge defining the boundary of the strip in the other longitudinal direction, and at least one band formed therein parallel to the longitudinal direction, with a plurality of corner gussets truncating each of a plurality of corners formed by the band and the first and second edges.

  1. Smart grids - French Expertise

    International Nuclear Information System (INIS)

    2015-11-01

    The adaptation of electrical systems is the focus of major work worldwide. Bringing electricity to new territories, modernizing existing electricity grids, implementing energy efficiency policies and deploying renewable energies, developing new uses for electricity, introducing electric vehicles: these are the challenges facing a multitude of regions and countries. Smart Grids are the result of the convergence of electrical system technologies with information and communications technologies, and they play a key role in addressing the above challenges. Smart Grid development is a major priority for both public and private-sector actors in France. The experience of French companies has grown with the current French electricity system, a system that already shows extensive levels of 'intelligence', efficiency and competitiveness. French expertise also leverages substantial competence in 'systems engineering', and can provide a tailored response to all sorts of needs. French products and services span all the technical and commercial building blocks that make up the Smart Grid value chain. They address the following issues: improving the use and valuation of renewable energies and decentralized means of production, by optimizing the balance between generation and consumption; strengthening the intelligence of the transmission and distribution grids, where developing 'Supergrid', digitizing substations in transmission networks, and automating the distribution grids are the focus of a great many projects designed to reinforce the 'self-healing' capacity of the grid; improving the valuation of decentralized flexibilities, which involves, among others, deploying smart meters, reinforcing active energy efficiency measures, and boosting consumers' contribution to grid balancing via practices such as demand response, which implies the aggregation of flexibility among residential, business, and/or industrial sites; and addressing current technological challenges, in

  2. US National Grid

    Data.gov (United States)

    Kansas Data Access and Support Center — This is a polygon feature data layer of the United States National Grid (1000 m x 1000 m polygons) constructed by the Center for Interdisciplinary Geospatial Information...

  3. Controlling smart grid adaptivity

    NARCIS (Netherlands)

    Toersche, Hermen; Nykamp, Stefan; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2012-01-01

    Methods are discussed for planning oriented smart grid control to cope with scenarios with limited predictability, supporting an increasing penetration of stochastic renewable resources. The performance of these methods is evaluated with simulations using measured wind generation and consumption

  4. Grid Computing Education Support

    Energy Technology Data Exchange (ETDEWEB)

    Steven Crumb

    2008-01-15

    The GGF Student Scholar program gave GGF the opportunity to bring over sixty qualified graduate and undergraduate students with interests in grid technologies to its three annual events over the three-year program.

  5. SIRTA, a ground-based atmospheric observatory for cloud and aerosol research

    Directory of Open Access Journals (Sweden)

    M. Haeffelin

    2005-02-01

    Full Text Available Ground-based remote sensing observatories have a crucial role to play in providing data to improve our understanding of atmospheric processes, to test the performance of atmospheric models, and to develop new methods for future space-borne observations. Institut Pierre Simon Laplace, a French research institute in environmental sciences, created the Site Instrumental de Recherche par Télédétection Atmosphérique (SIRTA), an atmospheric observatory with these goals in mind. Today SIRTA, located 20 km south of Paris, operates a suite of state-of-the-art active and passive remote sensing instruments dedicated to routine monitoring of cloud and aerosol properties and key atmospheric parameters. A detailed description of the state of the atmospheric column is progressively archived and made accessible to the scientific community. This paper describes the SIRTA infrastructure and database, and provides an overview of the scientific research associated with the observatory. Researchers using SIRTA data conduct research on atmospheric processes involving complex interactions between clouds, aerosols, and radiative and dynamic processes in the atmospheric column. Atmospheric modellers working with SIRTA observations develop new methods to test their models and innovative analyses to improve parametric representations of sub-grid processes that must be accounted for in the model. SIRTA provides the means to develop data interpretation tools for future active remote sensing missions in space (e.g. CloudSat and CALIPSO). SIRTA observation and research activities take place in networks of atmospheric observatories that allow scientists to access consistent data sets from diverse regions of the globe.

  6. Relationship between cloud radiative forcing, cloud fraction and cloud albedo, and new surface-based approach for determining cloud albedo

    OpenAIRE

    Y. Liu; W. Wu; M. P. Jensen; T. Toto

    2011-01-01

    This paper focuses on three interconnected topics: (1) the quantitative relationship between surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo; (2) a surface-based approach for measuring cloud albedo; (3) multiscale (diurnal, annual and inter-annual) variations and covariations of surface shortwave cloud radiative forcing, cloud fraction, and cloud albedo. An analytical expression is first derived to quantify the relationship between cloud radiative forcing, cloud fractio...

  7. GStat 2.0: Grid Information System Status Monitoring

    CERN Document Server

    Field, L; Tsai, M; CERN. Geneva. IT Department

    2010-01-01

    Grid Information Systems are mission-critical components in today's production grid infrastructures. They enable users, applications and services to discover which services exist in the infrastructure and further information about the service structure and state. It is therefore important that the information system components themselves are functioning correctly and that the information content is reliable. Grid Status (GStat) is a tool that monitors the structural integrity of the EGEE information system, which is a hierarchical system built out of more than 260 site-level and approximately 70 global aggregation services. It also checks the information content and presents summary and history displays for Grid Operators and System Administrators. A major new version, GStat 2.0, aims to build on the production experience of GStat and provides additional functionality, which enables it to be extended and combined with other tools

  8. Off grid Solar power supply: the real green development

    International Nuclear Information System (INIS)

    Dellinger, B.; Mansard, M.

    2010-01-01

    Solar experience now spans 30 years. In spite of the tremendous growth of the developed world's grid-connected market, quite a number of companies remain seriously involved in the off-grid sector. Solar started in the field as the sole solution for giving rural communities access to energy and water. With major actors involved at an early stage, a number of reliable technical solutions were developed and implemented. These solutions have gradually drawn the attention of industrial companies investing in emerging countries and needing reliable energy sources. On top of improving the standard of living, off-grid solar solutions also create economic opportunities for the local private sector, which gets involved in maintenance and services around the energy system. As of today, hundreds of thousands of sites operate daily. However, the needs remain extremely high. That is the reason why off-grid solar remains a major tool for sustainable development. (author)

  9. Beyond grid security

    International Nuclear Information System (INIS)

    Hoeft, B; Epting, U; Koenig, T

    2008-01-01

    While many fields relevant to Grid security are already covered by existing working groups, their remit rarely goes beyond the scope of the Grid infrastructure itself. However, security issues pertaining to the internal set-up of compute centres have at least as much impact on Grid security. Thus, this talk will present briefly the EU ISSeG project (Integrated Site Security for Grids). In contrast to groups such as OSCT (Operational Security Coordination Team) and JSPG (Joint Security Policy Group), the purpose of ISSeG is to provide a holistic approach to security for Grid computer centres, from strategic considerations to an implementation plan and its deployment. The generalised methodology of Integrated Site Security (ISS) is based on the knowledge gained during its implementation at several sites as well as through security audits, and this will be briefly discussed. Several examples of ISS implementation tasks at the Forschungszentrum Karlsruhe will be presented, including segregation of the network for administration and maintenance and the implementation of Application Gateways. Furthermore, the web-based ISSeG training material will be introduced. This aims to offer ISS implementation guidance to other Grid installations in order to help avoid common pitfalls

  10. Near-Body Grid Adaption for Overset Grids

    Science.gov (United States)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
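
    As a rough illustration of the refinement step described above (not OVERFLOW's implementation), the sketch below halves the spacing along one curvilinear grid line using parametric cubic interpolation in the computational coordinate; the grid-line shape and counts are invented, and the curvature/stretching biasing is omitted.

        # Refine a curvilinear grid line by parametric cubic interpolation:
        # fit x(j), y(j) against the computational index j, then evaluate at
        # inserted midpoints to halve the spacing in computational space.
        import numpy as np
        from scipy.interpolate import CubicSpline

        j = np.arange(10, dtype=float)            # computational coordinate
        x, y = np.cos(0.2 * j), np.sin(0.2 * j)   # toy curved grid line

        sx, sy = CubicSpline(j, x), CubicSpline(j, y)

        j_fine = np.arange(0.0, 9.01, 0.5)        # insert midpoints
        x_fine, y_fine = sx(j_fine), sy(j_fine)
        print(len(j), "->", len(j_fine), "points along the refined line")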

  11. Quantifying Uncertainty in Satellite-Retrieved Land Surface Temperature from Cloud Detection Errors

    Directory of Open Access Journals (Sweden)

    Claire E. Bulgin

    2018-04-01

    Clouds remain one of the largest sources of uncertainty in remote sensing of surface temperature in the infrared, but this uncertainty has not generally been quantified. We present a new approach to do so, applied here to the Advanced Along-Track Scanning Radiometer (AATSR). We use an ensemble of cloud masks based on independent methodologies to investigate the magnitude of cloud detection uncertainties in area-average Land Surface Temperature (LST) retrieval. We find that at a grid resolution of 625 km² (commensurate with a 0.25° grid size at the tropics), cloud detection uncertainties are positively correlated with cloud-cover fraction in the cell and are larger during the day than at night. Daytime cloud detection uncertainties range between 2.5 K for clear-sky fractions of 10–20% and 1.03 K for clear-sky fractions of 90–100%. Corresponding night-time uncertainties are 1.6 K and 0.38 K, respectively. Cloud detection uncertainty shows a weaker positive correlation with the number of biomes present within a grid cell, used as a measure of heterogeneity in the background against which the cloud detection must operate (e.g., surface temperature, emissivity and reflectance). Uncertainty due to cloud detection errors is strongly dependent on the dominant land cover classification. We find cloud detection uncertainties of 1.95 K over permanent snow and ice, 1.2 K over open forest, 0.9–1 K over bare soils and 0.09 K over mosaic cropland, for a standardised clear-sky fraction of 74.2%. As the uncertainties arising from cloud detection errors are of a significant magnitude for many surface types, and spatially heterogeneous where land classification varies rapidly, LST data producers are encouraged to quantify cloud-related uncertainties in gridded products.
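
    The ensemble idea reduces to a simple computation: form the clear-sky area average once per cloud mask and report the spread across masks. A minimal sketch with synthetic data standing in for the AATSR scene and masks:

        import numpy as np

        rng = np.random.default_rng(0)
        lst = 290.0 + 5.0 * rng.standard_normal((25, 25))   # toy LST scene [K]

        # Three "independent" cloud masks (True = clear) that disagree slightly.
        base = rng.random((25, 25)) > 0.4
        masks = [base,
                 base & (rng.random((25, 25)) > 0.05),
                 base | (rng.random((25, 25)) < 0.05)]

        cell_means = np.array([lst[m].mean() for m in masks])  # one average per mask
        print(f"spread across masks: {cell_means.std(ddof=1):.2f} K")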

  12. Cloud CCN feedback

    International Nuclear Information System (INIS)

    Hudson, J.G.

    1992-01-01

    Cloud microphysics affects cloud albedo, precipitation efficiency and the extent of cloud feedback in response to global warming. Compared to other cloud parameters, microphysics is unique in its large range of variability and in the fact that much of the variability is anthropogenic. Probably the most important determinant of cloud microphysics is the spectrum of cloud condensation nuclei (CCN), which displays considerable variability and has a large anthropogenic component. When analyzed in combination, three field observation projects display the interrelationship between CCN and cloud microphysics. CCN were measured with the Desert Research Institute (DRI) instantaneous CCN spectrometer. Cloud microphysical measurements were obtained with the National Center for Atmospheric Research Lockheed Electra. Since CCN and cloud microphysics each affect the other, a positive feedback mechanism can result.

  13. The Impact of Cloud Computing Technologies in E-learning

    Directory of Open Access Journals (Sweden)

    Hosam Farouk El-Sofany

    2013-01-01

    Cloud computing is a new computing model that builds on grid computing, distributed computing, parallel computing and virtualization technologies. It is the core technology of the next generation of network computing platforms and, especially in the field of education, the basic environment and platform for future E-learning. It provides secure data storage, convenient Internet services and strong computing power. This article focuses on the application of cloud computing in the E-learning environment. The research study shows that the cloud platform is valuable to both students and instructors for achieving the course objectives. The paper presents the nature, benefits and services of cloud computing as a platform for the e-learning environment.

  14. The Representation of Tropical Cyclones Within the Global Non-Hydrostatic Goddard Earth Observing System Model (GEOS-5) at Cloud-Permitting Resolutions

    Science.gov (United States)

    Putman, William M.

    2010-01-01

    The Goddard Earth Observing System Model (GEOS-5), an earth system model developed in the NASA Global Modeling and Assimilation Office (GMAO), has integrated the non-hydrostatic finite-volume dynamical core on the cubed-sphere grid. The extension to a non-hydrostatic dynamical framework and the quasi-uniform cubed-sphere geometry permit the efficient exploration of global weather and climate modeling at cloud-permitting resolutions of 10- to 4-km on today's high performance computing platforms. We have explored a series of incremental increases in global resolution with GEOS-5, from its standard 72-level 27-km resolution (approx. 5.5 million cells covering the globe from the surface to 0.1 hPa) down to 3.5-km (approx. 3.6 billion cells).

  15. User Inspired Management of Scientific Jobs in Grids and Clouds

    Science.gov (United States)

    Withana, Eran Chinthaka

    2011-01-01

    From time-critical, real time computational experimentation to applications which process petabytes of data there is a continuing search for faster, more responsive computing platforms capable of supporting computational experimentation. Weather forecast models, for instance, process gigabytes of data to produce regional (mesoscale) predictions on…

  16. Understanding the Benefits of Dispersed Grid-Connected Photovoltaics: From Avoiding the Next Major Outage to Taming Wholesale Power Markets

    International Nuclear Information System (INIS)

    Letendre, Steven E.; Perez, Richard

    2006-01-01

    Thanks to new solar resource assessment techniques using cloud cover data available from geostationary satellites, it is apparent that grid-connected PV installations can serve to enhance electric grid reliability, preventing or hastening recovery from major power outages and serving to mitigate extreme price spikes in wholesale energy markets. (author)

  17. Smart Grid Integration Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Troxell, Wade [Colorado State Univ., Fort Collins, CO (United States)

    2011-12-22

    The initial federal funding for the Colorado State University Smart Grid Integration Laboratory is through a Congressionally Directed Project (CDP), DE-OE0000070 Smart Grid Integration Laboratory. The original program was requested in three one-year increments for staff acquisition, curriculum development, and instrumentation, all of which will benefit the Laboratory. This report focuses on the initial phase of staff acquisition, which was directed and administered by DOE NETL/West Virginia under Project Officer Tom George. Using this CDP funding, we have developed the leadership and intellectual capacity for the SGIC. This was accomplished by investing in (hiring) a core team of Smart Grid Systems engineering faculty focused on education, research, and innovation of a secure and smart grid infrastructure. The Smart Grid Integration Laboratory will be housed with the separately funded Integrid Laboratory as part of CSU's overall Smart Grid Integration Center (SGIC). The period of performance of this grant was 10/1/2009 to 9/30/2011, which included one no-cost extension due to time delays in faculty hiring. The Smart Grid Integration Laboratory's focus is to build foundations that help graduate and undergraduate students acquire systems engineering knowledge, to conduct innovative research, and to team externally with smart grid organizations. The results of the separately funded Smart Grid Workforce Education Workshop (May 2009), sponsored by the City of Fort Collins, Northern Colorado Clean Energy Cluster, Colorado State University Continuing Education, Spirae, and Siemens, have been used to guide the hiring of faculty and the program curriculum and education plan. This project develops faculty leaders with the intellectual capacity to inspire its students to become leaders that substantially contribute to the development and maintenance of Smart Grid infrastructure through topics such as: (1) Distributed energy systems modeling and control; (2) Energy and power conversion; (3

  18. Final Technical Report for "High-resolution global modeling of the effects of subgrid-scale clouds and turbulence on precipitating cloud systems"

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent [Univ. of Wisconsin, Milwaukee, WI (United States)

    2016-11-25

    The Multiscale Modeling Framework (MMF) embeds a cloud-resolving model in each grid column of a General Circulation Model (GCM). An MMF model does not need to use a deep convective parameterization, and thereby dispenses with the uncertainties in such parameterizations. However, MMF models grossly under-resolve shallow boundary-layer clouds, and hence those clouds may still benefit from parameterization. In this grant, we successfully created a climate model that embeds a cloud parameterization ("CLUBB") within an MMF model. This involved interfacing CLUBB's clouds with microphysics and reducing computational cost. We have evaluated the resulting simulated clouds and precipitation against satellite observations. The chief benefit of the project is to provide an MMF model that has an improved representation of clouds and that provides improved simulations of precipitation.

  19. Hybrid cloud for dummies

    CERN Document Server

    Hurwitz, Judith; Halper, Fern; Kirsch, Dan

    2012-01-01

    Understand the cloud and implement a cloud strategy for your business Cloud computing enables companies to save money by leasing storage space and accessing technology services through the Internet instead of buying and maintaining equipment and support services. Because it has its own unique set of challenges, cloud computing requires careful explanation. This easy-to-follow guide shows IT managers and support staff just what cloud computing is, how to deliver and manage cloud computing services, how to choose a service provider, and how to go about implementation. It also covers security and

  20. Secure cloud computing

    CERN Document Server

    Jajodia, Sushil; Samarati, Pierangela; Singhal, Anoop; Swarup, Vipin; Wang, Cliff

    2014-01-01

    This book presents a range of cloud computing security challenges and promising solution paths. The first two chapters focus on practical considerations of cloud computing. In Chapter 1, Chandramouli, Iorga, and Chokani describe the evolution of cloud computing and the current state of practice, followed by the challenges of cryptographic key management in the cloud. In Chapter 2, Chen and Sion present a dollar cost model of cloud computing and explore the economic viability of cloud computing with and without security mechanisms involving cryptographic mechanisms. The next two chapters addres

  1. Clouds of Venus

    Energy Technology Data Exchange (ETDEWEB)

    Knollenberg, R G [Particle Measuring Systems, Inc., 1855 South 57th Court, Boulder, Colorado 80301, U.S.A.; Hansen, J [National Aeronautics and Space Administration, New York (USA). Goddard Inst. for Space Studies; Ragent, B [National Aeronautics and Space Administration, Moffett Field, Calif. (USA). Ames Research Center; Martonchik, J [Jet Propulsion Lab., Pasadena, Calif. (USA); Tomasko, M [Arizona Univ., Tucson (USA)

    1977-05-01

    The current state of knowledge of the Venusian clouds is reviewed. The visible clouds of Venus are shown to be quite similar to low level terrestrial hazes of strong anthropogenic influence. Possible nucleation and particle growth mechanisms are presented. The Pioneer Venus experiments that emphasize cloud measurements are described and their expected findings are discussed in detail. The results of these experiments should define the cloud particle composition, microphysics, thermal and radiative heat budget, rough dynamical features and horizontal and vertical variations in these and other parameters. This information should be sufficient to initialize cloud models which can be used to explain the cloud formation, decay, and particle life cycle.

  2. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.

  3. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
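
    As a rough sketch of what such an interware extension must abstract (illustrative Python only, not VMDIRAC's actual API), the snippet below defines one instantiate/monitor/stop interface over interchangeable cloud back-ends, with a stand-in back-end so it runs without credentials:

        from abc import ABC, abstractmethod

        class CloudEndpoint(ABC):
            """Uniform interface over one cloud infrastructure (EC2, OpenStack, ...)."""

            @abstractmethod
            def instantiate(self, image: str, flavor: str) -> str:
                """Boot a contextualised VM and return its identifier."""

            @abstractmethod
            def status(self, vm_id: str) -> str:
                """Return e.g. 'booting', 'running' or 'halted'."""

            @abstractmethod
            def stop(self, vm_id: str) -> None:
                """Dispose of the VM and release its resources."""

        class FakeEndpoint(CloudEndpoint):
            """Stand-in back-end; a real one would call a cloud provider's API."""
            def __init__(self):
                self._vms = {}
            def instantiate(self, image, flavor):
                vm_id = f"vm-{len(self._vms)}"
                self._vms[vm_id] = "running"
                return vm_id
            def status(self, vm_id):
                return self._vms[vm_id]
            def stop(self, vm_id):
                self._vms[vm_id] = "halted"

        endpoint = FakeEndpoint()
        vm = endpoint.instantiate(image="worker-node", flavor="m1.large")  # names invented
        print(vm, endpoint.status(vm))
        endpoint.stop(vm)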

  4. Cloudbus Toolkit for Market-Oriented Cloud Computing

    Science.gov (United States)

    Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian

    This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.

  5. Multi-Spectral Cloud Retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS)

    Science.gov (United States)

    Platnick, Steven

    2004-01-01

    MODIS observations from the NASA EOS Terra spacecraft (1030 local time equatorial sun-synchronous crossing), launched in December 1999, have provided a unique set of Earth observation data. With the launch of the NASA EOS Aqua spacecraft (1330 local time crossing) in May 2002, two MODIS daytime (sunlit) and nighttime observations are now available in a 24-hour period, allowing some measure of diurnal variability. A comprehensive set of remote sensing algorithms for cloud masking and the retrieval of cloud physical and optical properties has been developed by members of the MODIS atmosphere science team. The archived products from these algorithms have applications in climate modeling, climate change studies, and numerical weather prediction, as well as fundamental atmospheric research. In addition to an extensive cloud mask, products include cloud-top properties (temperature, pressure, effective emissivity), cloud thermodynamic phase, cloud optical and microphysical parameters (optical thickness, effective particle radius, water path), as well as derived statistics. An overview of the instrument and cloud algorithms is presented along with various examples, including an initial analysis of several operational global gridded (Level-3) cloud products from the two platforms. Statistics of cloud optical and microphysical properties as a function of latitude for land and ocean regions are shown. Current algorithm research efforts are also discussed.
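
    The Level-2 to Level-3 step mentioned here amounts to binning pixel-level retrievals onto a regular grid and accumulating statistics. A minimal sketch with synthetic pixels, assuming a plain 1-degree mean (the operational aggregation also carries QA weighting, histograms and uncertainties):

        import numpy as np

        rng = np.random.default_rng(2)
        n_pix = 100_000
        lat = rng.uniform(-30.0, 30.0, n_pix)
        lon = rng.uniform(-180.0, 180.0, n_pix)
        cot = rng.lognormal(mean=2.0, sigma=0.8, size=n_pix)  # cloud optical thickness

        # 1-degree bin index per pixel, flattened to a single array index.
        flat = np.floor(lat + 90.0).astype(int) * 360 + np.floor(lon + 180.0).astype(int)

        sums = np.bincount(flat, weights=cot, minlength=180 * 360)
        counts = np.bincount(flat, minlength=180 * 360)
        mean_cot = np.divide(sums, counts, out=np.full(sums.shape, np.nan),
                             where=counts > 0).reshape(180, 360)
        print("1-degree cells with data:", np.isfinite(mean_cot).sum())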

  6. Scalability of Parallel Scientific Applications on the Cloud

    Directory of Open Access Journals (Sweden)

    Satish Narayana Srirama

    2011-01-01

    Cloud computing, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. To study the effects of moving parallel scientific applications onto the cloud, we deployed several benchmark applications, such as matrix-vector operations and the NAS parallel benchmarks, as well as DOUG (Domain decomposition On Unstructured Grids) on the cloud. DOUG is an open source software package for parallel iterative solution of very large sparse systems of linear equations. The detailed analysis of DOUG on the cloud showed that parallel applications benefit greatly and scale reasonably well on the cloud. We could also observe the limitations of the cloud and compare its performance with that of a cluster. However, for efficiently running scientific applications on cloud infrastructure, the applications must be reduced to frameworks that can successfully exploit the cloud resources, like the MapReduce framework. Several iterative and embarrassingly parallel algorithms were reduced to the MapReduce model and their performance measured and analyzed. The analysis showed that Hadoop MapReduce has significant problems with iterative methods, while it suits embarrassingly parallel algorithms well. Scientific computing often uses iterative methods to solve large problems. Thus, for scientific computing on the cloud, this paper raises the necessity for better frameworks or optimizations for MapReduce.
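
    The contrast drawn here can be seen in miniature below: an embarrassingly parallel job is one map over independent chunks, while an iterative solver (here a two-unknown Jacobi iteration) needs the whole updated state at every step, which in Hadoop MapReduce becomes a separate job launch and HDFS round-trip per iteration. Plain Python multiprocessing stands in for the cluster, and all data are invented:

        import numpy as np
        from multiprocessing import Pool

        def chunk_work(chunk):
            return float(np.sum(chunk ** 2))    # independent work: one map task

        def jacobi_step(x, A, b):
            D = np.diag(A)
            return (b - (A @ x - D * x)) / D    # needs ALL of x: a global barrier

        if __name__ == "__main__":
            data = np.arange(1_000_000, dtype=float)
            with Pool(4) as pool:               # embarrassingly parallel: a single map
                total = sum(pool.map(chunk_work, np.array_split(data, 4)))

            A = np.array([[4.0, 1.0], [1.0, 3.0]])
            b = np.array([1.0, 2.0])
            x = np.zeros(2)
            for _ in range(25):                 # each pass would be a new MapReduce job
                x = jacobi_step(x, A, b)
            print(total, x)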

  7. Cloud Interaction and Safety Features of Mobile Devices

    Directory of Open Access Journals (Sweden)

    Mirsat Yeşiltepe

    2018-02-01

    In this paper, two currently popular mobile operating systems are examined in relation to the concept of the cloud, which today has almost begun to supplant the Internet itself; their differences and the cloud security mechanisms they use are discussed in this environment. One of the compared mobile operating systems represents open source and the other closed source. The other issue discussed in this article is how the mobile environment interacts with the cloud, as compared with cloud communication with computers.

  8. Importance of Grid Center Arrangement

    Science.gov (United States)

    Pasaogullari, O.; Usul, N.

    2012-12-01

    In Digital Elevation Modeling, grid size is accepted to be the most important parameter. Regardless of the point density and/or scale of the source data, it is freely decided by the user. Most of the time, the arrangement of the grid centers is ignored; most GIS packages even omit the choice of grid center coordinates. In our study, the importance of the arrangement of grid centers is investigated. Using the analogy between a raster grid DEM and a bitmap image, the importance of the placement of grid centers in DEMs is measured. The study has been conducted on four different grid DEMs obtained from a half ellipsoid. These grid DEMs are obtained in such a way that they are half a grid size apart from each other. The resulting grid DEMs are investigated through similarity measures. Image processing scientists use different measures to investigate the dis/similarity between images and the amount of different information they carry. The grid DEMs are projected to a finer grid in order to co-center them, and the similarity measures are then applied to each pair of grid DEMs. These similarity measures are adapted to DEMs with band reduction and real-number operations. One of the measures yields a function graph and the others yield measure matrices. Application of the similarity measures to the six grid DEM pairs shows interesting results. Although these four grid DEMs were created with the same method for the same area, surprisingly, thirteen out of fourteen measures state that the grid DEMs half a grid size apart are different from each other. The results indicate that although the grid DEMs carry mutual information, they also contain additional individual information. In other words, grid DEMs constructed half a grid size apart hold non-redundant information.
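
    A minimal reconstruction of the experiment's core idea, with invented dimensions: sample a half ellipsoid on two grids offset by half a cell, resample both onto a common finer grid, and compare. RMSE and correlation stand in for the paper's fourteen measures:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def half_ellipsoid(x, y, a=50.0, b=50.0, c=20.0):
            r2 = (x / a) ** 2 + (y / b) ** 2
            return c * np.sqrt(np.clip(1.0 - r2, 0.0, None))

        def sample(offset, step=2.0, n=40):
            ax = offset + step * np.arange(n)
            X, Y = np.meshgrid(ax, ax, indexing="ij")
            return ax, half_ellipsoid(X - 40.0, Y - 40.0)

        ax0, z0 = sample(0.0)   # first grid
        ax1, z1 = sample(1.0)   # same surface, grid centers shifted by half a cell

        # Project both DEMs onto a common fine grid before comparing.
        fine = np.linspace(2.0, 76.0, 200)
        FX, FY = np.meshgrid(fine, fine, indexing="ij")
        pts = np.stack([FX.ravel(), FY.ravel()], axis=-1)
        f0 = RegularGridInterpolator((ax0, ax0), z0)(pts)
        f1 = RegularGridInterpolator((ax1, ax1), z1)(pts)

        rmse = np.sqrt(np.mean((f0 - f1) ** 2))
        print(f"RMSE = {rmse:.3f}, correlation = {np.corrcoef(f0, f1)[0, 1]:.5f}")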

  9. Strengthen Cloud Computing Security with Federal Identity Management Using Hierarchical Identity-Based Cryptography

    Science.gov (United States)

    Yan, Liang; Rong, Chunming; Zhao, Gansen

    More and more companies are beginning to provide different kinds of cloud computing services for Internet users; at the same time, these services also bring some security problems. Currently, the majority of cloud computing systems provide digital identities for users to access their services, which brings some inconvenience for a hybrid cloud that includes multiple private clouds and/or public clouds. Today most cloud computing systems use asymmetric and traditional public key cryptography to provide data security and mutual authentication. Identity-based cryptography has some attractive characteristics that seem to fit well the requirements of cloud computing. In this paper, by adopting federated identity management together with hierarchical identity-based cryptography (HIBC), not only the key distribution but also the mutual authentication can be simplified in the cloud.

  10. Radiative properties of clouds

    International Nuclear Information System (INIS)

    Twomey, S.

    1993-01-01

    The climatic effects of condensation nuclei in the formation of cloud droplets, and the subsequent role of the cloud droplets as contributors to the planetary short-wave albedo, are emphasized. Microphysical properties of clouds, which can be greatly modified by the degree of mixing with cloud-free air from outside, are discussed. The effect of clouds on visible radiation is assessed through multiple scattering of the radiation. Cloud water or ice absorbs more with increasing wavelength in the near-infrared region, with water vapor providing stronger absorption over narrower wavelength bands. Cloud thermal infrared absorption can be related solely to liquid water content, at least for shallow clouds and clouds in the early development stage. Three-dimensional general circulation models have been used to study the climatic effect of clouds. Such studies (which did not consider variations in cloud albedo) found that the cooling effects due to the increase in planetary short-wave albedo from clouds were offset by heating effects due to thermal infrared absorption by the clouds. Two permanent direct effects of increased pollution are discussed in this chapter: (a) an increase of absorption in the visible and near infrared because of increased amounts of elemental carbon, which gives rise to a warming effect climatically, and (b) an increased optical thickness of clouds due to increasing cloud droplet number concentration caused by increasing cloud condensation nuclei number concentration, which gives rise to a cooling effect climatically. An increase in cloud albedo from 0.7 to 0.87 produces an appreciable climatic perturbation of cooling of up to 2.5 K at the ground, in a hemispheric general circulation model. Effects of pollution on cloud thermal infrared absorption are negligible.

  11. GridCom, Grid Commander: graphical interface for Grid jobs and data management

    International Nuclear Information System (INIS)

    Galaktionov, V.V.

    2011-01-01

    GridCom is a software package that automates access to the resources (jobs and data) of the distributed Grid system. The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  12. Participatory management in today's health care setting

    International Nuclear Information System (INIS)

    Burnham, B.A.

    1987-01-01

    As the health care revolution progresses, so must the management styles of today's leaders. We must ask ourselves whether we are managing tomorrow's work force or the work force of the past. Participatory management may better meet the needs of today's work force. This paper identifies the reasons participatory management is a more effective management style, the methods used to implement a participatory management program, its benefits (such as higher productivity and more efficient, effective implementation and acceptance of change), and the difficulties experienced.

  13. Film Presentation: Projekt Zukunft/Tomorrow Today

    CERN Multimedia

    Carolyn Lee

    2010-01-01

    Projekt Zukunft/Tomorrow Today, by Deutsche Welle (2009)   Deutsche Welle TV’s weekly science journal explores the LHC at CERN with host Ingolf Baur. Please note that we will show both the German and English versions of this broadcast. Each episode is about 27 minutes long. Projekt Zukunft/Tomorrow Today will be presented on Friday, 29 October from 13:00 to 14:00 in the Main Auditorium Language: German version followed by the English version      

  14. Grid and Entrepreneurship Workshop

    CERN Multimedia

    2006-01-01

    The CERN openlab is organising a special workshop about Grid opportunities for entrepreneurship. This one-day event will provide an overview of what is involved in spin-off technology, with a special reference to the context of computing and data Grids. Lectures by experienced entrepreneurs will introduce the key concepts of entrepreneurship and review, in particular, the industrial potential of EGEE (the EU co-funded Enabling Grids for E-sciencE project, led by CERN). Case studies will be given by CEOs of European start-ups already active in the Grid and computing cluster area, and regional experts will provide an overview of efforts in several European regions to stimulate entrepreneurship. This workshop is designed to encourage students and researchers involved or interested in Grid technology to consider the entrepreneurial opportunities that this technology may create in the coming years. This workshop is organized as part of the CERN openlab student programme, which is co-sponsored by CERN, HP, ...

  15. For smart electric grids

    International Nuclear Information System (INIS)

    Tran Thiet, Jean-Paul; Leger, Sebastien; Bressand, Florian; Perez, Yannick; Bacha, Seddik; Laurent, Daniel; Perrin, Marion

    2012-01-01

    The authors identify and discuss the main challenges faced by the French electric grid: the management of electricity demand, the needed improvement of energy efficiency, the evolution of consumers' attitudes, and the integration of new production capacities. They notably point out that France, having until recently lived with an abundance of electricity, now faces the highest consumption peaks in Europe and is therefore exposed to a higher risk of power cuts. They also note that the French energy mix is slowly evolving, and they outline the problems raised by the fact that the renewable energies to be developed are decentralised and intermittent. They provide an overview of present smart grid developments, outline their innovative characteristics and the challenges raised by their development, and compare international examples. They show that smart grids enable a better-adapted supply and decentralisation. A set of proposals is formulated on how to finance and organise the reconfiguration of electric grids, how to increase consumers' responsibility for peak management and demand management, how to create the conditions for the emergence of a European market of smart grids, and how to support self-consumption and the build-up of an energy storage sector.

  16. Smart grid in Denmark 2.0. Implementing three key recommendations from the Smart Grid Network. [DanGrid]; Smart Grid i Danmark 2.0. Implementering af tre centrale anbefalinger fra Smart Grid netvaerket

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-11-01

    smart grid technology. The second barrier is that network companies today do not have a real opportunity to use price signals as an instrument to recover customers' flexibility. This report has developed a roadmap with special focus on grid companies' role, describing the most important steps towards a smart grid. (LN)

  17. Future electrical distribution grids: Smart Grids

    International Nuclear Information System (INIS)

    Hadjsaid, N.; Sabonnadiere, J.C.; Angelier, J.P.

    2010-01-01

    The new energy paradigm faced by distribution networks represents a real scientific challenge. National and EU objectives in terms of environment and energy efficiency, with the resulting regulatory incentives for renewable energies, the deployment of smart meters, and the need to respond to changing needs, including new uses related to electric and plug-in hybrid vehicles, introduce more complexity and favour the evolution towards a smarter grid. The economic interest group IDEA in Grenoble, in connection with the G2ELab power laboratory at the Grenoble Institute of Technology, EDF and Schneider Electric, has been conducting research on the electrical distribution of the future in the presence of distributed generation for ten years. Several innovations have emerged in terms of flexibility and intelligence of the distribution network. One can note intelligent solutions for voltage control, network optimization tools, self-healing techniques, innovative strategies for connecting distributed and intermittent generation, and load control possibilities for the distributor. All these innovations belong firmly to the context of the intelligent networks of tomorrow, 'Smart Grids'. (authors)

  18. Moving towards Cloud Security

    Directory of Open Access Journals (Sweden)

    Edit Szilvia Rubóczki

    2015-01-01

    Cloud computing hosts and delivers many different services via the Internet. There are many reasons why people opt for using cloud resources. Cloud development is increasing fast while many related concerns lag behind, for example mass awareness of cloud security. The new generation uploads videos and pictures to cloud storage without a second thought, but only a few know about data privacy, data management and the ownership of data stored in the cloud. In an enterprise environment, users have to know the rules of cloud usage, yet they often have little knowledge of traditional IT security. It is important to measure the level of their knowledge and to evolve the training system to develop security awareness. The article argues for the importance of new metrics and algorithms for measuring the security awareness of corporate users and employees, so as to include the requirements of emerging cloud security.

  19. Risk perception and risk management in cloud computing: results from a case study of Swiss companies

    OpenAIRE

    Brender, Nathalie; Markov, Iliya

    2013-01-01

    In today's economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, inve...

  20. Grid deformation strategies for CFD analysis of screw compressors

    OpenAIRE

    Rane, S.; Kovacevic, A.; Stosic, N.; Kethidi, M.

    2013-01-01

    Customized grid generation of twin screw machines for CFD analysis is widely used by the refrigeration and air-conditioning industry today, but is currently not suitable for topologies such as those of single screw, variable pitch or tri screw rotors. This paper investigates a technique called key-frame re-meshing that supplies pre-generated unstructured grids to the CFD solver at different time steps. To evaluate its accuracy, the results of an isentropic compression-expansion process in a r...

  1. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    International Nuclear Information System (INIS)

    Gustafson Jr, William I; Berg, Larry K; Easter, Richard C; Ghan, Steven J

    2008-01-01

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization

  2. The Explicit-Cloud Parameterized-Pollutant hybrid approach for aerosol-cloud interactions in multiscale modeling framework models: tracer transport results

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr, William I; Berg, Larry K; Easter, Richard C; Ghan, Steven J [Atmospheric Science and Global Change Division, Pacific Northwest National Laboratory, PO Box 999, MSIN K9-30, Richland, WA (United States)], E-mail: William.Gustafson@pnl.gov

    2008-04-15

    All estimates of aerosol indirect effects on the global energy balance have either completely neglected the influence of aerosol on convective clouds or treated the influence in a highly parameterized manner. Embedding cloud-resolving models (CRMs) within each grid cell of a global model provides a multiscale modeling framework for treating both the influence of aerosols on convective as well as stratiform clouds and the influence of clouds on the aerosol, but treating the interactions explicitly by simulating all aerosol processes in the CRM is computationally prohibitive. An alternate approach is to use horizontal statistics (e.g., cloud mass flux, cloud fraction, and precipitation) from the CRM simulation to drive a single-column parameterization of cloud effects on the aerosol and then use the aerosol profile to simulate aerosol effects on clouds within the CRM. Here, we present results from the first component of the Explicit-Cloud Parameterized-Pollutant parameterization to be developed, which handles vertical transport of tracers by clouds. A CRM with explicit tracer transport serves as a benchmark. We show that this parameterization, driven by the CRM's cloud mass fluxes, reproduces the CRM tracer transport significantly better than a single-column model that uses a conventional convective cloud parameterization.
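
    A minimal single-column sketch of the kind of mass-flux-driven tracer transport being parameterized, assuming a prescribed updraft mass-flux profile, a crude entraining plume and invented constants; it illustrates the mechanism only and is not the ECPP code:

        import numpy as np

        nz, dz, dt = 50, 200.0, 60.0                 # levels, spacing [m], step [s]
        z = dz * (np.arange(nz) + 0.5)
        rho = 1.2 * np.exp(-z / 8000.0)              # air density [kg m-3]
        M = 0.05 * np.exp(-((z - 3000.0) / 2000.0) ** 2)  # updraft mass flux [kg m-2 s-1]
        C = np.exp(-z / 1000.0)                      # tracer mixing ratio, surface source

        for _ in range(600):                         # integrate one hour
            # Updraft tracer: lofts cloud-base air, diluted where the flux grows.
            Cu = np.empty(nz)
            Cu[0] = C[0]
            for k in range(1, nz):
                dM = M[k] - M[k - 1]
                if M[k] > 1e-8 and dM > 0.0:         # entraining layer
                    Cu[k] = (M[k - 1] * Cu[k - 1] + dM * C[k]) / M[k]
                else:
                    Cu[k] = Cu[k - 1]
            # Environment: detrainment where M decreases + compensating subsidence.
            gM = np.gradient(M, dz)
            det = np.where(gM < 0.0, -gM, 0.0)       # detrainment rate [kg m-3 s-1]
            C += dt * (M * np.gradient(C, dz) + det * (Cu - C)) / rho

        print("column-mean tracer after 1 h:", C.mean())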

  3. Marine cloud brightening

    OpenAIRE

    Latham, John; Bower, Keith; Choularton, Tom; Coe, Hugh; Connolly, Paul; Cooper, Gary; Craft, Tim; Foster, Jack; Gadian, Alan; Galbraith, Lee; Iacovides, Hector; Johnston, David; Launder, Brian; Leslie, Brian; Meyer, John

    2012-01-01

    The idea behind the marine cloud-brightening (MCB) geoengineering technique is that seeding marine stratocumulus clouds with copious quantities of roughly monodisperse sub-micrometre sea water particles might significantly enhance the cloud droplet number concentration, and thereby the cloud albedo and possibly longevity. This would produce a cooling, which general circulation model (GCM) computations suggest could—subject to satisfactory resolution of technical and scientific problems identi...

  4. Cloud computing strategies

    CERN Document Server

    Chorafas, Dimitris N

    2011-01-01

    A guide to managing cloud projects, Cloud Computing Strategies provides the understanding required to evaluate the technology and determine how it can be best applied to improve business and enhance your overall corporate strategy. Based on extensive research, it examines the opportunities and challenges that loom in the cloud. It explains exactly what cloud computing is, what it has to offer, and calls attention to the important issues management needs to consider before passing the point of no return regarding financial commitments.

  5. Towards Indonesian Cloud Campus

    OpenAIRE

    Thamrin, Taqwan; Lukman, Iing; Wahyuningsih, Dina Ika

    2013-01-01

    Nowadays, cloud computing is the most discussed term in business and academic environments. A cloud campus has many benefits, such as access to file storage, e-mail, databases, educational resources, and research applications and tools anywhere, on demand, for faculty, administrators, staff, students and other users in the university. Furthermore, a cloud campus reduces universities' IT complexity and cost. This paper discusses the implementation of the Indonesian cloud campus and its various opportunities and benefits...

  6. Cloud Infrastructure Security

    OpenAIRE

    Velev , Dimiter; Zlateva , Plamena

    2010-01-01

    Cloud computing can help companies accomplish more by eliminating the physical bonds between an IT infrastructure and its users. Users can purchase services from a cloud environment that could allow them to save money and focus on their core business. At the same time, certain concerns have emerged as potential barriers to rapid adoption of cloud services, such as security, privacy and reliability. Usually the information security professiona

  7. Cloud services in organization

    OpenAIRE

    FUXA, Jan

    2013-01-01

    The work deals with the definition of the term cloud computing; cloud computing models, types, advantages and disadvantages; and a comparison of SaaS solutions such as Google Apps and Office 365 in the area of electronic communications. The work covers the use of cloud computing in corporate practice, both good and bad practice. The following section describes a methodology for choosing the appropriate cloud service for an organization. Another part deals with analyzing the possibilities of SaaS i...

  8. Orchestrating Your Cloud Orchestra

    OpenAIRE

    Hindle, Abram

    2015-01-01

    Cloud computing potentially ushers in a new era of computer music performance with exceptionally large computer music instruments consisting of 10s to 100s of virtual machines which we propose to call a `cloud-orchestra'. Cloud computing allows for the rapid provisioning of resources, but to deploy such a complicated and interconnected network of software synthesizers in the cloud requires a lot of manual work, system administration knowledge, and developer/operator skills. This is a barrier ...

  9. Cloud security mechanisms

    OpenAIRE

    2014-01-01

    Cloud computing has brought great benefits in cost and flexibility for provisioning services. The greatest challenge of cloud computing remains however the question of security. The current standard tools in access control mechanisms and cryptography can only partly solve the security challenges of cloud infrastructures. In the recent years of research in security and cryptography, novel mechanisms, protocols and algorithms have emerged that offer new ways to create secure services atop cloud...

  10. Cloud Robotics Model

    OpenAIRE

    Mester, Gyula

    2015-01-01

    Cloud Robotics was born from the merger of service robotics and cloud technologies. It allows robots to benefit from the powerful computational, storage, and communications resources of modern data centres. Cloud robotics allows robots to take advantage of the rapid increase in data transfer rates to offload tasks without hard real time requirements. Cloud Robotics has rapidly gained momentum with initiatives by companies such as Google, Willow Garage and Gostai as well as more than a dozen a...

  11. Genomics With Cloud Computing

    OpenAIRE

    Sukhamrit Kaur; Sandeep Kaur

    2015-01-01

    Genomics is the study of genomes, which provides large amounts of data requiring large storage and computation power. These issues are addressed by cloud computing, which provides various cloud platforms for genomics. These platforms provide many services to users, such as easy access to data, easy sharing and transfer, storage in the hundreds of terabytes, and more computational power. Some cloud platforms are Google Genomics, DNAnexus and Globus Genomics. Various features of cloud computin...

  12. Cloud and Radiation Studies during SAFARI 2000

    Science.gov (United States)

    Platnick, Steven; King, M. D.; Hobbs, P. V.; Osborne, S.; Piketh, S.; Bruintjes, R.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Though the emphasis of the Southern Africa Regional Science Initiative 2000 (SAFARI-2000) dry season campaign was largely on emission sources and transport, the assemblage of aircraft (including the high-altitude NASA ER-2 remote sensing platform and the University of Washington CV-580, UK MRF C130, and South African Weather Bureau JRA in situ aircraft) provided a unique opportunity for cloud studies. Therefore, as part of the SAFARI initiative, investigations were undertaken to assess regional aerosol-cloud interactions and cloud remote sensing algorithms. In particular, the latter part of the experiment concentrated on marine boundary layer stratocumulus clouds off the southwest coast of Africa. Associated with cold water upwelling along the Benguela current, the Namibian stratocumulus regime has received limited attention but appears to be unique for several reasons. During the dry season, outflow of continental fires and industrial pollution over this area can be extreme. From below, upwelling provides a rich nutrient source for phytoplankton (a source of atmospheric sulphur through DMS production as well as from decay processes). The impact of these natural and anthropogenic sources on the microphysical and optical properties of the stratocumulus is unknown. Continental and Indian Ocean cloud systems of opportunity were also studied during the campaign. Aircraft flights were coordinated with NASA Terra satellite overpasses for synergy with the Moderate Resolution Imaging Spectroradiometer (MODIS) and other Terra instruments. An operational MODIS algorithm for the retrieval of cloud optical and physical properties (including optical thickness, effective particle radius, and water path) has been developed. Pixel-level MODIS retrievals (11 km spatial resolution at nadir) and gridded statistics of clouds in the SAFARI region will be presented. In addition, the MODIS Airborne Simulator flown on the ER-2 provided high spatial resolution retrievals (50 m at nadir

  13. The Geriatric Child in Today's Culture.

    Science.gov (United States)

    Lamson, Frank E.

    This paper develops the premise that there is today a new "child" in our culture developed in response to expectations of daily functioning, family relationships, societal status, economic level, medical illness, emotional needs, and financial management. This new "child" is a person who has usually passed the age of 65, and has found that the…

  14. Using Today's Headlines for Teaching Gerontology

    Science.gov (United States)

    Haber, David

    2008-01-01

    It is a challenge to attract undergraduate students into the gerontology field. Many do not believe the aging field is exciting and at the cutting edge. Students, however, can be convinced of the timeliness, relevance, and excitement of the field by, literally, bringing up today's headlines in class. The author collected over 250 articles during…

  15. The energy of today and tomorrow

    International Nuclear Information System (INIS)

    Bauquis, E.; Bauquis, P.R.

    2007-01-01

    The authors present the current state of the art of the energy domain in the world, offering perspectives on what the world of tomorrow could be in matters of energy. They define fundamental notions, the different sources of energy and their prices, the energy policies of the different countries, and the problem of the impact of consumption on the environment. (A.L.B.)

  16. The Genetic Code: Yesterday, Today and Tomorrow

    Indian Academy of Sciences (India)

    The Genetic Code: Yesterday, Today and Tomorrow. Jiqiang Ling and Dieter Söll. General Article, Resonance – Journal of Science Education, Volume 17, Issue 12, December 2012, pp. 1136-1142.

  17. We, John Dewey's Audience of Today

    Science.gov (United States)

    da Cunha, Marcus Vinicius

    2016-01-01

    This article suggests that John Dewey's "Democracy and Education" does not describe education in an existing society, but it conveys a utopia, in the sense coined by Mannheim: utopian thought aims at instigating actions towards the transformation of reality, intending to attain a better world in the future. Today's readers of Dewey (his…

  18. Organization management today: setting the human resource ...

    African Journals Online (AJOL)

    The paper's discussion focuses on the way the world we live in is being transformed before our very eyes by factors and forces that are compelling and overwhelming in their ramifications. The environment in which business and management are carried on today is becoming more and more complex by the day.

  19. Europa Heute: Filmbegleitheft (Europe Today: Film Manual).

    Science.gov (United States)

    Freudenstein, Reinhold; And Others

    This teacher's guide to the German promotional film "Europe Today", suitable for use in advanced courses, concentrates on linguistic preparation required for full appreciation. The film focuses on the role of European countries as participating members of the Common Market. The manual includes information on the German film industry, a…

  20. Southern forests: Yesterday, today, and tomorrow

    Science.gov (United States)

    R. Neil Sampson

    2004-01-01

    In the 20th century, southern forests changed dramatically. Those changes pale, however, when compared to what happened to the people of the region. In addition to growing over fourfold in numbers, the South's population has urbanized, globalized, and intellectualized in 100 years. Rural and isolated in the 19th century, they are today urban and cosmopolitan. One...

  1. Identity and Diversity in Today's World

    Science.gov (United States)

    Gee, James Paul

    2017-01-01

    This paper develops a thesis about identity and diversity. I first look at activity-based identities, identities like being a gardener, birder, citizen scientist or fan-fiction writer. These are freely chosen identities and they are proliferating at a great rate today thanks to participatory culture, the Maker Movement and digital and social…

  2. Applying Servant Leadership in Today's Schools

    Science.gov (United States)

    Culver, Mary K.

    2009-01-01

    This book illustrates how the ideal of servant leadership can be applied in your school today. With real-life scenarios, discussions, and self assessments, this book gives practical suggestions to help you develop into a caring and effective servant leader. There are 52 scenarios in this book, focusing on situations as varied as: (1) Dealing with…

  3. Primary School Leadership Today and Tomorrow

    Science.gov (United States)

    Southworth, Geoff

    2008-01-01

    The article provides a retrospective and prospective view of primary school leadership. It begins with an analytic description of primary school leadership in the recent past. The second part looks at school leadership today, identifies contemporary issues and examines role continuities and changes. The third part looks at what the future might…

  4. After the Resistance: The Alamo Today

    Centers for Disease Control (CDC) Podcasts

    2014-09-23

    Byron Breedlove reads his essay After the Resistance: The Alamo Today about the Alamo and emerging disease resistance.  Created: 9/23/2014 by National Center for Emerging and Zoonotic Infectious Diseases (NCEZID).   Date Released: 10/20/2014.

  5. Secondary School Administration in Anambra State Today ...

    African Journals Online (AJOL)

    The study, which used a descriptive survey research design, identified the challenges that impede secondary school administration today in Anambra State. The population of the study was all 259 public secondary school principals in the state. Two research questions and two null hypotheses guided the study. A 20-item ...

  6. Grid sleeve bulge tool

    International Nuclear Information System (INIS)

    Phillips, W.D.; Vaill, R.E.

    1980-01-01

    An improved grid sleeve bulge tool is designed for securing control rod guide tubes to sleeves brazed in a fuel assembly grid. The tool includes a cylinder having an outer diameter less than the internal diameter of the control rod guide tubes. The walls of the cylinder are cut in an axial direction along its length to provide several flexible tines or ligaments. These tines are similar to a fork except they are spaced in a circumferential direction. The end of each alternate tine is equipped with a semispherical projection which extends radially outwardly from the tine surface. A ram or plunger of generally cylindrical configuration and about the same length as the cylinder is designed to fit in and move axially within the cylinder and thereby force the tined projections outwardly when the ram is pulled into the cylinder. The ram surface includes axially extending grooves and plane surfaces which are complementary to the inner surfaces formed on the tines of the cylinder. As the cylinder is inserted into a control rod guide tube, and the projections on the cylinder are placed in a position just below or above a grid strap, the ram is pulled into the cylinder, thus moving the tines and the projections thereon outwardly into contact with the sleeve, to plastically deform both the sleeve and the control rod guide tube, and thereby form four bulges which extend outwardly from the sleeve surface and beyond the outer periphery of the grid peripheral strap. This process is then repeated at points above the grid to also provide outwardly projecting surfaces, the result being that the grid is accurately positioned on and mechanically secured to the control rod guide tubes which extend the length of a fuel assembly.

  7. Solar Energy Grid Integration Systems (SEGIS): adding functionality while maintaining reliability and economics

    Science.gov (United States)

    Bower, Ward

    2011-09-01

    An overview is provided of the activities and progress made during the US DOE Solar Energy Grid Integration Systems (SEGIS) solicitation: adding functionality while maintaining reliability and economics. The SEGIS R&D opened pathways for interconnecting PV systems to the intelligent utility grids and micro-grids of the future. The new capabilities are complemented by "value added" features. The new hardware designs resulted in smaller, less material-intensive products that are being viewed by utilities as enabling dispatchable generation rather than unpredictable negative loads. The technical solutions enable "advanced integrated system" concepts and "smart grid" processes to move forward in a faster and more focused manner. The advanced integrated inverters/controllers can now incorporate energy management functionality, intelligent electrical grid support features and a multiplicity of communication technologies. Portals for energy flow and two-way communications have been implemented. SEGIS hardware was developed for the utility grid of today, which was designed for one-way power flow; for intermediate grid scenarios; AND for the grid of tomorrow, which will seamlessly accommodate managed two-way power flows as required by large-scale deployment of solar and other distributed generation. The SEGIS hardware and control developed for today meets existing standards and codes AND provides for future connection to a "smart grid" mode that enables utility control and optimized performance.

  8. Formation of Silicate and Titanium Clouds on Hot Jupiters

    Science.gov (United States)

    Powell, Diana; Zhang, Xi; Gao, Peter; Parmentier, Vivien

    2018-06-01

    We present the first application of a bin-scheme microphysical and vertical transport model to determine the size distribution of titanium and silicate cloud particles in the atmospheres of hot Jupiters. We predict particle size distributions from first principles for a grid of planets at four representative equatorial longitudes, and investigate how observed cloud properties depend on the atmospheric thermal structure and vertical mixing. The predicted size distributions are frequently bimodal and irregular in shape. There is a negative correlation between the total cloud mass and equilibrium temperature as well as a positive correlation between the total cloud mass and atmospheric mixing. The cloud properties on the east and west limbs show distinct differences that increase with increasing equilibrium temperature. Cloud opacities are roughly constant across a broad wavelength range, with the exception of features in the mid-infrared. Forward-scattering is found to be important across the same wavelength range. Using the fully resolved size distribution of cloud particles as opposed to a mean particle size has a distinct impact on the resultant cloud opacities. The particle size that contributes the most to the cloud opacity depends strongly on the cloud particle size distribution. We predict that it is unlikely that silicate or titanium clouds are responsible for the optical Rayleigh scattering slope seen in many hot Jupiters. We suggest that cloud opacities in emission may serve as sensitive tracers of the thermal state of a planet’s deep interior through the existence or lack of a cold trap in the deep atmosphere.
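
    A key point above is that integrating opacity over the fully resolved particle size distribution gives a different answer than using a single mean particle size. The sketch below illustrates that effect under strong simplifying assumptions (a geometric-optics extinction efficiency and an invented bimodal lognormal distribution); it is not the authors' model, and all numbers are made up.

```python
# Illustrative sketch: cloud opacity from a resolved size distribution
# vs. a single mean particle size. Assumes Q_ext ~ 2 (geometric-optics
# limit), so the extinction cross-section scales as r**2.
import numpy as np

r = np.logspace(-2, 2, 500)                      # radius grid [micron]

def lognormal(r, r0, sigma):
    return np.exp(-0.5 * (np.log(r / r0) / sigma) ** 2) / r

# Invented bimodal distribution, echoing the bimodal shapes reported above
n = lognormal(r, 0.1, 0.5) + 0.01 * lognormal(r, 10.0, 0.3)

Q_EXT = 2.0
sigma_resolved = np.trapz(n * Q_EXT * np.pi * r**2, r)   # resolved opacity
r_mean = np.trapz(n * r, r) / np.trapz(n, r)             # mean radius
sigma_mean_size = np.trapz(n, r) * Q_EXT * np.pi * r_mean**2

print(f"opacity ratio (resolved / mean-size): {sigma_resolved / sigma_mean_size:.1f}")
```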

  9. Contrasting the co-variability of daytime cloud and precipitation over tropical land and ocean

    Science.gov (United States)

    Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin; Cho, Nayeong; Tan, Jackson

    2018-03-01

    The co-variability of cloud and precipitation in the extended tropics (35° N-35° S) is investigated using contemporaneous data sets for a 13-year period. The goal is to quantify potential relationships between cloud type fractions and precipitation events of particular strength. Particular attention is paid to whether the relationships exhibit different characteristics over tropical land and ocean. A primary analysis metric is the correlation coefficient between fractions of individual cloud types and frequencies within precipitation histogram bins that have been matched in time and space. The cloud type fractions are derived from Moderate Resolution Imaging Spectroradiometer (MODIS) joint histograms of cloud top pressure and cloud optical thickness in 1° grid cells, and the precipitation frequencies come from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) data set aggregated to the same grid. It is found that the strongest coupling (positive correlation) between clouds and precipitation occurs over ocean for cumulonimbus clouds and the heaviest rainfall. While the same cloud type and rainfall bin are also best correlated over land compared to other combinations, the correlation magnitude is weaker than over ocean. The difference is attributed to the greater size of convective systems over ocean. It is also found that both over ocean and land the anti-correlation of strong precipitation with weak (i.e., thin and/or low) cloud types is of greater absolute strength than positive correlations between weak cloud types and weak precipitation. Cloud type co-occurrence relationships explain some of the cloud-precipitation anti-correlations. Weak correlations between weaker rainfall and clouds indicate poor predictability for precipitation when cloud types are known, and this is even more true over land than over ocean.
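
    As a rough illustration of the analysis metric described above (the correlation between a cloud type fraction and a precipitation-bin frequency across matched grid cells), here is a minimal sketch on synthetic stand-in data; the array names and the degree of coupling are invented, not taken from the study.

```python
# Minimal sketch (assumed data layout, not the study's actual pipeline):
# correlate a cloud-type fraction with the frequency of a precipitation
# bin across space-time matched 1-degree grid cells.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_cells = 10_000                        # matched grid-cell samples
cb_fraction = rng.beta(2, 8, n_cells)   # stand-in cumulonimbus fraction
# stand-in heavy-rain frequency, loosely coupled to the cloud fraction
heavy_rain_freq = 0.6 * cb_fraction + 0.1 * rng.random(n_cells)

r, p = pearsonr(cb_fraction, heavy_rain_freq)
print(f"correlation: {r:.2f} (p={p:.1e})")
```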

  10. Chargeback for cloud services.

    NARCIS (Netherlands)

    Baars, T.; Khadka, R.; Stefanov, H.; Jansen, S.; Batenburg, R.; Heusden, E. van

    2014-01-01

    With pay-per-use pricing models, elastic scaling of resources, and the use of shared virtualized infrastructures, cloud computing offers more efficient use of capital and agility. To leverage the advantages of cloud computing, organizations have to introduce cloud-specific chargeback practices.
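
    As a toy illustration of what a cloud-specific chargeback practice computes, the sketch below meters hypothetical pay-per-use consumption against invented rates; none of the rates or categories come from the paper.

```python
# Toy pay-per-use chargeback sketch (hypothetical rates and usage):
# meter each department's resource consumption and bill accordingly.
from dataclasses import dataclass

@dataclass
class Usage:
    cpu_hours: float
    storage_gb_month: float
    egress_gb: float

RATES = {"cpu_hours": 0.05, "storage_gb_month": 0.02, "egress_gb": 0.09}

def chargeback(usage: Usage) -> float:
    return (usage.cpu_hours * RATES["cpu_hours"]
            + usage.storage_gb_month * RATES["storage_gb_month"]
            + usage.egress_gb * RATES["egress_gb"])

print(f"Marketing dept owes: ${chargeback(Usage(1200, 500, 80)):.2f}")
```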

  11. On CLOUD nine

    CERN Multimedia

    2009-01-01

    The team from the CLOUD experiment - the world’s first experiment using a high-energy particle accelerator to study the climate - were on cloud nine after the arrival of their new three-metre diameter cloud chamber. This marks the end of three years’ R&D and design, and the start of preparations for data taking later this year.

  12. Cloud Computing Explained

    Science.gov (United States)

    Metz, Rosalyn

    2010-01-01

    While many talk about the cloud, few actually understand it. Three organizations' definitions come to the forefront when defining the cloud: Gartner, Forrester, and the National Institutes of Standards and Technology (NIST). Although both Gartner and Forrester provide definitions of cloud computing, the NIST definition is concise and uses…

  13. Greening the Cloud

    NARCIS (Netherlands)

    van den Hoed, Robert; Hoekstra, Eric; Procaccianti, G.; Lago, P.; Grosso, Paola; Taal, Arie; Grosskop, Kay; van Bergen, Esther

    The cloud has become an essential part of our daily lives. We use it to store our documents (Dropbox), to stream our music and films (Spotify and Netflix) and, without giving it any thought, we use it to work on documents in the cloud (Google Docs). The cloud forms a massive storage and processing

  14. Security in the cloud.

    Science.gov (United States)

    Degaspari, John

    2011-08-01

    As more provider organizations look to the cloud computing model, they face a host of security-related questions. What are the appropriate applications for the cloud, what is the best cloud model, and what do they need to know to choose the best vendor? Hospital CIOs and security experts weigh in.

  15. Smart Grid Architectures

    DEFF Research Database (Denmark)

    Dondossola, Giovanna; Terruggia, Roberta; Bessler, Sandford

    2014-01-01

    The scope of this paper is to address the evolution of distribution grid architectures following the widespread introduction of renewable energy sources. The increasing connection of distributed resources has a strong impact on the topology and the control functionality of the current distribution grids, requiring the development of new Information and Communication Technology (ICT) solutions with various degrees of adaptation of the monitoring, communication and control technologies. The costs of ICT based solutions need however to be taken into account, hence it is desirable to work...

  16. Instant jqGrid

    CERN Document Server

    Manricks, Gabriel

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. A step-by-step, practical Starter book, Instant jqGrid embraces you while you take your first steps, and introduces you to the content in an easy-to-follow order.This book is aimed at people who have some knowledge of HTML and JavaScript. Knowledge of PHP and SQL would also prove to be beneficial. No prior knowledge of jqGrid is expected.

  17. Smart Grid, Smart Europe

    OpenAIRE

    VITIELLO SILVIA; FULLI Gianluca; MENGOLINI Anna Maria

    2013-01-01

    Smart grids, or intelligent electricity networks, open the way to new applications with far-reaching consequences for the entire electricity system, foremost among them the capacity to integrate more renewable energy sources (RES), electric vehicles and distributed generation sources into the existing network. Smart grids also guarantee a more efficient and reliable response to energy demand, both from a technical point of view, by enabling monitoring and control...

  18. Distributed photovoltaic grid transformers

    CERN Document Server

    Shertukde, Hemchandra Madhusudan

    2014-01-01

    The demand for alternative energy sources fuels the need for electric power and controls engineers to possess a practical understanding of transformers suitable for solar energy. Meeting that need, Distributed Photovoltaic Grid Transformers begins by explaining the basic theory behind transformers in the solar power arena, and then progresses to describe the development, manufacture, and sale of distributed photovoltaic (PV) grid transformers, which help boost the electric DC voltage (generally at 30 volts) harnessed by a PV panel to a higher level (generally at 115 volts or higher) once it is

  19. Cloud Computing and Its Applications in GIS

    Science.gov (United States)

    Kang, Cao

    2011-12-01

    Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. discover the feasibilities of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution to host and serve large volumes of raster GIS data efficiently and speedily. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than GRID computing. Potential advantages of cloud-based GIS systems such as lower barrier to entry are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud- based GIS systems, private cloud-based GIS systems and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth ring techniques require computational propagation through the entire raster image, which makes it incompatible with the distributed nature
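
    For the "truly spatial" Euclidean distance computation discussed above, the single-node core operation might look like the following sketch, which uses SciPy's exact distance transform; the dissertation's cloud algorithm (tiling the raster and distributing it across workers) is not reproduced here.

```python
# Hedged sketch (not the dissertation's algorithm): computing a Euclidean
# distance raster from source cells with SciPy's exact distance
# transform. In a cloud setting the raster would be tiled across
# workers; this shows only the per-node core operation.
import numpy as np
from scipy.ndimage import distance_transform_edt

raster = np.ones((8, 8), dtype=bool)    # True = empty cell
raster[2, 3] = raster[6, 1] = False     # False = source cells

# Distance (in cell units) from every cell to its nearest source cell
dist = distance_transform_edt(raster)
print(np.round(dist, 1))
```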

  20. On transferring the grid technology to the biomedical community.

    Science.gov (United States)

    Mohammed, Yassene; Sax, Ulrich; Dickmann, Frank; Lippert, Joerg; Solodenko, Juri; von Voigt, Gabriele; Smith, Matthew; Rienhoff, Otto

    2010-01-01

    Natural scientists such as physicists pioneered the sharing of computing resources, which resulted in the Grid. The inter-domain transfer process of this technology has been an intuitive process. Some difficulties facing the life science community can be understood using Bozeman's "Effectiveness Model of Technology Transfer". Bozeman's and classical technology transfer approaches deal with technologies that have achieved a certain stability. Grid and Cloud solutions are technologies that are still in flux. We illustrate how Grid computing creates new difficulties for the technology transfer process that are not considered in Bozeman's model. We show why the success of health Grids should be measured by the qualified scientific human capital and opportunities created, and not primarily by the market impact. With two examples we show how the Grid technology transfer theory corresponds to reality. We conclude with recommendations that can help improve the adoption of Grid solutions in the biomedical community. These results give a more concise explanation of the difficulties most life science IT projects face in the late funding periods, and show some leveraging steps which can help to overcome the "vale of tears".

  1. Grid computing the European Data Grid Project

    CERN Document Server

    Segal, B; Gagliardi, F; Carminati, F

    2000-01-01

    The goal of this project is the development of a novel environment to support globally distributed scientific exploration involving multi- PetaByte datasets. The project will devise and develop middleware solutions and testbeds capable of scaling to handle many PetaBytes of distributed data, tens of thousands of resources (processors, disks, etc.), and thousands of simultaneous users. The scale of the problem and the distribution of the resources and user community preclude straightforward replication of the data at different sites, while the aim of providing a general purpose application environment precludes distributing the data using static policies. We will construct this environment by combining and extending newly emerging "Grid" technologies to manage large distributed datasets in addition to computational elements. A consequence of this project will be the emergence of fundamental new modes of scientific exploration, as access to fundamental scientific data is no longer constrained to the producer of...

  2. CERN Computing Colloquium | Hidden in the Clouds: New Ideas in Cloud Computing | 30 May

    CERN Multimedia

    2013-01-01

    by Dr. Shevek (NEBULA) Thursday 30 May 2013 from 2 p.m. to 4 p.m. at CERN ( 40-S2-D01 - Salle Dirac ) Abstract: Cloud computing has become a hot topic. But 'cloud' is no newer in 2013 than MapReduce was in 2005: We've been doing both for years. So why is cloud more relevant today than it ever has been? In this presentation, we will introduce the (current) central thesis of cloud computing, and explore how and why (or even whether) the concept has evolved. While we will cover a little light background, our primary focus will be on the consequences, corollaries and techniques introduced by some of the leading cloud developers and organizations. We each have a different deployment model, different applications and workloads, and many of us are still learning to efficiently exploit the platform services offered by a modern implementation. The discussion will offer the opportunity to share these experiences and help us all to realize the benefits of cloud computing to the ful...

  3. Enabling Campus Grids with Open Science Grid Technology

    International Nuclear Information System (INIS)

    Weitzel, Derek; Fraser, Dan; Pordes, Ruth; Bockelman, Brian; Swanson, David

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  4. Enabling campus grids with open science grid technology

    Energy Technology Data Exchange (ETDEWEB)

    Weitzel, Derek [Nebraska U.; Bockelman, Brian [Nebraska U.; Swanson, David [Nebraska U.; Fraser, Dan [Argonne; Pordes, Ruth [Fermilab

    2011-01-01

    The Open Science Grid is a recognized key component of the US national cyber-infrastructure enabling scientific discovery through advanced high throughput computing. The principles and techniques that underlie the Open Science Grid can also be applied to Campus Grids since many of the requirements are the same, even if the implementation technologies differ. We find five requirements for a campus grid: trust relationships, job submission, resource independence, accounting, and data management. The Holland Computing Center's campus grid at the University of Nebraska-Lincoln was designed to fulfill the requirements of a campus grid. A bridging daemon was designed to bring non-Condor clusters into a grid managed by Condor. Condor features which make it possible to bridge Condor sites into a multi-campus grid have been exploited at the Holland Computing Center as well.

  5. CLOUD STORAGE SERVICES

    OpenAIRE

    Yan, Cheng

    2017-01-01

    Cloud computing is a hot topic in recent research and applications, because it is widely used in various fields. Up to now, Google, Microsoft, IBM, Amazon and other famous companies have proposed their own cloud computing applications and regard cloud computing as one of the most important strategies for the future. Cloud storage is the lower layer of a cloud computing system, which supports the services of the other layers above it. At the same time, it is an effective way to store and manage heavy...

  6. Benchmarking Cloud Storage Systems

    OpenAIRE

    Wang, Xing

    2014-01-01

    With the rise of cloud computing, many cloud storage systems like Dropbox, Google Drive and Mega have been built to provide decentralized and reliable file storage. It is thus of prime importance to know their features, performance, and the best way to make use of them. In this context, we introduce BenchCloud, a tool designed as part of this thesis to conveniently and efficiently benchmark any cloud storage system. First, we provide a study of six commonly-used cloud storage systems to ident...

  7. The Magellanic clouds

    International Nuclear Information System (INIS)

    1989-01-01

    As the two galaxies nearest to our own, the Magellanic Clouds hold a special place in studies of the extragalactic distance scale, of stellar evolution and the structure of galaxies. In recent years, results from the South African Astronomical Observatory (SAAO) and elsewhere have shown that it is possible to begin understanding the three dimensional structure of the Clouds. Studies of Magellanic Cloud Cepheids have continued, both to investigate the three-dimensional structure of the Clouds and to learn more about Cepheids and their use as extragalactic distance indicators. Other research undertaken at SAAO includes studies on Nova LMC 1988 no 2 and red variables in the Magellanic Clouds

  8. Cloud Computing Bible

    CERN Document Server

    Sosinsky, Barrie

    2010-01-01

    The complete reference guide to the hot technology of cloud computing. Its potential for lowering IT costs makes cloud computing a major force for both IT vendors and users; it is expected to gain momentum rapidly with the launch of Office Web Apps later this year. Because cloud computing involves various technologies, protocols, platforms, and infrastructure elements, this comprehensive reference is just what you need if you'll be using or implementing cloud computing. Cloud computing offers significant cost savings by eliminating upfront expenses for hardware and software; its growing popularit

  9. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    Science.gov (United States)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodological approach of the simulation model is presented and detailed descriptions are provided of the grid model and the grid data used, which partly originates from open-source platforms. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  10. Utah Bouguer Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2.5 kilometer Bouguer anomaly grid for the state of Utah. Number of columns is 196 and number of rows is 245. The order of the data is from the lower left to the...

  11. Modelling Chinese Smart Grid

    DEFF Research Database (Denmark)

    Yuksel, Ender; Nielson, Hanne Riis; Nielson, Flemming

    In this document, we consider a specific Chinese Smart Grid implementation and try to address the verification problem for certain quantitative properties including performance and battery consumption. We employ stochastic model checking approach and present our modelling and analysis study using...

  12. Grid attacks avian flu

    CERN Multimedia

    2006-01-01

    During April, a collaboration of Asian and European laboratories analysed 300,000 possible drug components against the avian flu virus H5N1 using the EGEE Grid infrastructure. [Figures: schematic presentation of the avian flu virus; the distribution of the EGEE sites worldwide on which the avian flu scan was performed.] The goal was to find potential compounds that can inhibit the activities of an enzyme on the surface of the influenza virus, the so-called neuraminidase, subtype N1. Using the Grid to identify the most promising leads for biological tests could speed up the development process for drugs against the influenza virus. Co-ordinated by CERN and funded by the European Commission, the EGEE project (Enabling Grids for E-sciencE) aims to set up a worldwide grid infrastructure for science. The challenge of the in silico drug discovery application is to identify those molecules which can dock on the active sites of the virus in order to inhibit its action. To study the impact of small scale mutations on drug r...

  13. Multi-Grid Lanczos

    Science.gov (United States)

    Clark, M. A.; Jung, Chulwoo; Lehner, Christoph

    2018-03-01

    We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD's 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.
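
    For readers unfamiliar with the underlying method, here is a plain single-grid Lanczos iteration in dense NumPy, a much-simplified stand-in for the multi-grid, lattice-QCD-scale algorithm of the paper; the toy operator and parameters are invented.

```python
# Illustrative plain Lanczos iteration (single grid, dense NumPy).
# Builds the tridiagonal matrix whose eigenvalues approximate the
# extremal eigenvalues of a symmetric operator A.
import numpy as np

def lanczos(A, k, rng=np.random.default_rng(0)):
    n = A.shape[0]
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev, beta = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(k):
        w = A @ q - beta * q_prev        # apply operator, orthogonalize
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)         # Ritz values, ascending

A = np.diag(np.arange(1.0, 101.0))       # toy symmetric operator
print(lanczos(A, 30)[:3])                # lowest Ritz values
```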

  14. Multi-Grid Lanczos

    Directory of Open Access Journals (Sweden)

    Clark M. A.

    2018-01-01

    Full Text Available We present a Lanczos algorithm utilizing multiple grids that reduces the memory requirements both on disk and in working memory by one order of magnitude for RBC/UKQCD’s 48I and 64I ensembles at the physical pion mass. The precision of the resulting eigenvectors is on par with exact deflation.

  15. Nevada Isostatic Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2 kilometer Isostatic anomaly grid for the state of Nevada. Number of columns is 269 and number of rows is 394. The order of the data is from the lower left to the...

  16. Steering the Smart Grid

    NARCIS (Netherlands)

    Molderink, Albert; Bakker, Vincent; Bosman, M.G.C.; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2010-01-01

    Increasing energy prices and the greenhouse effect lead to more awareness of energy efficiency of electricity supply. During the last years, a lot of technologies and optimization methodologies were developed to increase the efficiency, maintain the grid stability and support large scale

  17. Cutback for grid operators

    International Nuclear Information System (INIS)

    Meulmeester, P.; De Laat, J.

    2006-01-01

    The Netherlands Competition Authority (NMa), which includes the Office of Energy Regulation (DTe), plans to decrease the capital cost compensation (or weighted average cost of capital, WACC) for grid operators. In this article it is explained how the compensation is calculated, why this measure will be taken and what the effects of this cutback are.

  18. Autonomous Energy Grids: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Kroposki, Benjamin D [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Bernstein, Andrey [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhang, Yingchen [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-10-04

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and optimize themselves in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid, as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  19. Bolivian Bouguer Anomaly Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 1 kilometer Bouguer anomaly grid for the country of Bolivia. Number of columns is 550 and number of rows is 900. The order of the data is from the lower left to the...

  20. Smart grid voor comfort

    NARCIS (Netherlands)

    Zeiler, W.; Vissers, D.R.; Maaijen, H.N.; Kling, W.L.; Velden, van der J.A.J.; Larsen, J.P.

    2012-01-01

    Research is under way into a new control strategy based on the application of a wireless sensor network coupled to the smart grid. The goal of this control strategy is to save energy at the user level while maintaining, or even improving, individual comfort. ...

  1. Kids Enjoy Grids

    CERN Multimedia

    2007-01-01

    I want to come back and work here when I'm older,' was the spontaneous reaction of one of the children invited to CERN by the Enabling Grids for E-sciencE project for a 'Grids for Kids' day at the end of January. The EGEE project is led by CERN, and the EGEE gender action team organized the day to introduce children to grid technology at an early age. The school group included both boys and girls, aged 9 to 11. All of the presenters were women. 'In general, before this visit, the children thought that scientists always wore white coats and were usually male, with wild Einstein-like hair,' said Jackie Beaver, the class's teacher at the Institut International de Lancy, a school near Geneva. 'They were surprised and pleased to see that women became scientists, and that scientists were quite 'normal'.' The half-day event included presentations about why Grids are needed, a visit of the computer centre, some online games, and plenty of time for questions. In the end, everyone agreed that it was a big success a...

  2. Reconsidering solar grid parity

    International Nuclear Information System (INIS)

    Yang, C.-J.

    2010-01-01

    Grid parity-reducing the cost of solar energy to be competitive with conventional grid-supplied electricity-has long been hailed as the tipping point for solar dominance in the energy mix. Such expectations are likely to be overly optimistic. A realistic examination of grid parity suggests that the cost-effectiveness of distributed photovoltaic (PV) systems may be further away than many are hoping for. Furthermore, cost-effectiveness may not guarantee commercial competitiveness. Solar hot water technology is currently far more cost-effective than photovoltaic technology and has already reached grid parity in many places. Nevertheless, the market penetration of solar water heaters remains limited for reasons including unfamiliarity with the technologies and high upfront costs. These same barriers will likely hinder the adoption of distributed solar photovoltaic systems as well. The rapid growth in PV deployment in recent years is largely policy-driven and such rapid growth would not be sustainable unless governments continue to expand financial incentives and policy mandates, as well as address regulatory and market barriers.

  3. Maine Bouguer Gravity Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 2 kilometer Bouguer anomaly grid for the state of Maine. Number of columns is 197 and number of rows is 292. The order of the data is from the lower left to the...

  4. Minnesota Bouguer Anomaly Grid

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A 1.5 kilometer Bouguer anomaly grid for the state of Minnesota. Number of columns is 404 and number of rows is 463. The order of the data is from the lower left to...

  5. Molecular Grid Membranes

    National Research Council Canada - National Science Library

    Michl, Josef; Magnera, Thomas

    2008-01-01

    ...) porphyrin triply linked in the meso-meso and both beta-beta positions four times by carbon-carbon bonds to each of its neighbors to form porphite sheets, a grid-type material that would be an analog of graphene...

  6. The Grid challenge

    CERN Multimedia

    Lundquest, E

    2003-01-01

    At a customer panel discussion during OracleWorld in San Francisco, grid computing was being pushed as the next big thing - even if panellists couldn't quite agree on what it is, what it will cost or when it will appear (1 page).

  7. NSTAR Smart Grid Pilot

    Energy Technology Data Exchange (ETDEWEB)

    Rabari, Anil [NSTAR Electric, Manchester, NH (United States); Fadipe, Oloruntomi [NSTAR Electric, Manchester, NH (United States)

    2014-03-31

    NSTAR Electric & Gas Corporation (“the Company”, or “NSTAR”) developed and implemented a Smart Grid pilot program beginning in 2010 to demonstrate the viability of leveraging existing automated meter reading (“AMR”) deployments to provide much of the Smart Grid functionality of advanced metering infrastructure (“AMI”), but without the large capital investment that AMI rollouts typically entail. In particular, a central objective of the Smart Energy Pilot was to enable residential dynamic pricing (time-of-use “TOU” and critical peak rates and rebates) and two-way direct load control (“DLC”) by continually capturing AMR meter data transmissions and communicating through customer-sited broadband connections in conjunction with a standards-based home area network (“HAN”). The pilot was supported by the U.S. Department of Energy (“DOE”) through the Smart Grid Demonstration program. NSTAR was very pleased to receive not only the funding support from the DOE, but also its guidance and support throughout the pilot. NSTAR is also pleased to report to the DOE that it was able to execute and deliver a successful pilot on time and on budget. NSTAR looks for future opportunities to work with the DOE and others on future smart grid projects.

  8. Controlling smart grid adaptivity

    OpenAIRE

    Toersche, Hermen; Nykamp, Stefan; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2012-01-01

    Methods are discussed for planning oriented smart grid control to cope with scenarios with limited predictability, supporting an increasing penetration of stochastic renewable resources. The performance of these methods is evaluated with simulations using measured wind generation and consumption data. Forecast errors are shown to affect worst case behavior in particular, the severity of which depends on the chosen adaptivity strategy and error model.

  9. Technical Research on the Electric Power Big Data Platform of Smart Grid

    OpenAIRE

    Ruiguang MA; Haiyan Wang; Quanming Zhang; Yuan Liang

    2017-01-01

    By elaborating on the relationships among electric power big data, cloud computing and the smart grid, this paper puts forward a general framework for an electric power big data platform based on the smart grid. The general framework of the platform is divided into five layers, namely the data source layer, the data integration and storage layer, the data processing and scheduling layer, the data analysis layer and the application layer. The paper makes an in-depth exploration and study of the integrated manage...

  10. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
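
    To make the idea of an error grid concrete, the sketch below scores monitor/reference pairs into risk zones; the boundaries are invented for illustration, whereas the real SEG uses clinician-derived risk ratings and 15 zones.

```python
# Hypothetical sketch of error-grid-style scoring (the real SEG uses
# clinician-derived risk surfaces and 15 zones; these boundaries are
# invented for illustration only).
def risk_zone(reference_mgdl: float, measured_mgdl: float) -> str:
    rel_err = abs(measured_mgdl - reference_mgdl) / reference_mgdl
    if rel_err < 0.05:
        return "none"
    if rel_err < 0.15:
        return "slight"
    if rel_err < 0.40:
        return "moderate"
    return "extreme"

for ref, meas in [(100, 102), (100, 88), (60, 110)]:
    print(ref, meas, "->", risk_zone(ref, meas))
```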

  11. CLOUD COMPUTING SECURITY

    Directory of Open Access Journals (Sweden)

    Ştefan IOVAN

    2016-05-01

    Full Text Available Cloud computing represents both the software applications offered as services online and the software and hardware components in the data center. When services are offered widely, to any type of client, we are dealing with a public cloud. When a cloud is exclusively available to one organization and not to the general public, it is considered a private cloud [1]. There is also a third, hybrid type, in which a user or an organization may use services available in both the public and the private cloud. Among the main challenges of cloud computing are building trust and ensuring information privacy in every aspect of the services offered. The variety of existing standards, like the lack of clarity in sustainability certification, is no real help in building trust. Question marks also arise regarding the effectiveness of traditional security means when applied in the cloud domain. Besides the economic and technological advantages offered by the cloud, there are also advantages in the security area if information is migrated to the cloud. Shared resources available in the cloud include surveillance, use of "best practices" and technology for an advanced security level, beyond the solutions affordable to the majority of medium and small businesses, big companies and even some governmental organizations [2].

  12. A European Federated Cloud: Innovative distributed computing solutions by EGI

    Science.gov (United States)

    Sipos, Gergely; Turilli, Matteo; Newhouse, Steven; Kacsuk, Peter

    2013-04-01

    The European Grid Infrastructure (EGI) is the result of pioneering work that has, over the last decade, built a collaborative production infrastructure of uniform services through the federation of national resource providers that supports multi-disciplinary science across Europe and around the world. This presentation will provide an overview of the recently established 'federated cloud computing services' that the National Grid Initiatives (NGIs), operators of EGI, offer to scientific communities. The presentation will explain the technical capabilities of the 'EGI Federated Cloud' and the processes whereby earth and space science researchers can engage with it. EGI's resource centres have been providing services for collaborative, compute- and data-intensive applications for over a decade. Besides the well-established 'grid services', several NGIs already offer privately run cloud services to their national researchers. Many of these researchers recently expressed the need to share these cloud capabilities within their international research collaborations - a model similar to the way the grid emerged through the federation of institutional batch computing and file storage servers. To facilitate the setup of a pan-European cloud service from the NGIs' resources, the EGI-InSPIRE project established a Federated Cloud Task Force in September 2011. The Task Force has a mandate to identify and test technologies for a multinational federated cloud that could be provisioned within EGI by the NGIs. A guiding principle for the EGI Federated Cloud is to remain technology neutral and flexible for both resource providers and users: • Resource providers are allowed to use any cloud hypervisor and management technology to join virtualised resources into the EGI Federated Cloud as long as the site is subscribed to the user-facing interfaces selected by the EGI community. • Users can integrate high level services - such as brokers, portals and customised Virtual Research

  13. Non-Gaussian power grid frequency fluctuations characterized by Lévy-stable laws and superstatistics

    Science.gov (United States)

    Schäfer, Benjamin; Beck, Christian; Aihara, Kazuyuki; Witthaut, Dirk; Timme, Marc

    2018-02-01

    Multiple types of fluctuations impact the collective dynamics of power grids and thus challenge their robust operation. Fluctuations result from processes as different as dynamically changing demands, energy trading and an increasing share of renewable power feed-in. Here we analyse principles underlying the dynamics and statistics of power grid frequency fluctuations. Considering frequency time series for a range of power grids, including grids in North America, Japan and Europe, we find a strong deviation from Gaussianity best described as Lévy-stable and q-Gaussian distributions. We present a coarse framework to analytically characterize the impact of arbitrary noise distributions, as well as a superstatistical approach that systematically interprets heavy tails and skewed distributions. We identify energy trading as a substantial contribution to today's frequency fluctuations and effective damping of the grid as a controlling factor enabling reduction of fluctuation risks, with enhanced effects for small power grids.
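
    A minimal way to probe the reported non-Gaussianity is to fit both a Gaussian and an alpha-stable law to frequency-deviation data and inspect the stability index; the sketch below does this on synthetic stand-in data (the paper's data and methods are not reproduced).

```python
# Exploratory sketch (not the paper's analysis): compare a Gaussian fit
# with a heavy-tailed alpha-stable fit on synthetic "frequency
# deviation" data. scipy.stats.levy_stable.fit is slow, so keep the
# sample small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Stand-in data: mostly Gaussian with occasional large excursions
dev = np.concatenate([rng.normal(0, 0.02, 1800),
                      rng.standard_t(df=2, size=200) * 0.02])

mu, sigma = stats.norm.fit(dev)
alpha, beta, loc, scale = stats.levy_stable.fit(dev)
print(f"Gaussian: mu={mu:.4f}, sigma={sigma:.4f}")
print(f"alpha-stable: alpha={alpha:.2f} (alpha < 2 => heavy tails)")
```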

  14. Einstein today; Einstein aujourd'hui

    Energy Technology Data Exchange (ETDEWEB)

    Aspect, A.; Grangier, Ph. [Centre National de la Recherche Scientifique (CNRS), Lab. Charles Fabry de l' Institut d' Optique a Orsay, 91 - Orsay (France); Bouchet, F.R. [Institut d' Astrophysique de Paris, CNRS, 75 - Paris (France); Brunet, E.; Derrida, B. [Universite Pierre et Marie Curie, Ecole Normale Superieure, 75 - Paris (France); Cohen-Tannoudji, C. [Academie des Sciences, 75 - Paris (France); Dalibard, J.; Laloe, F. [Laboratoire Kastler Brossel. UMR 8552 (ENS, UPMC, CNRS), 75 - Paris (France); Damour, Th. [Institut des Hautes Etudes Scientifiques, 91 - Bures sur Yvette (France); Darrigol, O. [Centre National de la Recherche Scientifique (CNRS), Groupe Histoire des Sciences Rehseis, 75 - Paris (France); Pocholle, J.P. [Thales Research et Technology France, 91 - Palaiseau (France)

    2005-07-01

    The most important contributions of Einstein involve 5 fields of physics: the existence of quanta (light quanta, stimulated radiation emission and Bose-Einstein condensation), relativity, fluctuations (Brownian motion and thermodynamical fluctuations), the basis of quantum physics, and cosmology (the cosmological constant and the expansion of the universe). Diverse renowned physicists have traced the development of modern physics from Einstein's ideas to the knowledge of today. This collective book gathers their work in 7 chapters: 1) 1905, a new beginning; 2) from the Einstein, Podolsky and Rosen article to quantum information (cryptography and quantum computers); 3) Bose-Einstein condensation in gases; 4) from stimulated emission to today's lasers; 5) Brownian motion and the fluctuation-dissipation theorem; 6) general relativity; and 7) cosmology. (A.C.)

  15. Grid interoperability: the interoperations cookbook

    Energy Technology Data Exchange (ETDEWEB)

    Field, L; Schulz, M [CERN (Switzerland)], E-mail: Laurence.Field@cern.ch, E-mail: Markus.Schulz@cern.ch

    2008-07-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards.

  16. Grid interoperability: the interoperations cookbook

    International Nuclear Information System (INIS)

    Field, L; Schulz, M

    2008-01-01

    Over recent years a number of grid projects have emerged which have built grid infrastructures that are now the computing backbones for various user communities. A significant number of these communities are limited to one grid infrastructure due to the different middleware and procedures used in each grid. Grid interoperation is trying to bridge these differences and enable virtual organizations to access resources independent of the grid project affiliation. This paper gives an overview of grid interoperation and describes the current methods used to bridge the differences between grids. Actual use cases encountered during the last three years are discussed and the most important interfaces required for interoperability are highlighted. A summary of the standardisation efforts in these areas is given and we argue for moving more aggressively towards standards

  17. Allegheny County Map Index Grid

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Map Index Sheets from Block and Lot Grid of Property Assessment and based on aerial photography, showing 1983 datum with solid line and NAD 27 with 5 second grid...

  18. Alternative motor fuels today and tomorrow

    International Nuclear Information System (INIS)

    Bensaid, B.

    2004-01-01

    Today, petroleum products account for 97% of the energy consumed in road transport. The purpose of replacing these products with alternative energies is to reduce oil dependence as well as greenhouse gas emissions. The high price of oil has promoted the use of 'conventional' alternative motor fuels (biofuels, LPG, NGV) and also renewed interest in syn-fuels (GTL, CTL, BTL) that have already given rise to industrial and pilot projects. (author)

  19. EURO - Before Yesterday, Yesterday, Today, Tomorrow...

    OpenAIRE

    Sylwia Pangsy-Kania

    2002-01-01

    The article is divided into four integrally connected parts concerning the EURO: before yesterday, yesterday, today and tomorrow. On 1 January 2002 the common European currency became a fact. In eleven European countries there appeared jointly over 13 billion banknotes and 76 billion coins. The introduction of a common currency in the countries of the European Union is the greatest financial operation in world history of such a scale and degree of complication. Before yesterday...

  20. E-learning. Today and tomorrow

    International Nuclear Information System (INIS)

    Gelbke, Silvana

    2010-01-01

    Today, new technologies revolutionize the way of handling information, exchanging knowledge and learning. The definition of the term ''e-learning'' mostly comprehends teaching and learning using a range of electronic media (Internet, CD-ROMs). However, further differentiation is necessary to describe the entire spectrum of methods included in this term. These different approaches are reflected in their implementation by the companies presented. (orig.)

  1. GridCom, Grid Commander: graphical interface for Grid jobs and data management; GridCom, Grid Commander: graficheskij interfejs dlya raboty s zadachami i dannymi v gride

    Energy Technology Data Exchange (ETDEWEB)

    Galaktionov, V V

    2011-07-01

    GridCom is a software package that automates access to the facilities of the distributed Grid system (jobs and data). The client part, implemented as Java applets, provides Web-interface access to the Grid through standard browsers. The executive part, Lexor (LCG Executor), is started by the user on a UI (User Interface) machine and carries out the Grid operations.

  2. The Prospects of Radical Change Today

    Directory of Open Access Journals (Sweden)

    Slavoj Žižek

    2018-05-01

    Full Text Available In this contribution, Slavoj Žižek takes the occasion of Marx’s bicentenary for reflecting on the prospects of radical change today. First, it is shown that under Stalinism, Lenin’s works were quoted out of context in an arbitrary way in order to legitimise arbitrary political measures. Marxism thereby became an ideology that justified brutal subjective interventions. Second, this contribution poses the question of the revolutionary subject and democracy today. It stresses the role of both contingency and strategy in revolutions. In political assemblages taking place on public squares, the inert mass of ordinary people is transubstantiated into a politically engaged united force. The basic political problem today is how to best reconfigure democracy. Third, this contribution analyses the “interesting times” we live in. These are times that feature multiple crises, right-wing populism à la Donald Trump and Marine Le Pen, the lower classes’ opposition to immigration, and the refugee crisis. Questions about human rights and their violation and about radical change need to be asked in this context.

  3. Grid scale energy storage in salt caverns

    Energy Technology Data Exchange (ETDEWEB)

    Crotogino, Fritz; Donadei, Sabine [KBB Underground Technologies GmbH, Hannover (Germany)

    2009-07-01

    Fossil energy sources require some 20% of annual consumption to be stored to secure emergency cover, peak shaving, seasonal balancing, etc. Today the electric power industry benefits from the extremely high energy density of fossil fuels. This is one important reason why the German utilities are able to provide highly reliable grid operation with an electric power storage capacity at their pumped hydro power stations of less than 1 hour (40 GWh) relative to the total load in the grid - i.e. only 0.06%, as compared with natural gas. Along with the changeover to renewable wind-based electricity production, this ''outsourcing'' of storage services to fossil fuels will decline. One important way out will be grid scale energy storage. The present discussion on balancing short term wind and solar power fluctuations focuses primarily on the installation of Compressed Air Energy Storage (CAES) plants in addition to existing pumped hydro plants. Because of their small energy density, these storage options are, however, generally not suitable for balancing longer term fluctuations in the case of larger amounts of excess wind power, or even seasonal fluctuations. Underground hydrogen storage, however, provides a much higher energy density because of the chemical energy bond - standard practice for many years. The first part of the article describes the present status and performance of grid scale energy storage in geological formations, mainly salt caverns. It is followed by a compilation of generally suitable locations in Europe and particularly Germany. The second part presents first results of preliminary investigations into the possibilities and limits of offshore CAES power stations. (orig.)
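
    A quick back-of-envelope check of the "less than 1 hour" figure quoted above, assuming an average German grid load of roughly 60 GW (our assumption, not stated in the abstract):

```python
# Back-of-envelope check of the "less than 1 hour" storage cover.
storage_gwh = 40.0   # German pumped hydro capacity quoted in the text
avg_load_gw = 60.0   # assumed average grid load (not from the abstract)
hours_of_cover = storage_gwh / avg_load_gw
print(f"storage covers ~{hours_of_cover:.2f} h of average load")  # ~0.67 h
```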

  4. Communication technologies in smart grid

    Directory of Open Access Journals (Sweden)

    Miladinović Nikola

    2013-01-01

    Full Text Available The role of communication technologies in the Smart Grid lies in the integration of a large number of devices into one telecommunication system. This paper provides an overview of the technologies currently in use in the electric power grid that are not necessarily in compliance with the Smart Grid concept. Considering that the Smart Grid is open to the flow of information in all directions, it is necessary to provide reliability, protection and security of information.

  5. The Research of the Parallel Computing Development from the Angle of Cloud Computing

    Science.gov (United States)

    Peng, Zhensheng; Gong, Qingge; Duan, Yanyu; Wang, Yun

    2017-10-01

    Cloud computing is the development of parallel computing, distributed computing and grid computing. The development of cloud computing brings parallel computing into people's lives. Firstly, this paper expounds the concept of cloud computing and introduces several traditional parallel programming models. Secondly, it analyzes and studies the principles, advantages and disadvantages of OpenMP, MPI and Map Reduce respectively. Finally, it compares the MPI and OpenMP models with Map Reduce from the angle of cloud computing. The results of this paper are intended to provide a reference for the development of parallel computing.
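
    To ground the comparison, here is a minimal map-reduce-style word count using Python's multiprocessing module; it illustrates the programming model only and is not code from the paper.

```python
# Minimal map-reduce-style word count: map chunks to local counts in
# parallel, then reduce the partial counts into a single total.
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_phase(chunk: str) -> Counter:
    return Counter(chunk.split())          # local word counts

def reduce_phase(a: Counter, b: Counter) -> Counter:
    return a + b                           # merge partial counts

if __name__ == "__main__":
    chunks = ["cloud grid cloud", "grid grid compute", "cloud compute"]
    with Pool(3) as pool:
        partials = pool.map(map_phase, chunks)
    total = reduce(reduce_phase, partials, Counter())
    print(total)  # Counter({'cloud': 3, 'grid': 3, 'compute': 2})
```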

  6. Searchable Encryption in Cloud Storage

    OpenAIRE

    Ren-Junn Hwang; Chung-Chien Lu; Jain-Shing Wu

    2014-01-01

    Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, exactly retrieving the target file from among the encrypted files is difficult for the cloud server. This study proposes a protocol for performing multikeyword searches for encrypted cloud data by applying ...
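
    The basic mechanism behind keyword search over encrypted files can be sketched with HMAC "trapdoors", as below; real schemes, including the multi-keyword protocol proposed in the study, are considerably more elaborate, and the key handling here is deliberately naive.

```python
# Hedged sketch of keyword-searchable encryption: the client indexes
# each encrypted file under HMAC "trapdoors" of its keywords, so the
# server can match searches without seeing plaintext keywords.
import hashlib
import hmac

KEY = b"client-secret-key"                 # held by the client only

def trapdoor(keyword: str) -> bytes:
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

# Client side: index each (already encrypted) file by keyword trapdoors
index = {
    "file1.enc": {trapdoor("grid"), trapdoor("cloud")},
    "file2.enc": {trapdoor("storage")},
}

# Server side: match trapdoors without learning the underlying words
def search(index, *tds):
    return [f for f, kws in index.items() if all(t in kws for t in tds)]

print(search(index, trapdoor("grid"), trapdoor("cloud")))  # ['file1.enc']
```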

  7. Grid3: An Application Grid Laboratory for Science

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    level services required by the participating experiments. The deployed infrastructure has been operating since November 2003 with 27 sites, a peak of 2800 processors, work loads from 10 different applications exceeding 1300 simultaneous jobs, and data transfers among sites of greater than 2 TB/day. The Grid3 infrastructure was deployed from grid level services provided by groups and applications within the collaboration. The services were organized into four distinct "grid level services" including: Grid3 Packaging, Monitoring and Information systems, User Authentication and the iGOC Grid Operatio...

  8. Prognostic cloud water in the Los Alamos general circulation model

    International Nuclear Information System (INIS)

    Kristjansson, J.E.; Kao, C.Y.J.

    1993-01-01

    Most of today's general circulation models (GCMs) have a greatly simplified treatment of condensation and clouds. Recent observational studies of the earth's radiation budget have suggested cloud-related feedback mechanisms to be of tremendous importance for the issue of global change. Thus, there has arisen an urgent need for improvements in the treatment of clouds in GCMs, especially as the clouds relate to radiation. In the present paper, we investigate the effects of introducing prognostic cloud water into the Los Alamos GCM. The cloud water field, produced by both stratiform and convective condensation, is subject to 3-dimensional advection and vertical diffusion. The cloud water enters the radiation calculations through the long wave emissivity calculations. Results from several sensitivity simulations show that realistic cloud water and precipitation fields can be obtained with the applied method. Comparisons with observations show that the most realistic results are obtained when more sophisticated schemes for moist convection are introduced at the same time. The model's cold bias is reduced and the zonal winds become stronger, due to more realistic tropical convection

  9. Grid Integration | Water Power | NREL

    Science.gov (United States)

    Grid Integration Grid Integration For marine and hydrokinetic technologies to play a larger role in supplying the nation's energy needs, integration into the U.S. power grid is an important challenge to address. Efficient integration of variable power resources like water power is a critical part of the

  10. What is a smart grid?

    NARCIS (Netherlands)

    Kumar, A.

    2017-01-01

    The Indian Smart Grid Forum defines a smart grid as "a power system capable of two-way communication between all the entities of the network: generation, transmission, distribution and the consumers". Like most work on smart grids, this view is also mainly technical. This paper aims to progress the

  11. Pyramid solar micro-grid

    Science.gov (United States)

    Huang, Bin-Juine; Hsu, Po-Chien; Wang, Yi-Hung; Tang, Tzu-Chiao; Wang, Jia-Wei; Dong, Xin-Hong; Hsu, Hsin-Yi; Li, Kang; Lee, Kung-Yen

    2018-03-01

    A novel pyramid solar micro-grid is proposed in the present study. All the members within the micro-grid can mutually share excess solar PV power each other through a binary-connection hierarchy. The test results of a 2+2 pyramid solar micro-grid consisting of 4 individual solar PV systems for self-consumption are reported.
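
    The "binary-connection hierarchy" suggests sharing excess power pairwise and passing any residual up the tree; the sketch below is our inference from the abstract, not the authors' control scheme, and assumes a power-of-two number of systems (as in the 2+2 setup).

```python
# Speculative sketch of pairwise excess-power sharing in a binary
# hierarchy: siblings net out surplus/deficit first, residuals move up.
def share(nodes):
    """nodes: list of net power balances in W (+surplus / -deficit)."""
    level = nodes[:]
    while len(level) > 1:
        # Pair adjacent nodes and net their balances at this level
        level = [a + b for a, b in zip(level[::2], level[1::2])]
    return level[0]          # residual at the micro-grid root

# 2+2 example: two surplus systems and two deficit systems
print(share([300, -120, -50, 80]))   # net +210 W left to export or store
```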

  12. Aerosol-cloud interactions in a multi-scale modeling framework

    Science.gov (United States)

    Lin, G.; Ghan, S. J.

    2017-12-01

    Atmospheric aerosols play an important role in changing the Earth's climate through scattering/absorbing solar and terrestrial radiation and interacting with clouds. However, quantification of the aerosol effects remains one of the most uncertain aspects of current and future climate projection. Much of the uncertainty results from the multi-scale nature of aerosol-cloud interactions, which is very challenging to represent in traditional global climate models (GCMs). In contrast, the multi-scale modeling framework (MMF) provides a viable solution, which explicitly resolves clouds/precipitation in the cloud resolved model (CRM) embedded in each GCM grid column. In the MMF version of the community atmospheric model version 5 (CAM5), aerosol processes are treated with a parameterization called Explicit Clouds Parameterized Pollutants (ECPP). It uses the cloud/precipitation statistics derived from the CRM to treat the cloud processing of aerosols on the GCM grid. However, this approach treats clouds on the CRM grid but aerosols on the GCM grid, which is inconsistent with the reality that cloud-aerosol interactions occur on the cloud scale. To overcome this limitation, we propose a new aerosol treatment in the MMF, Explicit Clouds Explicit Aerosols (ECEP), in which we resolve both clouds and aerosols explicitly on the CRM grid. We first applied the MMF with ECPP to the Accelerated Climate Modeling for Energy (ACME) model to obtain an MMF version of ACME. Further, we also developed an alternative version of ACME-MMF with ECEP. Based on these two models, we have conducted two simulations: one with ECPP and the other with ECEP. Preliminary results showed that the ECEP simulations tend to predict higher aerosol concentrations than the ECPP simulations, because of more efficient vertical transport from the surface to the upper atmosphere but less efficient wet removal. We also found that the cloud droplet number concentrations are also different between the

  13. Enterprise Cloud Adoption - Cloud Maturity Assessment Model

    OpenAIRE

    Conway, Gerry; Doherty, Eileen; Carcary, Marian; Crowley, Catherine

    2017-01-01

    The introduction and use of cloud computing by an organization promises significant benefits that include reduced costs, improved services, and a pay-per-use model. Organizations that successfully harness these benefits will potentially have a distinct competitive edge, due to their increased agility and flexibility to respond rapidly to an ever-changing and complex business environment. However, as cloud technology is a relatively new ph...

  14. Star clouds of Magellan

    International Nuclear Information System (INIS)

    Tucker, W.

    1981-01-01

    The Magellanic Clouds are two irregular galaxies belonging to the Local Group, to which the Milky Way also belongs. By studying the Clouds, astronomers hope to gain insight into the origin and composition of the Milky Way. The overall structure and dynamics of the Clouds are clearest when studied in the radio region of the spectrum. One benefit of directly observing stellar luminosities in the Clouds has been the discovery of the period-luminosity relation. The Clouds are also a splendid laboratory for studying stellar evolution. It is believed that both Clouds may be in a very early stage of the development of a regular, symmetric galaxy. This raises a paradox, because some of the stars in the star clusters of the Clouds are as old as the oldest stars in our galaxy; an explanation for this is given. The low velocity of the Clouds with respect to the center of the Milky Way shows they must be bound to it by gravity. Theories are given on how the Magellanic Clouds became associated with the Galaxy. According to current ideas, the Clouds' orbits will decay and they will spiral into the Galaxy.
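
    The period-luminosity relation mentioned here (the Leavitt law for classical Cepheids) is linear in the logarithm of the pulsation period; one commonly quoted calibration, given for illustration rather than taken from this record, is

        M_V \approx -2.43\,(\log_{10} P - 1) - 4.05

    where P is the pulsation period in days and M_V the mean absolute visual magnitude, so measuring P and the apparent magnitude of a Cepheid yields its distance.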

  15. Cloud Computing Governance Lifecycle

    Directory of Open Access Journals (Sweden)

    Soňa Karkošková

    2016-06-01

    Externally provisioned cloud services enable flexible, on-demand sourcing of IT resources. Cloud computing introduces new challenges, such as the need to redefine business processes, to establish specialized governance and management, organizational structures, and relationships with external providers, and to manage new types of risk arising from dependency on external providers. There is general consensus that cloud computing brings many benefits in addition to these challenges, but it is unclear how to achieve them. Cloud computing governance helps to create business value by obtaining benefits from the use of cloud computing services while optimizing investment and risk. The challenge organizations face in governing cloud services is how to design and implement cloud computing governance so as to gain the expected benefits. This paper aims to provide guidance on the implementation activities of the proposed Cloud computing governance lifecycle from the cloud consumer perspective. The proposed model is based on the SOA Governance Framework and consists of a lifecycle for the implementation and continuous improvement of a cloud computing governance model.

  16. THE CALIFORNIA MOLECULAR CLOUD

    International Nuclear Information System (INIS)

    Lada, Charles J.; Lombardi, Marco; Alves, Joao F.

    2009-01-01

    We present an analysis of wide-field infrared extinction maps of a region in Perseus just north of the Taurus-Auriga dark cloud complex. From this analysis we have identified a massive, nearby, but previously unrecognized, giant molecular cloud (GMC). Both a uniform foreground star density and measurements of the cloud's velocity field from CO observations indicate that this cloud is likely a coherent structure at a single distance. From comparison of foreground star counts with Galactic models, we derive a distance of 450 ± 23 pc to the cloud. At this distance the cloud extends over roughly 80 pc and has a mass of ∼10^5 M_sun, rivaling the Orion (A) molecular cloud as the largest and most massive GMC in the solar neighborhood. Although surprisingly similar in mass and size to the more famous Orion molecular cloud (OMC), the newly recognized cloud displays significantly less star formation activity, with more than an order of magnitude fewer young stellar objects than found in the OMC, suggesting that both the level of star formation and perhaps the star formation rate in this cloud are an order of magnitude or more lower than in the OMC. Analysis of extinction maps of both clouds shows that the new cloud contains only 10% of the amount of high-extinction (A_K > 1.0 mag) material found in the OMC. This, in turn, suggests that the level of star formation activity and perhaps the star formation rate in these two clouds may be directly proportional to the total amount of high-extinction material and presumably high-density gas within them, and that there might be a density threshold for star formation on the order of n(H_2) ∼ a few × 10^4 cm^-3.
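
    Cloud masses of this kind are typically derived by integrating the extinction map over the cloud's solid angle; the estimate below shows the standard structure of such a calculation (an assumption about common practice, not a formula quoted from the paper):

        M = \mu\, m_{\mathrm{H}}\, d^{2} \left[\frac{N(\mathrm{H_2})}{A_K}\right] \int_{\mathrm{cloud}} A_K\, d\Omega

    where d is the distance, N(H_2)/A_K an adopted gas-to-extinction ratio, and μ ≈ 2.8 the mean molecular weight per hydrogen molecule (including helium). The d^2 scaling is why the distance determination matters so much for the derived mass.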

  17. MICROARRAY IMAGE GRIDDING USING GRID LINE REFINEMENT TECHNIQUE

    Directory of Open Access Journals (Sweden)

    V.G. Biju

    2015-05-01

    An important stage in microarray image analysis is gridding. Microarray image gridding is done to locate the sub-arrays in a microarray image and to find the co-ordinates of the spots within each sub-array. Most proposed gridding methods require human intervention for accurate identification of spots. In this paper a fully automatic gridding method is used, which enhances spot intensity in the preprocessing step using a histogram-based threshold method. The gridding step finds the co-ordinates of spots from the horizontal and vertical intensity profiles of the image. To correct errors due to grid line placement, a grid line refinement technique is proposed. The algorithm is applied to different image databases and the results are compared in terms of spot detection accuracy and time. An average spot detection accuracy of 95.06% demonstrates the proposed method's flexibility and accuracy in finding spot co-ordinates for different database images.
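
    Profile-based gridding of this kind can be sketched in a few lines: sum pixel intensities along rows and columns, then place grid lines at the local minima between spot peaks. The sketch below uses NumPy and a toy image; it is a simplified illustration of the general technique, not the authors' algorithm:

        # Minimal sketch of profile-based microarray gridding (illustrative).
        import numpy as np

        def grid_lines(profile: np.ndarray) -> list[int]:
            """Place grid lines at local minima of a 1-D intensity profile."""
            lines = []
            for i in range(1, len(profile) - 1):
                if profile[i] <= profile[i - 1] and profile[i] < profile[i + 1]:
                    lines.append(i)
            return lines

        def grid_image(img: np.ndarray) -> tuple[list[int], list[int]]:
            """Return (row, column) grid-line positions for an image."""
            row_profile = img.sum(axis=1)  # intensity summed along each row
            col_profile = img.sum(axis=0)  # intensity summed along each column
            return grid_lines(row_profile), grid_lines(col_profile)

        # Toy image: two bright "spots" on a dark background.
        img = np.zeros((9, 9))
        img[1:4, 1:4] = 1.0
        img[5:8, 5:8] = 1.0
        print(grid_image(img))  # ([4], [4]): one grid line between the spots

    The refinement step described in the abstract would then adjust these initial line positions; real images also need the preprocessing enhancement, since noise produces spurious local minima.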

  18. Smart Grid Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Craig [National Rural Electric Cooperative Association, Arlington, VA (United States); Carroll, Paul [National Rural Electric Cooperative Association, Arlington, VA (United States); Bell, Abigail [National Rural Electric Cooperative Association, Arlington, VA (United States)

    2015-03-11

    The National Rural Electric Cooperative Association (NRECA) organized the NRECA-U.S. Department of Energy (DOE) Smart Grid Demonstration Project (DE-OE0000222) to install and study a broad range of advanced smart grid technologies in a demonstration that spanned 23 electric cooperatives in 12 states. More than 205,444 pieces of electronic equipment and more than 100,000 minor items (brackets, labels, mounting hardware, fiber-optic cable, etc.) were installed to upgrade and enhance the efficiency, reliability, and resiliency of the power networks at the participating co-ops. The objective of this project was to build a path for other electric utilities, and particularly electric cooperatives, to adopt emerging smart grid technology when it can improve utility operations, thus advancing the co-ops' familiarity and comfort with such technology. Specifically, the project executed multiple subprojects employing a range of emerging smart grid technologies to test their cost-effectiveness and, where the technology demonstrated value, provided case studies that will enable other electric utilities, particularly electric cooperatives, to use these technologies. NRECA structured the project according to the following three areas: demonstration of smart grid technology; advancement of standards to enable the interoperability of components; and improvement of grid cyber security. We termed these three areas Technology Deployment Study, Interoperability, and Cyber Security. Although the deployment of technology and the study of the demonstration projects at co-ops accounted for by far the largest portion of the project budget, we see our accomplishments in each of the areas as critical to advancing the smart grid. All project deliverables have been published. Technology Deployment Study: The deliverable was a set of 11 single-topic technical reports in areas related to the listed technologies. Each of these reports has already been submitted to DOE, distributed to co-ops, and

  19. Gridded ionization chamber

    International Nuclear Information System (INIS)

    Houston, J.M.

    1977-01-01

    An improved ionization-chamber-type x-ray detector comprises a heavy gas at high pressure disposed between an anode and a cathode. An open grid structure is disposed adjacent to the anode and is maintained at a voltage intermediate between the cathode and anode potentials. The electric field produced by positive ions drifting toward the cathode is thus shielded from the anode. Current-measuring circuits connected to the anode are therefore responsive only to electron current flow within the chamber, and the recovery time of the chamber is shortened. The grid structure also serves to shield the anode from electrical currents which might otherwise be induced by mechanical vibrations in the ionization chamber structure.
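
    The shielding principle described is that of a Frisch grid, and its effect on the anode signal can be reasoned about with the Shockley-Ramo theorem; the relation below is standard detector physics background rather than anything taken from this record:

        \Delta Q_{\mathrm{anode}} = -q\,\big[\phi_w(\mathbf{x}_{\mathrm{final}}) - \phi_w(\mathbf{x}_{\mathrm{initial}})\big]

    where φ_w is the anode's weighting potential, which is ≈0 everywhere on the cathode side of the grid and rises to 1 at the anode. Slow positive ions drifting on the cathode side therefore induce almost no anode charge, while electrons crossing the grid-anode gap induce the full signal, which is why the measured current reflects electron motion only.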

  20. Can Nuclear Installations and Research Centres Adopt Cloud Computing Platform?

    International Nuclear Information System (INIS)

    Pichan, A.; Lazarescu, M.; Soh, S.T.

    2015-01-01

    Cloud computing is arguably one of the most significant recent advances in information technology. It has produced transformative changes in the history of computing and presents many promising technological and economic opportunities. The pay-per-use model, the computing power, the abundance of storage, the skilled resources, the fault tolerance, and the economies of scale it offers give enterprises significant advantages in adopting a cloud platform for their business needs. However, customers, especially those dealing with national security, high-end scientific research institutions, and critical national infrastructure service providers (such as power and water), remain reluctant to move their business systems to the cloud. One of the main concerns is the question of information security in the cloud and the threat of the unknown. Cloud Service Providers (CSPs) indirectly encourage this perception by not letting their customers see what is behind their virtual curtain. Jurisdiction (information assets being stored elsewhere), data duplication, multi-tenancy, virtualisation, and the decentralised nature of data processing are default characteristics of cloud computing. The traditional approach to enforcing and implementing security controls therefore remains a big challenge and depends largely upon the service provider. The other big challenge and open issue is the ability to perform digital forensic investigations in the cloud in case of security breaches. Traditional approaches to evidence collection and recovery are no longer practical, as they rely on unrestricted access to the relevant systems and user data, something that is not available in the cloud model. This continues to fuel high insecurity for cloud customers. In this paper we analyze the cyber security and digital forensics challenges, issues, and opportunities for nuclear facilities to adopt cloud computing. We also discuss the due diligence process and applicable industry best practices which shall be