WorldWideScience

Sample records for experiment computing infrastructure

  1. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    International Nuclear Information System (INIS)

    Read, A; Taga, A; O-Saada, F; Pajchel, K; Samset, B H; Cameron, D

    2008-01-01

    Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to ATLAS production with an efficiency above 95% during long periods of stable operation.

  2. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Read, A; Taga, A; O-Saada, F; Pajchel, K; Samset, B H; Cameron, D [Department of Physics, University of Oslo, P.b. 1048 Blindern, N-0316 Oslo (Norway)], E-mail: a.l.read@fys.uio.no

    2008-07-15

    Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to ATLAS production with an efficiency above 95% during long periods of stable operation.

  3. Computational Infrastructure for Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Smith, Michael S.; Hix, W. Raphael; Bardayan, Daniel W.; Blackmon, Jeffery C.; Lingerfelt, Eric J.; Scott, Jason P.; Nesaraja, Caroline D.; Chae, Kyungyuk; Guidry, Michael W.; Koura, Hiroyuki; Meyer, Richard A.

    2006-01-01

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that is freely available online at nucastrodata.org. Features of, and future plans for, this software suite are given.

  4. The IceCube Computing Infrastructure Model

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Besides the big LHC experiments, a number of mid-size experiments are coming online which need to define new computing models to meet their processing and storage requirements. We present the hybrid computing model of IceCube, which leverages Grid models together with a more flexible direct user model, as an example of a possible solution. In IceCube a central datacenter at UW-Madison serves as Tier-0, with a single Tier-1 datacenter at DESY Zeuthen. We describe the setup of the IceCube computing infrastructure and report on our experience in successfully provisioning the IceCube computing needs.

  5. Handling Worldwide LHC Computing Grid Critical Service Incidents : The infrastructure and experience behind nearly 5 years of GGUS ALARMs

    CERN Multimedia

    Dimou, M; Dulov, O; Grein, G

    2013-01-01

    In the Worldwide LHC Computing Grid (WLCG) project the Tier centres are of paramount importance for storing and accessing experiment data and for running the batch jobs necessary for experiment production activities. Although Tier2 sites provide a significant fraction of the resources, non-availability of resources at the Tier0 or the Tier1s can seriously harm not only WLCG Operations but also the experiments' workflows and the storage of LHC data, which are very expensive to reproduce. This is why availability requirements for these sites are high and are committed to in the WLCG Memorandum of Understanding (MoU). In this talk we describe the workflow of GGUS ALARMs, the only 24/7 mechanism available to LHC experiment experts for reporting problems with their Critical Services to the Tier0 or the Tier1s. Conclusions and experience gained from the detailed drills performed for each such ALARM over the last 4 years are explained, as is the shift over time in the types of problems encountered. The physical infrastructure put in place to ...

  6. Social experience infrastructure

    DEFF Research Database (Denmark)

    Kvistgaard, Peter

    2006-01-01

    and explorative fashion to share with others thoughts and ideas concerning the development of new ways to construct/reconstruct recreational spaces with a better coherence with regard to designing experiences. This article claims that it is possible to design recreational spaces with good social experience...

  7. Computational Infrastructure for Geodynamics (CIG)

    Science.gov (United States)

    Gurnis, M.; Kellogg, L. H.; Bloxham, J.; Hager, B. H.; Spiegelman, M.; Willett, S.; Wysession, M. E.; Aivazis, M.

    2004-12-01

    Solid earth geophysicists have a long tradition of writing scientific software to address a wide range of problems. In particular, computer simulations came into wide use in geophysics during the decade after the plate tectonic revolution. Solution schemes and numerical algorithms that developed in other areas of science, most notably engineering, fluid mechanics, and physics, were adapted with considerable success to geophysics. This software has largely been the product of individual efforts, and although this approach has proven successful, it is now starting to show its limitations as we try to share codes and algorithms or to recombine codes in novel ways to produce new science. With funding from the NSF, the US community has embarked on a Computational Infrastructure for Geodynamics (CIG) that will develop, support, and disseminate community-accessible software for the greater geodynamics community, from model developers to end-users. The software is being developed for problems involving mantle and core dynamics, crustal and earthquake dynamics, magma migration, seismology, and other related topics. With a high level of community participation, CIG is leveraging state-of-the-art scientific computing into a suite of open-source tools and codes. The infrastructure that we are now starting to develop will consist of: (a) a coordinated effort to develop reusable, well-documented and open-source geodynamics software; (b) the basic building blocks - an infrastructure layer - of software by which state-of-the-art modeling codes can be quickly assembled; (c) extension of existing software frameworks to interlink multiple codes and data through a superstructure layer; (d) strategic partnerships with the larger world of computational science and geoinformatics; and (e) specialized training and workshops for both the geodynamics and broader Earth science communities. The CIG initiative has already started to

  8. SPRUCE experiment data infrastructure

    Science.gov (United States)

    Krassovski, M.; Hanson, P. J.; Boden, T.; Riggs, J.; Nettles, W. R.; Hook, L. A.

    2013-12-01

    The Carbon Dioxide Information Analysis Center (CDIAC) at Oak Ridge National Laboratory (ORNL), USA has provided scientific data management support for the US Department of Energy and international climate change science since 1982. Among the many data activities CDIAC performs are the design and implementation of data systems. One current example is the data system and network for the SPRUCE experiment. The SPRUCE experiment (http://mnspruce.ornl.gov) is the primary component of the Terrestrial Ecosystem Science Scientific Focus Area of ORNL's Climate Change Program, focused on terrestrial ecosystems and the mechanisms that underlie their responses to climatic change. The experimental work is to be conducted in a bog forest in northern Minnesota, 40 km north of Grand Rapids, in the USDA Forest Service Marcell Experimental Forest (MEF). The site is located at the southern margin of the boreal peatland forest. Experimental work in the 8.1-ha S1 bog will be a climate change manipulation focusing on the combined responses to multiple levels of warming at ambient or elevated CO2 (eCO2) levels. The experiment provides a platform for testing mechanisms controlling the vulnerability of organisms, biogeochemical processes and ecosystems to climatic change (e.g., thresholds for organism decline or mortality, limitations to regeneration, biogeochemical limitations to productivity, the cycling and release of CO2 and CH4 to the atmosphere). The manipulation will evaluate the response of the existing biological communities to a range of warming levels from ambient to +9°C, provided via large, modified open-top chambers. The ambient and +9°C warming treatments will also be conducted at eCO2 (in the range of 800 to 900 ppm). Both direct and indirect effects of these experimental perturbations will be analyzed to develop and refine models needed for full Earth system analyses. SPRUCE provides a wide range of continuous and discrete measurements. To successfully manage SPRUCE data flow

  9. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    International Nuclear Information System (INIS)

    Capone, V; Esposito, R; Pardi, S; Taurino, F; Tortone, G

    2012-01-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.
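
    The automated-restart behaviour mentioned above can be pictured with a short watchdog loop. The code below is a minimal sketch only, assuming the libvirt-python bindings and a locally reachable qemu:///system KVM hypervisor; it is not the INFN-Napoli management scripts, whose logic (adaptive scheduling, live migration) is considerably richer.

    ```python
    # Minimal watchdog sketch: restart defined-but-stopped guests on one hypervisor.
    # Assumptions: libvirt-python is installed and qemu:///system is reachable.
    import time
    import libvirt

    HYPERVISOR_URI = "qemu:///system"   # assumption: local KVM hypervisor
    CHECK_INTERVAL = 60                 # seconds between checks

    def restart_stopped_guests(conn):
        for dom in conn.listAllDomains():
            if not dom.isActive():
                print(f"restarting guest {dom.name()}")
                dom.create()            # boots a defined but shut-off domain

    if __name__ == "__main__":
        conn = libvirt.open(HYPERVISOR_URI)
        try:
            while True:
                restart_stopped_guests(conn)
                time.sleep(CHECK_INTERVAL)
        finally:
            conn.close()
    ```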

  10. Design and implementation of a reliable and cost-effective cloud computing infrastructure: the INFN Napoli experience

    Science.gov (United States)

    Capone, V.; Esposito, R.; Pardi, S.; Taurino, F.; Tortone, G.

    2012-12-01

    Over the last few years we have seen an increasing number of services and applications needed to manage and maintain cloud computing facilities. This is particularly true for computing in high energy physics, which often requires complex configurations and distributed infrastructures. In this scenario a cost-effective rationalization and consolidation strategy is the key to success in terms of scalability and reliability. In this work we describe an IaaS (Infrastructure as a Service) cloud computing system, with high availability and redundancy features, which is currently in production at INFN-Naples and ATLAS Tier-2 data centre. The main goal we intended to achieve was a simplified method to manage our computing resources and deliver reliable user services, reusing existing hardware without incurring heavy costs. A combined usage of virtualization and clustering technologies allowed us to consolidate our services on a small number of physical machines, reducing electric power costs. As a result of our efforts we developed a complete solution for data and computing centres that can be easily replicated using commodity hardware. Our architecture consists of 2 main subsystems: a clustered storage solution, built on top of disk servers running the GlusterFS file system, and a virtual machine execution environment. GlusterFS is a network file system able to perform parallel writes on multiple disk servers, thereby providing live replication of data. High availability is also achieved via a network configuration using redundant switches and multiple paths between hypervisor hosts and disk servers. We also developed a set of management scripts to easily perform basic system administration tasks such as automatic deployment of new virtual machines, adaptive scheduling of virtual machines on hypervisor hosts, live migration and automated restart in case of hypervisor failures.

  11. Urban Green Infrastructure: German Experience

    Directory of Open Access Journals (Sweden)

    Diana Olegovna Dushkova

    2016-06-01

    Full Text Available The paper presents the concept of urban green infrastructure and analyzes how it is being implemented in the urban development programmes of German cities. We analyzed the most widely shared articles devoted to urban green infrastructure in order to compare different approaches to defining the term. The paper is based on materials from field research in the cities of Berlin and Leipzig in 2014-2015 and on international and national scientific publications. While preparing the paper, consultations were held with experts from scientific institutions and the administrations of Berlin and Leipzig as well as with local experts from environmental organizations in both cities. Using the German cities of Berlin and Leipzig as examples, the paper identifies how the concept can be implemented in urban development programmes. It presents the main elements of the green city model, which aim at mitigating negative anthropogenic impacts on the environment within the framework of sustainable urban development. An essential part of this is a comprehensive ecological policy as the major tool for implementing the green urban infrastructure concept. Such a policy should embrace not only specific ecological measures, but also the greening of all elements of urban infrastructure, the adoption of sustainable living with a greater awareness of the resources used in everyday life, and the development of environmental thinking among urban citizens. Urban green infrastructure is a unity of four main components: green building, green transportation, eco-friendly waste management, and green transport routes and ecological corridors. The German experience in developing urban green infrastructure can be useful for improving the environmental situation in Russian cities.

  12. Urban Green Infrastructure: German Experience

    OpenAIRE

    Diana Olegovna Dushkova; Sergey Nikolaevich Kirillov

    2016-01-01

    The paper presents the concept of urban green infrastructure and analyzes how it is being implemented in the urban development programmes of German cities. We analyzed the most widely shared articles devoted to urban green infrastructure in order to compare different approaches to defining the term. The paper is based on materials from field research in the cities of Berlin and Leipzig in 2014-2015 and on international and national scientific publications. During the process of preparing the paper, consultations...

  13. German contributions to the CMS computing infrastructure

    International Nuclear Information System (INIS)

    Scheurer, A

    2010-01-01

    The CMS computing model anticipates various hierarchically linked tier centres to counter the challenges provided by the enormous amounts of data which will be collected by the CMS detector at the Large Hadron Collider, LHC, at CERN. During the past years, various computing exercises were performed to test the readiness of the computing infrastructure, the Grid middleware and the experiment's software for the startup of the LHC, which took place in September 2008. In Germany, several tier sites have been set up to allow for an efficient and reliable way to simulate possible physics processes as well as to reprocess, analyse and interpret the numerous stored collision events of the experiment. It will be shown that the German computing sites played an important role during the experiment's preparation phase and during CMS data-taking and that, therefore, scientific groups in Germany will be ready to compete for discoveries in this new era of particle physics. This presentation focuses on the German Tier-1 centre GridKa, located at Forschungszentrum Karlsruhe, and the German CMS Tier-2 federation DESY/RWTH with installations at the University of Aachen and the research centre DESY. In addition, various local computing resources in Aachen, Hamburg and Karlsruhe are briefly introduced as well. It will be shown that an excellent cooperation between the different German institutions and physicists led to well-established computing sites which cover all parts of the CMS computing model. The following topics are therefore discussed, and the goals achieved and the knowledge gained are described: data management and distribution among the different tier sites, Grid-based Monte Carlo production at the Tier-2, as well as Grid-based and locally submitted inhomogeneous user analyses at the Tier-3s. Another important task is to ensure proper and reliable operation 24 hours a day, especially during data-taking. For this purpose, the meta-monitoring tool 'HappyFace', which was

  14. Cloud computing can simplify HIT infrastructure management.

    Science.gov (United States)

    Glaser, John

    2011-08-01

    Software as a Service (SaaS), built on cloud computing technology, is emerging as the forerunner in IT infrastructure because it helps healthcare providers reduce capital investments. Cloud computing leads to predictable, monthly, fixed operating expenses for hospital IT staff. Outsourced cloud computing facilities are state-of-the-art data centers boasting some of the most sophisticated networking equipment on the market. The SaaS model helps hospitals safeguard against technology obsolescence, minimizes maintenance requirements, and simplifies management.

  15. A Distributed Computational Infrastructure for Science and Education

    Directory of Open Access Journals (Sweden)

    Rustam K. Bazarov

    2014-06-01

    Full Text Available Researchers have lately been paying increasing attention to parallel and distributed algorithms for solving high-dimensional problems. In this regard, the issue of acquiring or renting computational resources becomes a topical one for employees of scientific and educational institutions. This article examines technologies and methods for organizing a distributed computational infrastructure. The author addresses the experience of creating a high-performance system powered by existing clustering and grid computing technologies. The approach examined in the article helps minimize financial costs, aggregate geographically distributed computational resources, and ensure a more rational use of available computer equipment by eliminating downtime.

  16. Activity-Driven Computing Infrastructure - Pervasive Computing in Healthcare

    DEFF Research Database (Denmark)

    Bardram, Jakob Eyvind; Christensen, Henrik Bærbak; Olesen, Anders Konring

    In many work settings, and especially in healthcare, work is distributed among many cooperating actors, who are constantly moving around and are frequently interrupted. In line with other researchers, we use the term pervasive computing to describe a computing infrastructure that supports work...

  17. Review of CERN Computer Centre Infrastructure

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future, in the likely scenario that any extension will be remote from CERN, and in the light of the way other large facilities are operated today. Over the past six months, CERN has been investigating modern and widely-used tools and procedures for virtualisation, clouds and fabric management in order to reduce operational effort, increase agility and support unattended remote computer centres. This presentation will give the details of the project’s motivations, current status and areas for future investigation.

  18. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  19. National Computational Infrastructure for Lattice Gauge Theory

    Energy Technology Data Exchange (ETDEWEB)

    Brower, Richard C.

    2014-04-15

    Final report for the SciDAC-2 project The Secret Life of Quarks: National Computational Infrastructure for Lattice Gauge Theory, covering March 15, 2011 through March 14, 2012. The objective of this project is to construct the software needed to study quantum chromodynamics (QCD), the theory of the strong interactions of sub-atomic physics, and other strongly coupled gauge field theories anticipated to be of importance in the energy regime made accessible by the Large Hadron Collider (LHC). It builds upon the successful efforts of the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory, in which a QCD Applications Programming Interface (QCD API) was developed that enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers. This project serves the entire USQCD Collaboration, which consists of nearly all the high energy and nuclear physicists in the United States engaged in the numerical study of QCD and related strongly interacting quantum field theories. All software developed in it is publicly available and can be downloaded from a link on the USQCD Collaboration web site, or directly from the github repositories at http://usqcd-software.github.io

  20. Computational infrastructure for law enforcement. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lades, M.; Kunz, C.; Strikos, I.

    1997-02-01

    This project planned to demonstrate the leverage of enhanced computational infrastructure for law enforcement by demonstrating the face recognition capability at LLNL. The project implemented a face finder module extending the segmentation capabilities of the current face recognition system, so that it could process different image formats and sizes, and created a pilot network-accessible image database for the demonstration of face recognition capabilities. The project was funded at $40k (2 man-months) for a feasibility study. It investigated several essential components of a networked face recognition system which could help identify, apprehend, and convict criminals.
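
    As a rough illustration of what a "face finder" front end does, the sketch below uses today's OpenCV Haar-cascade detector to locate candidate faces in an arbitrary image file. It is not the LLNL module described in the report, and the input filename is a placeholder.

    ```python
    # Minimal face-finder sketch using OpenCV's bundled Haar cascade (opencv-python).
    import cv2

    def find_faces(image_path):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        image = cv2.imread(image_path)            # handles common image formats
        if image is None:
            raise FileNotFoundError(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # returns (x, y, w, h) bounding boxes for candidate faces
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if __name__ == "__main__":
        for (x, y, w, h) in find_faces("sample.jpg"):   # placeholder filename
            print(f"face at x={x} y={y} w={w} h={h}")
    ```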

  1. Infrastructure Support for Collaborative Pervasive Computing Systems

    DEFF Research Database (Denmark)

    Vestergaard Mogensen, Martin

    Collaborative Pervasive Computing Systems (CPCS) are currently being deployed to support areas such as clinical work, emergency situations, education, ad-hoc meetings, and other areas involving information sharing and collaboration.These systems allow the users to work together synchronously......, but from different places, by sharing information and coordinating activities. Several researchers have shown the value of such distributed collaborative systems. However, building these systems is by no means a trivial task and introduces a lot of yet unanswered questions. The aforementioned areas......, are all characterized by unstable, volatile environments, either due to the underlying components changing or the nomadic work habits of users. A major challenge, for the creators of collaborative pervasive computing systems, is the construction of infrastructures supporting the system. The complexity...

  2. Eucalyptus: an open-source cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii, E-mail: rich@cs.ucsb.ed [Computer Science Department, University of California, Santa Barbara, CA 93106 (United States) and Eucalyptus Systems Inc., 130 Castilian Dr., Goleta, CA 93117 (United States)

    2009-07-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.
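
    Because Eucalyptus exposes an EC2-compatible interface, clients can drive it with standard EC2 tooling. The sketch below is a hedged example using boto3 pointed at a hypothetical Eucalyptus-style endpoint; the endpoint URL, credentials, image id and instance type are placeholders, not values from the paper.

    ```python
    # Minimal sketch: launch and list instances against an EC2-compatible endpoint.
    import boto3

    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://cloud.example.org:8773/services/compute",  # assumption
        aws_access_key_id="EUCA_ACCESS_KEY",          # placeholder credentials
        aws_secret_access_key="EUCA_SECRET_KEY",
        region_name="eucalyptus",
    )

    # Launch one instance from a registered image, then list what is running.
    ec2.run_instances(ImageId="emi-12345678", InstanceType="m1.small",
                      MinCount=1, MaxCount=1)
    for reservation in ec2.describe_instances()["Reservations"]:
        for inst in reservation["Instances"]:
            print(inst["InstanceId"], inst["State"]["Name"])
    ```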

  3. Eucalyptus: an open-source cloud computing infrastructure

    International Nuclear Information System (INIS)

    Nurmi, Daniel; Wolski, Rich; Grzegorczyk, Chris; Obertelli, Graziano; Soman, Sunil; Youseff, Lamia; Zagorodnov, Dmitrii

    2009-01-01

    Utility computing, elastic computing, and cloud computing are all terms that refer to the concept of dynamically provisioning processing time and storage space from a ubiquitous 'cloud' of computational resources. Such systems allow users to acquire and release the resources on demand and provide ready access to data from processing elements, while relegating the physical location and exact parameters of the resources. Over the past few years, such systems have become increasingly popular, but nearly all current cloud computing offerings are either proprietary or depend upon software infrastructure that is invisible to the research community. In this work, we present Eucalyptus, an open-source software implementation of cloud computing that utilizes compute resources that are typically available to researchers, such as clusters and workstation farms. In order to foster community research exploration of cloud computing systems, the design of Eucalyptus emphasizes modularity, allowing researchers to experiment with their own security, scalability, scheduling, and interface implementations. In this paper, we outline the design of Eucalyptus, describe our own implementations of the modular system components, and provide results from experiments that measure performance and scalability of a Eucalyptus installation currently deployed for public use. The main contribution of our work is the presentation of the first research-oriented open-source cloud computing system focused on enabling methodical investigations into the programming, administration, and deployment of systems exploring this novel distributed computing model.

  4. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  5. Analysis of CERN computing infrastructure and monitoring data

    Science.gov (United States)

    Nieke, C.; Lassnig, M.; Menichetti, L.; Motesnitsalis, E.; Duellmann, D.

    2015-12-01

    Optimizing a computing infrastructure on the scale of the LHC requires a quantitative understanding of a complex network of many different resources and services. For this purpose the CERN IT department and the LHC experiments are collecting a large multitude of logs and performance probes, which are already successfully used for short-term analysis (e.g. operational dashboards) within each group. The IT analytics working group has been created with the goal of bringing data sources from different services and different abstraction levels together and of implementing a suitable infrastructure for mid- to long-term statistical analysis. It further provides a forum for joint optimization across single-service boundaries and for the exchange of analysis methods and tools. To simplify access to the collected data, we implemented an automated repository for cleaned and aggregated data sources based on the Hadoop ecosystem. This contribution describes some of the challenges encountered, such as dealing with heterogeneous data formats and selecting an efficient storage format for map-reduce and external access, and it describes the repository user interface. Using this infrastructure we were able to quantitatively analyze the relationship between the CPU/wall-time fraction, the latency/throughput constraints of network and disk, and the effective job throughput. In this contribution we will first describe the design of the shared analysis infrastructure and then present a summary of first analysis results from the combined data sources.
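
    The kind of aggregation described (relating CPU/wall-time fraction to job throughput) reduces to grouping log records by key and summing. The sketch below is a stand-alone Python stand-in, assuming a simple comma-separated record format invented for illustration; the same logic could be expressed as a Hadoop Streaming reducer over the repository's actual schema.

    ```python
    # Toy aggregation: CPU-time / wall-time fraction per job type from records of
    # the hypothetical form "job_type,cpu_seconds,wall_seconds" read from stdin.
    import sys
    from collections import defaultdict

    def cpu_wall_fraction(lines):
        cpu = defaultdict(float)
        wall = defaultdict(float)
        for line in lines:
            try:
                job_type, cpu_s, wall_s = line.strip().split(",")
                cpu[job_type] += float(cpu_s)
                wall[job_type] += float(wall_s)
            except ValueError:
                continue  # skip malformed records
        return {k: cpu[k] / wall[k] for k in cpu if wall[k] > 0}

    if __name__ == "__main__":
        for job_type, frac in sorted(cpu_wall_fraction(sys.stdin).items()):
            print(f"{job_type}\t{frac:.3f}")
    ```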

  6. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    International Nuclear Information System (INIS)

    Llamas, Ramón Medrano; Megino, Fernando Harald Barreiro; Cinquilli, Mattia; Kucharczyk, Katarzyna; Denis, Marek Kamil

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier 0 of the WLCG, the resource and configuration management of the computing centre is being completely restructured under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.
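
    Dynamic provisioning through "the use of the cloud APIs directly by the WMS" boils down to a call like the one sketched below, written here with today's openstacksdk against a hypothetical OpenStack cloud entry; the cloud, image, flavor and network names are placeholders rather than CERN configuration.

    ```python
    # Minimal sketch of a WMS-side provisioner booting one worker VM via the
    # OpenStack API (openstacksdk); all names are illustrative assumptions.
    import openstack

    conn = openstack.connect(cloud="private-cloud-testbed")   # entry in clouds.yaml

    image = conn.compute.find_image("worker-node-image")      # placeholder image
    flavor = conn.compute.find_flavor("m1.large")             # placeholder flavor
    network = conn.network.find_network("private")            # placeholder network

    server = conn.compute.create_server(
        name="wms-worker-001",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)   # block until ACTIVE
    print(server.name, server.status)
    ```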

  7. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    Science.gov (United States)

    Medrano Llamas, Ramón; Harald Barreiro Megino, Fernando; Kucharczyk, Katarzyna; Kamil Denis, Marek; Cinquilli, Mattia

    2014-06-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier 0 of the WLCG, the resource and configuration management of the computing centre is being completely restructured under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain the integration of the experiment workload management systems (WMS) with the cloud resources. The second section will revisit the performance and stress testing performed with HammerCloud in order to evaluate and compare the suitability for the experiment workloads. The third section will go deeper into the dynamic provisioning techniques, such as the use of the cloud APIs directly by the WMS. The paper finishes with a review of the conclusions and the challenges ahead.

  8. New Features in the Computational Infrastructure for Nuclear Astrophysics

    International Nuclear Information System (INIS)

    Smith, Michael Scott; Lingerfelt, Eric; Scott, J. P.; Nesaraja, Caroline D; Chae, Kyung YuK.; Koura, Hiroyuki; Roberts, Luke F.; Hix, William Raphael; Bardayan, Daniel W.; Blackmon, Jeff C.

    2006-01-01

    A Computational Infrastructure for Nuclear Astrophysics has been developed to streamline the inclusion of the latest nuclear physics data in astrophysics simulations. The infrastructure consists of a platform-independent suite of computer codes that are freely available online at http://nucastrodata.org. The newest features of, and future plans for, this software suite are given.

  9. Commissioning the CERN IT Agile Infrastructure with experiment workloads

    CERN Document Server

    Medrano Llamas, Ramón; Kucharczyk, Katarzyna; Denis, Marek Kamil; Cinquilli, Mattia

    2014-01-01

    In order to ease the management of their infrastructure, most of the WLCG sites are adopting cloud-based strategies. In the case of CERN, the Tier 0 of the WLCG, the resource and configuration management of the computing centre is being completely restructured under the codename Agile Infrastructure. Its goal is to manage 15,000 Virtual Machines by means of an OpenStack middleware in order to unify all the resources in CERN's two datacenters: the one in Meyrin and the new one in Wigner, Hungary. During the commissioning of this infrastructure, CERN IT is offering an attractive amount of computing resources to the experiments (800 cores for ATLAS and CMS) through a private cloud interface. ATLAS and CMS have joined forces to exploit them by running stress tests and simulation workloads since November 2012. This work will describe the experience of the first deployments of the current experiment workloads on the CERN private cloud testbed. The paper is organized as follows: the first section will explain th...

  10. ORGANIZATION OF CLOUD COMPUTING INFRASTRUCTURE BASED ON SDN NETWORK

    Directory of Open Access Journals (Sweden)

    Alexey A. Efimenko

    2013-01-01

    Full Text Available The article presents the main approaches to building a cloud computing infrastructure based on SDN networks in modern data processing centers (DPCs). The main indicators of the effectiveness of DPC network infrastructure management are determined. Examples of solutions for the creation of virtual network devices are provided.

  11. Network and computing infrastructure for scientific applications in Georgia

    Science.gov (United States)

    Kvatadze, R.; Modebadze, Z.

    2016-09-01

    The status of the network and computing infrastructure and the services available to the research and education community of Georgia are presented. The Research and Educational Networking Association - GRENA provides the following network services: Internet connectivity, network services, cyber security, technical support, etc. Computing resources used by the research teams are located at GRENA and at major state universities. The GE-01-GRENA site is included in the European Grid Infrastructure. The paper also contains information about the programs of the Learning Center and the research and development projects in which GRENA participates.

  12. PUBLIC AND PRIVATE PARTENERSHIP IN INFRASTRUCTURE DEVELOPMENT: ESSENCE, EXPERIENCE, PROBLEMS

    Directory of Open Access Journals (Sweden)

    Alexander E. Lantsov

    2014-01-01

    Full Text Available Infrastructure is of high importance for human society, so states pay great attention to it. However, the characteristics inherent to infrastructure and to its development, maintenance and consumption do not always explain why only the state should be involved in the sector. The article considers the preconditions for and basis of private-sector involvement in the provision of infrastructure, the experience of different countries, the relationship between the public and private sectors in this matter, and the effectiveness of the private sector in infrastructure supply.

  13. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next-generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging grid computing, parallel and distributed computers have moved into the mainstream.

  14. Kenya's Integrated Nuclear Infrastructure Review Experience

    International Nuclear Information System (INIS)

    Ayacko, Ochilo G.M.

    2015-01-01

    Lessons learnt for INIR preparation: → A detailed Self Evaluation report is critical to proper evaluation of each infrastructure; → Involvement of all relevant organizations in preparation of self evaluation report and the main mission; → Meetings on individual infrastructure issues to consolidate the country position; → Openness during interviews and provision of adequate information

  15. Grid computing infrastructure, service, and applications

    CERN Document Server

    Jie, Wei; Chen, Jinjun

    2009-01-01

    Offering a comprehensive discussion of advances in grid computing, this book summarizes the concepts, methods, technologies, and applications. It covers topics such as philosophy, middleware, architecture, services, and applications. It also includes technical details to demonstrate how grid computing works in the real world

  16. Grids in Europe - a computing infrastructure for science

    International Nuclear Information System (INIS)

    Kranzlmueller, D.

    2008-01-01

    Grids provide virtually unlimited computing power and access to a variety of resources to today's scientists. Moving from a research topic of computer science to a commodity tool for science and research in general, grid infrastructures are being built all around the world. This talk provides an overview of the development of grids in Europe, the status of the so-called national grid initiatives, and the efforts towards an integrated European grid infrastructure. The latter, summarized under the title of the European Grid Initiative (EGI), promises a permanent and reliable grid infrastructure and its services in a way similar to research networks today. The talk describes the status of these efforts, the plans for the setup of this pan-European e-Infrastructure, and the benefits for the application communities. (author)

  17. National Computational Infrastructure for Lattice Gauge Theory: Final Report

    International Nuclear Information System (INIS)

    Richard Brower; Norman Christ; Michael Creutz; Paul Mackenzie; John Negele; Claudio Rebbi; David Richards; Stephen Sharpe; Robert Sugar

    2006-01-01

    This is the final report of the Department of Energy SciDAC Grant 'National Computational Infrastructure for Lattice Gauge Theory'. It describes the software developed under this grant, which enables the effective use of a wide variety of supercomputers for the study of lattice quantum chromodynamics (lattice QCD). It also describes the research on and development of commodity clusters optimized for the study of QCD. Finally, it provides some highlights of research enabled by the infrastructure created under this grant, as well as a full list of the papers resulting from research that made use of this infrastructure.

  18. Strategic Plan for a Scientific Cloud Computing infrastructure for Europe

    CERN Document Server

    Lengert, Maryline

    2011-01-01

    Here we present the vision, concept and direction for forming a European Industrial Strategy for a Scientific Cloud Computing Infrastructure to be implemented by 2020. This will be the framework for decisions and for securing support and approval in establishing, initially, an R&D European Cloud Computing Infrastructure that serves the needs of the European Research Area (ERA) and Space Agencies. This Cloud Infrastructure will have the potential, beyond this initial user base, to evolve to provide similar services to a broad range of customers including government and SMEs. We explain how this plan aims to support the broader strategic goals of our organisations and identify the benefits to be realised by adopting an industrial Cloud Computing model. We also outline the prerequisites and commitment needed to achieve these objectives.

  19. Experiences with the ALICE Mesos infrastructure

    Science.gov (United States)

    Berzano, D.; Eulisse, G.; Grigoraş, C.; Napoli, K.

    2017-10-01

    Apache Mesos is a resource management system for large data centres, initially developed by UC Berkeley and now maintained under the Apache Foundation umbrella. It is widely used in industry by companies like Apple, Twitter, and Airbnb, and it is known to scale to tens of thousands of nodes. Together with other tools of its ecosystem, such as Mesosphere Marathon or Metronome, it provides an end-to-end solution for datacenter operations and a unified way to exploit large distributed systems. We present the experience of the ALICE Experiment Offline & Computing in deploying and using in production the Apache Mesos ecosystem for a variety of tasks on a small 500-core cluster, using hybrid OpenStack and bare-metal resources. We will initially introduce the architecture of our setup and its operation; we will then describe the tasks it performs, including release building and QA, release validation, and simple Monte Carlo production. We will show how we developed Mesos-enabled components (called “Mesos Frameworks”) to carry out ALICE-specific needs. In particular, we will illustrate our effort to integrate Work Queue, a lightweight batch processing engine developed by the University of Notre Dame, which ALICE uses to orchestrate release validation. Finally, we will give an outlook on how to use Mesos as a resource manager for DDS, a software deployment system developed by GSI which will be the foundation of the system deployment for ALICE's next-generation Online-Offline (O2) system.
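
    For long-running services, the Mesosphere Marathon component mentioned above accepts application definitions through a small REST API. The sketch below posts a hypothetical app definition with Python's standard library; the Marathon URL, command and resource numbers are placeholders and this is not ALICE's actual framework code.

    ```python
    # Minimal sketch: submit a long-running task to Mesos via Marathon's /v2/apps.
    import json
    import urllib.request

    MARATHON = "http://marathon.example.org:8080"   # placeholder endpoint

    app = {
        "id": "/build/release-validation",
        "cmd": "run-validation.sh",                 # placeholder command
        "cpus": 2.0,
        "mem": 4096,
        "instances": 1,
    }

    req = urllib.request.Request(
        MARATHON + "/v2/apps",
        data=json.dumps(app).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
    ```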

  20. Fostering incidental experiences of nature through green infrastructure planning

    DEFF Research Database (Denmark)

    Beery, Thomas H; Raymond, Christopher M; Kyttä, Marketta

    2017-01-01

    of such experience for human well-being is considered. The role of green infrastructure to provide the opportunity for incidental nature experience may serve as a nudge or guide toward meaningful interaction. These ideas are explored using examples of green infrastructure design in two Nordic municipalities...... to consider this seldom addressed aspect of human interaction with nature in green infrastructure planning. Special attention has been paid to the ability of incidental nature experience to redirect attention from a primary activity toward an unplanned focus (in this case, nature phenomena). The value...

  1. Copyright and personal use of CERN’s computing infrastructure

    CERN Multimedia

    IT Department

    2009-01-01

    (The French version will be available online shortly.) The rules covering the personal use of CERN’s computing infrastructure are defined in Operational Circular No. 5 and its Subsidiary Rules (see http://cern.ch/ComputingRules). All users of CERN’s computing infrastructure must comply with these rules, whether they access CERN’s computing facilities from within the Organization’s site or at another location. In particular, OC5 clause 17 requires that proprietary rights (the rights in software, music, video, etc.) must be respected. The user is liable for damages resulting from non-compliance. Recently, there have been several violations of OC5, where copyright material was discovered on public world-readable disk space. Please ensure that all material under your responsibility (in particular in files owned by your account) respects proprietary rights, including with respect to the restriction of access by third parties. CERN Security Team

  2. Autonomic Management of Application Workflows on Hybrid Computing Infrastructure

    Directory of Open Access Journals (Sweden)

    Hyunjoo Kim

    2011-01-01

    Full Text Available In this paper, we present a programming and runtime framework that enables the autonomic management of complex application workflows on hybrid computing infrastructures. The framework is designed to address system and application heterogeneity and dynamics to ensure that application objectives and constraints are satisfied. The need for such autonomic system and application management is becoming critical as computing infrastructures become increasingly heterogeneous, integrating different classes of resources from high-end HPC systems to commodity clusters and clouds. For example, the framework presented in this paper can be used to provision the appropriate mix of resources based on application requirements and constraints. The framework also monitors the system/application state and adapts the application and/or resources to respond to changing requirements or environment. To demonstrate the operation of the framework and to evaluate its ability, we employ an oil-reservoir characterization workflow executing on a hybrid infrastructure composed of TeraGrid nodes and Amazon EC2 instances of various types. Specifically, we show how different application objectives, such as acceleration, conservation and resilience, can be effectively achieved while satisfying deadline and budget constraints, using an appropriate mix of dynamically provisioned resources. Our evaluations also demonstrate that public clouds can be used to complement and reinforce the scheduling and usage of traditional high-performance computing infrastructure.
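
    To make the deadline/budget trade-off concrete, the toy sketch below picks, among resource classes with assumed throughputs and prices, the cheapest one that can finish the remaining tasks before a deadline within a budget. It illustrates the kind of decision such a framework automates; it is not the paper's autonomic manager, and all numbers are invented.

    ```python
    # Toy provisioning decision under deadline and budget constraints.
    import math
    from dataclasses import dataclass

    @dataclass
    class ResourceClass:
        name: str
        tasks_per_hour: float    # throughput of one node / instance (assumed)
        cost_per_hour: float     # 0.0 for an already-paid HPC allocation

    def cheapest_feasible_plan(remaining_tasks, hours_to_deadline, budget, classes):
        best = None
        for rc in classes:
            # nodes needed so that count * rate * time >= remaining work
            count = math.ceil(remaining_tasks / (rc.tasks_per_hour * hours_to_deadline))
            cost = count * rc.cost_per_hour * hours_to_deadline
            if cost <= budget and (best is None or cost < best[2]):
                best = (rc.name, count, cost)
        return best   # None if no single class meets the deadline within budget

    if __name__ == "__main__":
        classes = [
            ResourceClass("hpc-node", tasks_per_hour=8.0, cost_per_hour=0.0),
            ResourceClass("cloud-instance", tasks_per_hour=5.0, cost_per_hour=0.34),
        ]
        print(cheapest_feasible_plan(remaining_tasks=4000, hours_to_deadline=24,
                                     budget=500.0, classes=classes))
    ```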

  3. Evolution of Cloud Storage as Cloud Computing Infrastructure Service

    OpenAIRE

    Rajan, Arokia Paul; Shanmugapriyaa

    2013-01-01

    Enterprises are driving towards less cost, more availability, agility, managed risk - all of which is accelerated towards Cloud Computing. Cloud is not a particular product, but a way of delivering IT services that are consumable on demand, elastic to scale up and down as needed, and follow a pay-for-usage model. Out of the three common types of cloud computing service models, Infrastructure as a Service (IaaS) is a service model that provides servers, computing power, network bandwidth and S...

  4. Managing a tier-2 computer centre with a private cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-01-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI
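
    Contextualization with CloudInit, as mentioned above, usually means handing the freshly booted VM a small user-data document. The sketch below builds such a document in Python and base64-encodes it the way EC2-style APIs expect; the package list and the configuration server are placeholders, not the INFN-Torino setup.

    ```python
    # Minimal sketch: generate base64-encoded cloud-init user-data for a new VM.
    import base64

    def make_user_data(hostname, puppet_server="puppet.example.org"):
        # #cloud-config document: set hostname, install puppet, run one agent pass.
        cloud_config = f"""#cloud-config
    hostname: {hostname}
    packages:
      - puppet
    runcmd:
      - [ puppet, agent, --server, {puppet_server}, --onetime, --no-daemonize ]
    """
        return base64.b64encode(cloud_config.encode()).decode()

    if __name__ == "__main__":
        print(make_user_data("wn-001.example.org"))   # placeholder hostname
    ```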

  5. Fostering incidental experiences of nature through green infrastructure planning.

    Science.gov (United States)

    Beery, Thomas H; Raymond, Christopher M; Kyttä, Marketta; Olafsson, Anton Stahl; Plieninger, Tobias; Sandberg, Mattias; Stenseke, Marie; Tengö, Maria; Jönsson, K Ingemar

    2017-11-01

    Concern for a diminished human experience of nature and subsequent decreased human well-being is addressed via a consideration of green infrastructure's potential to facilitate unplanned or incidental nature experience. Incidental nature experience is conceptualized and illustrated in order to consider this seldom addressed aspect of human interaction with nature in green infrastructure planning. Special attention has been paid to the ability of incidental nature experience to redirect attention from a primary activity toward an unplanned focus (in this case, nature phenomena). The value of such experience for human well-being is considered. The role of green infrastructure to provide the opportunity for incidental nature experience may serve as a nudge or guide toward meaningful interaction. These ideas are explored using examples of green infrastructure design in two Nordic municipalities: Kristianstad, Sweden, and Copenhagen, Denmark. The outcome of the case study analysis coupled with the review of literature is a set of sample recommendations for how green infrastructure can be designed to support a range of incidental nature experiences with the potential to support human well-being.

  6. Cloud Computing and Virtual Desktop Infrastructures in Afloat Environments

    OpenAIRE

    Gillette, Stefan E.

    2012-01-01

    The phenomenon of “cloud computing” has become ubiquitous among users of the Internet and many commercial applications. Yet, the U.S. Navy has conducted limited research in this nascent technology. This thesis explores the application and integration of cloud computing both at the shipboard level and in a multi-ship environment. A virtual desktop infrastructure, mirroring a shipboard environment, was built and analyzed in the Cloud Lab at the Naval Postgraduate School, which offers a potentia...

  7. Design of Computer Experiments

    DEFF Research Database (Denmark)

    Dehlendorff, Christian

    The main topic of this thesis is design and analysis of computer and simulation experiments and is dealt with in six papers and a summary report. Simulation and computer models have in recent years received increasing attention due to their increasing complexity and usability. Software...... packages make the development of rather complicated computer models using predefined building blocks possible. This implies that the range of phenomena that are analyzed by means of a computer model has expanded significantly. As the complexity grows, so does the need for efficient experimental designs...... and analysis methods, since complex computer models are often expensive to use in terms of computer time. The choice of performance parameter is an important part of the analysis of computer and simulation models and Paper A introduces a new statistic for waiting times in health care units. The statistic...
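
    A flavour of the space-filling designs such work typically builds on can be given with a Latin hypercube sample (an illustrative choice, not necessarily the designs used in the thesis); the factor count and ranges below are made up:

        # Minimal sketch of a space-filling design for a computer experiment,
        # using Latin hypercube sampling (an assumption; other designs are possible).
        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=3, seed=42)   # 3 input factors
        unit_design = sampler.random(n=20)           # 20 runs in [0, 1)^3

        # Scale each factor to a hypothetical physical range,
        # e.g. arrival rate, staff count, shift length.
        lower = [0.5, 2, 6]
        upper = [5.0, 10, 12]
        design = qmc.scale(unit_design, lower, upper)
        print(design[:3])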

  8. Defense strategies for cloud computing multi-site server infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL]; Ma, Chris Y. T. [Hang Seng Management College, Hong Kong]; He, Fei [Texas A&M University, Kingsville, TX, USA]

    2018-01-01

    We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash Equilibrium, we derive expressions for the expected capacity of the infrastructure given by the number of operational servers connected to the network for sum-form, product-form and composite utility functions.
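
    As a purely illustrative sketch of the kind of objective involved (the paper's exact sum-form, product-form and composite utilities are not reproduced here), a provider utility trading survival probability against reinforcement cost, and a mirrored attacker utility, could be written as:

        U_D(x, y) = S(x, y) - \sum_i c_i x_i ,
        U_A(x, y) = \bigl(1 - S(x, y)\bigr) - \sum_j d_j y_j ,

    where x_i, y_j \in \{0, 1\} indicate reinforcement and attack choices for the i-th and j-th components, c_i and d_j are their costs, and S(x, y) is the survival probability of the infrastructure; at Nash Equilibrium neither player can improve its utility by unilaterally changing its choices.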

  9. Distributed Monitoring Infrastructure for Worldwide LHC Computing Grid

    CERN Document Server

    Andrade, Pedro; Bhatt, Kislay; Chand, Phool; Collados, David; Duggal, Vibhuti; Fuente, Paloma; Hayashi, Soichi; Imamagic, Emir; Joshi, Pradyumna; Kalmady, Rajesh; Karnani, Urvashi; Kumar, Vaibhav; Lapka, Wojciech; Quick, Robert; Tarragon, Jacobo; Teige, Scott; Triantafyllidis, Christos

    2012-01-01

    The journey of a monitoring probe from its development phase to the moment its execution result is presented in an availability report is a complex process. It goes through multiple phases such as development, testing, integration, release, deployment, execution, data aggregation, computation, and reporting. Further, it involves people with different roles (developers, site managers, VO managers, service managers, management), from different middleware providers (ARC, dCache, gLite, UNICORE and VDT), consortia (WLCG, EMI, EGI, OSG), and operational teams (GOC, OMB, OTAG, CSIRT). The seamless harmonization of these distributed actors underpins the daily monitoring of the WLCG infrastructure. In this paper we describe the monitoring of the WLCG infrastructure from the operational perspective. We explain the complexity of the journey of a monitoring probe from its execution on a grid node to the visualization on the MyWLCG portal where it is exposed to other clients. This monitoring workflow profits from the i...
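
    As a minimal illustration of what such a probe looks like at the execution end (following the Nagios exit-code convention commonly used for grid service probes; the service URL below is a hypothetical placeholder, not an actual WLCG endpoint):

        # Minimal monitoring-probe sketch following the Nagios exit-code convention
        # (0 = OK, 1 = WARNING, 2 = CRITICAL). The endpoint is hypothetical.
        import sys
        import time
        import urllib.request

        SERVICE_URL = "https://storage-element.example.org:8443/ping"  # placeholder

        def main() -> int:
            start = time.time()
            try:
                with urllib.request.urlopen(SERVICE_URL, timeout=30) as resp:
                    latency = time.time() - start
                    if resp.status == 200 and latency < 10:
                        print(f"OK - service answered in {latency:.2f}s")
                        return 0
                    print(f"WARNING - slow or unexpected answer ({resp.status}, {latency:.2f}s)")
                    return 1
            except Exception as exc:
                print(f"CRITICAL - {exc}")
                return 2

        if __name__ == "__main__":
            sys.exit(main())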

  10. X-ray-induced acoustic computed tomography of concrete infrastructure

    Science.gov (United States)

    Tang, Shanshan; Ramseyer, Chris; Samant, Pratik; Xiang, Liangzhong

    2018-02-01

    X-ray-induced Acoustic Computed Tomography (XACT) takes advantage of both X-ray absorption contrast and high ultrasonic resolution in a single imaging modality by making use of the thermoacoustic effect. In XACT, X-ray absorption by defects and other structures in concrete creates thermally induced pressure jumps that launch ultrasonic waves, which are then received by acoustic detectors to form images. In this research, XACT imaging was used to non-destructively test and identify defects in concrete. For concrete structures, we conclude that XACT imaging allows multiscale imaging at depths ranging from centimeters to meters, with spatial resolutions from sub-millimeter to centimeters. XACT imaging also holds promise for single-side testing of concrete infrastructure and provides an optimal solution for nondestructive inspection of existing bridges, pavement, nuclear power plants, and other concrete infrastructure.
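
    The underlying thermoacoustic relation (quoted here from standard photoacoustic/thermoacoustic theory as an illustration, not from the record above) links the initial pressure rise to the locally absorbed X-ray energy:

        p_0 = \Gamma \, \eta_{th} \, A ,

    where A is the absorbed X-ray energy per unit volume, \eta_{th} the fraction of that energy converted into heat, and \Gamma the Grüneisen parameter of the material; spatial variations of A at defects are what launch the detectable ultrasonic waves.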

  11. A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing

    Directory of Open Access Journals (Sweden)

    Christoph Endres

    2005-01-01

    Full Text Available In this survey, we discuss 29 software infrastructures and frameworks which support the construction of distributed interactive systems. They range from small projects with one implemented prototype to large scale research efforts, and they come from the fields of Augmented Reality (AR), Intelligent Environments, and Distributed Mobile Systems. In their own way, they can all be used to implement various aspects of the ubiquitous computing vision as described by Mark Weiser [60]. This survey is meant as a starting point for new projects, in order to choose an existing infrastructure for reuse, or to get an overview before designing a new one. It tries to provide a systematic, relatively broad (and necessarily not very deep) overview, while pointing to relevant literature for in-depth study of the systems discussed.

  12. The computing and data infrastructure to interconnect EEE stations

    International Nuclear Information System (INIS)

    Noferini, F.

    2016-01-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a dedicated data management infrastructure to collect data recorded at stations far apart from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer tool designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.
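
    As a minimal illustration of the kind of automation sitting behind such a setup (a polling watcher that launches reconstruction whenever new files land in a synchronized folder; the paths, file pattern and reconstruction command are hypothetical placeholders, not the EEE software itself):

        # Minimal sketch: watch a synchronized data folder and trigger reconstruction
        # for every new file. Paths and the reconstruction command are placeholders.
        import subprocess
        import time
        from pathlib import Path

        SYNC_DIR = Path("/storage/eee/incoming")   # hypothetical synchronized folder
        seen: set[Path] = set()

        while True:
            for raw_file in sorted(SYNC_DIR.glob("*.bin")):
                if raw_file not in seen:
                    seen.add(raw_file)
                    # Hypothetical reconstruction step; in practice this would call the
                    # experiment's own reconstruction executable.
                    subprocess.run(["reconstruct", str(raw_file)], check=False)
            time.sleep(60)   # poll once per minute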

  13. The computing and data infrastructure to interconnect EEE stations

    Energy Technology Data Exchange (ETDEWEB)

    Noferini, F., E-mail: noferini@bo.infn.it [Museo Storico della Fisica e Centro Studi e Ricerche “Enrico Fermi”, Rome (Italy); INFN CNAF, Bologna (Italy)

    2016-07-11

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a dedicated data management infrastructure to collect data recorded at stations far apart from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer tool designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  14. The computing and data infrastructure to interconnect EEE stations

    Science.gov (United States)

    Noferini, F.; EEE Collaboration

    2016-07-01

    The Extreme Energy Event (EEE) experiment is devoted to the search for high-energy cosmic rays through a network of telescopes installed in about 50 high schools distributed throughout the Italian territory. This project requires a dedicated data management infrastructure to collect data recorded at stations far apart from each other and to allow a coordinated analysis. Such an infrastructure is realized at INFN-CNAF, which operates a Cloud facility based on the OpenStack open-source Cloud framework and provides Infrastructure as a Service (IaaS) for its users. In 2014 EEE started to use it for collecting, monitoring and reconstructing the data acquired in all the EEE stations. For the synchronization between the stations and the INFN-CNAF infrastructure we used BitTorrent Sync, a free peer-to-peer tool designed to optimize data synchronization between distributed nodes. All data folders are synchronized with the central repository in real time to allow an immediate reconstruction of the data and their publication on a monitoring webpage. We present the architecture and the functionalities of this data management system, which provides a flexible environment for the specific needs of the EEE project.

  15. Software Development Infrastructure for the FAIR Experiments

    International Nuclear Information System (INIS)

    Uhlig, F; Al-Turany, M; Bertini, D; Karabowicz, R

    2011-01-01

    The proposed project FAIR (Facility for Antiproton and Ion Research) is an international accelerator facility of the next generation. It builds on top of the experience and technological developments already made at the existing GSI facility, and incorporates new technological concepts. The four scientific pillars of FAIR are NUSTAR (nuclear structure and astrophysics), PANDA (QCD studies with cooled beams of antiprotons), CBM (physics of hadronic matter at the highest baryon densities), and APPA (atomic physics, plasma physics, and applications). The FairRoot framework, used by all of the big FAIR experiments as a base for their own specific developments, provides basic functionality such as I/O, geometry handling, etc. The challenge is to support all the different experiments with their heterogeneous requirements. Due to the limited manpower, one of the first design decisions was to (re)use already available and tested software as much as possible and to focus on the development of the framework. Besides the framework itself, the FairRoot core team also provides some software development tools, and we describe the complete set of tools in this article. The Makefiles for all projects are generated using CMake. For software testing and the corresponding quality assurance, we use CTest to generate the results and CDash as the web front end. The tool set is completed by Subversion as the source code repository and Trac for overall source code management. This set of tools allows us to offer the experiments based on FairRoot the same full functionality that is available for FairRoot itself.

  16. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068610; The ATLAS collaboration; Barberis, Dario; Crepe-Renaudin, Sabine Chrystel; De, Kaushik; Fassi, Farida; Stradling, Alden; Svatos, Michal; Vartapetian, Armen; Wolters, Helmut

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a member of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing...

  17. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  18. First Experiences with LHC Grid Computing and Distributed Analysis

    CERN Document Server

    Fisk, Ian

    2010-01-01

    This presentation described the experiences of the LHC experiments with grid computing, with a focus on distributed analysis. After many years of development, preparation, exercises, and validation, the LHC (Large Hadron Collider) experiments are in operation. The computing infrastructure has been heavily utilized in the first 6 months of data collection. The general experience of exploiting the grid infrastructure for organized processing and preparation is described, as well as the successes in employing the infrastructure for distributed analysis. At the end, the expected evolution and future plans are outlined.

  19. Grid Computing Making the Global Infrastructure a Reality

    CERN Document Server

    Fox, Geoffrey C; Hey, Anthony J G

    2003-01-01

    Grid computing is applying the resources of many computers in a network to a single problem at the same time. Grid computing appears to be a promising trend for three reasons: (1) its ability to make more cost-effective use of a given amount of computer resources, (2) as a way to solve problems that can't be approached without an enormous amount of computing power, and (3) because it suggests that the resources of many computers can be cooperatively and perhaps synergistically harnessed and managed as a collaboration toward a common objective. A number of corporations, professional groups, university consortiums, and other groups have developed or are developing frameworks and software for managing grid computing projects. The European Community (EU) is sponsoring a project for a grid for high-energy physics, earth observation, and biology applications. In the United States, the National Technology Grid is prototyping a computational grid for infrastructure and an access grid for people. Sun Microsystems offers Gri...

  20. CMS distributed computing workflow experience

    Science.gov (United States)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D.; Prosper, Harrison B.; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao, Junhui; Pin, Arnaud; Schul, Nicolas; De Lentdecker, Gilles; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey; Barge, Derek; Lahiff, Andrew

    2011-12-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data-taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data-taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources, and we describe the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.

  1. CMS distributed computing workflow experience

    International Nuclear Information System (INIS)

    Adelman-McCarthy, Jennifer; Gutsche, Oliver; Haas, Jeffrey D; Prosper, Harrison B; Dutta, Valentina; Gomez-Ceballos, Guillelmo; Hahn, Kristian; Klute, Markus; Mohapatra, Ajit; Spinoso, Vincenzo; Kcira, Dorian; Caudron, Julien; Liao Junhui; Pin, Arnaud; Schul, Nicolas; Lentdecker, Gilles De; McCartin, Joseph; Vanelderen, Lukas; Janssen, Xavier; Tsyganov, Andrey

    2011-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level; these include re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data-taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed, and lessons are drawn from this experience. The simulation of proton-proton collisions for the CMS experiment is primarily carried out at the second tier of the CMS computing infrastructure. Half of the Tier-2 sites of CMS are reserved for central Monte Carlo (MC) production while the other half is available for user analysis. This paper summarizes the large throughput of the MC production operation during the data-taking period of 2010 and discusses the latencies and efficiencies of the various types of MC production workflows. We present the operational procedures to optimize the usage of available resources, and we describe the operational model of CMS for including opportunistic resources, such as the larger Tier-3 sites, in the central production operation.

  2. Experiments in computing: a survey.

    Science.gov (United States)

    Tedre, Matti; Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general.

  3. First results from a combined analysis of CERN computing infrastructure metrics

    Science.gov (United States)

    Duellmann, Dirk; Nieke, Christian

    2017-10-01

    The IT Analysis Working Group (AWG) has been formed at CERN across individual computing units and the experiments to attempt a cross-cutting analysis of computing infrastructure and application metrics. In this presentation we will describe the first results obtained using medium/long-term data (1 month to 1 year), correlating box-level metrics, job-level metrics from LSF and HTCondor, IO metrics from the physics analysis disk pools (EOS), and networking and application-level metrics from the experiment dashboards. We will cover in particular the measurement of hardware performance and prediction of job duration, the latency sensitivity of different job types and a search for bottlenecks with the production job mix in the current infrastructure. The presentation will conclude with the proposal of a small set of metrics to simplify drawing conclusions also in the more constrained environment of public cloud deployments.
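
    As a minimal sketch of the kind of correlation analysis described (the file names and column names below are hypothetical, not the actual CERN data sources):

        # Minimal sketch: joining job-level and host-level metrics and looking at
        # correlations. File names and column names are hypothetical placeholders.
        import pandas as pd

        jobs = pd.read_csv("job_metrics.csv")     # e.g. job_id, host, wallclock_s, cpu_s, io_wait_s
        hosts = pd.read_csv("host_metrics.csv")   # e.g. host, hs06_per_core, load_avg, net_mb_s

        merged = jobs.merge(hosts, on="host", how="inner")

        # Correlate job duration with hardware performance and I/O wait.
        print(merged[["wallclock_s", "hs06_per_core", "io_wait_s"]].corr())

        # A crude duration comparison: normalise wallclock by the host benchmark score.
        merged["norm_wallclock"] = merged["wallclock_s"] * merged["hs06_per_core"]
        print(merged.groupby("host")["norm_wallclock"].median().sort_values().head())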

  4. National Computational Infrastructure for Lattice Gauge Theory: Final report

    International Nuclear Information System (INIS)

    Reed, Daniel A.

    2008-01-01

    In this document we describe work done under the SciDAC-1 project National Computational Infrastructure for Lattice Gauge Theory. The objective of this project was to construct the computational infrastructure needed to study quantum chromodynamics (QCD). Nearly all high energy and nuclear physicists in the United States working on the numerical study of QCD are involved in the project, as are Brookhaven National Laboratory (BNL), Fermi National Accelerator Laboratory (FNAL), and Thomas Jefferson National Accelerator Facility (JLab). A list of the senior participants is given in Appendix A.2. The project includes the development of community software for the effective use of terascale computers, and the research and development of commodity clusters optimized for the study of QCD. The software developed as part of this effort is publicly available, and is being widely used by physicists in the United States and abroad. The prototype clusters built with SciDAC-1 funds have been used to test the software, and are available to lattice gauge theorists in the United States on a peer-reviewed basis.

  5. INFRASTRUCTURE

    CERN Multimedia

    Andrea Gaddi

    With all the technical services running, the attention has moved toward the next shutdown that will be spent to perform those modifications needed to enhance the reliability of CMS Infrastructures. Just to give an example for the cooling circuit, a set of re-circulating bypasses will be installed into the TS/CV area to limit the pressure surge when a circuit is partially shut-off. This problem has affected especially the Endcap Muon cooling circuit in the past. Also the ventilation of the UXC55 has to be revisited, allowing the automatic switching to full extraction in case of magnet quench. (Normally 90% of the cavern air is re-circulated by the ventilation system.) Minor modifications will concern the gas distribution, while the DSS action-matrix has to be refined according to the experience gained with operating the detector for a while. On the powering side, some LV power lines have been doubled and the final schematics of the UPS coverage for the counting rooms have been released. The most relevant inte...

  6. A virtual computing infrastructure for TS-CV SCADA systems

    CERN Document Server

    Poulsen, S

    2008-01-01

    In modern data centres, it is an emerging trend to operate and manage computers as software components or logical resources and not as physical machines. This technique is known as “virtualisation” and the new computers are referred to as “virtual machines” (VMs). Multiple VMs can be consolidated on a single hardware platform and managed in ways that are not possible with physical machines. However, this is not yet widely practiced for control system deployment. In TS-CV, a collection of VMs or a “virtual infrastructure” has been installed since 2005 for SCADA systems, PLC program development, and alarm transmission. This makes it possible to consolidate distributed, heterogeneous operating systems and applications on a limited number of standardised high-performance servers in the Central Control Room (CCR). More generally, virtualisation assists in offering continuous computing services for controls and in maintaining performance and assuring quality. Implementing our systems in a vi...

  7. The Computational Infrastructure for Geodynamics as a Community of Practice

    Science.gov (United States)

    Hwang, L.; Kellogg, L. H.

    2016-12-01

    Computational Infrastructure for Geodynamics (CIG), geodynamics.org, originated in 2005 out of community recognition that the efforts of individual or small groups of researchers to develop scientifically-sound software is impossible to sustain, duplicates effort, and makes it difficult for scientists to adopt state-of-the art computational methods that promote new discovery. As a community of practice, participants in CIG share an interest in computational modeling in geodynamics and work together on open source software to build the capacity to support complex, extensible, scalable, interoperable, reliable, and reusable software in an effort to increase the return on investment in scientific software development and increase the quality of the resulting software. The group interacts regularly to learn from each other and better their practices formally through webinar series, workshops, and tutorials and informally through listservs and hackathons. Over the past decade, we have learned that successful scientific software development requires at a minimum: collaboration between domain-expert researchers, software developers and computational scientists; clearly identified and committed lead developer(s); well-defined scientific and computational goals that are regularly evaluated and updated; well-defined benchmarks and testing throughout development; attention throughout development to usability and extensibility; understanding and evaluation of the complexity of dependent libraries; and managed user expectations through education, training, and support. CIG's code donation standards provide the basis for recently formalized best practices in software development (geodynamics.org/cig/dev/best-practices/). Best practices include use of version control; widely used, open source software libraries; extensive test suites; portable configuration and build systems; extensive documentation internal and external to the code; and structured, human readable input formats.

  8. CernVM Co-Pilot: an Extensible Framework for Building Scalable Computing Infrastructures on the Cloud

    Science.gov (United States)

    Harutyunyan, A.; Blomer, J.; Buncic, P.; Charalampidis, I.; Grey, F.; Karneyeu, A.; Larsen, D.; Lombraña González, D.; Lisec, J.; Segal, B.; Skands, P.

    2012-12-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of managed or unmanaged computing resources. Co-Pilot can either be used to create a stand-alone computing infrastructure, or to integrate new computing resources into existing infrastructures (such as Grid or batch). Unlike traditional middleware systems, Co-Pilot components communicate using the Extensible Messaging and Presence Protocol (XMPP). This allows the system to be easily scaled in case of a high load, and it also simplifies the development of new components. In this contribution we present the latest developments and the current status of the framework, discuss how it can be extended to suit the needs of a particular community, as well as describe the operational experience of using the framework in the LHC@home 2.0 volunteer computing project.
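
    As a minimal illustration of XMPP-based messaging between infrastructure components (a sketch using the slixmpp library; this is not the Co-Pilot implementation or its message schema, and the JIDs and credentials are fake placeholders):

        # Minimal sketch of an XMPP agent that announces itself and receives messages.
        import slixmpp

        class AgentStub(slixmpp.ClientXMPP):
            def __init__(self, jid, password):
                super().__init__(jid, password)
                self.add_event_handler("session_start", self.on_start)
                self.add_event_handler("message", self.on_message)

            async def on_start(self, event):
                self.send_presence()
                await self.get_roster()
                # Announce availability to a (hypothetical) job-manager component.
                self.send_message(mto="jobmanager@example.org",
                                  mbody="ready for work", mtype="chat")

            def on_message(self, msg):
                if msg["type"] in ("chat", "normal"):
                    print("received job description:", msg["body"])

        if __name__ == "__main__":
            agent = AgentStub("agent@example.org", "secret")
            agent.connect()
            agent.process(forever=True)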

  9. CernVM Co-Pilot: an Extensible Framework for Building Scalable Computing Infrastructures on the Cloud

    International Nuclear Information System (INIS)

    Harutyunyan, A; Blomer, J; Buncic, P; Charalampidis, I; Grey, F; Karneyeu, A; Larsen, D; Lombraña González, D; Lisec, J; Segal, B; Skands, P

    2012-01-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of managed or unmanaged computing resources. Co-Pilot can either be used to create a stand-alone computing infrastructure, or to integrate new computing resources into existing infrastructures (such as Grid or batch). Unlike traditional middleware systems, Co-Pilot components communicate using the Extensible Messaging and Presence Protocol (XMPP). This allows the system to be easily scaled in case of a high load, and it also simplifies the development of new components. In this contribution we present the latest developments and the current status of the framework, discuss how it can be extended to suit the needs of a particular community, as well as describe the operational experience of using the framework in the LHC@home 2.0 volunteer computing project.

  10. CernVM Co-Pilot: an Extensible Framework for Building Scalable Cloud Computing Infrastructures

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    CernVM Co-Pilot is a framework for instantiating an ad-hoc computing infrastructure on top of distributed computing resources. Such resources include commercial computing clouds (e.g. Amazon EC2), scientific computing clouds (e.g. CERN lxcloud), as well as the machines of users participating in volunteer computing projects (e.g. BOINC). The framework consists of components that communicate using the Extensible Messaging and Presence Protocol (XMPP), allowing for new components to be developed in virtually any programming language and interfaced to existing Grid and batch computing infrastructures exploited by the High Energy Physics community. Co-Pilot has been used to execute jobs for both the ALICE and ATLAS experiments at CERN. CernVM Co-Pilot is also one of the enabling technologies behind the LHC@home 2.0 volunteer computing project, which is the first such project that exploits virtual machine technology. The use of virtual machines eliminates the necessity of modifying existing applications and adapt...

  11. Enabling software defined networking experiments in networked critical infrastructures

    Directory of Open Access Journals (Sweden)

    Béla Genge

    2014-05-01

    Full Text Available Nowadays, the fact that Networked Critical Infrastructures (NCI), e.g., power plants, water plants, oil and gas distribution infrastructures, and electricity grids, are targeted by significant cyber threats is well known. Nevertheless, recent research has shown that specific characteristics of NCI can be exploited in the enabling of more efficient mitigation techniques, while novel techniques from the field of IP networks can bring significant advantages. In this paper we explore the interconnection of NCI communication infrastructures with Software Defined Networking (SDN)-enabled network topologies. SDN provides the means to create virtual networking services and to implement global networking decisions. It relies on OpenFlow to enable communication with remote devices and has been recently categorized as the “Next Big Technology”, which will revolutionize the way decisions are implemented in switches and routers. Therefore, the paper documents the first steps towards enabling an SDN-NCI and presents the impact of a Denial of Service experiment over traffic resulting from an XBee sensor network which is routed across an emulated SDN network.
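
    As a minimal sketch of the kind of emulated SDN topology involved (using Mininet as the emulator, which is an assumption since the paper does not name its tooling here; the controller address and hosts are placeholders):

        # Minimal sketch: an emulated SDN topology with an external OpenFlow controller.
        from mininet.net import Mininet
        from mininet.node import RemoteController

        net = Mininet(controller=RemoteController)
        net.addController("c0", ip="127.0.0.1", port=6633)  # assumed OpenFlow controller
        s1 = net.addSwitch("s1")
        gateway = net.addHost("h1")   # stands in for the sensor-network gateway
        master = net.addHost("h2")    # stands in for the SCADA master / data sink
        net.addLink(gateway, s1)
        net.addLink(master, s1)

        net.start()
        net.pingAll()   # basic connectivity check before running traffic experiments
        net.stop()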

  12. INFRASTRUCTURE

    CERN Document Server

    A.Gaddi

    2011-01-01

    Between the end of March to June 2011, there has been no detector downtime during proton fills due to CMS Infrastructures failures. This exceptional performance is a clear sign of the high quality work done by the CMS Infrastructures unit and its supporting teams. Powering infrastructure At the end of March, the EN/EL group observed a problem with the CMS 48 V system. The problem was a lack of isolation between the negative (return) terminal and earth. Although at that moment we were not seeing any loss of functionality, in the long term it would have led to severe disruption to the CMS power system. The 48 V system is critical to the operation of CMS: in addition to feeding the anti-panic lights, essential for the safety of the underground areas, it powers all the PLCs (Twidos) that control AC power to the racks and front-end electronics of CMS. A failure of the 48 V system would bring down the whole detector and lead to evacuation of the cavern. EN/EL technicians have made an accurate search of the fault, ...

  13. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2011-01-01

    Most of the work relating to Infrastructure has been concentrated in the new CSC and RPC factory at building 904, on the Prevessin site. Brand new gas distribution, powering and HVAC infrastructures are being deployed and the production of the first CSC chambers has started. Other activities at the CMS site concern the installation of a new small crane bridge in the Cooling technical room in USC55, in order to facilitate the intervention of the maintenance team in case of major failures of the chilled water pumping units. The laser barrack in USC55 has also been the subject of a study, requested by the ECAL community, for the new laser system that will be delivered in a few months. In addition, ordinary maintenance works have been performed during the short machine stops on all the main infrastructures at Point 5, in preparation for the Year-End Technical Stop (YETS), when most of the systems will be carefully inspected in order to ensure smooth running through the crucial year 2012. After the incide...

  14. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2012-01-01

    The CMS Infrastructures teams are preparing for the LS1 activities. A long list of maintenance, consolidation and upgrade projects for CMS Infrastructures is on the table and is being discussed among Technical Coordination and sub-detector representatives. Apart from the activities concerning the cooling infrastructures (see below), two main projects have started: the refurbishment of the SX5 building, from storage area to RP storage and Muon stations laboratory; and the procurement of a new dry-gas (nitrogen and dry air) plant for inner detector flushing. We briefly present here the work done on the first item, leaving the second one for the next CMS Bulletin issue. The SX5 building is entering its third era, from main assembly building for CMS from 2000 to 2007, to storage building from 2008 to 2012, to RP storage and Muon laboratory during LS1 and beyond. A wall of concrete blocks has been erected to limit the RP zone, while the rest of the surface has been split between the ME1/1 and the CSC/DT laborat...

  15. CMS Distributed Computing Workflow Experience

    CERN Document Server

    Haas, Jeffrey David

    2010-01-01

    The vast majority of the CMS Computing capacity, which is organized in a tiered hierarchy, is located away from CERN. The 7 Tier-1 sites archive the LHC proton-proton collision data that is initially processed at CERN. These sites provide access to all recorded and simulated data for the Tier-2 sites, via wide-area network (WAN) transfers. All central data processing workflows are executed at the Tier-1 level, which contain re-reconstruction and skimming workflows of collision data as well as reprocessing of simulated data to adapt to changing detector conditions. This paper describes the operation of the CMS processing infrastructure at the Tier-1 level. The Tier-1 workflows are described in detail. The operational optimization of resource usage is described. In particular, the variation of different workflows during the data taking period of 2010, their efficiencies and latencies as well as their impact on the delivery of physics results is discussed and lessons are drawn from this experience. The simul...

  16. Stuart Energy's experiences in developing 'Hydrogen Energy Station' infrastructure

    International Nuclear Information System (INIS)

    Crilly, B.

    2004-01-01

    'Full text:' With over 50 years of experience, Stuart Energy is the global leader in the development, manufacture and integration of multi-use hydrogen infrastructure products that use the Company's proprietary IMET hydrogen generation water electrolysis technology. Stuart Energy offers its customers the power of hydrogen through its integrated Hydrogen Energy Station (HES) that provides clean, secure and distributed hydrogen. The HES can comprise five modules: hydrogen generation, compression, storage, fuel dispensing and/or power generation. This paper discusses Stuart Energy's involvement with over 10 stations installed in recent years throughout North America, Asia and Europe, while examining the economic and environmental benefits of these systems. (author)

  17. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi

    2012-01-01

    The CMS Infrastructures teams are constantly ensuring the smooth operation of the different services during this critical period when the detector is taking data at full speed. A single failure would spoil hours of high luminosity beam and everything is put in place to avoid such an eventuality. In the meantime however, the fast approaching LS1 requires that we take a look at the various activities to take place from the end of the year onwards. The list of infrastructures consolidation and upgrade tasks is already long and will touch all the services (cooling, gas, inertion, powering, etc.). The definitive list will be available just before the LS1 start. One activity performed by the CMS cooling team that is worth mentioning is the maintenance of the cooling circuits at the CMS Electronics Integration Centre (EIC) at building 904. The old chiller has been replaced by a three-units cooling plant that also serves the HVAC system for the new CSC and RPC factories. The commissioning of this new plant has tak...

  18. INFRASTRUCTURE

    CERN Multimedia

    Andrea Gaddi

    2010-01-01

    In addition to the intense campaign of replacement of the leaky bushing on the Endcap circuits, other important activities have also been completed, with the aim of enhancing the overall reliability of the cooling infrastructures at CMS. Remaining with the Endcap circuit, the regulating valve that supplies cold water to the primary side of the circuit heat-exchanger is not well adapted in terms of flow capability, and a new part has been ordered, to be installed during a stop of the LHC. The instrumentation monitoring the refilling rate of the circuits has been enhanced and we can now detect leaks as small as 0.5 cc/sec on circuits that have nominal flow rates of some 20 litres/sec. Another activity starting now that the technical stop is over is the collection of spare parts that are difficult to find on the market. These will be stored at P5 with the aim of reducing downtime in case of component failure. Concerning the ventilation infrastructures, it has been noticed that in winter time the relative humidity leve...

  19. COMPUTER CONTROL OF BEHAVIORAL EXPERIMENTS.

    Science.gov (United States)

    Siegel, Louis

    The LINC computer provides a particular schedule of reinforcement for behavioral experiments by executing a sequence of computer operations in conjunction with a specially designed interface. The interface is the means of communication between the experimental chamber and the computer. The program and interface of an experiment involving a pigeon…

  20. Software Infrastructure for Computer-aided Drug Discovery and Development, a Practical Example with Guidelines.

    Science.gov (United States)

    Moretti, Loris; Sartori, Luca

    2016-09-01

    In the field of Computer-Aided Drug Discovery and Development (CADDD) the proper software infrastructure is essential for everyday investigations. The creation of such an environment should be carefully planned and implemented with certain features in order to be productive and efficient. Here we describe a solution to integrate standard computational services into a functional unit that empowers modelling applications for drug discovery. This system allows users with various levels of expertise to run in silico experiments automatically and without the burden of file formatting for different software, managing the actual computation, keeping track of the activities and graphical rendering of the structural outcomes. To showcase the potential of this approach, performances of five different docking programs on an HIV-1 protease test set are presented.
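
    As a minimal sketch of the automation idea behind such an infrastructure (running a docking engine over a list of ligands and collecting scores; the "dock" executable, its flags and the score parsing below are hypothetical placeholders, not any specific program's interface):

        # Minimal sketch: run a docking program for each ligand and collect scores.
        import csv
        import subprocess
        from pathlib import Path

        RECEPTOR = Path("hiv1_protease.pdbqt")          # assumed prepared receptor file
        LIGANDS = sorted(Path("ligands").glob("*.pdbqt"))

        with open("scores.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["ligand", "score"])
            for ligand in LIGANDS:
                # Hypothetical command line; a real pipeline would call the chosen
                # docking engine with its own arguments and parse its own output.
                result = subprocess.run(
                    ["dock", "--receptor", str(RECEPTOR), "--ligand", str(ligand)],
                    capture_output=True, text=True, check=False,
                )
                score = result.stdout.strip().splitlines()[-1] if result.stdout else "NA"
                writer.writerow([ligand.stem, score])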

  1. Using Infrastructure Awareness to Support the Recruitment of Volunteer Computing Participants

    DEFF Research Database (Denmark)

    Ramos, Juan David Hincapie

    , the properties of computational infrastructures provided in the periphery of the user’s attention, and supporting gradual disclosure of detailed information on the user’s request. Working with users of the Mini-Grid, this thesis shows the design process of two infrastructure awareness systems aimed at supporting...... the recruitment of participants, the implementation of one possible technical strategy, and an in-the-wild evaluation. The thesis concludes with a discussion of the results and implications of infrastructure awareness for participative and other computational infrastructures....

  2. Data Intensive Scientific Computing on Petabyte Scalable Infrastructure, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The infrastructure and programming paradigm for petabyte-level data processing performed at companies like Google and Yahoo shed some promising light on the...

  3. Reliability issues related to the usage of Cloud Computing in Critical Infrastructures

    OpenAIRE

    Diez Gonzalez, Oscar Manuel; Silva Vazquez, Andrés

    2011-01-01

    The use of cloud computing is extending to all kind of systems, including the ones that are part of Critical Infrastructures, and measuring the reliability is becoming more difficult. Computing is becoming the 5th utility, in part thanks to the use of cloud services. Cloud computing is used now by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud co...

  4. IMPLEMENTATION OF CLOUD COMPUTING AS A COMPONENT OF THE UNIVERSITY IT INFRASTRUCTURE

    Directory of Open Access Journals (Sweden)

    Vasyl P. Oleksyuk

    2014-05-01

    Full Text Available The article investigates the concept of the IT infrastructure of a higher educational institution and describes models for deploying cloud technologies in that infrastructure. The hybrid model is the most relevant for a higher educational institution. Unified authentication is an important component of the IT infrastructure. The author suggests public (Google Apps, Office 365) and private (CloudStack, Eucalyptus, OpenStack) cloud platforms for deployment in the IT infrastructure of a higher educational institution. Open-source platforms for organizing enterprise clouds were analyzed by the author. The article describes the experience of deploying an enterprise cloud in the IT infrastructure of the Department of Physics and Mathematics of Ternopil V. Hnatyuk National Pedagogical University.

  5. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    Directory of Open Access Journals (Sweden)

    Watthanai Pinthong

    2016-07-01

    Full Text Available Development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means have been developed to allow researchers to acquire the power of HPC without the need to purchase and maintain one, such as cloud computing services and grid computing systems. In this study, we implemented grid computing in a computer training center environment using Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize the HPC. Fifty desktop computers were used for setting up a grid system during the off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system. The result and processing time were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST by BOINC is an efficient alternative to the HPC for sequence alignment. The grid implementation by BOINC also helped tap unused computing resources during the off-hours and could be easily modified for other available bioinformatics software.
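
    As a minimal sketch of the kind of preprocessing such a setup relies on (splitting a query FASTA file into fixed-size chunks so that each chunk can be shipped as an independent work unit; file names and chunk size are arbitrary choices for illustration):

        # Minimal sketch: split a FASTA query file into chunks for distribution as
        # independent work units (e.g. to a BOINC-style grid). Paths are placeholders.
        from pathlib import Path

        QUERY = Path("reads.fasta")
        CHUNK_SIZE = 10_000          # sequences per work unit
        OUT_DIR = Path("workunits")
        OUT_DIR.mkdir(exist_ok=True)

        chunk, count, part = [], 0, 0
        with QUERY.open() as handle:
            for line in handle:
                if line.startswith(">") and count == CHUNK_SIZE:
                    (OUT_DIR / f"chunk_{part:04d}.fasta").write_text("".join(chunk))
                    chunk, count, part = [], 0, part + 1
                if line.startswith(">"):
                    count += 1
                chunk.append(line)
        if chunk:
            (OUT_DIR / f"chunk_{part:04d}.fasta").write_text("".join(chunk))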

  6. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi and P. Tropea

    2013-01-01

      Most of the CMS infrastructures at P5 will go through a heavy consolidation-work period during LS1. All systems, from the cryogenic plant of the superconducting magnet to the rack powering in the USC55 counting rooms, from the cooling circuits to the gas distribution, will undergo consolidation work. As announced in the last issue of the CMS Bulletin, we present here one of the consolidation projects of LS1: the installation of a new dry-gas plant for inner detectors inertion. So far the oxygen and humidity suppression inside the CMS Tracker and Pixel volumes were assured by flushing dry nitrogen gas evaporated from a large liquid nitrogen tank. For technical reasons, the maximum flow is limited to less than 100 m3/h and the cost of refilling the tank every two weeks with liquid nitrogen is quite substantial. The new dry-gas plant will supply up to 400 m3/h of dry nitrogen (or the same flow of dry air, during shut-downs) with a comparatively minimal operation cost. It has been evaluated that the...

  7. INFRASTRUCTURE

    CERN Document Server

    Andrea Gaddi

    2010-01-01

    During the last six months, the main activity on the cooling circuit has essentially been preventive maintenance. At each short machine technical stop, a water sample is extracted out of every cooling circuit to measure the induced radioactivity. Soon after, a visual check of the whole detector cooling network is done, looking for water leaks in sensitive locations. Depending on sub-system availability, the main water filters are replaced; the old ones are inspected and sent to the CERN metallurgical lab in case of suspicious sediments. For the coming winter technical stop, a number of corrective maintenance activities and infrastructure consolidation work-packages are foreseen. A few faulty valves, found on the muon system cooling circuit, will be replaced; the cooling gauges for TOTEM and CASTOR, in the CMS Forward region, will be either changed or shielded against the magnetic stray field. The demineralizer cartridges will be replaced as well. New instrumentation will also be installed in the SCX5 PC farm ...

  8. INFRASTRUCTURE

    CERN Multimedia

    Andrea Gaddi.

    The various water-cooling circuits ran smoothly over the summer. The overall performance of the cooling system is satisfactory, even if some improvements are possible, concerning the endcap water-cooling and the C6F14 circuits. In particular for the endcap cooling circuit, we aim to lower the water temperature, to provide more margin for RPC detectors. An expert-on-call piquet has been established during the summer global run, assuring the continuous supervision of the installations. An effort has been made to collect and harmonize the existing documentation on the cooling infrastructures at P5. The last six months have seen minor modifications to the electrical power network at P5. Among these, the racks in USC55 for the Tracker and Sniffer systems, which are backed up by the diesel generator in case of power outage, have been equipped with new control boxes to allow a remote restart. Other interventions have concerned the supply of assured power to those installations that are essential for CMS to run eff...

  9. INFRASTRUCTURE

    CERN Multimedia

    A. Gaddi

    The long winter shut-down allows for modifications that will improve the reliability of the detector infrastructures at P5. The annual maintenance of detector services is taking place as well. This means a full stop of water-cooling circuits from November 24th with a gradual restart from mid January 09. The annual maintenance service includes the cleaning of the two SF5 cooling towers, service of the chiller plants on the surface, and the cryogenic plant serving the CMS Magnet. In addition, the overall site power is reduced from 8MW to 2MW, compatible with the switchover to the Swiss power network in winter. Full power will be available again from end of January. Among the modification works planned, the Low Voltage cabinets are being refurbished; doubling the cable sections and replacing the 40A circuit breakers with 60A types. This will reduce the overheating that has been experienced. Moreover, two new LV transformers will be bought and pre-cabled in order to assure a quick swap in case of failure of any...

  10. INFRASTRUCTURE

    CERN Document Server

    A. Gaddi

    2011-01-01

    During the last winter technical stop, a number of corrective maintenance activities and infrastructure consolidation work-packages were completed. On the surface, the site cooling facility has passed the annual maintenance process that includes the cleaning of the two evaporative cooling towers, the maintenance of the chiller units and the safety checks on the software controls. In parallel, CMS teams, reinforced by PH-DT group personnel, have worked to shield the cooling gauges for TOTEM and CASTOR against the magnetic stray field in the CMS Forward region, to add labels to almost all the valves underground and to clean all the filters in UXC55, USC55 and SCX5. Following the insertion of TOTEM T1 detector, the cooling circuit has been branched off and commissioned. The demineraliser cartridges have been replaced as well, as they were shown to be almost saturated. New instrumentation has been installed in the SCX5 PC farm cooling and ventilation network, in order to monitor the performance of the HVAC system...

  11. Migration of alcator C-Mod computer infrastructure to Linux

    International Nuclear Information System (INIS)

    Fredian, T.W.; Greenwald, M.; Stillerman, J.A.

    2004-01-01

    The Alcator C-Mod fusion experiment at MIT in Cambridge, Massachusetts has been operating for twelve years. The data handling for the experiment during most of this period was based on MDSplus running on a cluster of VAX and Alpha computers using the OpenVMS operating system. While the OpenVMS operating system provided a stable, reliable platform, the support of the operating system and the software layered on the system has deteriorated in recent years. With the advent of extremely powerful low-cost personal computers and the increasing popularity and robustness of the Linux operating system, a decision was made to migrate the data handling systems for C-Mod to a collection of PCs running Linux. This paper will describe the new system configuration, the effort involved in the migration from OpenVMS, the results of the first run campaign under the new configuration and the impact the switch may have on the rest of the MDSplus community.

  12. Towards sustainable infrastructure development through integrated contracts : Experiences with inclusiveness in Dutch infrastructure projects

    NARCIS (Netherlands)

    Lenferink, Sander; Tillema, Taede; Arts, Jos

    Today's complex society necessitates finding inclusive arrangements for delivering sustainable road infrastructure that integrate the design, construction and maintenance stages of the project lifecycle. In this article we investigate whether linking stages by integrated contracts can lead to more

  13. The Green Experiment: Cities, Green Stormwater Infrastructure, and Sustainability

    Directory of Open Access Journals (Sweden)

    Christopher M. Chini

    2017-01-01

    Full Text Available Green infrastructure is a unique combination of economic, social, and environmental goals and benefits that requires an adaptable framework for planning, implementing, and evaluating. In this study, we propose an experimental framework for policy, implementation, and subsequent evaluation of green stormwater infrastructure within the context of sociotechnical systems and urban experimentation. Sociotechnical systems describe the interaction of complex systems with quantitative and qualitative impacts. Urban experimentation—traditionally referencing climate change programs and their impacts—is a process of evaluating city programs as if in a laboratory setting with hypotheses and evaluated results. We combine these two concepts into a singular framework creating a policy feedback cycle (PFC) for green infrastructure to evaluate municipal green infrastructure plans as an experimental process within the context of a sociotechnical system. After proposing and discussing the PFC, we utilize the tool to research and evaluate the green infrastructure programs of 27 municipalities across the United States. Results indicate that green infrastructure plans should incorporate community involvement and communication, evaluation based on project motivation, and an iterative process for knowledge production. We suggest knowledge brokers as a key resource in connecting the evaluation stage of the feedback cycle to the policy phase. We identify three important needs for green infrastructure experimentation: (i) a fluid definition of green infrastructure in policy; (ii) maintenance and evaluation components of a green infrastructure plan; and (iii) communication of the plan to the community.

  14. MEMS Reliability: Infrastructure, Test Structures, Experiments, and Failure Modes

    Energy Technology Data Exchange (ETDEWEB)

    TANNER,DANELLE M.; SMITH,NORMAN F.; IRWIN,LLOYD W.; EATON,WILLIAM P.; HELGESEN,KAREN SUE; CLEMENT,J. JOSEPH; MILLER,WILLIAM M.; MILLER,SAMUEL L.; DUGGER,MICHAEL T.; WALRAVEN,JEREMY A.; PETERSON,KENNETH A.

    2000-01-01

    The burgeoning new technology of Micro-Electro-Mechanical Systems (MEMS) shows great promise in the weapons arena. We can now conceive of micro-gyros, micro-surety systems, and micro-navigators that are extremely small and inexpensive. Do we want to use this new technology in critical applications such as nuclear weapons? This question drove us to understand the reliability and failure mechanisms of silicon surface-micromachined MEMS. Development of a testing infrastructure was a crucial step to perform reliability experiments on MEMS devices and will be reported here. In addition, reliability test structures have been designed and characterized. Many experiments were performed to investigate failure modes and specifically those in different environments (humidity, temperature, shock, vibration, and storage). A predictive reliability model for wear of rubbing surfaces in microengines was developed. The root causes of failure for operating and non-operating MEMS are discussed. The major failure mechanism for operating MEMS was wear of the polysilicon rubbing surfaces. Reliability design rules for future MEMS devices are established.

  15. INFRASTRUCTURE

    CERN Multimedia

    Andrea Gaddi

    The various water-cooling circuits have been running smoothly since the last maintenance stop. The temperature set-points are being tuned to the actual requests from sub-detectors. As the RPC chambers seem to be rather sensitive to temperature fluctuations, the set-point on the Barrel and Endcap Muon circuits has been lowered by one degree Celsius, reaching the minimum temperature possible with the current hardware. A further decrease in temperature will only be possible with a substantial modification of the heat exchanger and related control valve on the primary circuit. A study has been launched to investigate possible solutions and related costs. The two cooling skids for Totem and Castor have been installed on top of the HF platform. They will supply demineralized water to the two forward sub-detectors, transferring the heat to the main rack circuit via an on-board heat exchanger. A preliminary analysis of the cooling requirements of the SCX5 computer farm has been done. As a first result, two precision...

  16. The Green Experiment: Cities, Green Stormwater Infrastructure, and Sustainability

    OpenAIRE

    Christopher M. Chini; James F. Canning; Kelsey L. Schreiber; Joshua M. Peschel; Ashlynn S. Stillwell

    2017-01-01

    Green infrastructure is a unique combination of economic, social, and environmental goals and benefits that requires an adaptable framework for planning, implementing, and evaluating. In this study, we propose an experimental framework for policy, implementation, and subsequent evaluation of green stormwater infrastructure within the context of sociotechnical systems and urban experimentation. Sociotechnical systems describe the interaction of complex systems with quantitative and qualitative...

  17. A Cloud Computing-Enabled Spatio-Temporal Cyber-Physical Information Infrastructure for Efficient Soil Moisture Monitoring

    Directory of Open Access Journals (Sweden)

    Lianjie Zhou

    2016-06-01

    Comprehensive surface soil moisture (SM) monitoring is a vital task in precision agriculture applications. SM monitoring includes remote sensing imagery monitoring and in situ sensor-based observational monitoring. Cloud computing can increase computational efficiency enormously. A geographical web service was developed to assist in agronomic decision making, and this tool can be scaled to any location and crop. By integrating cloud computing and the web service-enabled information infrastructure, this study uses the cloud computing-enabled spatio-temporal cyber-physical infrastructure (CESCI) to provide an efficient solution for soil moisture monitoring in precision agriculture. On the server side of CESCI, diverse Open Geospatial Consortium web services work closely with each other. Hubei Province, located on the Jianghan Plain in central China, is selected as the remote sensing study area in the experiment. The Baoxie scientific experimental field in Wuhan City is selected as the in situ sensor study area. The results show that the proposed method enhances the efficiency of remote sensing imagery mapping and in situ soil moisture interpolation. In addition, the proposed method is compared to other existing precision agriculture infrastructures. In this comparison, the proposed infrastructure performs soil moisture mapping in Hubei Province in 1.4 min and near real-time in situ soil moisture interpolation in an efficient manner. Moreover, an enhanced performance monitoring method can help to reduce costs in precision agriculture monitoring, as well as increasing agricultural productivity and farmers’ net income.
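
    The record describes cooperating Open Geospatial Consortium web services on the CESCI server side. As a hedged illustration of the kind of standard request such services accept (the endpoint URL below is a placeholder, not an actual CESCI address), a WMS GetCapabilities call is plain HTTP:

        # Illustrative OGC Web Map Service (WMS) GetCapabilities request;
        # the endpoint URL is a placeholder, not a real CESCI service.
        import requests

        WMS_ENDPOINT = 'https://example.org/geoserver/wms'   # hypothetical endpoint
        params = {'service': 'WMS', 'version': '1.3.0', 'request': 'GetCapabilities'}
        response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
        response.raise_for_status()
        # The capabilities document is XML listing the layers the server exposes.
        print(response.text[:500])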

  18. Assessment of Road Infrastructures Pertaining to Malaysian Experience

    Directory of Open Access Journals (Sweden)

    Samsuddin Norshakina

    2016-01-01

    Road infrastructure contributes to many severe accidents and needs supervision in order to improve road safety levels. The number of fatalities has increased annually, and road authorities should seriously consider conducting programs or activities to periodically monitor, restore or improve road infrastructure. Implementation of road safety audits may reduce fatalities among road users and maintain road safety at acceptable standards. This paper aims to discuss aspects of road infrastructure in Malaysia. The research examines the impact of road hazards observed in the field and the impact of road infrastructure types on road accidents. The F050 (Jalan Kluang-Batu Pahat) road case study showed that infrastructure risk is closely related to the number of accidents. As the infrastructure risk increases, the number of road accidents also increases. It was also found that different road zones along Jalan Kluang-Batu Pahat showed different levels of intersection volume due to the number of road intersections. Thus, it is hoped that implementing continuous assessment of road infrastructure may reduce road accidents and fatalities among drivers and the community.

  19. Data grids a new computational infrastructure for data-intensive science

    CERN Document Server

    Avery, P

    2002-01-01

    Twenty-first-century scientific and engineering enterprises are increasingly characterized by their geographic dispersion and their reliance on large data archives. These characteristics bring with them unique challenges. First, the increasing size and complexity of modern data collections require significant investments in information technologies to store, retrieve and analyse them. Second, the increased distribution of people and resources in these projects has made resource sharing and collaboration across significant geographic and organizational boundaries critical to their success. In this paper I explore how computing infrastructures based on data grids offer data-intensive enterprises a comprehensive, scalable framework for collaboration and resource sharing. A detailed example of a data grid framework is presented for a Large Hadron Collider experiment, where a hierarchical set of laboratory and university resources comprising petaflops of processing power and a multi-petabyte data archive must be ...

  20. Pharmacology Experiments on the Computer.

    Science.gov (United States)

    Keller, Daniel

    1990-01-01

    A computer program that replaces a set of pharmacology and physiology laboratory experiments on live animals or isolated organs is described and illustrated. Five experiments are simulated: dose-effect relationships on smooth muscle, blood pressure and catecholamines, neuromuscular signal transmission, acetylcholine and the circulation, and…

  1. Establishment of a national radiation protection infrastructure. The Philippine experience

    Energy Technology Data Exchange (ETDEWEB)

    Valdezco, E.M. [Philippine Nuclear Research Institute, Department of Science and Technology (Philippines)]

    2000-05-01

    Radiation and radioactive materials have been used widely in the Philippines for the last four decades and have made substantial contributions to the improvement of the life and welfare of the Filipino people. In spite of the unsuccessful attempt to operate a nuclear power plant, the country, through the Philippine Nuclear Research Institute, has consistently pursued an active small nuclear applications program to promote the peaceful applications of nuclear energy while also being mandated to ensure radiation safety through nuclear regulations and radioactive materials licensing. Another government agency, the Radiation Health Services (RHS) of the Department of Health, was created much later to address the growing concern over radiation hazards from electrically generated radiation devices and machines. The RHS was later strengthened to include non-ionizing radiation health hazards and has expanded to cover biomedical engineering and non-radiation-related medical equipment. The paper will describe the historical perspective highlighting the basis of the national regulatory framework to ensure that only qualified individuals are authorized to use radioactive materials and radiation emitting machines/devices. The development of national training programs in radiation protection and experiences in implementing these programs will be presented. National efforts to strengthen the radiation protection infrastructure through the establishment, improvement and upgrading of a number of facilities and capabilities in radiation protection related work activities will be discussed, including participation in national, regional and international intercomparison programs to ensure accuracy, reliability, reproducibility and comparability of dose measurements. Lastly, data on the status of small nuclear applications and related activities in the country will be presented, including a number of current issues related to the adoption of the new international basic safety standards.

  2. Establishment of a national radiation protection infrastructure. The Philippine experience

    International Nuclear Information System (INIS)

    Valdezco, E.M.

    2000-01-01

    Radiation and radioactive materials have been used widely in the Philippines for the last four decades and have made substantial contributions to the improvement of the life and welfare of the Filipino people. In spite of the unsuccessful attempt to operate a nuclear power plant, the country, through the Philippine Nuclear Research Institute, has consistently pursued an active small nuclear applications program to promote the peaceful applications of nuclear energy while also being mandated to ensure radiation safety through nuclear regulations and radioactive materials licensing. Another government agency, the Radiation Health Services (RHS) of the Department of Health, was created much later to address the growing concern over radiation hazards from electrically generated radiation devices and machines. The RHS was later strengthened to include non-ionizing radiation health hazards and has expanded to cover biomedical engineering and non-radiation-related medical equipment. The paper will describe the historical perspective highlighting the basis of the national regulatory framework to ensure that only qualified individuals are authorized to use radioactive materials and radiation emitting machines/devices. The development of national training programs in radiation protection and experiences in implementing these programs will be presented. National efforts to strengthen the radiation protection infrastructure through the establishment, improvement and upgrading of a number of facilities and capabilities in radiation protection related work activities will be discussed, including participation in national, regional and international intercomparison programs to ensure accuracy, reliability, reproducibility and comparability of dose measurements. Lastly, data on the status of small nuclear applications and related activities in the country will be presented, including a number of current issues related to the adoption of the new international basic safety standards.

  3. Enhancing Trusted Cloud Computing Platform for Infrastructure as a Service

    Directory of Open Access Journals (Sweden)

    KIM, H.

    2017-02-01

    The characteristics of cloud computing, including on-demand self-service, resource pooling, and rapid elasticity, have made it grow in popularity. However, security concerns still obstruct widespread adoption of cloud computing in the industry. In particular, security risks related to virtual machines make cloud users worry about exposure of their private data in an IaaS environment. In this paper, we propose an enhanced trusted cloud computing platform to provide confidentiality and integrity of the user's data and computation. The presented platform provides secure and efficient virtual machine management protocols not only to protect against eavesdropping and tampering during transfer but also to guarantee that the virtual machine is hosted only on trusted cloud nodes, protecting against inside attackers. The protocols utilize both symmetric key operations and public key operations together with an efficient node authentication model, hence both the computational cost for cryptographic operations and the communication steps are significantly reduced. As a result, the simulation shows the performance of the proposed platform is approximately doubled compared to the previous platforms. The proposed platform eliminates the cloud users' worries described above by providing confidentiality and integrity of their private data with better performance, and thus it contributes to wider industry adoption of cloud computing.
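
    The protocols described combine symmetric and public-key operations. A minimal sketch of that general hybrid pattern (not the paper's actual protocol; the key sizes and payload are illustrative) encrypts a payload with an AES-GCM session key and wraps that key with a node's RSA public key, using the Python cryptography library:

        # Sketch of a generic hybrid (symmetric + public-key) encryption step,
        # illustrating the pattern described above, not the paper's exact protocol.
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        node_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        node_public = node_private.public_key()    # would come from the trusted node

        vm_image = b'...virtual machine image bytes...'   # placeholder payload
        session_key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(session_key).encrypt(nonce, vm_image, None)

        # Wrap the fast symmetric key with the node's public key for transfer.
        wrapped_key = node_public.encrypt(
            session_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))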

  4. IBERCIVIS: a stable citizen computing infrastructure, or science at home

    International Nuclear Information System (INIS)

    Castejon, F.; Tarancon, A.

    2008-01-01

    Researchers deal with increasingly difficult, complex issues that require more resources and tools. In addition to strictly technical problems, they are also required to produce research that is understood, at least in part, by the public and to be able to convey what are almost always difficult ideas and concepts at the frontiers of knowledge. It rarely happens, but sometimes it is possible to solve several problems at the same time. As we will see throughout the article, Volunteer Computing, when properly handled, is able to supply computing power to the scientific community and also serve as a window to science in the homes of citizens. (Author) 5 refs

  5. Computer loss experience and predictions

    Science.gov (United States)

    Parker, Donn B.

    1996-03-01

    The types of losses organizations must anticipate have become more difficult to predict because of the eclectic nature of computers and the data communications and the decrease in news media reporting of computer-related losses as they become commonplace. Total business crime is conjectured to be decreasing in frequency and increasing in loss per case as a result of increasing computer use. Computer crimes are probably increasing, however, as their share of the decreasing business crime rate grows. Ultimately all business crime will involve computers in some way, and we could see a decline of both together. The important information security measures in high-loss business crime generally concern controls over authorized people engaged in unauthorized activities. Such controls include authentication of users, analysis of detailed audit records, unannounced audits, segregation of development and production systems and duties, shielding the viewing of screens, and security awareness and motivation controls in high-value transaction areas. Computer crimes that involve highly publicized intriguing computer misuse methods, such as privacy violations, radio frequency emanations eavesdropping, and computer viruses, have been reported in waves that periodically have saturated the news media during the past 20 years. We must be able to anticipate such highly publicized crimes and reduce the impact and embarrassment they cause. On the basis of our most recent experience, I propose nine new types of computer crime to be aware of: computer larceny (theft and burglary of small computers), automated hacking (use of computer programs to intrude), electronic data interchange fraud (business transaction fraud), Trojan bomb extortion and sabotage (code security inserted into others' systems that can be triggered to cause damage), LANarchy (unknown equipment in use), desktop forgery (computerized forgery and counterfeiting of documents), information anarchy (indiscriminate use of

  6. Computing for an SSC experiment

    International Nuclear Information System (INIS)

    Gaines, I.

    1993-01-01

    The hardware and software problems for SSC experiments are similar to those faced by present day experiments but larger in scale. In particular, the Solenoidal Detector Collaboration (SDC) anticipates the need for close to 10**6 MIPS of off-line computing and will produce several Petabytes (10**15 bytes) of data per year. Software contributions will be made from large numbers of highly geographically dispersed physicists. Hardware and software architectures to meet these needs have been designed. Providing the requisite amount of computing power and providing tools to allow cooperative software development using extensions of existing techniques look achievable. The major challenges will be to provide efficient methods of accessing and manipulating the enormous quantities of data that will be produced at the SSC, and to enforce the use of software engineering tools that will ensure the "correctness" of experiment-critical software.

  7. Computing for ongoing experiments on high energy physics in LPP, JINR

    International Nuclear Information System (INIS)

    Belosludtsev, D.A.; Zhil'tsov, V.E.; Zinchenko, A.I.; Kekelidze, V.D.; Madigozhin, D.T.; Potrebenikov, Yu.K.; Khabarov, S.V.; Shkarovskij, S.N.; Shchinov, B.G.

    2004-01-01

    The computer infrastructure built at the Laboratory of Particle Physics, JINR, to support the active participation of JINR experts in ongoing particle and nuclear physics experiments is presented. The principles of design and construction of the personal computer farm are given, and the computer and information services used for effective application of distributed computing resources are described.

  8. Distributed computing grid experiences in CMS

    CERN Document Server

    Andreeva, Julia; Barrass, T; Bonacorsi, D; Bunn, Julian; Capiluppi, P; Corvo, M; Darmenov, N; De Filippis, N; Donno, F; Donvito, G; Eulisse, G; Fanfani, A; Fanzago, F; Filine, A; Grandi, C; Hernández, J M; Innocente, V; Jan, A; Lacaprara, S; Legrand, I; Metson, S; Newbold, D; Newman, H; Pierro, A; Silvestris, L; Steenberg, C; Stockinger, H; Taylor, Lucas; Thomas, M; Tuura, L; Van Lingen, F; Wildish, Tony

    2005-01-01

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at 25 Hz input rate; to distribute the data to several regional centers; and enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure ...

  9. Gender and urban infrastructural poverty experience in Africa: A preliminary survey in Ibadan city, Nigeria

    Directory of Open Access Journals (Sweden)

    Raimi A. Asiyanbola

    2012-12-01

    The paper examines gender differences in the urban infrastructural poverty experience in an African city – Ibadan, Nigeria. The results of a cross-sectional survey of 232 households sampled in Ibadan city show that there is intra-urban variation in women's and men's urban infrastructure experience in Ibadan. The correlation analysis shows that there is a significant relationship between both women's and men's urban infrastructure experience and household income, educational level, household size and stage in the life cycle; only for women's urban infrastructure experience is a significant relationship also found with occupation and responsibility in the household. The multiple linear regression analysis shows that the socio-cultural, demographic and economic characteristics have a greater effect on women's experience of urban infrastructure than on men's. While the relative contributions of the economic characteristics, family characteristics and socio-cultural characteristics, in that order, are all significant in explaining the variance in women's experience of urban infrastructure, only economic characteristics and family characteristics, in that order, are found to be significant in the case of the men. Also, the most important socio-cultural, demographic and economic variables, as shown by the beta coefficients, are household income, household size, and responsibility in the household for women, while for men they are household income and household size. Policy implications of the findings are highlighted in the paper.

  10. Network computing infrastructure to share tools and data in global nuclear energy partnership

    International Nuclear Information System (INIS)

    Kim, Guehee; Suzuki, Yoshio; Teshima, Naoya

    2010-01-01

    CCSE/JAEA (Center for Computational Science and e-Systems/Japan Atomic Energy Agency) integrated a prototype system of a network computing infrastructure for sharing tools and data to support the U.S. and Japan collaboration in GNEP (Global Nuclear Energy Partnership). We focused on three technical issues in applying our information processing infrastructure: accessibility, security, and usability. In designing the prototype system, we integrated and improved both network and Web technologies. For the accessibility issue, we adopted SSL-VPN (Secure Sockets Layer - Virtual Private Network) technology for access beyond firewalls. For the security issue, we developed an authentication gateway based on the PKI (Public Key Infrastructure) authentication mechanism to strengthen security. Also, we set a fine-grained access control policy for shared tools and data and used a shared-key-based encryption method to protect tools and data against leakage to third parties. For the usability issue, we chose Web browsers as the user interface and developed a Web application providing functions to support sharing tools and data. By using the WebDAV (Web-based Distributed Authoring and Versioning) function, users can manipulate shared tools and data through a Windows-like folder environment. We implemented the prototype system on the Grid infrastructure for atomic energy research, AEGIS (Atomic Energy Grid Infrastructure), developed by CCSE/JAEA. The prototype system was applied for trial use in the first period of GNEP. (author)
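
    Because WebDAV is ordinary HTTP with a few extra verbs, the folder-style sharing described above can also be scripted. The sketch below (the server URL and credentials are placeholders, not the actual AEGIS/GNEP endpoints) lists a shared collection with PROPFIND and uploads a file with PUT:

        # Illustrative WebDAV operations; URL and credentials are placeholders.
        import requests

        BASE = 'https://dav.example.org/shared/'   # hypothetical WebDAV collection
        auth = ('user', 'password')                # hypothetical credentials

        # List the direct children of the shared collection (Depth: 1).
        listing = requests.request('PROPFIND', BASE, auth=auth,
                                   headers={'Depth': '1'}, timeout=30)
        print(listing.status_code)                 # 207 Multi-Status on success

        # Upload a tool or data file into the shared folder.
        with open('analysis_tool.tar.gz', 'rb') as fh:
            put = requests.put(BASE + 'analysis_tool.tar.gz', data=fh,
                               auth=auth, timeout=300)
        put.raise_for_status()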

  11. An Open Computing Infrastructure that Facilitates Integrated Product and Process Development from a Decision-Based Perspective

    Science.gov (United States)

    Hale, Mark A.

    1996-01-01

    Computer applications for design have evolved rapidly over the past several decades, and significant payoffs are being achieved by organizations through reductions in design cycle times. These applications are overwhelmed by the requirements imposed during complex, open engineering systems design. Organizations are faced with a number of different methodologies, numerous legacy disciplinary tools, and a very large amount of data. Yet they are also faced with few interdisciplinary tools for design collaboration or methods for achieving the revolutionary product designs required to maintain a competitive advantage in the future. These organizations are looking for a software infrastructure that integrates current corporate design practices with newer simulation and solution techniques. Such an infrastructure must be robust to changes in both corporate needs and enabling technologies. In addition, this infrastructure must be user-friendly, modular and scalable. This need is the motivation for the research described in this dissertation. The research is focused on the development of an open computing infrastructure that facilitates product and process design. In addition, this research explicitly deals with human interactions during design through a model that focuses on the role of a designer as that of decision-maker. The research perspective here is taken from that of design as a discipline with a focus on Decision-Based Design, Theory of Languages, Information Science, and Integration Technology. Given this background, a Model of IPPD is developed and implemented along the lines of a traditional experimental procedure: with the steps of establishing context, formalizing a theory, building an apparatus, conducting an experiment, reviewing results, and providing recommendations. Based on this Model, Design Processes and Specification can be explored in a structured and implementable architecture. An architecture for exploring design called DREAMS (Developing Robust

  12. Design and Performance Analysis of Private Cloud Computing with Infrastructure-as-a-Service (IaaS); Perancangan dan Analisis Kinerja Private Cloud Computing dengan Layanan Infrastructure-As-A-Service (IAAS)

    Directory of Open Access Journals (Sweden)

    Wikranta Arsa

    2014-07-01

    A server machine is one of the main components in supporting and developing web-based scientific work. The high price of servers is a main obstacle for students producing scholarly work. Server configuration that can be done anywhere and at any time is a fundamental need; in addition, machine booking that is easy, fast, and flexible is also highly desirable. For that we need a system that can handle these problems. Cloud computing with Infrastructure-as-a-Service (IaaS) can provide a reliable infrastructure. To determine the performance of the system, a performance comparison between the cloud server and a conventional server is required. The results of the performance analysis of private cloud computing with Infrastructure-as-a-Service (IaaS) indicate that the performance of the cloud server is not very different from that of a conventional server, while making more effective use of the servers' system resources. Keywords—Cloud Computing, Infrastructure-as-a-Service (IaaS), Performance Analysis.

  13. Cloud Computing in Support of Applied Learning: A Baseline Study of Infrastructure Design at Southern Polytechnic State University

    Science.gov (United States)

    Conn, Samuel S.; Reichgelt, Han

    2013-01-01

    Cloud computing represents an architecture and paradigm of computing designed to deliver infrastructure, platforms, and software as constructible computing resources on demand to networked users. As campuses are challenged to better accommodate academic needs for applications and computing environments, cloud computing can provide an accommodating…

  14. VMEbus based computer and real-time UNIX as infrastructure of DAQ

    International Nuclear Information System (INIS)

    Yasu, Y.; Fujii, H.; Nomachi, M.; Kodama, H.; Inoue, E.; Tajima, Y.; Takeuchi, Y.; Shimizu, Y.

    1994-01-01

    This paper describes what the authors have constructed as the infrastructure of a data acquisition system (DAQ). The paper reports recent developments concerning an HP VME board computer with LynxOS (HP742rt/HP-RT) and Alpha/OSF1 with a VMEbus adapter. The paper also reports the current status of the development of a Benchmark Suite for Data Acquisition (DAQBENCH) for measuring not only the performance of VME/CAMAC access but also that of context switching, inter-process communication and so on, for various computers including workstation-based systems and VME board computers.
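
    DAQBENCH itself is not specified in this record, but the flavour of one of the measurements it mentions, inter-process communication latency, can be sketched in a few lines. The pipe-based round-trip test below is only an illustration of the idea, not the actual benchmark suite:

        # Rough sketch of an IPC round-trip latency measurement between two
        # processes, in the spirit of the context-switch/IPC tests named above.
        import time
        from multiprocessing import Process, Pipe

        def echo(conn, n):
            # Child process: bounce every message straight back to the parent.
            for _ in range(n):
                conn.send(conn.recv())

        if __name__ == '__main__':
            N = 10000
            parent_end, child_end = Pipe()
            worker = Process(target=echo, args=(child_end, N))
            worker.start()

            start = time.perf_counter()
            for _ in range(N):
                parent_end.send(b'x')
                parent_end.recv()
            elapsed = time.perf_counter() - start
            worker.join()

            print('mean round-trip: %.1f microseconds' % (elapsed / N * 1e6))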

  15. Enhanced computational infrastructure for data analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; McHarg, B.B.; Meyer, W.H.; Parker, C.T.

    2000-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from nine national laboratories, 19 foreign laboratories, 16 universities, and five industrial partnerships. As a result of this work, DIII-D data is available on a 24x7 basis from a set of viewing and analysis tools that can be run on either the collaborators' or DIII-D's computer systems. Additionally, a web based data and code documentation system has been created to aid the novice and expert user alike

  16. Enhanced Computational Infrastructure for Data Analysis at the DIII-D National Fusion Facility

    International Nuclear Information System (INIS)

    Schissel, D.P.; Peng, Q.; Schachter, J.; Terpstra, T.B.; Casper, T.A.; Freeman, J.; Jong, R.; Keith, K.M.; Meyer, W.H.; Parker, C.T.; McCharg, B.B.

    1999-01-01

    Recently a number of enhancements to the computer hardware infrastructure have been implemented at the DIII-D National Fusion Facility. Utilizing these improvements to the hardware infrastructure, software enhancements are focusing on streamlined analysis, automation, and graphical user interface (GUI) systems to enlarge the user base. The adoption of the load balancing software package LSF Suite by Platform Computing has dramatically increased the availability of CPU cycles and the efficiency of their use. Streamlined analysis has been aided by the adoption of the MDSplus system to provide a unified interface to analyzed DIII-D data. The majority of MDSplus data is made available in between pulses giving the researcher critical information before setting up the next pulse. Work on data viewing and analysis tools focuses on efficient GUI design with object-oriented programming (OOP) for maximum code flexibility. Work to enhance the computational infrastructure at DIII-D has included a significant effort to aid the remote collaborator since the DIII-D National Team consists of scientists from 9 national laboratories, 19 foreign laboratories, 16 universities, and 5 industrial partnerships. As a result of this work, DIII-D data is available on a 24 x 7 basis from a set of viewing and analysis tools that can be run either on the collaborators' or DIII-Ds computer systems. Additionally, a Web based data and code documentation system has been created to aid the novice and expert user alike

  17. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2's (Regional centers) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their Physics goals using institutional and otherwise leveraged resources and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Instituto de Fisica Corpuscular de Valencia), after discussing with the ATLAS Tier-3 task force, should interact with the ATLAS computing model, detail the conditions under which Tier-3 centres can expect some level of support and set reasonable expectations for the scope and support of ATLAS Tier-3 sites. (orig.)

  18. WRF4G project: Adaptation of WRF Model to Distributed Computing Infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Fernández Quiruelas, Valvanuz; García Díez, Markel; Blanco Real, Jose C.; Fernández, Jesús

    2013-04-01

    Nowadays Grid Computing is a powerful computational tool which is ready to be used by the scientific community in different areas (such as biomedicine, astrophysics, climate, etc.). However, the use of these distributed computing infrastructures (DCI) is not yet common practice in climate research, and only a few teams and applications in this area take advantage of this infrastructure. Thus, the first objective of this project is to popularize the use of this technology in the atmospheric sciences area. In order to achieve this objective, one of the most widely used applications has been taken (WRF; a limited-area model, successor of the MM5 model), which has a user community formed by more than 8000 researchers worldwide. This community develops its research activity in different areas and could benefit from the advantages of Grid resources (case study simulations, regional hind-cast/forecast, sensitivity studies, etc.). The WRF model is also used as input by the energy and natural hazards communities, so those communities will benefit as well. However, Grid infrastructures have some drawbacks for the execution of applications that make an intensive use of CPU and memory for a long period of time. This makes it necessary to develop a specific framework (middleware). This middleware encapsulates the application and provides appropriate services for the monitoring and management of the jobs and the data. Thus, the second objective of the project consists of the development of a generic adaptation of WRF for the Grid (WRF4G), to be distributed as open source and to be integrated in the official WRF development cycle. The use of this WRF adaptation should be transparent and useful to face any of the previously described studies, and avoid any of the problems of the Grid infrastructure. Moreover it should simplify the access to the Grid infrastructures for the research teams, and also free them from the technical and computational aspects of the use of the Grid. Finally, in order to

  19. Social web applications in the city: a lightweight infrastructure for urban computing

    DEFF Research Database (Denmark)

    Hansen, Frank Allan; Grønbæk, Kaj

    2008-01-01

    In this paper, we describe an infrastructure for browsing and multimedia blogging of Web-based information anchored to physical places in an urban environment. The infrastructure is generic in the sense that it may use any means such as GPS, RFID or 2D-barcodes as ubiquitous link anchors to anchor Web-based information, blogs, and services in the physical environment. The infrastructure is inspired by earlier work on open hypermedia, in the sense that the anchoring and blogging functionality can be integrated to augment arbitrary Web sites providing information that is relevant to places or objects in the physical world. The blog and anchor functionality is implemented as a set of Web services running on a server external to the content server. Experiences and design issues from three cases are discussed, which use Semacode-based physical anchoring to support lightweight urban Web...

  20. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    CERN Document Server

    González de la Hoza, S; Ros, E; Sánchez, J; Amorós, G; Fassi, F; Fernández, A; Kaci, M; Lamas, A; Salt, J

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2’s (Regional centers) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their Physics goals using institutional and otherwise leveraged resources and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Insti...

  1. Monitoring performance of a highly distributed and complex computing infrastructure in LHCb

    Science.gov (United States)

    Mathe, Z.; Haen, C.; Stagni, F.

    2017-10-01

    In order to ensure an optimal performance of the LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, where it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface, which is based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the DIRAC general framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally we present the performance of that system.
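
    As an illustration of the kind of query such a system runs (the index and field names here are invented and the real LHCbDIRAC schema is not shown), the elasticsearch-dsl library expresses a time-bucketed aggregation roughly like this:

        # Sketch of a bucketed time-series query with elasticsearch-dsl;
        # index and field names are placeholders, not the real LHCbDIRAC schema.
        from elasticsearch import Elasticsearch
        from elasticsearch_dsl import Search

        client = Elasticsearch(['https://es.example.org:9200'])  # hypothetical cluster
        s = Search(using=client, index='dirac-monitoring')        # hypothetical index
        s = s.filter('range', timestamp={'gte': 'now-24h'})

        # Bucket documents per hour and average a (hypothetical) jobs counter;
        # older Elasticsearch releases use `interval` instead of `fixed_interval`.
        s.aggs.bucket('per_hour', 'date_histogram',
                      field='timestamp', fixed_interval='1h') \
              .metric('avg_running', 'avg', field='running_jobs')

        response = s.execute()
        for bucket in response.aggregations.per_hour.buckets:
            print(bucket.key_as_string, bucket.avg_running.value)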

  2. German experience in managing stormwater with green infrastructure

    Science.gov (United States)

    This paper identifies and describes experience with ‘green’ stormwater management practices in Germany. It provides the context in which developments took place and extracts lessons learned to inform efforts of other countries in confronting urban stormwater challenges. Our findi...

  3. Telecommunications, power supply, computer systems: the infrastructures of the soccer world cup; Telecommunications, electricite, informatique: les infrastructures de la Coupe du Monde

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1998-06-01

    The 1998 edition of the soccer world cup took place in ten different stadiums in France and several related sites. This short paper gives a general overview of the infrastructures developed for this occasion in the domains of telecommunications, power supply (substations, protection systems, computerized control systems..), and computer systems. (J.S.)

  4. Probability Distributome: A Web Computational Infrastructure for Exploring the Properties, Interrelations, and Applications of Probability Distributions.

    Science.gov (United States)

    Dinov, Ivo D; Siegrist, Kyle; Pearl, Dennis K; Kalinin, Alexandr; Christou, Nicolas

    2016-06-01

    Probability distributions are useful for modeling, simulation, analysis, and inference on varieties of natural processes and physical phenomena. There are uncountably many probability distributions. However, a few dozen families of distributions are commonly defined and are frequently used in practice for problem solving, experimental applications, and theoretical studies. In this paper, we present a new computational and graphical infrastructure, the Distributome , which facilitates the discovery, exploration and application of diverse spectra of probability distributions. The extensible Distributome infrastructure provides interfaces for (human and machine) traversal, search, and navigation of all common probability distributions. It also enables distribution modeling, applications, investigation of inter-distribution relations, as well as their analytical representations and computational utilization. The entire Distributome framework is designed and implemented as an open-source, community-built, and Internet-accessible infrastructure. It is portable, extensible and compatible with HTML5 and Web2.0 standards (http://Distributome.org). We demonstrate two types of applications of the probability Distributome resources: computational research and science education. The Distributome tools may be employed to address five complementary computational modeling applications (simulation, data-analysis and inference, model-fitting, examination of the analytical, mathematical and computational properties of specific probability distributions, and exploration of the inter-distributional relations). Many high school and college science, technology, engineering and mathematics (STEM) courses may be enriched by the use of modern pedagogical approaches and technology-enhanced methods. The Distributome resources provide enhancements for blended STEM education by improving student motivation, augmenting the classical curriculum with interactive webapps, and overhauling the
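
    The Distributome tools themselves are web-based. Purely as an illustration of one inter-distribution relation of the kind the infrastructure lets users explore (this uses SciPy, not the Distributome API), the classical Poisson approximation to the Binomial can be checked numerically:

        # Numerical check of one inter-distribution relation: Binomial(n, p) is
        # well approximated by Poisson(n*p) for large n and small p.
        # SciPy is used here for illustration only; this is not the Distributome API.
        import numpy as np
        from scipy import stats

        n, p = 1000, 0.003
        k = np.arange(0, 15)
        binom_pmf = stats.binom.pmf(k, n, p)
        poisson_pmf = stats.poisson.pmf(k, n * p)

        # The largest pointwise difference between the two PMFs is tiny.
        print('max |difference| =', np.abs(binom_pmf - poisson_pmf).max())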

  5. Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales

    Data.gov (United States)

    National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...

  6. Assessing landscape experiences as a cultural ecosystem service in public infrastructure projects

    DEFF Research Database (Denmark)

    Zandersen, Marianne; Lindhjem, Henrik; Magnussen, Kristin

    Undesirable landscape changes, especially from large infrastructure projects, may give rise to large welfare losses due to degraded landscape experiences. These losses are largely unaccounted for in Nordic countries’ planning processes. There is a need to develop practical methods of including...

  7. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    During recent years the ATLAS computing model has moved from a more strict design, where every Tier-2 had a liaison and a network dependence on a Tier-1, to a more meshed approach where every cloud can be connected. The evolution of ATLAS data models requires changes in the ATLAS Tier-2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier-2 and associated Tier-3 to easily connect to any Tier-1 or Tier-2. Tier-2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier-2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier-2s are going to be used more effic...

  8. a Holistic Approach for Inspection of Civil Infrastructures Based on Computer Vision Techniques

    Science.gov (United States)

    Stentoumis, C.; Protopapadakis, E.; Doulamis, A.; Doulamis, N.

    2016-06-01

    This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and, then, categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transform for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.
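
    The modified census transform mentioned above builds on the plain census transform, which encodes each pixel by comparing its neighbourhood to a reference value. A minimal NumPy sketch of the basic (unmodified) 3x3 transform is shown below as an illustration, not the authors' implementation:

        # Basic 3x3 census transform: each pixel is described by an 8-bit code
        # recording which neighbours are darker than the centre. (The "modified"
        # variant compares against the neighbourhood mean instead of the centre.)
        import numpy as np

        def census_transform_3x3(image):
            img = image.astype(np.int32)
            h, w = img.shape
            census = np.zeros((h - 2, w - 2), dtype=np.uint8)
            centre = img[1:h-1, 1:w-1]
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                       (0, 1), (1, -1), (1, 0), (1, 1)]
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = img[1+dy:h-1+dy, 1+dx:w-1+dx]
                census |= (neighbour < centre).astype(np.uint8) << bit
            return census

        # The matching cost between two census images is then the Hamming
        # distance between corresponding 8-bit codes.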

  9. A HOLISTIC APPROACH FOR INSPECTION OF CIVIL INFRASTRUCTURES BASED ON COMPUTER VISION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    C. Stentoumis

    2016-06-01

    This work examines the 2D recognition and 3D modelling of concrete tunnel cracks through visual cues. At present, the structural integrity inspection of large-scale infrastructures is mainly performed through visual observations by human inspectors, who identify structural defects, rate them and, then, categorize their severity. The described approach targets minimum human intervention, for autonomous inspection of civil infrastructures. The shortfalls of existing approaches in crack assessment are addressed by proposing a novel detection scheme. Although efforts have been made in the field, synergies among proposed techniques are still missing. The holistic approach of this paper exploits state-of-the-art techniques of pattern recognition and stereo matching in order to build accurate 3D crack models. The innovation lies in the hybrid approach for the CNN detector initialization, and the use of the modified census transform for stereo matching along with a binary fusion of two state-of-the-art optimization schemes. The described approach manages to deal with images of harsh radiometry, along with severe radiometric differences in the stereo pair. The effectiveness of this workflow is evaluated on a real dataset gathered in highway and railway tunnels. What is promising is that the computer vision workflow described in this work can be transferred, with adaptations of course, to other infrastructure such as pipelines, bridges and large industrial facilities that are in need of continuous state assessment during their operational life cycle.

  10. Using Cloud Services for Library IT Infrastructure

    OpenAIRE

    Erik Mitchell

    2010-01-01

    Cloud computing comes in several different forms and this article documents how service, platform, and infrastructure forms of cloud computing have been used to serve library needs. Following an overview of these uses the article discusses the experience of one library in migrating IT infrastructure to a cloud environment and concludes with a model for assessing cloud computing.

  11. WISDOM-II: Screening against multiple targets implicated in malaria using computational grid infrastructures

    Directory of Open Access Journals (Sweden)

    Kenyon Colin

    2009-05-01

    Background: Despite continuous efforts of the international community to reduce the impact of malaria on developing countries, no significant progress has been made in recent years and the discovery of new drugs is more than ever needed. Out of the many proteins involved in the metabolic activities of the Plasmodium parasite, some are promising targets to carry out rational drug discovery. Motivation: Recent years have witnessed the emergence of grids, which are highly distributed computing infrastructures particularly well fitted for embarrassingly parallel computations like docking. In 2005, a first attempt at using grids for large-scale virtual screening focused on plasmepsins and resulted in the identification of previously unknown scaffolds, which were confirmed in vitro to be active plasmepsin inhibitors. Following this success, a second deployment took place in the fall of 2006 focussing on one well known target, dihydrofolate reductase (DHFR), and on a new promising one, glutathione-S-transferase. Methods: In silico drug design, especially vHTS, is a widely and well-accepted technology in lead identification and lead optimization. This approach therefore builds upon the progress made in computational chemistry to achieve more accurate in silico docking and in information technology to design and operate large-scale grid infrastructures. Results: On the computational side, a sustained infrastructure has been developed: docking at large scale, using different strategies in result analysis, storing of the results on the fly into MySQL databases, and application of molecular dynamics refinement and MM-PBSA and MM-GBSA rescoring. The modeling results obtained are very promising. Based on the modeling results, in vitro experiments are underway for all the targets against which screening is performed. Conclusion: The current paper describes the rational drug discovery activity at large scale, especially molecular docking using FlexX software

  12. Problem-Oriented Simulation Packages and Computational Infrastructure for Numerical Studies of Powerful Gyrotrons

    International Nuclear Information System (INIS)

    Damyanova, M; Sabchevski, S; Vasileva, E; Balabanova, E; Zhelyazkov, I; Dankov, P; Malinov, P

    2016-01-01

    Powerful gyrotrons are necessary as sources of strong microwaves for electron cyclotron resonance heating (ECRH) and electron cyclotron current drive (ECCD) of magnetically confined plasmas in various reactors (most notably ITER) for controlled thermonuclear fusion. Adequate physical models and efficient problem-oriented software packages are essential tools for numerical studies, analysis, optimization and computer-aided design (CAD) of such high-performance gyrotrons operating in a CW mode and delivering output power of the order of 1-2 MW. In this report we present the current status of our simulation tools (physical models, numerical codes, pre- and post-processing programs, etc.) as well as the computational infrastructure on which they are being developed, maintained and executed. (paper)

  13. MOBBED: a computational data infrastructure for handling large collections of event-rich time series datasets in MATLAB.

    Science.gov (United States)

    Cockfield, Jeremy; Su, Kyungmin; Robbins, Kay A

    2013-01-01

    Experiments to monitor human brain activity during active behavior record a variety of modalities (e.g., EEG, eye tracking, motion capture, respiration monitoring) and capture a complex environmental context leading to large, event-rich time series datasets. The considerable variability of responses within and among subjects in more realistic behavioral scenarios requires experiments to assess many more subjects over longer periods of time. This explosion of data requires better computational infrastructure to more systematically explore and process these collections. MOBBED is a lightweight, easy-to-use, extensible toolkit that allows users to incorporate a computational database into their normal MATLAB workflow. Although capable of storing quite general types of annotated data, MOBBED is particularly oriented to multichannel time series such as EEG that have event streams overlaid with sensor data. MOBBED directly supports access to individual events, data frames, and time-stamped feature vectors, allowing users to ask questions such as what types of events or features co-occur under various experimental conditions. A database provides several advantages not available to users who process one dataset at a time from the local file system. In addition to archiving primary data in a central place to save space and avoid inconsistencies, such a database allows users to manage, search, and retrieve events across multiple datasets without reading the entire dataset. The database also provides infrastructure for handling more complex event patterns that include environmental and contextual conditions. The database can also be used as a cache for expensive intermediate results that are reused in such activities as cross-validation of machine learning algorithms. MOBBED is implemented over PostgreSQL, a widely used open source database, and is freely available under the GNU general public license at http://visual.cs.utsa.edu/mobbed. Source and issue reports for MOBBED

  14. Computing challenges of the CMS experiment

    International Nuclear Information System (INIS)

    Krammer, N.; Liko, D.

    2017-01-01

    The success of the LHC experiments is due to the magnificent performance of the detector systems and the excellent operating computing systems. The CMS offline software and computing system is successfully fulfilling the LHC Run 2 requirements. For the increased data rate of future LHC operation, together with high pileup interactions, improvements of the usage of the current computing facilities and new technologies became necessary. Especially for the challenge of the future HL-LHC a more flexible and sophisticated computing model is needed. In this presentation, I will discuss the current computing system used in the LHC Run 2 and future computing facilities for the HL-LHC runs using flexible computing technologies like commercial and academic computing clouds. The cloud resources are highly virtualized and can be deployed for a variety of computing tasks providing the capacities for the increasing needs of large scale scientific computing.

  15. Sharing experience and knowledge with wearable computers

    OpenAIRE

    Nilsson, Marcus; Drugge, Mikael; Parnes, Peter

    2004-01-01

    Wearable computers have mostly been studied when used in isolation. However, a wearable computer with an Internet connection is a good tool for communication and for sharing knowledge and experience with other people. The unobtrusiveness of this type of equipment makes it easy to communicate in most types of locations and contexts. The wearable computer makes it easy to be a mediator of other people's knowledge and to become a knowledgeable user. This paper describes the experience gained from testing...

  16. Data that warms: Waste heat, infrastructural convergence and the computation traffic commodity

    Directory of Open Access Journals (Sweden)

    Julia Velkova

    2016-12-01

    Full Text Available This article explores the ways in which data centre operators are currently reconfiguring the systems of energy and heat supply in European capitals, replacing conventional forms of heating with data-driven heat production, and becoming important energy suppliers. Taking as an empirical object the heat generated from server halls, the article traces the expanding phenomenon of ‘waste heat recycling’ and charts the ways in which data centre operators in Stockholm and Paris direct waste heat through metropolitan district heating systems and urban homes, and valorise it. Drawing on new materialisms, infrastructure studies and classical theory of production and destruction of value in capitalism, the article outlines two modes in which this process happens, namely infrastructural convergence and decentralisation of the data centre. These modes arguably help data centre operators convert big data from a source of value online into a raw material that needs to flow in the network irrespective of meaning. In this conversion process, the article argues, a new commodity is in a process of formation, that of computation traffic. Altogether data-driven heat production is suggested to raise the importance of certain data processing nodes in Northern Europe, simultaneously intervening in the global politics of access, while neutralising external criticism towards big data by making urban life literally dependent on power from data streams.

  17. A Comprehensive and Cost-Effective Computer Infrastructure for K-12 Schools

    Science.gov (United States)

    Warren, G. P.; Seaton, J. M.

    1996-01-01

    Since 1993, NASA Langley Research Center has been developing and implementing a low-cost Internet connection model, including system architecture, training, and support, to provide Internet access for an entire network of computers. This infrastructure allows local area networks which exceed 50 machines per school to independently access the complete functionality of the Internet by connecting to a central site, using state-of-the-art commercial modem technology, through a single standard telephone line. By locating high-cost resources at this central site and sharing these resources and their costs among the school districts throughout a region, a practical, efficient, and affordable infrastructure for providing scalable Internet connectivity has been developed. As the demand for faster Internet access grows, the model has a simple expansion path that eliminates the need to replace major system components and re-train personnel. Observations of Internet usage within an environment, particularly school classrooms, have shown that after an initial period of 'surfing,' the Internet traffic becomes repetitive. By automatically storing requested Internet information on a high-capacity networked disk drive at the local site (network-based disk caching), then updating this information only when it changes, well over 80 percent of the Internet traffic that leaves a location can be eliminated by retrieving the information from the local disk cache.
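    The caching behaviour described above can be approximated with HTTP conditional requests: serve repeat requests from a local store and refetch only when the origin reports a change. The sketch below is a minimal illustration of that principle, not the Langley system's implementation; the cache layout and the use of ETags are assumptions.

```python
# Sketch of the local-cache idea: serve repeat requests from disk and refetch
# only when the origin reports a change (HTTP ETag / 304 Not Modified).
import hashlib, json, pathlib, requests

CACHE_DIR = pathlib.Path("webcache")
CACHE_DIR.mkdir(exist_ok=True)

def fetch(url):
    key = hashlib.sha256(url.encode()).hexdigest()
    body_file, meta_file = CACHE_DIR / key, CACHE_DIR / (key + ".meta")
    headers = {}
    if meta_file.exists():
        etag = json.loads(meta_file.read_text()).get("etag")
        if etag:
            headers["If-None-Match"] = etag
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304:          # unchanged: serve the local copy
        return body_file.read_bytes()
    body_file.write_bytes(resp.content)  # changed or first fetch: update cache
    meta_file.write_text(json.dumps({"etag": resp.headers.get("ETag")}))
    return resp.content

if __name__ == "__main__":
    page = fetch("https://example.org/")
    print(len(page), "bytes")
```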

  18. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  19. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25/fb of data. The total volume of beam and simulated data products exceeds 100~PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  20. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
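    To give a flavour of the CPU-bound periodicity-search workload mentioned above, the sketch below runs a Lomb-Scargle periodogram (via SciPy) on a synthetic, unevenly sampled light curve. It only illustrates the type of computation; it is not the NASA Star and Exoplanet Database periodogram code, and the signal parameters are made up.

```python
# Sketch of a CPU-bound periodicity search on a synthetic light curve.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 2000))      # uneven observation times (days)
true_period = 3.7                               # days (synthetic signal)
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / true_period) \
           + 0.002 * rng.normal(size=t.size)

periods = np.linspace(0.5, 10.0, 5000)
ang_freqs = 2 * np.pi / periods                 # lombscargle expects angular frequencies
power = lombscargle(t, flux - flux.mean(), ang_freqs, normalize=True)

print(f"best period ~ {periods[np.argmax(power)]:.2f} days (true {true_period})")
```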

  1. Event heap: a coordination infrastructure for dynamic heterogeneous application interactions in ubiquitous computing environments

    Science.gov (United States)

    Johanson, Bradley E.; Fox, Armando; Winograd, Terry A.; Hanrahan, Patrick M.

    2010-04-20

    An efficient and adaptive middleware infrastructure called the Event Heap system dynamically coordinates application interactions and communications in a ubiquitous computing environment, e.g., an interactive workspace, having heterogeneous software applications running on various machines and devices across different platforms. Applications exchange events via the Event Heap. Each event is characterized by a set of unordered, named fields. Events are routed by matching certain attributes in the fields. The source and target versions of each field are automatically set when an event is posted or used as a template. The Event Heap system implements a unique combination of features, both intrinsic to tuplespaces and specific to the Event Heap, including content based addressing, support for routing patterns, standard routing fields, limited data persistence, query persistence/registration, transparent communication, self-description, flexible typing, logical/physical centralization, portable client API, at most once per source first-in-first-out ordering, and modular restartability.
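    A toy sketch of the tuplespace-style idea follows, assuming nothing about the actual Event Heap API: events are posted as sets of named fields and retrieved by content-based matching against a template, oldest match first.

```python
# Toy tuplespace-style event store illustrating content-based matching on
# named fields and oldest-first retrieval. This mimics the Event Heap idea
# only; it is not the Event Heap API.
from collections import deque

class ToyEventHeap:
    def __init__(self):
        self._events = deque()

    def post(self, **fields):
        """Post an event as a set of named fields (e.g. source=..., type=...)."""
        self._events.append(dict(fields))

    def take(self, **template):
        """Remove and return the oldest event whose fields match the template."""
        for i, event in enumerate(self._events):
            if all(event.get(k) == v for k, v in template.items()):
                del self._events[i]
                return event
        return None

if __name__ == "__main__":
    heap = ToyEventHeap()
    heap.post(source="projector", type="slide", number=3)
    heap.post(source="laptop-1", type="pointer", x=10, y=20)
    print(heap.take(type="pointer"))   # matched by content, not by address
```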

  2. Radiotherapy infrastructure and human resources in Switzerland : Present status and projected computations for 2020.

    Science.gov (United States)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-09-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland.
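    The growth figure quoted above can be checked with simple arithmetic from the numbers in the abstract; the staffing gaps depend on ESTRO-QUARTS/IAEA ratios that are not reproduced here, so they are taken as reported rather than recomputed.

```python
# Quick arithmetic check of the figures quoted above: growth in patients
# requiring radiotherapy between 2015 and 2020.
rt_patients_2015 = 30_999     # of 45,903 cancer patients
rt_patients_2020 = 34_041     # of 50,427 cancer patients

growth = (rt_patients_2020 - rt_patients_2015) / rt_patients_2015
print(f"increase in radiotherapy demand: {growth:.1%}")   # ~9.8%
```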

  3. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    International Nuclear Information System (INIS)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar; Zwahlen, Daniel; Bodis, Stephan

    2016-01-01

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.)

  4. Privacy-Preserving Data Aggregation Protocol for Fog Computing-Assisted Vehicle-to-Infrastructure Scenario

    Directory of Open Access Journals (Sweden)

    Yanan Chen

    2018-01-01

    Full Text Available Vehicle-to-infrastructure (V2I) communication enables moving vehicles to upload real-time data about road surface situations to the Internet via fixed roadside units (RSU). Owing to the resource restrictions of mobile vehicles, the fog computation-enhanced V2I communication scenario has received increasing attention recently. However, how to aggregate the sensed data from vehicles securely and efficiently remains an open problem in the V2I communication scenario. In this paper, a lightweight and anonymous aggregation protocol is proposed for the fog computing-based V2I communication scenario. With the proposed protocol, the data collected by the vehicles can be efficiently obtained by the RSU in a privacy-preserving manner. In particular, we first propose a certificateless aggregate signcryption (CL-A-SC) scheme and prove its security in the random oracle model. The proposed CL-A-SC scheme, which is of independent interest, achieves the merits of certificateless cryptography and signcryption schemes simultaneously. We then put forward the anonymous aggregation protocol for the V2I communication scenario as an extension of the CL-A-SC scheme. Security analysis demonstrates that the proposed aggregation protocol achieves the desired security properties. The performance comparison shows that the proposed protocol significantly reduces the computation and communication overhead compared with up-to-date protocols in this field.

  5. Mental Rotation Ability and Computer Game Experience

    Science.gov (United States)

    Gecu, Zeynep; Cagiltay, Kursat

    2015-01-01

    Computer games, which are currently very popular among students, can affect different cognitive abilities. The purpose of the present study is to examine undergraduate students' experiences and preferences in playing computer games as well as their mental rotation abilities. A total of 163 undergraduate students participated. The results showed a…

  6. ATLAS distributed computing: experience and evolution

    International Nuclear Information System (INIS)

    Nairz, A

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb −1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future

  7. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    Andreazza, A; Annovi, A; Martini, A; Barberis, D; Brunengo, A; Corosu, M; Campana, S; Girolamo, A Di; Carlino, G; Doria, A; Merola, L; Musto, E; Ciocca, C; Jha, M K; Cobal, M; Pascolo, F; Salvo, A De; Luminari, L; Sanctis, U De; Galeazzi, F

    2011-01-01

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performances.

  8. Update on the CERN Computing and Network Infrastructure for Controls (CNIC)

    CERN Multimedia

    Lueders, S

    2007-01-01

    Over the last few years modern accelerator and experiment control systems have increasingly been based on commercial-off-the-shelf products (VME crates, PLCs, SCADA systems, etc.), on Windows or Linux PCs, and on communication infrastructures using Ethernet and TCP/IP. Despite the benefits coming with this (r)evolution, new vulnerabilities are inherited too: Worms and viruses spread within seconds via the Ethernet cable, and attackers are becoming interested in control systems. Unfortunately, control PCs cannot be patched as fast as office PCs. Even worse, vulnerability scans at CERN using standard IT tools have shown that commercial automation systems lack fundamental security precautions: Some systems crashed during the scan, others could easily be stopped or their process data be altered. During the two years following the presentation of the CNIC Security Policy at ICALEPCS2005, a "Defense-in-Depth" approach has been applied to protect CERN's control systems. This presentation will give a review of its th...

  9. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in an HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data
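    A minimal sketch of the collection-level/file-level linkage described above follows, with child records pointing to their parent collection via UUID. The field names and paths are illustrative assumptions only, not NCI's actual ISO 19115 catalogue mapping.

```python
# Minimal sketch of collection-level / file-level metadata linked by UUIDs,
# illustrating the parent-child relationship described above.
import uuid

def make_collection(title, custodian):
    return {"uuid": str(uuid.uuid4()), "level": "collection",
            "title": title, "custodian": custodian}

def make_granule(collection, path, variables):
    return {"uuid": str(uuid.uuid4()), "level": "file",
            "parent_uuid": collection["uuid"],   # child points at its collection
            "path": path, "variables": variables}

collection = make_collection("Example climate model output", "Bureau of Meteorology")
granule = make_granule(collection, "/g/data/example/tas_1990.nc", ["tas"])
assert granule["parent_uuid"] == collection["uuid"]
print(granule)
```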

  10. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of the LHC pp collisions in 2010, the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence from a Tier1, to a more meshed approach where every cloud could be connected. Evolution of ATLAS data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulations jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  11. Design and study of parallel computing environment of Monte Carlo simulation for particle therapy planning using a public cloud-computing infrastructure

    International Nuclear Information System (INIS)

    Yokohama, Noriya

    2013-01-01

    This report describes the design of the architecture and performance measurements of a parallel computing environment for Monte Carlo simulation for particle therapy planning, using a high-performance computing (HPC) instance within a public cloud-computing infrastructure. Performance measurements showed a speed approximately 28 times faster than that of a single-threaded architecture, combined with improved stability. A study of methods for optimizing system operations also indicated lower cost. (author)
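    The kind of embarrassingly parallel Monte Carlo workload that benefits from many cores can be sketched as below; a toy pi estimate using Python multiprocessing stands in for the particle-transport physics of the actual dose engine, which is not described in the abstract.

```python
# Sketch of an embarrassingly parallel Monte Carlo workload of the kind that
# scales across the cores of an HPC cloud instance (toy pi estimate).
import random
from multiprocessing import Pool

def count_hits(n_samples):
    rng = random.Random()
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers, n_per_worker = 8, 1_000_000
    with Pool(n_workers) as pool:
        hits = sum(pool.map(count_hits, [n_per_worker] * n_workers))
    print("pi ~", 4.0 * hits / (n_workers * n_per_worker))
```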

  12. The Computer Game as a Somatic Experience

    DEFF Research Database (Denmark)

    Nielsen, Henrik Smed

    2010-01-01

    This article describes the experience of playing computer games. With a media archaeological outset the relation between human and machine is emphasised as the key to understand the experience. This relation is further explored by drawing on a phenomenological philosophy of technology which...

  13. submitter LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    CERN Document Server

    Barranco, Javier; Cameron, David; Crouch, Matthew; De Maria, Riccardo; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Van der Veken, Frederik; Zacharov, Igor

    2017-01-01

    The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted i...

  14. LHC@Home: a BOINC-based volunteer computing infrastructure for physics studies at CERN

    Science.gov (United States)

    Barranco, Javier; Cai, Yunhai; Cameron, David; Crouch, Matthew; Maria, Riccardo De; Field, Laurence; Giovannozzi, Massimo; Hermes, Pascal; Høimyr, Nils; Kaltchev, Dobrin; Karastathis, Nikos; Luzzi, Cinzia; Maclean, Ewen; McIntosh, Eric; Mereghetti, Alessio; Molson, James; Nosochkov, Yuri; Pieloni, Tatiana; Reid, Ivan D.; Rivkin, Lenny; Segal, Ben; Sjobak, Kyrre; Skands, Peter; Tambasco, Claudia; Veken, Frederik Van der; Zacharov, Igor

    2017-12-01

    The LHC@Home BOINC project has provided computing capacity for numerical simulations to researchers at CERN since 2004, and has since 2011 been expanded with a wider range of applications. The traditional CERN accelerator physics simulation code SixTrack enjoys continuing volunteer support, and thanks to virtualisation a number of applications from the LHC experiment collaborations and particle theory groups have joined the consolidated LHC@Home BOINC project. This paper addresses the challenges related to traditional and virtualized applications in the BOINC environment, and how volunteer computing has been integrated into the overall computing strategy of the laboratory through the consolidated LHC@Home service. Thanks to the computing power provided by volunteers joining LHC@Home, numerous accelerator beam physics studies have been carried out, yielding an improved understanding of charged particle dynamics in the CERN Large Hadron Collider (LHC) and its future upgrades. The main results are highlighted in this paper.

  15. Cloud computing: Grey or green? On the energy efficiency and sustainability of Infrastructure as a Service

    NARCIS (Netherlands)

    Spitzer, A.M.; Worm, D.T.H.; Bomhof, F.W.; Bastiaans, M.

    2012-01-01

    Cloud computing is the on-demand, dynamic provisioning of a collection of ICT resources (such as networks, storage, processing, applications and services) over a network. This report focuses on "Infrastructure as a Service" clouds: storage and processing capacity is made available as a service

  16. SAMGrid experiences with the Condor technology in Run II computing

    International Nuclear Information System (INIS)

    Baranovski, A.; Loebel-Carpenter, L.; Garzoglio, G.; Herber, R.; Illingworth, R.; Kennedy, R.; Kreymer, A.; Kumar, A.; Lueking, L.; Lyon, A.; Merritt, W.; Terekhov, I.; Trumbo, J.; Veseli, S.; White, S.; St. Denis, R.; Jain, S.; Nishandar, A.

    2004-01-01

    SAMGrid is a globally distributed system for data handling and job management, developed at Fermilab for the D0 and CDF experiments in Run II. The Condor system is being developed at the University of Wisconsin for the management of distributed resources, computational and otherwise. We briefly review the SAMGrid architecture and its interaction with Condor, which was presented earlier. We then present our experiences using the system in production, which have two distinct aspects. At the global level, we deployed Condor-G, the Grid-extended Condor, for resource brokering and global scheduling of our jobs. At the heart of the system is Condor's Matchmaking Service. As more recent work at the computing-element level, we have been benefiting from the large computing cluster on the University of Wisconsin campus. The architecture of the computing facility and the philosophy of Condor's resource management have prompted us to improve the application infrastructure for D0 and CDF, in aspects such as doing away with the shared file system and with the reliance on dedicated resources. As a result, we have increased productivity and made our applications more portable and Grid-ready. Our fruitful collaboration with the Condor team has been made possible by the Particle Physics Data Grid

  17. CMS distributed analysis infrastructure and operations: experience with the first LHC data

    International Nuclear Information System (INIS)

    Vaandering, E W

    2011-01-01

    The CMS distributed analysis infrastructure represents a heterogeneous pool of resources distributed across several continents. The resources are harnessed using gLite and glidein-based workload management systems (WMS). We describe the operational experience of the analysis workflows using CRAB-based servers interfaced with the underlying WMS. The automated interaction of the server with the WMS provides a successful analysis workflow. We present the operational experience as well as the methods used in CMS to analyze the LHC data. The interaction with the CMS Run Registry for run and luminosity block selection via CRAB is discussed. The variations of different workflows during the LHC data-taking period and the lessons drawn from this experience are also outlined.

  18. Infrastructure for Multiphysics Software Integration in High Performance Computing-Aided Science and Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Michael T. [Illinois Rocstar LLC, Champaign, IL (United States); Safdari, Masoud [Illinois Rocstar LLC, Champaign, IL (United States); Kress, Jessica E. [Illinois Rocstar LLC, Champaign, IL (United States); Anderson, Michael J. [Illinois Rocstar LLC, Champaign, IL (United States); Horvath, Samantha [Illinois Rocstar LLC, Champaign, IL (United States); Brandyberry, Mark D. [Illinois Rocstar LLC, Champaign, IL (United States); Kim, Woohyun [Illinois Rocstar LLC, Champaign, IL (United States); Sarwal, Neil [Illinois Rocstar LLC, Champaign, IL (United States); Weisberg, Brian [Illinois Rocstar LLC, Champaign, IL (United States)

    2016-10-15

    The project described in this report constructed and exercised an innovative multiphysics coupling toolkit called the Illinois Rocstar MultiPhysics Application Coupling Toolkit (IMPACT). IMPACT is an open source, flexible, natively parallel infrastructure for coupling multiple uniphysics simulation codes into multiphysics computational systems. IMPACT works with codes written in several high-performance-computing (HPC) programming languages, and is designed from the beginning for HPC multiphysics code development. It is designed to be minimally invasive to the individual physics codes being integrated, and has few requirements on those physics codes for integration. The goal of IMPACT is to provide the support needed to enable coupling existing tools together in unique and innovative ways to produce powerful new multiphysics technologies without extensive modification and rewrite of the physics packages being integrated. There are three major outcomes from this project: 1) construction, testing, application, and open-source release of the IMPACT infrastructure, 2) production of example open-source multiphysics tools using IMPACT, and 3) identification and engagement of interested organizations in the tools and applications resulting from the project. This last outcome represents the incipient development of a user community and application ecosystem being built using IMPACT. Multiphysics coupling standardization can only come from organizations working together to define needs and processes that span the space of necessary multiphysics outcomes, which Illinois Rocstar plans to continue driving toward. The IMPACT system, including source code, documentation, and test problems, is now available through the public GitHub system to anyone interested in multiphysics code coupling. Many of the basic documents explaining the use and architecture of IMPACT are also attached as appendices to this document. Online HTML documentation is available through the GitHub site
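    IMPACT's own API is not shown in this record, but the coupling pattern such an infrastructure orchestrates can be sketched as two independent uniphysics solvers advancing in lock-step and exchanging interface data through a small coupler loop. The toy solver models below are assumptions for illustration only, not IMPACT code.

```python
# Toy illustration of the coupling pattern a multiphysics infrastructure
# orchestrates: two independent "uniphysics" solvers advance in lock-step and
# exchange interface data each step. This is not IMPACT's API.
class FluidSolver:
    def __init__(self):
        self.wall_pressure = 1.0
    def advance(self, dt, wall_displacement):
        # pressure responds to how far the wall has moved (toy model)
        self.wall_pressure = 1.0 - 0.1 * wall_displacement
        return self.wall_pressure

class StructureSolver:
    def __init__(self):
        self.displacement = 0.0
    def advance(self, dt, applied_pressure):
        # displacement relaxes toward a value proportional to the load (toy model)
        self.displacement += dt * (applied_pressure - self.displacement)
        return self.displacement

fluid, structure = FluidSolver(), StructureSolver()
disp = 0.0
for step in range(10):                      # the "coupler" loop
    p = fluid.advance(0.1, disp)            # fluid sees the latest displacement
    disp = structure.advance(0.1, p)        # structure sees the latest pressure
print(f"coupled state after 10 steps: p={p:.3f}, d={disp:.3f}")
```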

  19. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    Science.gov (United States)

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  20. Integrating CAD modules in a PACS environment using a wide computing infrastructure.

    Science.gov (United States)

    Suárez-Cuenca, Jorge J; Tilve, Amara; López, Ricardo; Ferro, Gonzalo; Quiles, Javier; Souto, Miguel

    2017-04-01

    The aim of this paper is to describe a project designed to achieve a total integration of different CAD algorithms into the PACS environment by using a wide computing infrastructure. The aim is to build a system for the entire region of Galicia, Spain, to make CAD accessible to multiple hospitals that employ different PACSs and clinical workstations. The new CAD model seeks to connect different devices (CAD systems, acquisition modalities, workstations and PACS) by means of networking based on a platform that will offer different CAD services. This paper describes some aspects related to the health services of the region where the project was developed, the CAD algorithms that were either employed or selected for inclusion in the project, and several technical aspects and results. We have built a standards-based platform with which users can request a CAD service and receive the results in their local PACS. The process runs through a web interface that allows sending data to the different CAD services. A DICOM SR object is received with the results of the algorithms and stored inside the original study in the proper folder with the original images. As a result, a homogeneous service will be offered to the different hospitals of the region. End users will benefit from a homogeneous workflow and a standardised integration model to request and obtain results from CAD systems in any modality, not dependent on commercial integration models. This new solution will foster the deployment of these technologies in the entire region of Galicia.
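    A sketch of the request side of such a workflow is given below: a study is submitted to a CAD web service and the client polls for completion, with the results later arriving in the PACS as a DICOM SR attached to the original study. The endpoint paths, payload fields and service URL are hypothetical and do not describe the Galician platform's actual interface.

```python
# Sketch of the request side of a CAD-as-a-service workflow. Endpoints and
# payload fields are hypothetical; in the described system the results return
# to the PACS as a DICOM SR attached to the original study.
import time
import requests

CAD_SERVICE = "https://cad.example.org/api"     # hypothetical platform endpoint

def request_cad(study_uid, algorithm):
    resp = requests.post(f"{CAD_SERVICE}/jobs",
                         json={"study_uid": study_uid, "algorithm": algorithm},
                         timeout=30)
    resp.raise_for_status()
    return resp.json()["job_id"]

def wait_for_result(job_id, poll_s=10):
    while True:
        status = requests.get(f"{CAD_SERVICE}/jobs/{job_id}", timeout=30).json()
        if status["state"] in ("done", "failed"):
            return status
        time.sleep(poll_s)

if __name__ == "__main__":
    job = request_cad("1.2.826.0.1.3680043.2.1125.1", algorithm="lung-nodule")
    print(wait_for_result(job))
```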

  1. Supporting life-long competence development using the TENCompetence infrastructure: a first experiment

    NARCIS (Netherlands)

    Schoonenboom, J.; Sligte, H.; Moghnieh, A.; Hernàndez-Leo, D.; Stefanov, K.; Glahn, C.; Specht, M.; Lemmers, R.; Sligte, H.; Koper, R.

    2008-01-01

    This paper describes a test of the TENCompetence infrastructure that was developed for supporting lifelong competence development. The infrastructure contains supportive elements, among others the listing of competences and their components, competence development plans attached to competences and

  2. An extensible infrastructure for fully automated spike sorting during online experiments.

    Science.gov (United States)

    Santhanam, Gopal; Sahani, Maneesh; Ryu, Stephen; Shenoy, Krishna

    2004-01-01

    When recording extracellular neural activity, it is often necessary to distinguish action potentials arising from distinct cells near the electrode tip, a process commonly referred to as "spike sorting." In a number of experiments, notably those that involve direct neuroprosthetic control of an effector, this cell-by-cell classification of the incoming signal must be achieved in real time. Several commercial offerings are available for this task, but all of these require some manual supervision per electrode, making each scheme cumbersome with large electrode counts. We present a new infrastructure that leverages existing unsupervised algorithms to sort and subsequently implement the resulting signal classification rules for each electrode using a commercially available Cerebus neural signal processor. We demonstrate an implementation of this infrastructure to classify signals from a cortical electrode array, using a probabilistic clustering algorithm (described elsewhere). The data were collected from a rhesus monkey performing a delayed center-out reach task. We used both sorted and unsorted (thresholded) action potentials from an array implanted in pre-motor cortex to "predict" the reach target, a common decoding operation in neuroprosthetic research. The use of sorted spikes led to an improvement in decoding accuracy of between 3.6 and 6.4%.
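    The two basic steps behind spike sorting, threshold detection and unsupervised clustering of waveform snippets, can be sketched offline as below. The paper's system uses a probabilistic clustering algorithm running in real time on a Cerebus processor; the k-means used here is a stand-in for illustration, not that algorithm, and the signal is synthetic.

```python
# Minimal offline sketch of spike sorting: threshold-crossing detection
# followed by unsupervised clustering of waveform snippets.
import numpy as np
from sklearn.cluster import KMeans

def detect_spikes(signal, fs, thresh_sd=4.0, snippet_ms=1.6):
    """Return waveform snippets around negative threshold crossings."""
    thresh = -thresh_sd * np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    half = int(snippet_ms * 1e-3 * fs / 2)
    crossings = np.where((signal[1:] < thresh) & (signal[:-1] >= thresh))[0]
    return np.array([signal[i - half:i + half]
                     for i in crossings if half <= i < len(signal) - half])

def sort_spikes(snippets, n_units=2):
    """Cluster snippets into putative units with k-means (unsupervised)."""
    return KMeans(n_clusters=n_units, n_init=10).fit_predict(snippets)

if __name__ == "__main__":
    fs = 30_000
    rng = np.random.default_rng(1)
    trace = rng.normal(0, 1, 5 * fs)            # synthetic noise-only trace
    snippets = detect_spikes(trace, fs)
    if len(snippets) >= 2:
        print(np.bincount(sort_spikes(snippets)))
```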

  3. RC Circuits: Some Computer-Interfaced Experiments.

    Science.gov (United States)

    Jolly, Pratibha; Verma, Mallika

    1994-01-01

    Describes a simple computer-interfaced experiment for recording the response of an RC network to an arbitrary input excitation. The setup is used to pose a variety of open-ended investigations in network modeling by varying the initial conditions, input signal waveform, and circuit topology. (DDR)
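    The canonical curve such a computer-interfaced setup records is the RC step response V(t) = V0 * (1 - exp(-t/RC)); the sketch below tabulates it for assumed component values (R, C and V0 are not specified in the record).

```python
# Step response of an RC network: V(t) = V0 * (1 - exp(-t / (R*C))).
# Component values below are assumed for illustration.
import numpy as np

R = 10e3        # ohms
C = 100e-6      # farads
V0 = 5.0        # volts, step input
tau = R * C     # time constant = 1 s here

t = np.linspace(0, 5 * tau, 6)
v = V0 * (1 - np.exp(-t / tau))
for ti, vi in zip(t, v):
    print(f"t = {ti:4.1f} s   Vc = {vi:.3f} V")   # reaches ~63% of V0 at t = tau
```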

  4. Incorporating lab experience into computer security courses

    NARCIS (Netherlands)

    Ben Othmane, L.; Bhuse, V.; Lilien, L.T.

    2013-01-01

    We describe our experience with teaching computer security labs at two different universities. We report on the hardware and software lab setups, summarize lab assignments, present the challenges encountered, and discuss the lessons learned. We agree with and emphasize the viewpoint that security

  5. Volunteer computing experience with ATLAS@Home

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068610; The ATLAS collaboration; Bianchi, Riccardo-Maria; Cameron, David; Filipčič, Andrej; Lançon, Eric; Wu, Wenjing

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers’ resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease the deployment on for example university clusters, using multiple cores inside one task to reduce the memory requirements and running different types of workload such as event generation. In addition to technical details the success of ATLAS@Home as an outreach tool is evaluated.

  6. Volunteer Computing Experience with ATLAS@Home

    CERN Document Server

    Cameron, David; The ATLAS collaboration; Bourdarios, Claire; Lançon, Eric

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease the deployment on for example university clusters, using multiple cores inside one job to reduce the memory requirements and running different types of workload such as event generation. In addition to technical details the success of ATLAS@Home as an outreach tool is evaluated.

  7. Volunteer Computing Experience with ATLAS@Home

    Science.gov (United States)

    Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.; ATLAS Collaboration

    2017-10-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers’ resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease the deployment on for example university clusters, using multiple cores inside one task to reduce the memory requirements and running different types of workload such as event generation. In addition to technical details the success of ATLAS@Home as an outreach tool is evaluated.

  8. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); Zwahlen, Daniel [Kantonsspital Graubuenden, Department of Radiotherapy, Chur (Switzerland); Bodis, Stephan [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); University Hospital Zurich, Department of Radiation Oncology, Zurich (Switzerland)

    2016-09-15

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculation of staff requirements due to anticipated changes in future radiotherapy practices has been proposed. This model could be tailor-made and individualized for any radiotherapy centre. A 9.8 % increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist the stakeholders and health planners in designing an appropriate strategy for meeting future radiotherapy needs for Switzerland. (orig.) [German] The aim of this study was to evaluate the current status of the infrastructure and staffing of the

  9. Computer Based Road Accident Reconstruction Experiences

    Directory of Open Access Journals (Sweden)

    Milan Batista

    2005-03-01

    Full Text Available Since road accident analyses and reconstructions are increasingly based on specific computer software for simulation of vehicle driving dynamics and collision dynamics, and for simulation of a set of trial runs from which the model that best describes a real event can be selected, the paper presents an overview of some computer software and methods available to accident reconstruction experts. Besides being time-saving, such computer software, when properly used, can provide a more authentic and more trustworthy accident reconstruction. Practical experiences obtained while using computer software tools for road accident reconstruction in the Transport Safety Laboratory at the Faculty of Maritime Studies and Transport of the University of Ljubljana are therefore presented and discussed. This paper also addresses software technology for extracting maximum information from the accident photo-documentation to support accident reconstruction based on the simulation software, as well as the field work of reconstruction experts or police at the road accident scene defined by this technology.

  10. The Computational Infrastructure for Geodynamics: An Example of Software Curation and Citation in the Geodynamics Community

    Science.gov (United States)

    Hwang, L.; Kellogg, L. H.

    2017-12-01

    Curation of software promotes discoverability and accessibility and works hand in hand with scholarly citation to ascribe value to, and provide recognition for software development. To meet this challenge, the Computational Infrastructure for Geodynamics (CIG) maintains a community repository built on custom and open tools to promote discovery, access, identification, credit, and provenance of research software for the geodynamics community. CIG (geodynamics.org) originated from recognition of the tremendous effort required to develop sound software and the need to reduce duplication of effort and to sustain community codes. CIG curates software across 6 domains and has developed and follows software best practices that include establishing test cases, documentation, and a citable publication for each software package. CIG software landing web pages provide access to current and past releases; many are also accessible through the CIG community repository on github. CIG has now developed abc - attribution builder for citation to enable software users to give credit to software developers. abc uses zenodo as an archive and as the mechanism to obtain a unique identifier (DOI) for scientific software. To assemble the metadata, we searched the software's documentation and research publications and then requested the primary developers to verify. In this process, we have learned that each development community approaches software attribution differently. The metadata gathered is based on guidelines established by groups such as FORCE11 and OntoSoft. The rollout of abc is gradual as developers are forward-looking, rarely willing to go back and archive prior releases in zenodo. Going forward all actively developed packages will utilize the zenodo and github integration to automate the archival process when a new release is issued. How to handle legacy software, multi-authored libraries, and assigning roles to software remain open issues.

  11. Software Attribution for Geoscience Applications in the Computational Infrastructure for Geodynamics

    Science.gov (United States)

    Hwang, L.; Dumit, J.; Fish, A.; Soito, L.; Kellogg, L. H.; Smith, M.

    2015-12-01

    Scientific software is largely developed by individual scientists and represents a significant intellectual contribution to the field. As the scientific culture and funding agencies move towards an expectation that software be open-source, there is a corresponding need for mechanisms to cite software, both to provide credit and recognition to developers, and to aid in discoverability of software and scientific reproducibility. We assess the geodynamic modeling community's current citation practices by examining more than 300 predominantly self-reported publications utilizing scientific software in the past 5 years that is available through the Computational Infrastructure for Geodynamics (CIG). Preliminary results indicate that authors cite and attribute software either through citing (in rank order) peer-reviewed scientific publications, a user's manual, and/or a paper describing the software code. Attributions may be found directly in the text, in acknowledgements, in figure captions, or in footnotes. What is considered citable varies widely. Citations predominantly lack software version numbers or persistent identifiers to find the software package. Versioning may be implied through reference to a versioned user manual. Authors sometimes report code features used and whether they have modified the code. As an open-source community, CIG requests that researchers contribute their modifications to the repository. However, such modifications may not be contributed back to a repository code branch, decreasing the chances of discoverability and reproducibility. Survey results through CIG's Software Attribution for Geoscience Applications (SAGA) project suggest that a lack of knowledge, tools, and workflows to cite codes is a barrier to effectively implementing the emerging citation norms. Generated on-demand attributions on software landing pages and a prototype extensible plug-in to automatically generate attributions in codes are the first steps towards reproducibility.

  12. Computational Experiments for Science and Engineering Education

    Science.gov (United States)

    Xie, Charles

    2011-01-01

    How to integrate simulation-based engineering and science (SBES) into the science curriculum smoothly is a challenging question. For the importance of SBES to be appreciated, the core value of simulations-that they help people understand natural phenomena and solve engineering problems-must be taught. A strategy to achieve this goal is to introduce computational experiments to the science curriculum to replace or supplement textbook illustrations and exercises and to complement or frame hands-on or wet lab experiments. In this way, students will have an opportunity to learn about SBES without compromising other learning goals required by the standards and teachers will welcome these tools as they strengthen what they are already teaching. This paper demonstrates this idea using a number of examples in physics, chemistry, and engineering. These exemplary computational experiments show that it is possible to create a curriculum that is both deeper and wider.

  13. The ATLAS Simulation Infrastructure

    CERN Document Server

    Aad, G.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acharya, B.S.; Adams, D.L.; Addy, T.N.; Adelman, J.; Adorisio, C.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; Ahmed, H.; Ahsan, M.; Aielli, G.; Akdogan, T.; Akesson, T.P.A.; Akimoto, G.; Akimov, A.V.; Aktas, A.; Alam, M.S.; Alam, M.A.; Albrand, S.; Aleksa, M.; Aleksandrov, I.N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allport, P.P.; Allwood-Spiers, S.E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alviggi, M.G.; Amako, K.; Amelung, C.; Amorim, A.; Amoros, G.; Amram, N.; Anastopoulos, C.; Andeen, T.; Anders, C.F.; Anderson, K.J.; Andreazza, A.; Andrei, V.; Anduaga, X.S.; Angerami, A.; Anghinolfi, F.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Antos, J.; Antunovic, B.; Anulli, F.; Aoun, S.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A.T.H.; Archambault, J.P.; Arfaoui, S.; Arguin, J-F.; Argyropoulos, T.; Arik, M.; Armbruster, A.J.; Arnaez, O.; Arnault, C.; Artamonov, A.; Arutinov, D.; Asai, M.; Asai, S.; Asfandiyarov, R.; Ask, S.; Asman, B.; Asner, D.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Atoian, G.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Austin, N.; Avolio, G.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M.A.; Bach, A.M.; Bachacou, H.; Bachas, K.; Backes, M.; Badescu, E.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J.T.; Baker, O.K.; Baker, M.D.; Baker, S; Baltasar Dos Santos Pedrosa, F.; Banas, E.; Banerjee, P.; Banerjee, S.; Banfi, D.; Bangert, A.; Bansal, V.; Baranov, S.P.; Baranov, S.; Barashkou, A.; Barber, T.; Barberio, E.L.; Barberis, D.; Barbero, M.; Bardin, D.Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B.M.; Barnett, R.M.; Baroncelli, A.; Barr, A.J.; Barreiro, F.; Barreiro Guimaraes da Costa, J.; Barrillon, P.; Bartoldus, R.; Bartsch, D.; Bates, R.L.; Batkova, L.; Batley, J.R.; Battaglia, A.; Battistin, M.; Bauer, F.; Bawa, H.S.; Bazalova, M.; Beare, B.; Beau, T.; Beauchemin, P.H.; Beccherle, R.; Becerici, N.; Bechtle, P.; Beck, G.A.; Beck, H.P.; Beckingham, M.; Becks, K.H.; Beddall, A.J.; Beddall, A.; Bednyakov, V.A.; Bee, C.; Begel, M.; Behar Harpaz, S.; Behera, P.K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P.J.; Bell, W.H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, M.; Belloni, A.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Bendel, M.; Benedict, B.H.; Benekos, N.; Benhammou, Y.; Benincasa, G.P.; Benjamin, D.P.; Benoit, M.; Bensinger, J.R.; Benslama, K.; Bentvelsen, S.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertin, A.; Besana, M.I.; Besson, N.; Bethke, S.; Bianchi, R.M.; Bianco, M.; Biebel, O.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K.M.; Blair, R.E.; Blanchard, J-B; Blanchot, G.; Blocker, C.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G.J.; Bocci, A.; Boehler, M.; Boek, J.; Boelaert, N.; Boser, S.; Bogaerts, J.A.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Bondarenko, V.G.; Bondioli, M.; Boonekamp, M.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borroni, S.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Bouchami, J.; Boudreau, 
J.; Bouhova-Thacker, E.V.; Boulahouache, C.; Bourdarios, C.; Boveia, A.; Boyd, J.; Boyko, I.R.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Branchini, P.; Brandenburg, G.W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J.E.; Braun, H.M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Britton, D.; Brochu, F.M.; Brock, I.; Brock, R.; Brodet, E.; Bromberg, C.; Brooijmans, G.; Brooks, W.K.; Brown, G.; Bruckman de Renstrom, P.A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bucci, F.; Buchanan, J.; Buchholz, P.; Buckley, A.G.; Budagov, I.A.; Budick, B.; Buscher, V.; Bugge, L.; Bulekov, O.; Bunse, M.; Buran, T.; Burckhart, H.; Burdin, S.; Burgess, T.; Burke, S.; Busato, E.; Bussey, P.; Buszello, C.P.; Butin, F.; Butler, B.; Butler, J.M.; Buttar, C.M.; Butterworth, J.M.; Byatt, T.; Caballero, J.; Cabrera Urban, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L.P.; Calvet, D.; Camarri, P.; Cameron, D.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Capasso, L.; Capeans Garrido, M.D.M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Caramarcu, C.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carrillo Montoya, G.D.; Carron Montero, S.; Carter, A.A.; Carter, J.R.; Carvalho, J.; Casadei, D.; Casado, M.P.; Cascella, M.; Castaneda Hernandez, A.M.; Castaneda-Miranda, E.; Castillo Gimenez, V.; Castro, N.F.; Cataldi, G.; Catinaccio, A.; Catmore, J.R.; Cattai, A.; Cattani, G.; Caughron, S.; Cauz, D.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerqueira, A.S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cetin, S.A.; Chafaq, A.; Chakraborty, D.; Chan, K.; Chapman, J.D.; Chapman, J.W.; Chareyre, E.; Charlton, D.G.; Chavda, V.; Cheatham, S.; Chekanov, S.; Chekulaev, S.V.; Chelkov, G.A.; Chen, H.; Chen, S.; Chen, X.; Cheplakov, A.; Chepurnov, V.F.; Cherkaoui El Moursli, R.; Tcherniatine, V.; Chesneanu, D.; Cheu, E.; Cheung, S.L.; Chevalier, L.; Chevallier, F.; Chiarella, V.; Chiefari, G.; Chikovani, L.; Childers, J.T.; Chilingarov, A.; Chiodini, G.; Chizhov, V.; Choudalakis, G.; Chouridou, S.; Christidi, I.A.; Christov, A.; Chromek-Burckhart, D.; Chu, M.L.; Chudoba, J.; Ciapetti, G.; Ciftci, A.K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciobotaru, M.D.; Ciocca, C.; Ciocio, A.; Cirilli, M.; Citterio, M.; Clark, A.; Clark, P.J.; Cleland, W.; Clemens, J.C.; Clement, B.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coggeshall, J.; Cogneras, E.; Colijn, A.P.; Collard, C.; Collins, N.J.; Collins-Tooth, C.; Collot, J.; Colon, G.; Conde Muino, P.; Coniavitis, E.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F.; Cooke, M.; Cooper, B.D.; Cooper-Sarkar, A.M.; Cooper-Smith, N.J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M.J.; Costanzo, D.; Costin, T.; Cote, D.; Coura Torres, R.; Courneyea, L.; Cowan, G.; Cowden, C.; Cox, B.E.; Cranmer, K.; Cranshaw, J.; Cristinziani, M.; Crosetti, G.; Crupi, R.; Crepe-Renaudin, S.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Curatolo, M.; Curtis, C.J.; Cwetanski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; Da Via, C; Dabrowski, W.; Dai, T.; Dallapiccola, C.; Dallison, S.J.; Daly, C.H.; Dam, M.; Danielsson, H.O.; Dannheim, D.; Dao, V.; Darbo, G.; Darlea, G.L.; Davey, W.; Davidek, T.; Davidson, N.; Davidson, R.; Davies, M.; Davison, A.R.; Dawson, I.; Daya, R.K.; De, K.; de 
Asmundis, R.; De Castro, S.; De Castro Faria Salgado, P.E.; De Cecco, S.; de Graat, J.; De Groot, N.; de Jong, P.; De Mora, L.; De Oliveira Branco, M.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Vivie De Regie, J.B.; De Zorzi, G.; Dean, S.; Dedovich, D.V.; Degenhardt, J.; Dehchar, M.; Del Papa, C.; Del Peso, J.; Del Prete, T.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P.A.; Deluca, C.; Demers, S.; Demichev, M.; Demirkoz, B.; Deng, J.; Deng, W.; Denisov, S.P.; Derkaoui, J.E.; Derue, F.; Dervan, P.; Desch, K.; Deviveiros, P.O.; Dewhurst, A.; DeWilde, B.; Dhaliwal, S.; Dhullipudi, R.; Di Ciaccio, A.; Di Ciaccio, L.; Di Domenico, A.; Di Girolamo, A.; Di Girolamo, B.; Di Luise, S.; Di Mattia, A.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Diaz, M.A.; Diblen, F.; Diehl, E.B.; Dietrich, J.; Dietzsch, T.A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djilkibaev, R.; Djobava, T.; do Vale, M.A.B.; Do Valle Wemans, A.; Doan, T.K.O.; Dobos, D.; Dobson, E.; Dobson, M.; Doglioni, C.; Doherty, T.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B.A.; Dohmae, T.; Donega, M.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dotti, A.; Dova, M.T.; Doxiadis, A.; Doyle, A.T.; Drasal, Z.; Dris, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dudziak, F.; Duhrssen, M.; Duflot, L.; Dufour, M-A.; Dunford, M.; Duran Yildiz, H.; Dushkin, A.; Duxfield, R.; Dwuznik, M.; Duren, M.; Ebenstein, W.L.; Ebke, J.; Eckweiler, S.; Edmonds, K.; Edwards, C.A.; Egorov, K.; Ehrenfeld, W.; Ehrich, T.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Engelmann, R.; Engl, A.; Epp, B.; Eppig, A.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ermoline, I.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienvre, A.I.; Etzion, E.; Evans, H.; Fabbri, L.; Fabre, C.; Facius, K.; Fakhrutdinov, R.M.; Falciano, S.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farley, J.; Farooque, T.; Farrington, S.M.; Farthouat, P.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Fayard, L.; Fayette, F.; Febbraro, R.; Federic, P.; Fedin, O.L.; Fedorko, W.; Feligioni, L.; Felzmann, C.U.; Feng, C.; Feng, E.J.; Fenyuk, A.B.; Ferencei, J.; Ferland, J.; Fernandes, B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M.L.; Ferrere, D.; Ferretti, C.; Fiascaris, M.; Fiedler, F.; Filipcic, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Fiolhais, M.C.N.; Fiorini, L.; Firan, A.; Fischer, G.; Fisher, M.J.; Flechl, M.; Fleck, I.; Fleckner, J.; Fleischmann, P.; Fleischmann, S.; Flick, T.; Flores Castillo, L.R.; Flowerdew, M.J.; Fonseca Martin, T.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fowler, A.J.; Fowler, K.; Fox, H.; Francavilla, P.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; Freestone, J.; French, S.T.; Froeschl, R.; Froidevaux, D.; Frost, J.A.; Fukunaga, C.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Gallas, E.J.; Gallo, V.; Gallop, B.J.; Gallus, P.; Galyaev, E.; Gan, K.K.; Gao, Y.S.; Gaponenko, A.; Garcia-Sciveres, M.; Garcia, C.; Garcia Navarro, J.E.; Gardner, R.W.; Garelli, N.; Garitaonandia, H.; Garonne, V.; 
Gatti, C.; Gaudio, G.; Gautard, V.; Gauzzi, P.; Gavrilenko, I.L.; Gay, C.; Gaycken, G.; Gazis, E.N.; Ge, P.; Gee, C.N.P.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Genest, M.H.; Gentile, S.; Georgatos, F.; George, S.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, S.M.; Gilbert, L.M.; Gilchriese, M.; Gilewsky, V.; Gingrich, D.M.; Ginzburg, J.; Giokaris, N.; Giordani, M.P.; Giordano, R.; Giorgi, F.M.; Giovannini, P.; Giraud, P.F.; Girtler, P.; Giugni, D.; Giusti, P.; Gjelsten, B.K.; Gladilin, L.K.; Glasman, C.; Glazov, A.; Glitza, K.W.; Glonti, G.L.; Godfrey, J.; Godlewski, J.; Goebel, M.; Gopfert, T.; Goeringer, C.; Gossling, C.; Gottfert, T.; Goggi, V.; Goldfarb, S.; Goldin, D.; Golling, T.; Gomes, A.; Gomez Fajardo, L.S.; Goncalo, R.; Gonella, L.; Gong, C.; Gonzalez de la Hoz, S.; Gonzalez Silva, M.L.; Gonzalez-Sevilla, S.; Goodson, J.J.; Goossens, L.; Gordon, H.A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorisek, A.; Gornicki, E.; Gosdzik, B.; Gosselink, M.; Gostkin, M.I.; Gough Eschrich, I.; Gouighri, M.; Goujdami, D.; Goulette, M.P.; Goussiou, A.G.; Goy, C.; Grabowska-Bold, I.; Grafstrom, P.; Grahn, K-J.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Grau, N.; Gray, H.M.; Gray, J.A.; Graziani, E.; Green, B.; Greenshaw, T.; Greenwood, Z.D.; Gregor, I.M.; Grenier, P.; Griesmayer, E.; Griffiths, J.; Grigalashvili, N.; Grillo, A.A.; Grimm, K.; Grinstein, S.; Grishkevich, Y.V.; Groh, M.; Groll, M.; Gross, E.; Grosse-Knetter, J.; Groth-Jensen, J.; Grybel, K.; Guicheney, C.; Guida, A.; Guillemin, T.; Guler, H.; Gunther, J.; Guo, B.; Gupta, A.; Gusakov, Y.; Gutierrez, A.; Gutierrez, P.; Guttman, N.; Gutzwiller, O.; Guyot, C.; Gwenlan, C.; Gwilliam, C.B.; Haas, A.; Haas, S.; Haber, C.; Hadavand, H.K.; Hadley, D.R.; Haefner, P.; Hartel, R.; Hajduk, Z.; Hakobyan, H.; Haller, J.; Hamacher, K.; Hamilton, A.; Hamilton, S.; Han, L.; Hanagaki, K.; Hance, M.; Handel, C.; Hanke, P.; Hansen, J.R.; Hansen, J.B.; Hansen, J.D.; Hansen, P.H.; Hansl-Kozanecka, T.; Hansson, P.; Hara, K.; Hare, G.A.; Harenberg, T.; Harrington, R.D.; Harris, O.M.; Harrison, K; Hartert, J.; Hartjes, F.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hashemi, K.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C.M.; Hawkings, R.J.; Hayakawa, T.; Hayward, H.S.; Haywood, S.J.; Head, S.J.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Helary, L.; Heller, M.; Hellman, S.; Helsens, C.; Hemperek, T.; Henderson, R.C.W.; Henke, M.; Henrichs, A.; Henriques Correia, A.M.; Henrot-Versille, S.; Hensel, C.; Henss, T.; Hernandez Jimenez, Y.; Hershenhorn, A.D.; Herten, G.; Hertenberger, R.; Hervas, L.; Hessey, N.P.; Higon-Rodriguez, E.; Hill, J.C.; Hiller, K.H.; Hillert, S.; Hillier, S.J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirsch, F.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M.C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M.R.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holy, T.; Holzbauer, J.L.; Homma, Y.; Horazdovsky, T.; Hori, T.; Horn, C.; Horner, S.; Horvat, S.; Hostachy, J-Y.; Hou, S.; Hoummada, A.; Howe, T.; Hrivnac, J.; Hryn'ova, T.; Hsu, P.J.; Hsu, S.C.; Huang, G.S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Hughes, E.W.; Hughes, G.; Hurwitz, M.; Husemann, U.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idarraga, J.; Iengo, P.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ince, T.; Ioannou, P.; Iodice, 
M.; Irles Quiles, A.; Ishikawa, A.; Ishino, M.; Ishmukhametov, R.; Isobe, T.; Issakov, V.; Issever, C.; Istin, S.; Itoh, Y.; Ivashin, A.V.; Iwanski, W.; Iwasaki, H.; Izen, J.M.; Izzo, V.; Jackson, B.; Jackson, J.N.; Jackson, P.; Jaekel, M.R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakubek, J.; Jana, D.K.; Jansen, E.; Jantsch, A.; Janus, M.; Jared, R.C.; Jarlskog, G.; Jeanty, L.; Jen-La Plante, I.; Jenni, P.; Jez, P.; Jezequel, S.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinnouchi, O.; Joffe, D.; Johansen, M.; Johansson, K.E.; Johansson, P.; Johnert, S; Johns, K.A.; Jon-And, K.; Jones, G.; Jones, R.W.L.; Jones, T.J.; Jorge, P.M.; Joseph, J.; Juranek, V.; Jussel, P.; Kabachenko, V.V.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kaiser, S.; Kajomovitz, E.; Kalinin, S.; Kalinovskaya, L.V.; Kalinowski, A.; Kama, S.; Kanaya, N.; Kaneda, M.; Kantserov, V.A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Kar, D.; Karagounis, M.; Karagoz Unel, M.; Kartvelishvili, V.; Karyukhin, A.N.; Kashif, L.; Kasmi, A.; Kass, R.D.; Kastanas, A.; Kastoryano, M.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kayl, M.S.; Kayumov, F.; Kazanin, V.A.; Kazarinov, M.Y.; Keates, J.R.; Keeler, R.; Keener, P.T.; Kehoe, R.; Keil, M.; Kekelidze, G.D.; Kelly, M.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kersevan, B.P.; Kersten, S.; Kessoku, K.; Khakzad, M.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Khomich, A.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H.; Kim, M.S.; Kim, P.C.; Kim, S.H.; Kind, O.; Kind, P.; King, B.T.; Kirk, J.; Kirsch, G.P.; Kirsch, L.E.; Kiryunin, A.E.; Kisielewska, D.; Kittelmann, T.; Kiyamura, H.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klemetti, M.; Klier, A.; Klimentov, A.; Klingenberg, R.; Klinkby, E.B.; Klioutchnikova, T.; Klok, P.F.; Klous, S.; Kluge, E.E.; Kluge, T.; Kluit, P.; Klute, M.; Kluth, S.; Knecht, N.S.; Kneringer, E.; Ko, B.R.; Kobayashi, T.; Kobel, M.; Koblitz, B.; Kocian, M.; Kocnar, A.; Kodys, P.; Koneke, K.; Konig, A.C.; Koenig, S.; Kopke, L.; Koetsveld, F.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kohn, F.; Kohout, Z.; Kohriki, T.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Koll, J.; Kollar, D.; Kolos, S.; Kolya, S.D.; Komar, A.A.; Komaragiri, J.R.; Kondo, T.; Kono, T.; Konoplich, R.; Konovalov, S.P.; Konstantinidis, N.; Koperny, S.; Korcyl, K.; Kordas, K.; Korn, A.; Korolkov, I.; Korolkova, E.V.; Korotkov, V.A.; Kortner, O.; Kostka, P.; Kostyukhin, V.V.; Kotov, S.; Kotov, V.M.; Kotov, K.Y.; Kourkoumelis, C.; Koutsman, A.; Kowalewski, R.; Kowalski, H.; Kowalski, T.Z.; Kozanecki, W.; Kozhin, A.S.; Kral, V.; Kramarenko, V.A.; Kramberger, G.; Krasny, M.W.; Krasznahorkay, A.; Kreisel, A.; Krejci, F.; Kretzschmar, J.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Kruger, H.; Krumshteyn, Z.V.; Kubota, T.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kummer, C.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurchaninov, L.L.; Kurochkin, Y.A.; Kus, V.; Kwee, R.; La Rotonda, L.; Labbe, J.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V.R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lamanna, M.; Lampen, C.L.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M.P.J.; Lane, J.L.; Lankford, A.J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J.F.; Lari, T.; Larner, 
A.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Laycock, P.; Lazarev, A.B.; Lazzaro, A.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; Le Vine, M.; Lebedev, A.; Lebel, C.; LeCompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J.S.H.; Lee, S.C.; Lefebvre, M.; Legendre, M.; LeGeyt, B.C.; Legger, F.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leitner, R.; Lellouch, D.; Lellouch, J.; Lendermann, V.; Leney, K.J.C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leonhardt, K.; Leroy, C.; Lessard, J-R.; Lester, C.G.; Leung Fook Cheong, A.; Leveque, J.; Levin, D.; Levinson, L.J.; Leyton, M.; Li, H.; Li, S.; Li, X.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Lichtnecker, M.; Lie, K.; Liebig, W.; Lilley, J.N.; Lim, H.; Limosani, A.; Limper, M.; Lin, S.C.; Linnemann, J.T.; Lipeles, E.; Lipinsky, L.; Lipniacka, A.; Liss, T.M.; Lissauer, D.; Lister, A.; Litke, A.M.; Liu, C.; Liu, D.; Liu, H.; Liu, J.B.; Liu, M.; Liu, T.; Liu, Y.; Livan, M.; Lleres, A.; Lloyd, S.L.; Lobodzinska, E.; Loch, P.; Lockman, W.S.; Lockwitz, S.; Loddenkoetter, T.; Loebinger, F.K.; Loginov, A.; Loh, C.W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, R.E.; Lopes, L.; Lopez Mateos, D.; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Loureiro, K.F.; Lovas, L.; Love, J.; Love, P.A.; Lowe, A.J.; Lu, F.; Lubatti, H.J.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, D.; Ludwig, I.; Luehring, F.; Luisa, L.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundberg, J.; Lundquist, J.; Lynn, D.; Lys, J.; Lytken, E.; Ma, H.; Ma, L.L.; Macana Goia, J.A.; Maccarrone, G.; Macchiolo, A.; Macek, B.; Machado Miguens, J.; Mackeprang, R.; Madaras, R.J.; Mader, W.F.; Maenner, R.; Maeno, T.; Mattig, P.; Mattig, S.; Magalhaes Martins, P.J.; Magradze, E.; Mahalalel, Y.; Mahboubi, K.; Mahmood, A.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makouski, M.; Makovec, N.; Malecki, Pa.; Malecki, P.; Maleev, V.P.; Malek, F.; Mallik, U.; Malon, D.; Maltezos, S.; Malyshev, V.; Malyukov, S.; Mambelli, M.; Mameghani, R.; Mamuzic, J.; Mandelli, L.; Mandic, I.; Mandrysch, R.; Maneira, J.; Mangeard, P.S.; Manjavidze, I.D.; Manning, P.M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J.F.; Marchese, F.; Marchiori, G.; Marcisovsky, M.; Marino, C.P.; Marroquim, F.; Marshall, Z.; Marti-Garcia, S.; Martin, A.J.; Martin, A.J.; Martin, B.; Martin, B.; Martin, F.F.; Martin, J.P.; Martin, T.A.; Martin dit Latour, B.; Martinez, M.; Martinez Outschoorn, V.; Martini, A.; Martyniuk, A.C.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A.L.; Massa, I.; Massol, N.; Mastroberardino, A.; Masubuchi, T.; Matricon, P.; Matsunaga, H.; Matsushita, T.; Mattravers, C.; Maxfield, S.J.; Mayne, A.; Mazini, R.; Mazur, M.; Mazzanti, M.; Mc Donald, J.; Mc Kee, S.P.; McCarn, A.; McCarthy, R.L.; McCubbin, N.A.; McFarlane, K.W.; McGlone, H.; Mchedlidze, G.; McMahon, S.J.; McPherson, R.A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meera-Lebbai, R.; Meguro, T.M.; Mehlhase, S.; Mehta, A.; Meier, K.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B.R.; Mendoza Navas, L.; Meng, Z.; Menke, S.; Meoni, E.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F.S.; Messina, A.M.; Metcalfe, J.; Mete, A.S.; Meyer, J-P.; Meyer, J.; Meyer, J.; Meyer, T.C.; Meyer, W.T.; Miao, J.; Michal, S.; Micu, L.; Middleton, R.P.; Migas, S.; Mijovic, L.; Mikenberg, G.; Mikestikova, M.; Mikuz, M.; Miller, D.W.; Mills, W.J.; Mills, C.M.; Milov, A.; Milstead, D.A.; Milstein, D.; Minaenko, A.A.; Minano, M.; 
Minashvili, I.A.; Mincer, A.I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L.M.; Mirabelli, G.; Misawa, S.; Miscetti, S.; Misiejuk, A.; Mitrevski, J.; Mitsou, V.A.; Miyagawa, P.S.; Mjornmark, J.U.; Mladenov, D.; Moa, T.; Moed, S.; Moeller, V.; Monig, K.; Moser, N.; Mohr, W.; Mohrdieck-Mock, S.; Moles-Valls, R.; Molina-Perez, J.; Monk, J.; Monnier, E.; Montesano, S.; Monticelli, F.; Moore, R.W.; Mora Herrera, C.; Moraes, A.; Morais, A.; Morel, J.; Morello, G.; Moreno, D.; Moreno Llacer, M.; Morettini, P.; Morii, M.; Morley, A.K.; Mornacchi, G.; Morozov, S.V.; Morris, J.D.; Moser, H.G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S.V.; Moyse, E.J.W.; Mudrinic, M.; Mueller, F.; Mueller, J.; Mueller, K.; Muller, T.A.; Muenstermann, D.; Muir, A.; Munwes, Y.; Murillo Garcia, R.; Murray, W.J.; Mussche, I.; Musto, E.; Myagkov, A.G.; Myska, M.; Nadal, J.; Nagai, K.; Nagano, K.; Nagasaka, Y.; Nairz, A.M.; Nakamura, K.; Nakano, I.; Nakatsuka, H.; Nanava, G.; Napier, A.; Nash, M.; Nation, N.R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nderitu, S.K.; Neal, H.A.; Nebot, E.; Nechaeva, P.; Negri, A.; Negri, G.; Nelson, A.; Nelson, T.K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A.A.; Nessi, M.; Neubauer, M.S.; Neusiedl, A.; Neves, R.N.; Nevski, P.; Newcomer, F.M.; Nickerson, R.B.; Nicolaidou, R.; Nicolas, L.; Nicoletti, G.; Nicquevert, B.; Niedercorn, F.; Nielsen, J.; Nikiforov, A.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, P.; Nisati, A.; Nishiyama, T.; Nisius, R.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nordberg, M.; Nordkvist, B.; Notz, D.; Novakova, J.; Nozaki, M.; Nozicka, M.; Nugent, I.M.; Nuncio-Quiroz, A.E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; O'Neil, D.C.; O'Shea, V.; Oakham, F.G.; Oberlack, H.; Ochi, A.; Oda, S.; Odaka, S.; Odier, J.; Ogren, H.; Oh, A.; Oh, S.H.; Ohm, C.C.; Ohshima, T.; Ohshita, H.; Ohsugi, T.; Okada, S.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olchevski, A.G.; Oliveira, M.; Oliveira Damazio, D.; Oliver, J.; Oliver Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onofre, A.; Onyisi, P.U.E.; Oram, C.J.; Oreglia, M.J.; Oren, Y.; Orestano, D.; Orlov, I.; Oropeza Barrera, C.; Orr, R.S.; Ortega, E.O.; Osculati, B.; Ospanov, R.; Osuna, C.; Ottersbach, J.P; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Owen, M.; Owen, S.; Oyarzun, A; Ozcan, V.E.; Ozone, K.; Ozturk, N.; Pacheco Pages, A.; Padilla Aranda, C.; Paganis, E.; Pahl, C.; Paige, F.; Pajchel, K.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J.D.; Pan, Y.B.; Panagiotopoulou, E.; Panes, B.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Papadopoulou, Th.D.; Park, S.J.; Park, W.; Parker, M.A.; Parker, S.I.; Parodi, F.; Parsons, J.A.; Parzefall, U.; Pasqualucci, E.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pasztor, G.; Pataraia, S.; Pater, J.R.; Patricelli, S.; Patwa, A.; Pauly, T.; Peak, L.S.; Pecsy, M.; Pedraza Morales, M.I.; Peleganchuk, S.V.; Peng, H.; Penson, A.; Penwell, J.; Perantoni, M.; Perez, K.; Perez Codina, E.; Perez Garcia-Estan, M.T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Persembe, S.; Perus, P.; Peshekhonov, V.D.; Petersen, B.A.; Petersen, T.C.; Petit, E.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petschull, D; Petteni, M.; Pezoa, R.; Phan, A.; Phillips, A.W.; Piacquadio, G.; Piccinini, M.; Piegaia, R.; Pilcher, J.E.; Pilkington, A.D.; Pina, J.; Pinamonti, M.; Pinfold, J.L.; Pinto, B.; Pizio, C.; Placakyte, R.; Plamondon, M.; Pleier, M.A.; Poblaguev, A.; Poddar, S.; Podlyski, F.; Poffenberger, P.; Poggioli, L.; 
Pohl, M.; Polci, F.; Polesello, G.; Policicchio, A.; Polini, A.; Poll, J.; Polychronakos, V.; Pomeroy, D.; Pommes, K.; Ponsot, P.; Pontecorvo, L.; Pope, B.G.; Popeneciu, G.A.; Popovic, D.S.; Poppleton, A.; Popule, J.; Portell Bueso, X.; Porter, R.; Pospelov, G.E.; Pospisil, S.; Potekhin, M.; Potrap, I.N.; Potter, C.J.; Potter, C.T.; Potter, K.P.; Poulard, G.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Pravahan, R.; Pribyl, L.; Price, D.; Price, L.E.; Prichard, P.M.; Prieur, D.; Primavera, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Puigdengoles, C.; Purdham, J.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qi, M.; Qian, J.; Qian, W.; Qin, Z.; Quadt, A.; Quarrie, D.R.; Quayle, W.B.; Quinonez, F.; Raas, M.; Radeka, V.; Radescu, V.; Radics, B.; Rador, T.; Ragusa, F.; Rahal, G.; Rahimi, A.M.; Rajagopalan, S.; Rammensee, M.; Rammes, M.; Rauscher, F.; Rauter, E.; Raymond, M.; Read, A.L.; Rebuzzi, D.M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reinherz-Aronis, E.; Reinsch, A; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z.L.; Renkel, P.; Rescia, S.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richards, A.; Richards, R.A.; Richter, R.; Richter-Was, E.; Ridel, M.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Rios, R.R.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Roa Romero, D.A.; Robertson, S.H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, JEM; Robinson, M.; Robson, A.; Rocha de Lima, J.G.; Roda, C.; Roda Dos Santos, D.; Rodriguez, D.; Rodriguez Garcia, Y.; Roe, S.; Rohne, O.; Rojo, V.; Rolli, S.; Romaniouk, A.; Romanov, V.M.; Romeo, G.; Romero Maltrana, D.; Roos, L.; Ros, E.; Rosati, S.; Rosenbaum, G.A.; Rosselet, L.; Rossetti, V.; Rossi, L.P.; Rotaru, M.; Rothberg, J.; Rousseau, D.; Royon, C.R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Ruckert, B.; Ruckstuhl, N.; Rud, V.I.; Rudolph, G.; Ruhr, F.; Ruggieri, F.; Ruiz-Martinez, A.; Rumyantsev, L.; Rurikova, Z.; Rusakovich, N.A.; Rutherfoord, J.P.; Ruwiedel, C.; Ruzicka, P.; Ryabov, Y.F.; Ryan, P.; Rybkin, G.; Rzaeva, S.; Saavedra, A.F.; Sadrozinski, H.F-W.; Sadykov, R.; Sakamoto, H.; Salamanna, G.; Salamon, A.; Saleem, M.S.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B.M.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Samset, B.H.; Sandaker, H.; Sander, H.G.; Sanders, M.P.; Sandhoff, M.; Sandhu, P.; Sandstroem, R.; Sandvoss, S.; Sankey, D.P.C.; Sanny, B.; Sansoni, A.; Santamarina Rios, C.; Santoni, C.; Santonico, R.; Saraiva, J.G.; Sarangi, T.; Sarkisyan-Grinbaum, E.; Sarri, F.; Sasaki, O.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Savard, P.; Savine, A.Y.; Savinov, V.; Sawyer, L.; Saxon, D.H.; Says, L.P.; Sbarra, C.; Sbrizzi, A.; Scannicchio, D.A.; Schaarschmidt, J.; Schacht, P.; Schafer, U.; Schaetzel, S.; Schaffer, A.C.; Schaile, D.; Schamberger, R.D.; Schamov, A.G.; Schegelsky, V.A.; Scheirich, D.; Schernau, M.; Scherzer, M.I.; Schiavi, C.; Schieck, J.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitz, M.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schreiner, A.; Schroeder, C.; Schroer, N.; Schroers, M.; Schultes, J.; Schultz-Coulon, H.C.; Schumacher, J.W.; Schumacher, M.; Schumm, B.A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwemling, Ph.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W.G.; Searcy, J.; Sedykh, E.; Segura, E.; Seidel, S.C.; Seiden, A.; Seifert, F.; Seixas, J.M.; Sekhniaidze, G.; Seliverstov, D.M.; 
Sellden, B.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M.E.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L.Y.; Shank, J.T.; Shao, Q.T.; Shapiro, M.; Shatalov, P.B.; Shaw, K.; Sherman, D.; Sherwood, P.; Shibata, A.; Shimojima, M.; Shin, T.; Shmeleva, A.; Shochet, M.J.; Shupe, M.A.; Sicho, P.; Sidoti, A.; Siegert, F; Siegrist, J.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S.B.; Simak, V.; Simic, Lj.; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sinev, N.B.; Sipica, V.; Siragusa, G.; Sisakyan, A.N.; Sivoklokov, S.Yu.; Sjoelin, J.; Sjursen, T.B.; Skovpen, K.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Sloper, J.; Sluka, T.; Smakhtin, V.; Smirnov, S.Yu.; Smirnov, Y.; Smirnova, L.N.; Smirnova, O.; Smith, B.C.; Smith, D.; Smith, K.M.; Smizanska, M.; Smolek, K.; Snesarev, A.A.; Snow, S.W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Sobie, R.; Sodomka, J.; Soffer, A.; Solans, C.A.; Solar, M.; Solc, J.; Solfaroli Camillocci, E.; Solodkov, A.A.; Solovyanov, O.V.; Soluk, R.; Sondericker, J.; Sopko, V.; Sopko, B.; Sosebee, M.; Soukharev, A.; Spagnolo, S.; Spano, F.; Spencer, E.; Spighi, R.; Spigo, G.; Spila, F.; Spiwoks, R.; Spousta, M.; Spreitzer, T.; Spurlock, B.; St. Denis, R.D.; Stahl, T.; Stahlman, J.; Stamen, R.; Stancu, S.N.; Stanecka, E.; Stanek, R.W.; Stanescu, C.; Stapnes, S.; Starchenko, E.A.; Stark, J.; Staroba, P.; Starovoitov, P.; Stastny, J.; Stavina, P.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H.J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, K.; Stewart, G.A.; Stockton, M.C.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Strachota, P.; Stradling, A.R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Strohmer, R.; Strom, D.M.; Stroynowski, R.; Strube, J.; Stugu, B.; Soh, D.A.; Su, D.; Sugaya, Y.; Sugimoto, T.; Suhr, C.; Suk, M.; Sulin, V.V.; Sultansoy, S.; Sumida, T.; Sun, X.H.; Sundermann, J.E.; Suruliz, K.; Sushkov, S.; Susinno, G.; Sutton, M.R.; Suzuki, T.; Suzuki, Y.; Sykora, I.; Sykora, T.; Szymocha, T.; Sanchez, J.; Ta, D.; Tackmann, K.; Taffard, A.; Tafirout, R.; Taga, A.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M.C.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tapprogge, S.; Tardif, D.; Tarem, S.; Tarrade, F.; Tartarelli, G.F.; Tas, P.; Tasevsky, M.; Tassi, E.; Tatarkhanov, M.; Taylor, C.; Taylor, F.E.; Taylor, G.N.; Taylor, R.P.; Taylor, W.; Teixeira-Dias, P.; Ten Kate, H.; Teng, P.K.; Tennenbaum-Katan, Y.D.; Terada, S.; Terashi, K.; Terron, J.; Terwort, M.; Testa, M.; Teuscher, R.J.; Thioye, M.; Thoma, S.; Thomas, J.P.; Thompson, E.N.; Thompson, P.D.; Thompson, P.D.; Thompson, R.J.; Thompson, A.S.; Thomson, E.; Thun, R.P.; Tic, T.; Tikhomirov, V.O.; Tikhonov, Y.A.; Tipton, P.; Tique Aires Viegas, F.J.; Tisserant, S.; Toczek, B.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokar, S.; Tokushuku, K.; Tollefson, K.; Tomasek, L.; Tomasek, M.; Tomoto, M.; Tompkins, L.; Toms, K.; Tonoyan, A.; Topfel, C.; Topilin, N.D.; Torrence, E.; Torro Pastor, E.; Toth, J.; Touchard, F.; Tovey, D.R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I.M.; Trincaz-Duvoid, S.; Trinh, T.N.; Tripiana, M.F.; Triplett, N.; Trischuk, W.; Trivedi, A.; Trocme, B.; Troncon, C.; Trzupek, A.; Tsarouchas, C.; Tseng, J.C-L.; Tsiakiris, M.; Tsiareshka, P.V.; Tsionou, D.; Tsipolitis, G.; Tsiskaridze, V.; Tskhadadze, E.G.; Tsukerman, I.I.; Tsulaia, V.; Tsung, J.W.; Tsuno, 
S.; Tsybychev, D.; Tuggle, J.M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Tuts, P.M.; Twomey, M.S.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ugland, M.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J.A.; Van Berg, R.; van der Graaf, H.; van der Kraaij, E.; van der Poel, E.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vari, R.; Varnes, E.W.; Varouchas, D.; Vartapetian, A.; Varvell, K.E.; Vasilyeva, L.; Vassilakopoulos, V.I.; Vazeille, F.; Vellidis, C.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J.C.; Vetterli, M.C.; Vichou, I.; Vickey, T.; Viehhauser, G.H.A.; Villa, M.; Villani, E.G.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M.G.; Vinek, E.; Vinogradov, V.B.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogel, A.; Vokac, P.; Volpi, M.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T.T.; Vossebeld, J.H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vudragovic, D.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Walbersloh, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Wang, C.; Wang, H.; Wang, J.; Wang, S.M.; Warburton, A.; Ward, C.P.; Warsinsky, M.; Wastie, R.; Watkins, P.M.; Watson, A.T.; Watson, M.F.; Watts, G.; Watts, S.; Waugh, A.T.; Waugh, B.M.; Weber, M.D.; Weber, M.; Weber, M.S.; Weber, P.; Weidberg, A.R.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P.S.; Wen, M.; Wenaus, T.; Wendler, S.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Werthenbach, U.; Wessels, M.; Whalen, K.; White, A.; White, M.J.; White, S.; Whitehead, S.R.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F.J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L.A.M.; Wildauer, A.; Wildt, M.A.; Wilkens, H.G.; Williams, E.; Williams, H.H.; Willocq, S.; Wilson, J.A.; Wilson, M.G.; Wilson, A.; Wingerter-Seez, I.; Winklmeier, F.; Wittgen, M.; Wolter, M.W.; Wolters, H.; Wosiek, B.K.; Wotschack, J.; Woudstra, M.J.; Wraight, K.; Wright, C.; Wright, D.; Wrona, B.; Wu, S.L.; Wu, X.; Wulf, E.; Wynne, B.M.; Xaplanteris, L.; Xella, S.; Xie, S.; Xu, D.; Xu, N.; Yamada, M.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U.K.; Yang, Z.; Yao, W-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S.P.; Yu, D.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zaidan, R.; Zaitsev, A.M.; Zajacova, Z.; Zambrano, V.; Zanello, L.; Zaytsev, A.; Zeitnitz, C.; Zeller, M.; Zemla, A.; Zendler, C.; Zenin, O.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi della Porta, G.; Zhan, Z.; Zhang, H.; Zhang, J.; Zhang, Q.; Zhang, X.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C.G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Zivkovic, L.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zutshi, V.

    2010-01-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that model the particle collisions to the packages that simulate the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including the support for the detector description, the interface to event generation, and the combination of the GEANT4 simulations of the responses of the individual detectors. Also described are the tools that allow software validation, performance testing, and validation of the simulated output against known physics processes.
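
    A minimal sketch of the kind of generation-to-detector-response chain described above, assuming invented stage names and a toy event model; it is illustrative only and does not use the actual ATLAS (Athena) configuration or GEANT4 interfaces.

```python
# Illustrative simulation chain: event generation -> detector response -> validation hook.
# All stage names and the event model are hypothetical; this is not the ATLAS/Athena job API.
import random
from typing import Callable, List

def generate_event(seed: int) -> dict:
    """Stand-in for an event generator producing a handful of particles."""
    rng = random.Random(seed)
    return {"particles": [{"pt": rng.uniform(1, 100)} for _ in range(rng.randint(2, 10))]}

def simulate_detector(event: dict) -> dict:
    """Stand-in for the detector-response simulation (the GEANT4 role)."""
    event["hits"] = [p["pt"] * 0.98 for p in event["particles"]]  # crude response model
    return event

def validate(event: dict) -> bool:
    """Stand-in for a validation tool comparing output against expectations."""
    return len(event["hits"]) == len(event["particles"])

def run_chain(n_events: int, stages: List[Callable]) -> int:
    """Run each event through the configured stages and count events passing validation."""
    passed = 0
    for seed in range(n_events):
        event = generate_event(seed)
        for stage in stages:
            event = stage(event)
        passed += validate(event)
    return passed

if __name__ == "__main__":
    print(run_chain(100, [simulate_detector]), "of 100 events passed validation")
```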

  14. Experience with the custom-developed ATLAS Offline Trigger Monitoring Framework and Reprocessing Infrastructure

    CERN Document Server

    Bartsch, V

    2012-01-01

    After about two years of data taking with the ATLAS detector, considerable experience with the custom-developed trigger monitoring and reprocessing infrastructure has been collected. The trigger monitoring can be roughly divided into online and offline monitoring. The online monitoring calculates and displays the rates at every level of the trigger and evaluates up to 3000 data quality histograms. The data quality information relevant to physics analysis is checked and recorded automatically. The offline trigger monitoring provides information for the different physics-motivated trigger streams after a run has finished. Experts check this information, guided by the assessment of algorithms that compare the current histograms with a reference. The experts record their assessments as so-called data quality defects, which are used to select data for physics analysis. In the first half of 2011 about three percent of all data had an intolerable defect resulting from the ATLAS trigger system. T...
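
    The comparison of current histograms against a reference and the recording of data quality defects could look roughly like the following sketch; the chi-square check, thresholds, and defect names are assumptions for illustration and are not the ATLAS monitoring tools.

```python
# Hedged sketch: an automated check flags a histogram that deviates from its reference,
# and the expert assessment is stored as a "defect". Names and thresholds are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Defect:
    run: int
    name: str
    intolerable: bool
    comment: str

def chi2_per_bin(current: List[float], reference: List[float]) -> float:
    """Chi-square per bin between a monitored histogram and its reference."""
    chi2 = sum((c - r) ** 2 / max(r, 1e-9) for c, r in zip(current, reference))
    return chi2 / len(reference)

def assess(run: int, stream: str, current: List[float], reference: List[float],
           threshold: float = 2.0) -> Optional[Defect]:
    """Record a defect when the automated comparison exceeds the threshold."""
    score = chi2_per_bin(current, reference)
    if score <= threshold:
        return None
    return Defect(run, f"TRIGGER_{stream}_SHAPE", intolerable=score > 5 * threshold,
                  comment=f"chi2/bin = {score:.1f} relative to reference")

print(assess(201234, "EGAMMA", current=[10, 50, 200, 40], reference=[12, 48, 120, 40]))
```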

  15. ATLAS Distributed Computing Operations: Experience and improvements after 2 full years of data-taking

    International Nuclear Information System (INIS)

    Jézéquel, S; Stewart, G

    2012-01-01

    This paper summarizes operational experience and improvements in the ATLAS computing infrastructure in 2010 and 2011. ATLAS had two periods of data taking, with many more events recorded in 2011 than in 2010, and ran three major reprocessing campaigns. The activity in 2011 was similar to that in 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger rate. Based on improved monitoring of ATLAS Grid computing, the evolution of computing activities (data and group production, their distribution, and grid analysis) over time is presented. The main changes in the implementation of the computing model that are shown are: the optimization of data distribution over the Grid according to effective transfer rate and site readiness for analysis; the progressive dismantling of the cloud model for data distribution and data processing; the migration of software installation to CVMFS; and the change of database access to a Frontier/Squid infrastructure.
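
    A hedged illustration of ranking sites by effective transfer rate and analysis readiness when choosing where to place data; the weights and the site figures are invented for the example and do not reflect the actual ATLAS distribution policy.

```python
# Toy site-scoring scheme: favour sites with fast transfers, high analysis readiness,
# and available storage, then replicate to the top-ranked ones. All numbers are invented.
def site_score(transfer_rate_mbps: float, readiness: float, free_space_tb: float) -> float:
    """Higher is better: fast transfers, ready for analysis, and room to store data."""
    return 0.5 * transfer_rate_mbps / 1000 + 0.4 * readiness + 0.1 * min(free_space_tb / 500, 1.0)

sites = {
    "SITE_A": site_score(transfer_rate_mbps=800, readiness=0.95, free_space_tb=300),
    "SITE_B": site_score(transfer_rate_mbps=400, readiness=0.99, free_space_tb=900),
    "SITE_C": site_score(transfer_rate_mbps=950, readiness=0.60, free_space_tb=150),
}

# Replicate a dataset to the two best-ranked sites.
for site, score in sorted(sites.items(), key=lambda kv: kv[1], reverse=True)[:2]:
    print(f"replicate dataset to {site} (score {score:.2f})")
```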

  16. METHODS FOR IMPROVING AVAILABILITY AND EFFICIENCY OF COMPUTER INFRASTRUCTURE IN SMART CITIES

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2017-09-01

    This paper discusses methods for increasing the availability and efficiency of information infrastructure in smart cities. Two criteria are formulated for assigning key resources in a smart city system, and the process of finding compromise solutions from the set of Pareto-optimal solutions is illustrated. Collective-intelligence metaheuristics, including particle swarm optimization (PSO), ant colony optimization (ACO), the bee colony algorithm (ABC), and differential evolution (DE), are described for improving smart city infrastructure. Other applications of these metaheuristics in smart cities are also presented.
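
    A small sketch of the two-criteria compromise selection mentioned above: compute the Pareto-optimal assignments and pick the one closest to the ideal point. The candidate plans and the two criteria (cost to minimize, availability to maximize) are invented for the example.

```python
# Pareto filtering over two criteria, then a compromise choice nearest the ideal point.
candidates = [
    {"name": "plan1", "cost": 10.0, "availability": 0.90},
    {"name": "plan2", "cost": 14.0, "availability": 0.97},
    {"name": "plan3", "cost": 12.0, "availability": 0.92},
    {"name": "plan4", "cost": 18.0, "availability": 0.95},  # dominated by plan2
]

def dominates(a, b):
    """a dominates b if it is no worse on both criteria and strictly better on one."""
    return (a["cost"] <= b["cost"] and a["availability"] >= b["availability"]
            and (a["cost"] < b["cost"] or a["availability"] > b["availability"]))

pareto = [c for c in candidates if not any(dominates(o, c) for o in candidates if o is not c)]

# Compromise: normalized distance to the ideal (lowest cost, highest availability).
ideal_cost = min(c["cost"] for c in pareto)
ideal_avail = max(c["availability"] for c in pareto)
best = min(pareto, key=lambda c: ((c["cost"] - ideal_cost) / ideal_cost) ** 2
                                 + ((ideal_avail - c["availability"]) / ideal_avail) ** 2)
print([c["name"] for c in pareto], "-> compromise:", best["name"])
```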

  17. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    International Nuclear Information System (INIS)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Quast, Günter; Janczyk, Michael; Von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-01-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These experiments share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare
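
    The coupling of a virtual machine's lifetime to a batch job could be sketched as follows; the OpenStack and Moab interactions are reduced to stubs, so this is only a conceptual illustration, not the Freiburg/EKP implementation.

```python
# Conceptual sketch: a batch job boots a VM with a HEP image, runs its payload inside it,
# and tears the VM down when the job ends. The cloud calls are stubs, not real APIs.
import contextlib
import uuid

def boot_vm(image, flavor):
    """Stub for an OpenStack-style 'server create'; returns a fake instance id."""
    return f"{image}-{flavor}-{uuid.uuid4().hex[:8]}"

def delete_vm(instance_id):
    """Stub for tearing the instance down when the batch job ends."""
    print(f"deleting {instance_id}")

@contextlib.contextmanager
def hep_worker(image="hep-sl6-image", flavor="m1.large"):
    """Couple the virtual machine's lifetime to the enclosing (scheduler-managed) job."""
    instance = boot_vm(image, flavor)
    try:
        yield instance
    finally:
        delete_vm(instance)

if __name__ == "__main__":
    with hep_worker() as vm:
        # Inside the batch job: the experiment's payload runs in the specialized environment.
        print(f"running CMS / Belle II payload on {vm}")
```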

  18. Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system

    Science.gov (United States)

    Meier, Konrad; Fleig, Georg; Hauth, Thomas; Janczyk, Michael; Quast, Günter; von Suchodoletz, Dirk; Wiebelt, Bernd

    2016-10-01

    Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and the SuperKEKB accelerators, respectively. These experiments share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves challenging for HEP groups due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, in this hybrid setup no static partitioning of the cluster into a physical and a virtualized segment is required. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab and the job lifetime is coupled to the lifetime of the virtual machine. This allows for a seamless integration with the jobs sent by other user groups and honors the fairshare

  19. TOWARDS IMPLEMENTATION OF THE FOG COMPUTING CONCEPT INTO THE GEOSPATIAL DATA INFRASTRUCTURES

    Directory of Open Access Journals (Sweden)

    E. A. Panidi

    2016-01-01

    Information technologies, and Global Network technologies in particular, are developing very quickly. Consequently, the problem of incorporating general-purpose technologies into information systems that operate with geospatial data remains relevant. The paper discusses the implementation feasibility of a number of new approaches and concepts that address the publication and management of spatial data on the Global Network. A brief review describes some contemporary concepts and technologies used for distributed data storage and management which provide combined use of server-side and client-side resources; in particular, the concepts of Cloud Computing, Fog Computing, and the Internet of Things, together with the Java Web Start, WebRTC, and WebTorrent technologies, are mentioned. The author's experience with a number of projects devoted to developing portable solutions for publishing geospatial data and GIS software on the Global Network is described briefly.

  20. Performing quantum computing experiments in the cloud

    Science.gov (United States)

    Devitt, Simon J.

    2016-09-01

    Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sectors, combined with extraordinary theoretical and experimental progress, has solidified this technology as a major advancement of the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM has released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
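
    As a self-contained illustration of the kind of small circuit such cloud-hosted processors run, the following sketch prepares a two-qubit Bell state and samples 1024 shots using a plain state-vector simulation; it does not use the IBM Quantum Experience interface itself.

```python
# Bell-state preparation (H on qubit 0, then CNOT) and measurement sampling, simulated
# directly with NumPy. Qubit 0 is the most significant bit of the basis index.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])      # control = qubit 0, target = qubit 1

state = np.zeros(4)
state[0] = 1.0                                     # |00>
state = CNOT @ np.kron(H, I2) @ state              # -> (|00> + |11>) / sqrt(2)

probs = np.abs(state) ** 2
rng = np.random.default_rng(7)
counts = {f"{k:02b}": 0 for k in range(4)}
for outcome in rng.choice(4, size=1024, p=probs):  # 1024 "shots"
    counts[f"{int(outcome):02b}"] += 1
print(counts)                                      # ideally only '00' and '11' appear
```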

  1. Social infrastructure to integrate science and practice: the experience of the Long Tom Watershed Council

    Science.gov (United States)

    Rebecca L. Flitcroft; Dana C. Dedrick; Courtland L. Smith; Cynthia A. Thieman; John P. Bolte

    2009-01-01

    Ecological problem solving requires a flexible social infrastructure that can incorporate scientific insights and adapt to changing conditions. As applied to watershed management, social infrastructure includes mechanisms to design, carry out, evaluate, and modify plans for resource protection or restoration. Efforts to apply the best science will not bring anticipated...

  2. National Fusion Collaboratory: Grid Computing for Simulations and Experiments

    Science.gov (United States)

    Greenwald, Martin

    2004-05-01

    The National Fusion Collaboratory Project is creating a computational grid designed to advance scientific understanding and innovation in magnetic fusion research by facilitating collaborations, enabling more effective integration of experiments, theory and modeling and allowing more efficient use of experimental facilities. The philosophy of FusionGrid is that data, codes, analysis routines, visualization tools, and communication tools should be thought of as network available services, easily used by the fusion scientist. In such an environment, access to services is stressed rather than portability. By building on a foundation of established computer science toolkits, deployment time can be minimized. These services all share the same basic infrastructure that allows for secure authentication and resource authorization which allows stakeholders to control their own resources such as computers, data and experiments. Code developers can control intellectual property, and fair use of shared resources can be demonstrated and controlled. A key goal is to shield scientific users from the implementation details such that transparency and ease-of-use are maximized. The first FusionGrid service deployed was the TRANSP code, a widely used tool for transport analysis. Tools for run preparation, submission, monitoring and management have been developed and shared among a wide user base. This approach saves user sites from the laborious effort of maintaining such a large and complex code while at the same time reducing the burden on the development team by avoiding the need to support a large number of heterogeneous installations. Shared visualization and A/V tools are being developed and deployed to enhance long-distance collaborations. These include desktop versions of the Access Grid, a highly capable multi-point remote conferencing tool and capabilities for sharing displays and analysis tools over local and wide-area networks.

  3. @neurIST: infrastructure for advanced disease management through integration of heterogeneous data, computing, and complex processing services.

    Science.gov (United States)

    Benkner, Siegfried; Arbona, Antonio; Berti, Guntram; Chiarini, Alessandro; Dunlop, Robert; Engelbrecht, Gerhard; Frangi, Alejandro F; Friedrich, Christoph M; Hanser, Susanne; Hasselmeyer, Peer; Hose, Rod D; Iavindrasana, Jimison; Köhler, Martin; Iacono, Luigi Lo; Lonsdale, Guy; Meyer, Rodolphe; Moore, Bob; Rajasekaran, Hariharan; Summers, Paul E; Wöhrer, Alexander; Wood, Steven

    2010-11-01

    The increasing volume of data describing human disease processes and the growing complexity of understanding, managing, and sharing such data presents a huge challenge for clinicians and medical researchers. This paper presents the @neurIST system, which provides an infrastructure for biomedical research while aiding clinical care, by bringing together heterogeneous data and complex processing and computing services. Although @neurIST targets the investigation and treatment of cerebral aneurysms, the system's architecture is generic enough that it could be adapted to the treatment of other diseases. Innovations in @neurIST include confining the patient data pertaining to aneurysms inside a single environment that offers clinicians the tools to analyze and interpret patient data and make use of knowledge-based guidance in planning their treatment. Medical researchers gain access to a critical mass of aneurysm related data due to the system's ability to federate distributed information sources. A semantically mediated grid infrastructure ensures that both clinicians and researchers are able to seamlessly access and work on data that is distributed across multiple sites in a secure way in addition to providing computing resources on demand for performing computationally intensive simulations for treatment planning and research.

  4. Generalized Bell-inequality experiments and computation

    Energy Technology Data Exchange (ETDEWEB)

    Hoban, Matty J. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Department of Computer Science, University of Oxford, Wolfson Building, Parks Road, Oxford OX1 3QD (United Kingdom); Wallman, Joel J. [School of Physics, The University of Sydney, Sydney, New South Wales 2006 (Australia); Browne, Dan E. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom)

    2011-12-15

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
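
    The familiar two-party CHSH case gives a concrete instance of this picture: the sketch below enumerates the deterministic local strategies (bounded by 2) and evaluates the Popescu-Rohrlich box (which reaches 4); the correlator convention used is stated in the code.

```python
# CHSH value S = E(0,0) + E(0,1) + E(1,0) - E(1,1) for deterministic local strategies
# versus the Popescu-Rohrlich (PR) nonlocal box.
from itertools import product

def chsh(E):
    """CHSH combination from a dict of correlators E[(x, y)]."""
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Best deterministic local strategy: outputs a(x), b(y) in {+1, -1} fixed in advance.
best_local = max(
    chsh({(x, y): a[x] * b[y] for x, y in product(range(2), repeat=2)})
    for a in product([1, -1], repeat=2)
    for b in product([1, -1], repeat=2)
)

# PR box: outputs satisfy a XOR b = x AND y, so E(x, y) = +1 except E(1, 1) = -1.
pr_box = chsh({(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): -1})

print(best_local, pr_box)  # 2 and 4; quantum mechanics reaches 2*sqrt(2) ~ 2.83
```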

  5. Generalized Bell-inequality experiments and computation

    International Nuclear Information System (INIS)

    Hoban, Matty J.; Wallman, Joel J.; Browne, Dan E.

    2011-01-01

    We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation each with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing on measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.

  6. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications (HPCC) Program was created to accelerate the development of future generations of high performance computers...

  7. Social Infrastructure to Integrate Science and Practice: the Experience of the Long Tom Watershed Council

    Directory of Open Access Journals (Sweden)

    Rebecca L. Flitcroft

    2009-12-01

    Ecological problem solving requires a flexible social infrastructure that can incorporate scientific insights and adapt to changing conditions. As applied to watershed management, social infrastructure includes mechanisms to design, carry out, evaluate, and modify plans for resource protection or restoration. Efforts to apply the best science will not bring anticipated results without the appropriate social infrastructure. For the Long Tom Watershed Council, social infrastructure includes a management structure, membership, vision, priorities, partners, resources, and the acquisition of scientific knowledge, as well as the communication with and education of people associated with and affected by actions to protect and restore the watershed. Key to integrating science and practice is keeping science in the loop, using data collection as an outreach tool, and the Long Tom Watershed Council's subwatershed enhancement program approach. Resulting from these methods are ecological leadership, restoration projects, and partnerships that catalyze landscape-level change.

  8. Distributed Analysis Experience using Ganga on an ATLAS Tier2 infrastructure

    International Nuclear Information System (INIS)

    Fassi, F.; Cabrera, S.; Vives, R.; Fernandez, A.; Gonzalez de la Hoz, S.; Sanchez, J.; March, L.; Salt, J.; Kaci, M.; Lamas, A.; Amoros, G.

    2007-01-01

    The ATLAS detector will explore the high-energy frontier of Particle Physics by collecting the proton-proton collisions delivered by the LHC (Large Hadron Collider). Starting in spring 2008, the LHC will produce more than 10 petabytes of data per year. The tiered hierarchy adopted for the LHC computing model is: Tier-0 (CERN) and Tier-1 and Tier-2 centres distributed around the world. The ATLAS Distributed Analysis (DA) system has the goal of enabling physicists to perform Grid-based analysis on distributed data using distributed computing resources. The IFIC Tier-2 facility is participating in several aspects of DA. In support of the ATLAS DA activities, a prototype is being tested, deployed and integrated. The analysis data processing applications are based on the Athena framework. GANGA, developed by the LHCb and ATLAS experiments, allows simple switching between testing on a local batch system and large-scale processing on the Grid, hiding Grid complexities. GANGA provides physicists with an integrated environment for job preparation, bookkeeping and archiving, and job splitting and merging. The experience with the deployment, configuration and operation of the DA prototype will be presented. Experience gained using the DA system and GANGA in top physics analysis will be described. (Author)
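
    The switch between local testing and Grid-scale processing, together with job splitting, can be pictured with the following toy classes; they are invented for illustration and are not the GANGA API.

```python
# Toy illustration of the workflow idea: prepare a job once, split it into sub-jobs,
# and switch between a local test backend and a Grid backend by swapping one object.
class LocalBackend:
    def submit(self, inputs):            # quick test on a local batch slot
        return [f"local:{f}" for f in inputs]

class GridBackend:
    def submit(self, inputs):            # large-scale processing on distributed resources
        return [f"grid:{f}" for f in inputs]

class AnalysisJob:
    def __init__(self, application, dataset, backend, files_per_subjob=2):
        self.application, self.dataset = application, dataset
        self.backend, self.files_per_subjob = backend, files_per_subjob

    def split(self):
        n = self.files_per_subjob
        return [self.dataset[i:i + n] for i in range(0, len(self.dataset), n)]

    def submit(self):
        return [self.backend.submit(chunk) for chunk in self.split()]

dataset = [f"AOD_{i:03d}.root" for i in range(5)]
job = AnalysisJob("AthenaAnalysis", dataset, LocalBackend())   # test locally first...
print(job.submit())
job.backend = GridBackend()                                    # ...then switch to the Grid
print(job.submit())
```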

  9. Science gateways for distributed computing infrastructures development framework and exploitation by scientific user communities

    CERN Document Server

    Kacsuk, Péter

    2014-01-01

    The book describes the science gateway building technology developed in the SCI-BUS European project and its adoption and customization method, by which user communities, such as biologists, chemists, and astrophysicists, can build customized, domain-specific science gateways. Many aspects of the core technology are explained in detail, including its workflow capability, job submission mechanism to various grids and clouds, and its data transfer mechanisms among several distributed infrastructures. The book will be useful for scientific researchers and IT professionals engaged in the develop

  10. Creating an infrastructure for training in the responsible conduct of research: the University of Pittsburgh's experience.

    Science.gov (United States)

    Barnes, Barbara E; Friedman, Charles P; Rosenberg, Jerome L; Russell, Joanne; Beedle, Ari; Levine, Arthur S

    2006-02-01

    In response to public concerns about the consequences of research misconduct, academic institutions have become increasingly cognizant of the need to implement comprehensive, effective training in the responsible conduct of research (RCR) for faculty, staff, students, and external collaborators. The ability to meet this imperative is challenging as universities confront declining financial resources and increasing complexity of the research enterprise. The authors describe the University of Pittsburgh's design, implementation, and evaluation of a Web-based, institution-wide RCR training program called Research and Practice Fundamentals (RPF). This project, established in 2000, was embedded in the philosophy, organizational structure, and technology developed through the Integrated Advanced Information Management Systems grant from the National Library of Medicine. Utilizing a centralized, comprehensive approach, the RPF system provides an efficient mechanism for deploying content to a large, diverse cohort of learners and supports the needs of research administrators by providing access to information about who has successfully completed the training. During its first 3 years of operation, the RPF served over 17,000 users and issued more than 38,000 training certificates. The 18 modules that are currently available address issues required by regulatory mandates and other content areas important to the research community. RPF users report high levels of satisfaction with content and ease of using the system. Future efforts must explore methods to integrate non-RCR education and training into a centralized, cohesive structure. The University of Pittsburgh's experience with the RPF demonstrates the importance of developing an infrastructure for training that is comprehensive, scalable, reliable, centralized, affordable, and sustainable.

  11. On the Development of a Computing Infrastructure that Facilitates IPPD from a Decision-Based Design Perspective

    Science.gov (United States)

    Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.

    1995-01-01

    Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.

  12. Amorphous nanoparticles — Experiments and computer simulations

    International Nuclear Information System (INIS)

    Hoang, Vo Van; Ganguli, Dibyendu

    2012-01-01

    Data obtained over decades by both experiments and computer simulations on amorphous nanoparticles are reviewed, including methods of synthesis, characterization, structural properties, the atomic mechanism of glass formation in nanoparticles, crystallization of amorphous nanoparticles, physico-chemical properties (catalytic, optical, thermodynamic, magnetic, bioactivity and other properties), and various applications in science and technology. Amorphous nanoparticles coated with different surfactants are also reviewed as an extension in this direction. Much attention is paid to the pressure-induced polyamorphism of amorphous nanoparticles and to the amorphization of their nanocrystalline counterparts. We also introduce nanocomposites and nanofluids containing amorphous nanoparticles. Overall, amorphous nanoparticles exhibit a disordered structure different from that of the corresponding bulk materials and from that of the nanocrystalline counterparts. Therefore, amorphous nanoparticles can have unique physico-chemical properties that differ from those of the crystalline counterparts, leading to potential applications in science and technology.

  13. Computer controls for the WITCH experiment

    CERN Document Server

    Tandecki, M; Van Gorp, S; Friedag, P; De Leebeeck, V; Beck, D; Brand, H; Weinheimer, C; Breitenfeldt, M; Traykov, E; Mader, J; Roccia, S; Severijns, N; Herlert, A; Wauters, F; Zakoucky, D; Kozlov, V; Soti, G

    2011-01-01

    The WITCH experiment is a medium-scale experimental set-up located at ISOLDE/CERN. It combines a double Penning trap system with a retardation spectrometer for energy measurements of recoil ions from beta decay. A whole range of different devices is required for the correct operation of such a set-up. Along with the installation and optimization of the set-up, a computer control system was developed to control these devices. The CS-Framework, developed and maintained at GSI, was chosen as the basis for this control system, as it is well suited to handle the distributed nature of a control system. We report here on the hardware required for WITCH, along with the basis of this CS-Framework and the add-ons that were implemented for WITCH.

  14. Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research

    OpenAIRE

    Stodden, Victoria; Miguez, Sheila

    2014-01-01

    The goal of this article is to coalesce a discussion around best practices for scholarly research that utilizes computational methods, by providing a formalized set of best practice recommendations to guide computational scientists and other stakeholders wishing to disseminate reproducible research, facilitate innovation by enabling data and code re-use, and enable broader communication of the output of computational scientific research. Scholarly dissemination and communication standards are...

  15. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    International Nuclear Information System (INIS)

    Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei

    2011-01-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)
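
    The workflow in the abstract above (a master distributing independent Monte Carlo histories to worker nodes and aggregating the tallies) can be illustrated with a toy sketch. This is not the authors' EGS5 cloud deployment: a simple hit-or-miss estimator of pi stands in for particle transport, and mpi4py (assumed to be installed) stands in for the message-passing layer they describe.

```python
# Toy sketch of the master/worker Monte Carlo pattern described above.
# NOT the authors' EGS5 cloud deployment: a hit-or-miss estimate of pi stands
# in for particle transport, and mpi4py (assumed installed) stands in for the
# message-passing layer that scatters work and gathers results.
from mpi4py import MPI
import random


def toy_mc_kernel(n_histories: int, seed: int) -> int:
    """Stand-in for a transport batch: count random points inside a quarter circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_histories):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits


comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

total_histories = 1_000_000
local_histories = total_histories // size      # equal share per rank

# Each rank simulates its share independently with a rank-dependent seed...
local_hits = toy_mc_kernel(local_histories, seed=rank)

# ...and the partial tallies are aggregated on rank 0, mirroring the worker
# nodes reporting results back for display and analysis.
total_hits = comm.reduce(local_hits, op=MPI.SUM, root=0)

if rank == 0:
    estimate = 4.0 * total_hits / (local_histories * size)
    print(f"pi estimate from {size} rank(s): {estimate:.5f}")
```

    Run, for example, with `mpiexec -n 4 python mc_sketch.py` (the file name is arbitrary); doubling the number of ranks halves each rank's share of histories, which is the inverse scaling of simulation time with node count reported above.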

  16. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  17. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.

  18. THE NON-LINEAR INTERACTION OF BRIDGE STRUCTURES AND THEIR INFRASTRUCTURE WITH A FOUNDATION ON DISCRETE ELASTIC SUPPORTS OF GENERAL FORM: CALCULATIONS, EXPERIMENTS AND DAMPED VIBRATIONS

    Directory of Open Access Journals (Sweden)

    V. V. Kulyabko

    2010-04-01

    Full Text Available The article considers ways of extending the capabilities of computer modeling of the dynamic interaction of bridge structures and their infrastructure with moving transport and traffic flows.

  19. SaaS enabled admission control for MCMC simulation in cloud computing infrastructures

    Science.gov (United States)

    Vázquez-Poletti, J. L.; Moreno-Vozmediano, R.; Han, R.; Wang, W.; Llorente, I. M.

    2017-02-01

    Markov Chain Monte Carlo (MCMC) methods are widely used in the field of simulation and modelling of materials, producing applications that require a great amount of computational resources. Cloud computing represents a seamless source of these resources in the form of HPC. However, resource over-consumption can be an important drawback, especially if the cloud provisioning process is not appropriately optimized. In the present contribution we propose a two-level solution that, on the one hand, takes advantage of approximate computing to reduce the resource demand and, on the other, uses admission control policies to guarantee optimal provisioning for running applications.
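
    The abstract does not spell out the admission-control policies themselves, so the sketch below is only a toy illustration of the general idea: admit a new MCMC job only if its estimated core-seconds still fit within the capacity provisioned from the cloud. Every class name, field and threshold here is hypothetical.

```python
# Toy illustration of an admission-control check for incoming MCMC jobs.
# The actual policies of the cited work are not given in the abstract;
# every class name, field and threshold here is hypothetical.
from dataclasses import dataclass


@dataclass
class McmcJobRequest:
    chains: int            # number of Markov chains requested
    steps_per_chain: int   # MCMC steps per chain
    secs_per_step: float   # estimated cost of one step (core-seconds)


class AdmissionController:
    def __init__(self, provisioned_core_seconds: float):
        self.capacity = provisioned_core_seconds   # what the cloud lease provides
        self.committed = 0.0                       # already promised to admitted jobs

    def estimate_cost(self, job: McmcJobRequest) -> float:
        return job.chains * job.steps_per_chain * job.secs_per_step

    def admit(self, job: McmcJobRequest) -> bool:
        """Admit only if the job fits within the capacity not yet committed."""
        cost = self.estimate_cost(job)
        if self.committed + cost <= self.capacity:
            self.committed += cost
            return True
        return False   # reject or queue, rather than over-consume cloud resources


controller = AdmissionController(provisioned_core_seconds=100 * 3600)
job = McmcJobRequest(chains=8, steps_per_chain=50_000, secs_per_step=0.01)
print(controller.admit(job))   # True: 4000 core-seconds fit within the 360000 available
```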

  20. Best Practices for Computational Science: Software Infrastructure and Environments for Reproducible and Extensible Research

    Directory of Open Access Journals (Sweden)

    Victoria Stodden

    2014-07-01

    Full Text Available The goal of this article is to coalesce a discussion around best practices for scholarly research that utilizes computational methods, by providing a formalized set of best practice recommendations to guide computational scientists and other stakeholders wishing to disseminate reproducible research, facilitate innovation by enabling data and code re-use, and enable broader communication of the output of computational scientific research. Scholarly dissemination and communication standards are changing to reflect the increasingly computational nature of scholarly research, primarily to include the sharing of the data and code associated with published results. We also present these Best Practices as a living, evolving, and changing document at http://wiki.stodden.net/Best_Practices.

  1. Solar: A Pervasive-Computing Infrastructure for Context-Aware Mobile Applications

    National Research Council Canada - National Science Library

    Chen, Guanling; Kotz, David

    2002-01-01

    .... To avoid increasing complexity, and allow the user to concentrate on her tasks, applications must automatically adapt to their changing context, the physical and computational environment in which they...

  2. FY 1994 Blue Book: High Performance Computing and Communications: Toward a National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — government and industry that advanced computer and telecommunications technologies could provide huge benefits throughout the research community and the entire U.S....

  3. Educational Infrastructure Using Virtualization Technologies: Experience at Kaunas University of Technology

    Science.gov (United States)

    Miseviciene, Regina; Ambraziene, Danute; Tuminauskas, Raimundas; Pažereckas, Nerijus

    2012-01-01

    Many factors influence education nowadays. Educational institutions are faced with budget cuttings, outdated IT, data security management and the willingness to integrate remote learning at home. Virtualization technologies provide innovative solutions to the problems. The paper presents an original educational infrastructure using virtualization…

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  5. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.
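
    As a concrete illustration of the kind of model exchange SBML enables, the sketch below builds a minimal single-reaction model programmatically and serialises it to XML. It assumes the python-libsbml bindings are installed; the compartment, species and reaction are invented for illustration and are not drawn from the article.

```python
# Minimal sketch of building an SBML model programmatically, assuming the
# python-libsbml bindings are installed (pip install python-libsbml).
# The compartment, species and reaction are invented for illustration only.
import libsbml

doc = libsbml.SBMLDocument(3, 1)            # SBML Level 3, Version 1
model = doc.createModel()
model.setId("toy_decay_model")

comp = model.createCompartment()
comp.setId("cell")
comp.setSize(1.0)
comp.setConstant(True)

species = model.createSpecies()
species.setId("A")
species.setCompartment("cell")
species.setInitialAmount(10.0)
species.setConstant(False)
species.setBoundaryCondition(False)
species.setHasOnlySubstanceUnits(False)

reaction = model.createReaction()           # degradation: A ->
reaction.setId("decay_of_A")
reaction.setReversible(False)
reactant = reaction.createReactant()
reactant.setSpecies("A")
reactant.setStoichiometry(1.0)
reactant.setConstant(True)

# Serialise to the XML interchange format that SBML-compatible tools exchange.
print(libsbml.writeSBMLToString(doc))
```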

  6. INFORMATION AND TELECOMMUNICATION INFRASTRUCTURE AND ECONOMIC GROWTH: AN EXPERIENCE FROM NIGERIA

    Directory of Open Access Journals (Sweden)

    Wasiu Ishola Oyeniran

    2016-11-01

    Full Text Available The study examines the effect of investment in telecommunication infrastructure on economic growth in Nigeria. Using time series data from 1980 to 2012, the study employs the autoregressive distributed lag (ARDL) bounds testing approach proposed by Pesaran et al. (2001) to estimate the long-run and short-run effects of investment in telecommunication infrastructure on economic growth. The result from the cointegration test showed the presence of a long-run relationship between the dependent variable and all explanatory variables. The study found foreign direct investment in information and communication technology more effective in improving and raising economic growth in Nigeria than government investment. The output from the Chow breakpoint test shows that the liberalization of the telecommunication industry introduced in 1992 has a significant effect on economic growth in Nigeria. Therefore, it is imperative for the Nigerian government to increase its spending on telecommunications and attract more foreign investment in telecommunication in order to boost productivity and economic growth.

  7. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and as a consequence the nature of Internet traffic will undergo a fundamental transformation. The current Internet will therefore no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  8. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion
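
    For readers unfamiliar with local-magnitude calibration curves, the sketch below shows the form of the calculation: M_L is the logarithm of the recorded amplitude plus a distance correction -log A_0(r). The coefficients used are the widely cited southern California curve of Hutton and Boore (1987), included only to illustrate the formula; they are not the regionalized Alaskan curves derived in the work above.

```python
# Illustration of how a local-magnitude distance correction -log A0(r) enters
# the calculation M_L = log10(A) + (-log A0(r)). The coefficients are the
# widely used southern California curve of Hutton and Boore (1987), shown
# only to illustrate the form of the formula; they are NOT the regionalized
# Alaskan curves derived in the work summarized above.
import math


def minus_log_a0(distance_km: float) -> float:
    """Hutton & Boore (1987) distance correction for Wood-Anderson amplitudes."""
    r = distance_km
    return 1.110 * math.log10(r / 100.0) + 0.00189 * (r - 100.0) + 3.0


def local_magnitude(amplitude_mm: float, distance_km: float) -> float:
    """M_L from a synthesized Wood-Anderson amplitude A (mm) at distance r (km)."""
    return math.log10(amplitude_mm) + minus_log_a0(distance_km)


# Example: a 1 mm amplitude recorded 400 km from the source.
print(round(local_magnitude(1.0, 400.0), 2))
```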

  9. Computation for LHC experiments: a worldwide computing grid

    International Nuclear Information System (INIS)

    Fairouz, Malek

    2010-01-01

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 bytes per second and a recording capacity of a few tens of 10^15 bytes each year. In order to meet this challenge, a computing network involving the distribution and sharing of tasks has been set up: the W-LCG grid (Worldwide LHC Computing Grid), which is made up of 4 tiers. Tier 0 is the computing centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching them to the 11 Tier 1 centres. A Tier 1 is typically a national centre; it is responsible for keeping a copy of the raw data and for processing it in order to recover relevant data with a physical meaning, transferring the results to the 150 Tier 2 centres. A Tier 2 operates at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of simulations. Tier 3 centres, at the level of individual laboratories, provide a complementary and local resource to Tier 2s for data analysis. (A.C.)

  10. Green infrastructure planning for cooling urban communities: Overview of the contemporary approaches with special reference to Serbian experiences

    Directory of Open Access Journals (Sweden)

    Marić Igor

    2015-01-01

    Full Text Available This paper investigates contemporary approaches defined by policies, programs or standards that favor green infrastructure in urban planning for cooling urban environments, with special reference to Serbian experiences. The research results reveal an increasing emphasis on the multifunctionality of green infrastructure as well as a commitment to the development of policies, guidelines and standards with the support of the overall community. Further, special importance is given to policies that promote ‘cool communities’ strategies resulting in an increase of vegetation-covered areas, which has contributed to adapting urban environments to the impacts of climate change. In addition, this research indicates the important role of local authorities and planners in Serbia in promoting planning policies and programs that take into consideration the role of green infrastructure in terms of improving climatic conditions, quality of life and reducing the energy needed for cooling and heating. [Project of the Ministry of Science of the Republic of Serbia, no. TR 36035: Spatial, ecological, energy, and social aspects of developing settlements and climate change - mutual impacts, and no. 43007: The investigation of climate change and its impacts, climate change adaptation and mitigation]

  11. Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab

    Science.gov (United States)

    Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.

    2017-10-01

    The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called

  12. Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab

    Energy Technology Data Exchange (ETDEWEB)

    Herner, K. [Fermilab; Alba Hernandex, A. F. [Fermilab; Bhat, S. [Fermilab; Box, D. [Fermilab; Boyd, J. [Fermilab; Di Benedetto, V. [Fermilab; Ding, P. [Fermilab; Dykstra, D. [Fermilab; Fattoruso, M. [Fermilab; Garzoglio, G. [Fermilab; Kirby, M. [Fermilab; Kreymer, A. [Fermilab; Levshina, T. [Fermilab; Mazzacane, A. [Fermilab; Mengel, M. [Fermilab; Mhashilkar, P. [Fermilab; Podstavkov, V. [Fermilab; Retzke, K. [Fermilab; Sharma, N. [Fermilab; Teheran, J. [Fermilab

    2016-01-01

    The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana to help experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed

  13. Waste to Energy in Urban Infrastructure. Experiences from Indo-Swedish collaboration 2009-2011

    Energy Technology Data Exchange (ETDEWEB)

    2011-10-15

    This report provides an illustration of the progress that has been made in Indo-Swedish biogas collaboration since the delegation Biogas for Urban Infrastructure initiated action in 2009. A number of Swedish government organisations and private sector organisations have worked together with Indian counterparts to develop the Indo-Swedish Waste-to-Energy cooperation. A mere two years later, we can now state that this has been a very fruitful venture. The Swedish-Indian cooperation that was formed in conjunction with the biogas delegation has already resulted in new knowledge, new methods, opportunities for new strategies and new business models.

  14. CSDMS2.0: Computational Infrastructure for Community Surface Dynamics Modeling

    Science.gov (United States)

    Syvitski, J. P.; Hutton, E.; Peckham, S. D.; Overeem, I.; Kettner, A.

    2012-12-01

    The Community Surface Dynamics Modeling System (CSDMS) is an NSF-supported, international and community-driven program that seeks to transform the science and practice of earth-surface dynamics modeling. CSDMS integrates a diverse community of more than 850 geoscientists representing 360 international institutions (academic, government, industry) from 60 countries and is supported by a CSDMS Interagency Committee (22 Federal agencies), and a CSDMS Industrial Consortia (18 companies). CSDMS presently distributes more than 200 Open Source models and modeling tools, access to high performance computing clusters in support of developing and running models, and a suite of products for education and knowledge transfer. CSDMS software architecture employs frameworks and services that convert stand-alone models into flexible "plug-and-play" components to be assembled into larger applications. CSDMS2.0 will support model applications within a web browser, on a wider variety of computational platforms, and on other high performance computing clusters to ensure robustness and sustainability of the framework. Conversion of stand-alone models into "plug-and-play" components will employ automated wrapping tools. Methods for quantifying model uncertainty are being adapted as part of the modeling framework. Benchmarking data is being incorporated into the CSDMS modeling framework to support model inter-comparison. Finally, a robust mechanism for ingesting and utilizing semantic mediation databases is being developed within the Modeling Framework. Six new community initiatives are being pursued: 1) an earth-ecosystem modeling initiative to capture ecosystem dynamics and ensuing interactions with landscapes, 2) a geodynamics initiative to investigate the interplay among climate, geomorphology, and tectonic processes, 3) an Anthropocene modeling initiative, to incorporate mechanistic models of human influences, 4) a coastal vulnerability modeling initiative, with emphasis on deltas and
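
    The "plug-and-play" component idea described above can be sketched as a model hidden behind a small initialize/update/finalize interface that a framework can drive and couple to other components. The toy component below is loosely modeled on the style of the CSDMS Basic Model Interface; the method set and the diffusion model are simplified illustrations, not the actual CSDMS specification.

```python
# Toy sketch of the "plug-and-play" component style described above: a model
# hidden behind a small initialize/update/finalize interface that a framework
# can drive and couple to other components. Loosely modeled on the CSDMS
# Basic Model Interface (BMI); simplified for illustration, not the actual
# CSDMS specification.
class ToyDiffusionComponent:
    """1-D diffusion of a single field, exposed through a BMI-like interface."""

    def initialize(self, n_nodes: int = 11, diffusivity: float = 0.1, dt: float = 1.0):
        self.z = [0.0] * n_nodes
        self.z[n_nodes // 2] = 1.0             # initial perturbation
        self.kappa, self.dt, self.time = diffusivity, dt, 0.0

    def update(self):
        """Advance one time step with an explicit finite-difference scheme."""
        z = self.z
        new_z = z[:]
        for i in range(1, len(z) - 1):
            new_z[i] = z[i] + self.kappa * self.dt * (z[i - 1] - 2 * z[i] + z[i + 1])
        self.z = new_z
        self.time += self.dt

    def get_value(self, name: str):
        return list(self.z) if name == "land_surface__elevation" else None

    def finalize(self):
        self.z = []


# A driver (or a coupling framework) only needs the generic interface:
component = ToyDiffusionComponent()
component.initialize()
for _ in range(5):
    component.update()
print(component.get_value("land_surface__elevation"))
component.finalize()
```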

  15. A Geometry Based Infra-Structure for Computational Analysis and Design

    Science.gov (United States)

    Haimes, Robert

    1998-01-01

    The computational steps traditionally taken for most engineering analysis suites (computational fluid dynamics (CFD), structural analysis, heat transfer, etc.) are: (1) Surface Generation -- usually by employing a Computer Assisted Design (CAD) system; (2) Grid Generation -- preparing the volume for the simulation; (3) Flow Solver -- producing the results at the specified operational point; (4) Post-processing Visualization -- interactively attempting to understand the results. For structural analysis, integrated systems can be obtained from a number of commercial vendors. These vendors couple directly to a number of CAD systems and are executed from within the CAD Graphical User Interface (GUI). It should be noted that the structural analysis problem is more tractable than CFD; there are fewer mesh topologies used and the grids are not as fine (this problem space does not have the length scaling issues of fluids). For CFD, these steps have worked well in the past for simple steady-state simulations at the expense of much user interaction. The data was transmitted between phases via files. In most cases, the output from a CAD system could go to Initial Graphics Exchange Specification (IGES) or Standard Exchange Program (STEP) files. The output from Grid Generators and Solvers does not really have standards, though there are a couple of file formats that can be used for a subset of the gridding (i.e. PLOT3D data formats). The user would have to patch up the data or translate from one format to another to move to the next step. Sometimes this could take days. Specifically the problems with this procedure are: (1) File based -- Information flows from one step to the next via data files with formats specified for that procedure. File standards, when they exist, are wholly inadequate. For example, geometry from CAD systems (transmitted via IGES files) is defined as disjoint surfaces and curves (as well as masses of other information of no interest for the Grid Generator

  16. Using Computer Games for Instruction: The Student Experience

    Science.gov (United States)

    Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David; Tomes, Russell

    2011-01-01

    Computer games are fun, exciting and motivational when used as leisure pursuits. But do they have similar attributes when utilized for educational purposes? This article investigates whether learning by computer game can improve student experiences compared with a more formal lecture approach and whether computer games have potential for improving…

  17. One Head Start Classroom's Experience: Computers and Young Children's Development.

    Science.gov (United States)

    Fischer, Melissa Anne; Gillespie, Catherine Wilson

    2003-01-01

    Contends that early childhood educators need to understand how exposure to computers and constructive computer programs affects the development of children. Specifically examines: (1) research on children's technology experiences; (2) determining best practices; and (3) addressing educators' concerns about computers replacing other developmentally…

  18. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Xian-He

    2013-08-01

    As part of this project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time fault-tolerant cluster monitoring framework. This framework is open source and is available for download upon request. This work has also been used at Fermi Laboratory, Vanderbilt University and Mississippi State University across projects other than LQCD. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata, e.g. physics parameters such as masses, a system to manage and permit the reuse of templates describing workflows, a system to capture data provenance information, a system to manage produced data, a means of monitoring workflow progress and status, a means of resuming or extending a stopped workflow, fault tolerance features to enhance the reliability of running workflows. Requirements for an LQCD workflow system are available in documentation.

  19. Contribution to global computation infrastructure: inter-platform delegation, integration of standard services and application to high-energy physics

    International Nuclear Information System (INIS)

    Lodygensky, Oleg

    2006-01-01

    The generalization and wide deployment of modern information resources, particularly large storage capacities and networks, allow new methods of work and new forms of entertainment to be conceived. Centralized, stand-alone, monolithic computing stations have gradually been replaced by distributed client-oriented architectures, which in turn are challenged by the new distributed systems called peer-to-peer systems. This migration is no longer the realm of specialists alone: users of more modest skills are becoming accustomed to these new techniques for e-mailing, exchanging commercial information and sharing various kinds of files on a peer-to-peer basis. Trade, industry and research alike profit largely from the new technique called the 'grid', a new way of handling information on a global scale. The present work concerns the use of grids for computation. A synergy was created at Paris-Sud University in Orsay between the Information Research Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) in order to foster work on grid infrastructure of high research interest for LRI while offering new working methods to LAL. The results of the work developed within this interdisciplinary collaboration are based on XtremWeb, the research and production platform for global computation elaborated at LRI. First, the current status of large-scale distributed systems is presented, together with their basic principles and user-oriented architecture. XtremWeb is then described, focusing on the modifications made to both architecture and implementation in order to best fulfill the requirements imposed on such a platform. Studies with the platform are then presented, allowing a generalization of inter-grid resources and the development of a user-oriented grid adapted to special services as well. Finally, the operating modes, the problems to solve and the advantages of this new platform are presented for the high-energy physics research community, the most demanding

  20. Integration of long-term experiments on terrestrial ecosystems in the AnaEE-France Research Infrastructure: concept and added value

    Science.gov (United States)

    Chanzy, André; Chabbi, Abad; Houot, Sabine; Lafolie, François; Pichot, Christian; Raynal, Hélène; Saint-André, Laurent; Clobert, Jean; Greiveldinger, Lucile

    2015-04-01

    Continental ecosystems represent a critical zone that provides key ecological services to human populations, such as biomass production, participates in the regulation of the global biogeochemical cycles, and contributes to the maintenance of air and water quality. The effects of global change on continental ecosystems are likely to impact the fate of humanity, which is thus facing numerous challenges, such as an increasing demand for food and energy, competition for land and water use, and rapid climate warming. Hence, scientific progress in our understanding of the continental critical zone will come from studies that address how biotic and abiotic processes react to global changes. Long-term experiments are required to take into account ecosystem inertia and feedback loops and to characterize trends and thresholds in ecosystem dynamics. In France, 20 long-term experiments on terrestrial ecosystems are gathered within a single research infrastructure: AnaEE-France (http://www.anaee-s.fr), which is part of AnaEE-Europe (http://www.anaee.com/). Each experiment consists in applying differentiated pressures to different plots over a long period (>20 years), representative of a range of management options. The originality of such an infrastructure is the combination of experimental set-ups with long-term monitoring and simultaneous measurements of key ecosystem variables and parameters through a multi-disciplinary approach, with replications of each treatment that improve the statistical strength of the results. The sites encompass gradients of climate conditions, ecosystem complexity and/or management, and can be used for calibration/validation of ecosystem functioning models as well as for the design of ecosystem management strategies. Gathering these experiments in a single research infrastructure is an important step to enhance their visibility and increase the number of hosted scientific teams by offering a range of services. These are: • Access to the ongoing long

  1. Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation.

    Energy Technology Data Exchange (ETDEWEB)

    Saffer, Shelley (Sam) I.

    2014-12-01

    This is a final report of the DOE award DE-SC0001132, Advanced Artificial Science. The development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application to interdisciplinary areas of scientific investigation. This document describes the achievements of the goals, and resulting research made possible by this award.

  2. Water infrastructure protection against intentional attacks: An experience in Italy

    Institute of Scientific and Technical Information of China (English)

    Cristiana Di Cristo; Angelo Leopardi; Giovanni de Marinis

    2011-01-01

    In recent years many interesting studies have been devoted to the development of technologies and methodologies for the protection of water supply systems against intentional attacks. However, the application to real systems is still limited for different economic and technical reasons. The Water Engineering Laboratory (L.I.A.) of the University of Cassino (Italy) was involved in two research projects financed by the European Commission in the framework of the European Programme for Critical Infrastructure Protection (E.P.C.I.P.). Both projects, developed in partnership with a large Italian water company, have the common objective of providing guidelines for enhancing security in water supply systems with respect to the risk of intentional contamination. The final product is the arrangement of a general procedure for the design of protection systems for water networks. In the paper the procedure is described through its application to two real water systems, characterized by different size and behavior.

  3. National Computational Infrastructure for Lattice Gauge Theory SciDAC-2 Closeout Report

    Energy Technology Data Exchange (ETDEWEB)

    Bapty, Theodore; Dubey, Abhishek

    2013-07-18

    As part of the reliability project work, researchers from Vanderbilt University, Fermi National Laboratory and Illinois Institute of Technology developed a real-time fault-tolerant cluster monitoring framework. The goal for the scientific workflow project is to investigate and develop domain-specific workflow tools for LQCD to help effectively orchestrate, in parallel, computational campaigns consisting of many loosely-coupled batch processing jobs. Major requirements for an LQCD workflow system include: a system to manage input metadata, e.g. physics parameters such as masses, a system to manage and permit the reuse of templates describing workflows, a system to capture data provenance information, a system to manage produced data, a means of monitoring workflow progress and status, a means of resuming or extending a stopped workflow, fault tolerance features to enhance the reliability of running workflows. In summary, these achievements are reported: • Implemented a software system to manage parameters. This includes a parameter set language based on a superset of the JSON data-interchange format, parsers in multiple languages (C++, Python, Ruby), and a web-based interface tool. It also includes a templating system that can produce input text for LQCD applications like MILC. • Implemented a monitoring sensor framework in software that is in production on the Fermilab USQCD facility. This includes equipment health, process accounting, MPI/QMP process tracking, and batch system (Torque) job monitoring. All sensor data are available from databases, and various query tools can be used to extract common data patterns and perform ad hoc searches. Common batch system queries such as job status are available in command line tools and are used in actual workflow-based production by a subset of Fermilab users. • Developed a formal state machine model for scientific workflow and reliability systems. This includes the use of Vanderbilt’s Generic Modeling
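
    The parameter-set-plus-template idea described above (a JSON-based parameter record rendered into application input text) can be sketched as follows. The real system uses a language that is a superset of JSON together with its own parsers and a web interface; in this sketch plain JSON and Python's string.Template stand in, and the parameter names and input-file layout are invented for illustration.

```python
# Sketch of the parameter-set-plus-template idea described above. The real
# system uses a superset of JSON with its own parsers (C++, Python, Ruby) and
# a web interface; here plain JSON and Python's string.Template stand in, and
# the parameter names and input-file layout are invented for illustration.
import json
from string import Template

parameter_set_json = """
{
  "ensemble": "toy_ensemble_01",
  "beta": 6.76,
  "light_mass": 0.005,
  "strange_mass": 0.05,
  "trajectories": 10
}
"""

input_template = Template("""\
prompt 0
beta $beta
mass $light_mass $strange_mass
warms 0
trajecs $trajectories
""")

params = json.loads(parameter_set_json)

# Render application input text for one campaign job from the managed
# parameter set, so every job in the workflow is generated from one record.
print(input_template.substitute(params))
```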

  4. Computation for the analysis of designed experiments

    CERN Document Server

    Heiberger, Richard

    2015-01-01

    Addresses the statistical, mathematical, and computational aspects of the construction of packages and analysis of variance (ANOVA) programs. Includes a disk at the back of the book that contains all program codes in four languages, APL, BASIC, C, and FORTRAN. Presents illustrations of the dual space geometry for all designs, including confounded designs.

  5. The Affective Experience of Novice Computer Programmers

    Science.gov (United States)

    Bosch, Nigel; D'Mello, Sidney

    2017-01-01

    Novice students (N = 99) participated in a lab study in which they learned the fundamentals of computer programming in Python using a self-paced computerized learning environment involving a 25-min scaffolded learning phase and a 10-min unscaffolded fadeout phase. Students provided affect judgments at approximately 100 points (every 15 s) over the…

  6. Electromagnetic Induction: A Computer-Assisted Experiment

    Science.gov (United States)

    Fredrickson, J. E.; Moreland, L.

    1972-01-01

    By using minimal equipment it is possible to demonstrate Faraday's Law. An electronic desk calculator enables sophomore students to solve a difficult mathematical expression for the induced EMF. Polaroid pictures of the plot of induced EMF, together with the computer facility, enable students to make comparisons. (PS)
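
    The quantity the students compute is the induced EMF from Faraday's law, EMF = -N dΦ/dt. The sketch below estimates it numerically from sampled flux values; the coil size and flux samples are illustrative and are not the article's data.

```python
# Sketch of the computation the demonstration automates: Faraday's law,
# EMF = -N * dPhi/dt, estimated numerically from sampled flux values.
# The coil size and flux samples are illustrative, not the article's data.
N_TURNS = 200


def induced_emf(flux_samples, dt):
    """Central-difference estimate of -N * dPhi/dt at interior sample times (volts)."""
    emf = []
    for i in range(1, len(flux_samples) - 1):
        dphi_dt = (flux_samples[i + 1] - flux_samples[i - 1]) / (2.0 * dt)
        emf.append(-N_TURNS * dphi_dt)
    return emf


# Flux (webers) through one turn, sampled every 10 ms as a magnet passes the coil.
phi = [0.0, 1.0e-4, 3.5e-4, 6.0e-4, 7.0e-4, 6.0e-4, 3.5e-4, 1.0e-4, 0.0]
print(induced_emf(phi, dt=0.010))
```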

  7. Computing in support of experiments at LAMPF

    International Nuclear Information System (INIS)

    Thomas, R.F.; Amann, J.F.; Butler, H.S.

    1976-10-01

    This report documents the discussions and conclusions of a study, conducted in August 1976, of the requirements for computer support of the experimental program in medium-energy physics at the Clinton P. Anderson Meson Physics Facility. 1 figure, 1 table

  8. Experiment Dashboard for Monitoring of the LHC Distributed Computing Systems

    International Nuclear Information System (INIS)

    Andreeva, J; Campos, M Devesas; Cros, J Tarragon; Gaidioz, B; Karavakis, E; Kokoszkiewicz, L; Lanciotti, E; Maier, G; Ollivier, W; Nowotka, M; Rocha, R; Sadykov, T; Saiz, P; Sargsyan, L; Sidorova, I; Tuckett, D

    2011-01-01

    LHC experiments are currently taking collision data. The distributed computing model chosen by the four main LHC experiments allows physicists to benefit from resources spread all over the world. The distributed model and the scale of LHC computing activities increase the level of complexity of the middleware, and also the chances of possible failures or inefficiencies in the components involved. In order to ensure the required performance and functionality of the LHC computing system, monitoring the status of the distributed sites and services as well as monitoring LHC computing activities are among the key factors. Over the last years, the Experiment Dashboard team has been working on a number of applications that facilitate the monitoring of different activities, including the follow-up of jobs and transfers as well as site and service availability. This presentation describes the Experiment Dashboard applications used by the LHC experiments and the experience gained during the first months of data taking.

  9. Cloud Infrastructure & Applications - CloudIA

    Science.gov (United States)

    Sulistio, Anthony; Reich, Christoph; Doelitzscher, Frank

    The idea behind Cloud Computing is to deliver Infrastructure-as-a-Service and Software-as-a-Service over the Internet on an easy pay-per-use business model. To harness the potential of Cloud Computing for e-Learning and research purposes, as well as for small- and medium-sized enterprises, the Hochschule Furtwangen University has established a new project, called Cloud Infrastructure & Applications (CloudIA). The CloudIA project is a market-oriented cloud infrastructure that leverages different virtualization technologies, by supporting Service-Level Agreements for various service offerings. This paper describes the CloudIA project in detail and mentions our early experiences in building a private cloud using an existing infrastructure.

  10. Using sobol sequences for planning computer experiments

    Science.gov (United States)

    Statnikov, I. N.; Firsov, G. I.

    2017-12-01

    The paper discusses the use, for problems of the multicriteria synthesis of dynamic systems, of the method of Planned LP-search (PLP-search), which not only allows the parameter space to be surveyed within specified ranges of variation on the basis of simulation-model experiments, but also, through the special randomized nature of the planning of these experiments, makes it possible to apply quantitative statistical estimates of the influence of the varied parameters and their pairwise combinations on the properties of the dynamic system.
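
    As a minimal sketch of laying out such a computer-experiment plan with a low-discrepancy sequence, the code below draws Sobol points with SciPy (assumed installed, version 1.7 or later) and maps them onto hypothetical parameter ranges. The randomized statistical analysis that distinguishes PLP-search is not reproduced here.

```python
# Minimal sketch of laying out a computer-experiment plan with a Sobol
# sequence, assuming SciPy >= 1.7 is installed. The three parameters and
# their ranges are hypothetical; the randomized statistical analysis that
# distinguishes PLP-search is not reproduced here.
from scipy.stats import qmc

sampler = qmc.Sobol(d=3, scramble=True, seed=42)   # three varied parameters
unit_points = sampler.random_base2(m=6)            # 2**6 = 64 design points

# Map the unit hypercube onto the physical ranges to be explored.
lower = [0.1, 10.0, 0.0]     # e.g. damping, stiffness, offset (hypothetical)
upper = [1.0, 100.0, 5.0]
design = qmc.scale(unit_points, lower, upper)

print(design.shape)   # (64, 3): one row of parameter values per simulation run
print(design[:3])     # first few parameter combinations to feed the simulation model
```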

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  12. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    Science.gov (United States)

    Habig, Alec; Norman, A.

    2015-12-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics.

  13. Recent Evolution of the Offline Computing Model of the NOvA Experiment

    International Nuclear Information System (INIS)

    Habig, Alec; Group, Craig; Norman, A.

    2015-01-01

    The NOvA experiment at Fermilab is a long-baseline neutrino experiment designed to study νe appearance in a νμ beam. Over the last few years there has been intense work to streamline the computing infrastructure in preparation for data, which started to flow in from the far detector in Fall 2013. Major accomplishments for this effort include migration to the use of off-site resources through the use of the Open Science Grid and upgrading the file-handling framework from simple disk storage to a tiered system using a comprehensive data management and delivery system to find and access files on either disk or tape storage. NOvA has already produced more than 6.5 million files and more than 1 PB of raw data and Monte Carlo simulation files which are managed under this model. The current system has demonstrated sustained rates of up to 1 TB/hour of file transfer by the data handling system. NOvA pioneered the use of new tools and this paved the way for their use by other Intensity Frontier experiments at Fermilab. Most importantly, the new framework places the experiment's infrastructure on a firm foundation, and is ready to produce the files needed for first physics. (paper)

  14. Experience of public procurement of Open Compute servers

    Science.gov (United States)

    Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony

    2015-12-01

    The Open Compute Project, OCP (http://www.opencompute.org/), was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate whether the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved a Request for Information (RFI) to qualify bidders and a Request for Tender (RFT).

  15. Analysis Facility infrastructure (TIER3) for ATLAS High Energy physics experiment

    International Nuclear Information System (INIS)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2007-01-01

    The ATLAS project has been asked to define the scope and role of Tier-3 resources (facilities or centres) within the existing ATLAS computing model, activities and facilities. This document attempts to address these questions by describing Tier-3 resources generally, and their relationship to the ATLAS Software and Computing Project. Originally the tiered computing model came out of MONARC (see http://monarc.web.cern.ch/MONARC/) work and was predicated upon the network being a scarce resource. In this model the tiered hierarchy ranged from the Tier-0 (CERN) down to the desktop or workstation (Tier-3). The focus on defining the roles of each tiered component has evolved, with the initial emphasis on the Tier-0 (CERN) and Tier-1 (national centres) definitions and roles. The various LHC projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3s, on the other hand, have (implicitly and sometimes explicitly) been defined as whatever an institution could construct to support their physics goals using institutional and otherwise leveraged resources, and therefore have not been considered to be part of the official ATLAS Research Program computing resources nor under their control, meaning there is no formal MOU process to designate sites as Tier-3s and no formal control of the program over the Tier-3 resources. Tier-3s are the responsibility of individual institutions to define, fund, deploy and support. However, having noted this, we must also recognize that Tier-3s must exist and will have implications for how our computing model should support ATLAS physicists. Tier-3 users will want to access data and simulations and will want to enable their Tier-3 resources to support their analysis and simulation work. Tier-3s are an important resource for physicists to analyze LHC (Large Hadron Collider) data. This document will define how Tier-3s should best interact with the ATLAS computing model, detail the

  16. Model and Computing Experiment for Research and Aerosols Usage Management

    Directory of Open Access Journals (Sweden)

    Daler K. Sharipov

    2012-09-01

    Full Text Available The article deals with a mathematical model for the study and management of aerosols released into the atmosphere, as well as with a numerical algorithm implemented in hardware and software systems for conducting computing experiments.

  17. Computing for Lattice QCD: new developments from the APE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R [INFN, Sezione di Roma Tor Vergata, Roma (Italy); Biagioni, A; De Luca, S [INFN, Sezione di Roma, Roma (Italy)

    2008-06-15

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  18. Computing for Lattice QCD: new developments from the APE experiment

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; De Luca, S.

    2008-01-01

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  19. A Computational Experiment on Single-Walled Carbon Nanotubes

    Science.gov (United States)

    Simpson, Scott; Lonie, David C.; Chen, Jiechen; Zurek, Eva

    2013-01-01

    A computational experiment that investigates single-walled carbon nanotubes (SWNTs) has been developed and employed in an upper-level undergraduate physical chemistry laboratory course. Computations were carried out to determine the electronic structure, radial breathing modes, and the influence of the nanotube's diameter on the…

  20. FOREIGN AND DOMESTIC EXPERIENCE OF INTEGRATING CLOUD COMPUTING INTO PEDAGOGICAL PROCESS OF HIGHER EDUCATIONAL ESTABLISHMENTS

    Directory of Open Access Journals (Sweden)

    Nataliia A. Khmil

    2016-01-01

    Full Text Available In the present article, foreign and domestic experience of integrating cloud computing into the pedagogical process of higher educational establishments (H.E.E.) has been generalized. It has been stated that nowadays a lot of educational services are hosted in the cloud, e.g. Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). The peculiarities of implementing cloud technologies by H.E.E. in Ukraine and abroad have been singled out; the products developed by the leading IT companies for using cloud computing in the higher education system, such as Microsoft for Education, Google Apps for Education and Amazon AWS Educate, have been reviewed. Examples of concrete types, methods and forms of learning and research work based on cloud services have been provided.

  1. OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.

    Science.gov (United States)

    Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter

    2011-10-01

    The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Homomorphic encryption experiments on IBM's cloud quantum computing platform

    Science.gov (United States)

    Huang, He-Liang; Zhao, You-Wei; Li, Tan; Li, Feng-Guang; Du, Yu-Tao; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-02-01

    Quantum computing has undergone rapid development in recent years. Owing to limitations on scalability, personal quantum computers still seem slightly unrealistic in the near future. The first practical quantum computer for ordinary users is likely to be on the cloud. However, the adoption of cloud computing is possible only if security is ensured. Homomorphic encryption is a cryptographic protocol that allows computation to be performed on encrypted data without decrypting them, so it is well suited to cloud computing. Here, we first applied homomorphic encryption on IBM's cloud quantum computer platform. In our experiments, we successfully implemented a quantum algorithm for linear equations while protecting our privacy. This demonstration opens a feasible path to the next stage of development of cloud quantum information technology.
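
    The property this abstract relies on, computing on data without decrypting it, can be illustrated classically. The sketch below is not the quantum protocol of the paper; it only demonstrates the multiplicative homomorphism of unpadded textbook RSA, using small hard-coded textbook primes that are insecure and chosen purely for readability.

```python
# Toy illustration of homomorphic computation on encrypted data.
# Textbook (unpadded) RSA is multiplicatively homomorphic:
#   Enc(m1) * Enc(m2) mod n  decrypts to  m1 * m2 mod n.
# The key values below are classic textbook numbers, not secure parameters.

p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent (2753), Python 3.8+

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 7, 11
c1, c2 = encrypt(m1), encrypt(m2)

# The "server" multiplies ciphertexts without ever seeing m1 or m2.
c_product = (c1 * c2) % n

assert decrypt(c_product) == (m1 * m2) % n
print("Enc(7) * Enc(11) decrypts to", decrypt(c_product))  # 77
```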

  3. The Information Science Experiment System - The computer for science experiments in space

    Science.gov (United States)

    Foudriat, Edwin C.; Husson, Charles

    1989-01-01

    The concept of the Information Science Experiment System (ISES), potential experiments, and system requirements are reviewed. The ISES is conceived as a computer resource in space whose aim is to assist computer, earth, and space science experiments, to develop and demonstrate new information processing concepts, and to provide an experiment base for developing new information technology for use in space systems. The discussion covers system hardware and architecture, operating system software, the user interface, and the ground communication link.

  4. An experiment for determining the Euler load by direct computation

    Science.gov (United States)

    Thurston, Gaylen A.; Stein, Peter A.

    1986-01-01

    A direct algorithm is presented for computing the Euler load of a column from experimental data. The method is based on exact inextensional theory for imperfect columns, which predicts two distinct deflected shapes at loads near the Euler load. The bending stiffness of the column appears in the expression for the Euler load along with the column length; therefore, the experimental data allow a direct computation of the bending stiffness. Experiments on graphite-epoxy columns of rectangular cross-section are reported in the paper. The bending stiffness of each composite column computed from experiment is compared with predictions from laminated plate theory.
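
    For context, the classical Euler relation such a direct computation exploits is, for a pinned-pinned column (the numerical coefficient depends on the boundary conditions assumed in the experiment),

```latex
\[
  P_{E} \;=\; \frac{\pi^{2} E I}{L^{2}}
  \qquad\Longrightarrow\qquad
  E I \;=\; \frac{P_{E}\, L^{2}}{\pi^{2}},
\]
```

    so a measured Euler load and the column length yield the bending stiffness EI directly.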

  5. Computing and data handling recent experiences at Fermilab and SLAC

    International Nuclear Information System (INIS)

    Cooper, P.S.

    1990-01-01

    Computing has become ever more central to the doing of high energy physics. There are now major second and third generation experiments for which the largest single cost is computing. At the same time the availability of "cheap" computing has made possible experiments which were previously considered infeasible. The result of this trend has been an explosion of computing and computing needs. I will review here the magnitude of the problem, as seen at Fermilab and SLAC, and the present methods for dealing with it. I will then undertake the dangerous assignment of projecting the needs and solutions forthcoming in the next few years at both laboratories. I will concentrate on the "offline" problem: the process of turning terabytes of data tapes into pages of physics journals. 5 refs., 4 figs., 4 tabs

  6. Methodological Potential of Computer Experiment in Teaching Mathematics at University

    Science.gov (United States)

    Lin, Kequan; Sokolova, Anna Nikolaevna; Vlasova, Vera K.

    2017-01-01

    The study is relevant due to the opportunity of increasing the efficiency of teaching mathematics at university through the integration into this process of a computer experiment conducted by students with the use of IT. The problem of the research is defined by a contradiction between the great potential opportunities of the mathematics experiment for motivating and…

  7. Remote Viewing and Computer Communications--An Experiment.

    Science.gov (United States)

    Vallee, Jacques

    1988-01-01

    A series of remote viewing experiments were run with 12 participants who communicated through a computer conferencing network. The correct target sample was identified in 8 out of 33 cases. This represented more than double the pure chance expectation. Appendices present protocol, instructions, and results of the experiments. (Author/YP)

  8. Computer simulation of Wheeler's delayed-choice experiment with photons

    NARCIS (Netherlands)

    Zhao, S.; Yuan, S.; De Raedt, H.; Michielsen, K.

    We present a computer simulation model of Wheeler's delayed-choice experiment that is a one-to-one copy of an experiment reported recently (Jacques V. et al., Science, 315 (2007) 966). The model is solely based on experimental facts, satisfies Einstein's criterion of local causality and does not

  9. Experiences and Lessons Learnt with Collaborative e-Research Infrastructure and the application of Identity Management and Access Control for the Centre for Environmental Data Analysis

    Science.gov (United States)

    Kershaw, P.

    2016-12-01

    CEDA, the Centre for Environmental Data Analysis, hosts a range of services on behalf of NERC (Natural Environment Research Council) for the UK environmental sciences community and its work with international partners. It is host to four data centres covering atmospheric science, earth observation, climate and space data domain areas. It holds these data on behalf of a number of different providers, each with their own data policies, which has thus required the development of a comprehensive system to manage access. With the advent of CMIP5, CEDA committed to be one of a number of centres to host the climate model outputs and make them available through the Earth System Grid Federation, a globally distributed software infrastructure developed for this purpose. From the outset, a means for restricting access to datasets was required, necessitating the development of a federated system for authentication and authorisation so that access to data could be managed across multiple providers around the world. From 2012, CEDA has seen a further evolution with the development of JASMIN, a multi-petabyte data analysis facility. Hosted alongside the CEDA archive, it provides a range of services for users including a batch compute cluster, group workspaces and a community cloud. This has required significant changes and enhancements to the access control system. In common with many other examples in the research community, the experiences of the above underline the difficulties of developing collaborative e-Research infrastructures. Drawing from these, there are some recurring themes: Clear requirements need to be established at the outset, recognising that implementing strict access policies can incur additional development and administrative overhead. An appropriate balance is needed between ease of access desired by end users and metrics and monitoring required by resource providers. The major technical challenge is not with security technologies themselves but their effective

  10. Greening infrastructure

    CSIR Research Space (South Africa)

    Van Wyk, Llewellyn V

    2014-10-01

    Full Text Available The development and maintenance of infrastructure is crucial to improving economic growth and quality of life (WEF 2013). Urban infrastructure typically includes bulk services such as water, sanitation and energy (typically electricity and gas...

  11. Large Cryogenic Infrastructure for LHC Superconducting Magnet and Cryogenic Component Tests: Layout, Commissioning and Operational Experience

    International Nuclear Information System (INIS)

    Calzas, C.; Chanat, D.; Knoops, S.; Sanmarti, M.; Serio, L.

    2004-01-01

    The largest cryogenic test facility at CERN, located at Zone 18, is used to validate and to test all main components working at cryogenic temperature in the LHC (Large Hadron Collider) before final installation in the machine tunnel. In total about 1300 main dipoles, 400 main quadrupoles, 5 RF-modules and eight 1.8 K refrigeration units will be tested in the coming years. The test facility has been improved and upgraded over the last few years and the first 18 kW refrigerator for the LHC machine has been added to boost the cryogenic capacity for the area via a 25,000 liter liquid helium dewar. The existing 6 kW refrigerator, used for the LHC Test String experiments, will also be employed to commission LHC cryogenic components. We report on the design and layout of the test facility as well as the commissioning and the first 10,000 hours of operational experience of the test facility and the 18 kW LHC refrigerator

  12. Brookhaven Reactor Experiment Control Facility, a distributed function computer network

    International Nuclear Information System (INIS)

    Dimmler, D.G.; Greenlaw, N.; Kelley, M.A.; Potter, D.W.; Rankowitz, S.; Stubblefield, F.W.

    1975-11-01

    A computer network for real-time data acquisition, monitoring and control of a series of experiments at the Brookhaven High Flux Beam Reactor has been developed and has been set into routine operation. This reactor experiment control facility presently services nine neutron spectrometers and one x-ray diffractometer. Several additional experiment connections are in progress. The architecture of the facility is based on a distributed function network concept. A statement of implementation and results is presented

  13. Telecom infrastructure leasing

    International Nuclear Information System (INIS)

    Henley, R.

    1995-01-01

    Slides to accompany a discussion about leasing telecommunications infrastructure, including radio/microwave tower space, radio control buildings, paging systems and communications circuits, were presented. The structure of Alberta Power Limited was described within the ATCO group of companies. Corporate goals and management practices and priorities were summarized. Lessons and experiences in the infrastructure leasing business were reviewed

  14. Bike Infrastructures

    DEFF Research Database (Denmark)

    Silva, Victor; Harder, Henrik; Jensen, Ole B.

    Bike Infrastructures aims to identify bicycle infrastructure typologies and design elements that can help promote cycling significantly. It is structured as a case-study-based research project in which three cycling infrastructures with distinct typologies were analyzed and compared. The three cases......, the findings of this research project can also support bike-friendly design and planning, and cyclist advocacy....

  15. Ontological and Epistemological Issues Regarding Climate Models and Computer Experiments

    Science.gov (United States)

    Vezer, M. A.

    2010-12-01

    Recent philosophical discussions (Parker 2009; Frigg and Reiss 2009; Winsberg 2009; Morgan 2002, 2003, 2005; Guala 2002) about the ontology of computer simulation experiments and the epistemology of inferences drawn from them are of particular relevance to climate science as computer modeling and analysis are instrumental in understanding climatic systems. How do computer simulation experiments compare with traditional experiments? Is there an ontological difference between these two methods of inquiry? Are there epistemological considerations that result in one type of inference being more reliable than the other? What are the implications of these questions with respect to climate studies that rely on computer simulation analysis? In this paper, I examine these philosophical questions within the context of climate science, instantiating concerns in the philosophical literature with examples found in analysis of global climate change. I concentrate on Wendy Parker’s (2009) account of computer simulation studies, which offers a treatment of these and other questions relevant to investigations of climate change involving such modelling. Two theses at the center of Parker’s account will be the focus of this paper. The first is that computer simulation experiments ought to be regarded as straightforward material experiments, which is to say, there is no significant ontological difference between computer and traditional experimentation. Parker’s second thesis is that some of the emphasis on the epistemological importance of materiality has been misplaced. I examine both of these claims. First, I inquire as to whether viewing computer and traditional experiments as ontologically similar in the way she does implies that there is no proper distinction between abstract experiments (such as ‘thought experiments’ as well as computer experiments) and traditional ‘concrete’ ones. Second, I examine the notion of materiality (i.e., the material commonality between

  16. Perspectives on distributed computing : thirty people, four user types, and the distributed computing user experience.

    Energy Technology Data Exchange (ETDEWEB)

    Childers, L.; Liming, L.; Foster, I.; Mathematics and Computer Science; Univ. of Chicago

    2008-10-15

    This report summarizes the methodology and results of a user perspectives study conducted by the Community Driven Improvement of Globus Software (CDIGS) project. The purpose of the study was to document the work-related goals and challenges facing today's scientific technology users, to record their perspectives on Globus software and the distributed-computing ecosystem, and to provide recommendations to the Globus community based on the observations. Globus is a set of open source software components intended to provide a framework for collaborative computational science activities. Rather than attempting to characterize all users or potential users of Globus software, our strategy has been to speak in detail with a small group of individuals in the scientific community whose work appears to be the kind that could benefit from Globus software, learn as much as possible about their work goals and the challenges they face, and describe what we found. The result is a set of statements about specific individuals' experiences. We do not claim that these are representative of a potential user community, but we do claim to have found commonalities and differences among the interviewees that may be reflected in the user community as a whole. We present these as a series of hypotheses that can be tested by subsequent studies, and we offer recommendations to Globus developers based on the assumption that these hypotheses are representative. Specifically, we conducted interviews with thirty technology users in the scientific community. We included both people who have used Globus software and those who have not. We made a point of including individuals who represent a variety of roles in scientific projects, for example, scientists, software developers, engineers, and infrastructure providers. The following material is included in this report: (1) A summary of the reported work-related goals, significant issues, and points of satisfaction with the use of Globus software

  17. Locative media and data-driven computing experiments

    Directory of Open Access Journals (Sweden)

    Sung-Yueh Perng

    2016-06-01

    Full Text Available Over the past two decades urban social life has undergone a rapid and pervasive geocoding, becoming mediated, augmented and anticipated by location-sensitive technologies and services that generate and utilise big, personal, locative data. The production of these data has prompted the development of exploratory data-driven computing experiments that seek to find ways to extract value and insight from them. These projects often start from the data, rather than from a question or theory, and try to imagine and identify their potential utility. In this paper, we explore the desires and mechanics of data-driven computing experiments. We demonstrate how both locative media data and computing experiments are ‘staged’ to create new values and computing techniques, which in turn are used to try and derive possible futures that are ridden with unintended consequences. We argue that using computing experiments to imagine potential urban futures produces effects that often have little to do with creating new urban practices. Instead, these experiments promote Big Data science and the prospect that data produced for one purpose can be recast for another and act as alternative mechanisms of envisioning urban futures.

  18. Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.

    Science.gov (United States)

    Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William

    2017-01-01

    Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.
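
    The "minimize uncertainty" idea described above can be sketched with a toy search: among candidate experiments, pick the one whose outcome is expected to leave the fewest causal hypotheses standing. The hypotheses, experiments and outcome predictions below are invented for illustration and are not taken from the authors' framework.

```python
# Toy sketch of "pick the experiment that minimizes uncertainty":
# uncertainty is measured here simply as the expected number of candidate
# causal hypotheses that remain consistent after observing the outcome.
# The hypotheses, experiments and outcome predictions are invented examples.

from collections import defaultdict

hypotheses = ["A->B->C", "A->C->B", "B->A->C"]

# predicted_outcome[experiment][hypothesis] -> outcome label predicted by that hypothesis
predicted_outcome = {
    "knock out A": {"A->B->C": "B,C change", "A->C->B": "B,C change", "B->A->C": "C changes"},
    "knock out B": {"A->B->C": "C changes", "A->C->B": "nothing", "B->A->C": "A,C change"},
}

def expected_remaining(experiment: str) -> float:
    # Group hypotheses by the outcome they predict for this experiment.
    groups = defaultdict(list)
    for h in hypotheses:
        groups[predicted_outcome[experiment][h]].append(h)
    # Each outcome is weighted by the fraction of hypotheses predicting it.
    total = len(hypotheses)
    return sum(len(g) * (len(g) / total) for g in groups.values())

best = min(predicted_outcome, key=expected_remaining)
print(best, {e: expected_remaining(e) for e in predicted_outcome})
```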

  19. Spacelab experiment computer study. Volume 1: Executive summary (presentation)

    Science.gov (United States)

    Lewis, J. L.; Hodges, B. C.; Christy, J. O.

    1976-01-01

    A quantitative cost for various Spacelab flight hardware configurations is provided along with varied software development options. A cost analysis of Spacelab computer hardware and software is presented. The cost study is discussed based on utilization of a central experiment computer with optional auxiliary equipment. Ground rules and assumptions used in deriving the costing methods for all options in the Spacelab experiment study are presented. The ground rules and assumptions are analysed, and the options, along with their cost considerations, are discussed. It is concluded that the Spacelab program cost for software development and maintenance is independent of experimental hardware and software options, that the distributed standard computer concept simplifies software integration without a significant increase in cost, and that decisions on flight computer hardware configurations should not be made until payload selection for a given mission and a detailed analysis of the mission requirements are completed.

  20. Computational experiment approach to advanced secondary mathematics curriculum

    CERN Document Server

    Abramovich, Sergei

    2014-01-01

    This book promotes the experimental mathematics approach in the context of secondary mathematics curriculum by exploring mathematical models depending on parameters that were typically considered advanced in the pre-digital education era. This approach, by drawing on the power of computers to perform numerical computations and graphical constructions, stimulates formal learning of mathematics through making sense of a computational experiment. It allows one (in the spirit of Freudenthal) to bridge serious mathematical content and contemporary teaching practice. In other words, the notion of teaching experiment can be extended to include a true mathematical experiment. When used appropriately, the approach creates conditions for collateral learning (in the spirit of Dewey) to occur including the development of skills important for engineering applications of mathematics. In the context of a mathematics teacher education program, this book addresses a call for the preparation of teachers capable of utilizing mo...

  1. Benchmarking infrastructure for mutation text mining.

    Science.gov (United States)

    Klein, Artjom; Riazanov, Alexandre; Hindle, Matthew M; Baker, Christopher Jo

    2014-02-25

    Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption.
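
    As a rough illustration of the "metrics via SPARQL, no programming needed" idea, the sketch below builds a tiny RDF graph of gold and system annotations and computes precision with two SPARQL 1.1 aggregate queries via rdflib. The ex: vocabulary is invented for the example; the paper's actual OWL ontology and predicates are not reproduced here.

```python
# Sketch: gold and predicted mutation annotations in one RDF graph,
# precision obtained from SPARQL aggregate queries (rdflib required).

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/mutation#")   # invented vocabulary for illustration
g = Graph()

def add(ann, doc, mention, source):
    g.add((EX[ann], EX.inDocument, EX[doc]))
    g.add((EX[ann], EX.mentionsMutation, EX[mention]))
    g.add((EX[ann], EX.source, EX[source]))

add("g1", "doc1", "E545K", "gold")
add("g2", "doc1", "H1047R", "gold")
add("p1", "doc1", "E545K", "system")   # true positive
add("p2", "doc1", "V600E", "system")   # false positive

predicted = g.query("""
    PREFIX ex: <http://example.org/mutation#>
    SELECT (COUNT(DISTINCT ?p) AS ?n) WHERE { ?p ex:source ex:system . }
""")
matched = g.query("""
    PREFIX ex: <http://example.org/mutation#>
    SELECT (COUNT(DISTINCT ?p) AS ?n) WHERE {
      ?p    ex:source ex:system ; ex:inDocument ?d ; ex:mentionsMutation ?m .
      ?gold ex:source ex:gold   ; ex:inDocument ?d ; ex:mentionsMutation ?m .
    }
""")
n_pred = int(next(iter(predicted))[0])
n_match = int(next(iter(matched))[0])
print("precision =", n_match / n_pred)   # 0.5 for this toy graph
```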

  2. Benchmarking infrastructure for mutation text mining

    Science.gov (United States)

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents, that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600

  3. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    International Nuclear Information System (INIS)

    Varela Rodriguez, F

    2011-01-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.
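
    A minimal sketch of the kind of WMI polling described above follows (it is not the CERN tool itself and it feeds no SCADA system). It assumes the third-party Python "wmi" package on a Windows host; the remote-connection parameter names and WMI class properties are quoted from memory and should be checked against the package documentation.

```python
# Minimal sketch of polling Windows nodes over WMI for basic health metrics.
# Assumes the third-party "wmi" package (pip install wmi) on a Windows host;
# the remote-connection keyword names below are given from memory.

import wmi

def poll_node(host: str, user: str, password: str) -> dict:
    conn = wmi.WMI(computer=host, user=user, password=password)
    os_info = conn.Win32_OperatingSystem()[0]
    cpu_load = [int(cpu.LoadPercentage or 0) for cpu in conn.Win32_Processor()]
    return {
        "host": host,
        "cpu_load_percent": sum(cpu_load) / max(len(cpu_load), 1),
        "free_ram_kb": int(os_info.FreePhysicalMemory),
        "total_ram_kb": int(os_info.TotalVisibleMemorySize),
        "process_count": len(conn.Win32_Process()),
    }

if __name__ == "__main__":
    for node in ["pc-ctrl-01", "pc-ctrl-02"]:      # hypothetical node names
        print(poll_node(node, "monitor_user", "secret"))
```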

  4. Centralized Monitoring of the Microsoft Windows-based computers of the LHC Experiment Control Systems

    Science.gov (United States)

    Varela Rodriguez, F.

    2011-12-01

    The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes has been available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and in detecting misbehaving processes or nodes.

  5. Building Resilient Cloud Over Unreliable Commodity Infrastructure

    OpenAIRE

    Kedia, Piyus; Bansal, Sorav; Deshpande, Deepak; Iyer, Sreekanth

    2012-01-01

    Cloud Computing has emerged as a successful computing paradigm for efficiently utilizing managed compute infrastructure such as high speed rack-mounted servers, connected with high speed networking, and reliable storage. Usually such infrastructure is dedicated, physically secured and has reliable power and networking infrastructure. However, much of our idle compute capacity is present in unmanaged infrastructure like idle desktops, lab machines, physically distant server machines, and lapto...

  6. Quantum chemistry simulation on quantum computers: theories and experiments.

    Science.gov (United States)

    Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng

    2012-07-14

    It has been claimed that quantum computers can mimic quantum systems efficiently with polynomial scaling. Traditionally, those simulations are carried out numerically on classical computers, which are inevitably confronted with the exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the development in both theories and experiments. We then present a brief introduction to quantum chemistry evaluated via classical computers followed by typical procedures of quantum simulation towards quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations, via a small quantum computer, which include the evaluation of the static molecular eigenenergy and the simulation of chemical reaction dynamics. Although the experimental development is still behind the theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry over classical computations.

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  8. The Design and Evaluation of Teaching Experiments in Computer Science.

    Science.gov (United States)

    Forcheri, Paola; Molfino, Maria Teresa

    1992-01-01

    Describes a relational model that was developed to provide a framework for the design and evaluation of teaching experiments for the introduction of computer science in secondary schools in Italy. Teacher training is discussed, instructional materials are considered, and use of the model for the evaluation process is described. (eight references)…

  9. Instructional Styles, Attitudes and Experiences of Seniors in Computer Workshops

    Science.gov (United States)

    Wood, Eileen; Lanuza, Catherine; Baciu, Iuliana; MacKenzie, Meagan; Nosko, Amanda

    2010-01-01

    Sixty-four seniors were introduced to computers through a series of five weekly workshops. Participants were given instruction followed by hands-on experience for topics related to social communication, information seeking, games, and word processing and were observed to determine their preferences for instructional support. Observations of…

  10. The Experiment Method for Manufacturing Grid Development on Single Computer

    Institute of Scientific and Technical Information of China (English)

    XIAO Youan; ZHOU Zude

    2006-01-01

    In this paper, an experiment method for Manufacturing Grid application system development in a single personal computer environment is proposed. The characteristic of the proposed method is the construction of a full prototype Manufacturing Grid application system hosted on a single personal computer with virtual machine technology. Firstly, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Secondly, all the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each Manufacturing Grid node. Then, a prototype Manufacturing Grid application system working on the single personal computer is obtained, and the experiment can be carried out on this foundation. Compared with the known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and can produce reliable experiment results easily. The Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability. It can be migrated to the real application environment rapidly.

  11. Infrastructure needs for waste management

    International Nuclear Information System (INIS)

    Takahashi, M.

    2001-01-01

    National infrastructures are needed to safely and economically manage radioactive wastes. Considerable experience has been accumulated in industrialized countries for predisposal management of radioactive wastes, and legal, regulatory and technical infrastructures are in place. Drawing on this experience, international organizations can assist in transferring this knowledge to developing countries to build their waste management infrastructures. Infrastructure needs for disposal of long lived radioactive waste are more complex, due to the long time scale that must be considered. Challenges and infrastructure needs, particularly for countries developing geologic repositories for disposal of high level wastes, are discussed in this paper. (author)

  12. BUILDING A COMPLETE FREE AND OPEN SOURCE GIS INFRASTRUCTURE FOR HYDROLOGICAL COMPUTING AND DATA PUBLICATION USING GIS.LAB AND GISQUICK PLATFORMS

    Directory of Open Access Journals (Sweden)

    M. Landa

    2017-07-01

    Full Text Available Building a complete free and open source GIS computing and data publication platform can be a relatively easy task. This paper describes an automated deployment of such a platform using two open source software projects – GIS.lab and Gisquick. GIS.lab (http://web.gislab.io) is a project for rapid deployment of a complete, centrally managed and horizontally scalable GIS infrastructure in the local area network, data center or cloud. It provides a comprehensive set of free geospatial software seamlessly integrated into one, easy-to-use system. A platform for GIS computing (in our case demonstrated on hydrological data processing) requires core components such as a geoprocessing server, a map server, and a computation engine such as GRASS GIS, SAGA, or other similar GIS software. All these components can be rapidly and automatically deployed by the GIS.lab platform. In our demonstrated solution, PyWPS is used for serving WPS processes built on top of the GRASS GIS computation platform. GIS.lab can be easily extended by other components running in Docker containers. This approach is shown on the Gisquick seamless integration. Gisquick (http://gisquick.org) is an open source platform for publishing geospatial data in the sense of rapid sharing of QGIS projects on the web. The platform consists of a QGIS plugin, a Django-based server application, QGIS server, and web/mobile clients. In this paper it is shown how to easily deploy a complete open source GIS infrastructure allowing all required operations, such as data preparation on the desktop, data sharing, and geospatial computation as a service. It also includes data publication in the sense of OGC Web Services and, importantly, also as interactive web mapping applications.
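
    As a small illustration of how a client might talk to the PyWPS endpoint such a platform exposes, the sketch below issues standard OGC WPS 1.0.0 key-value requests with the requests library. The endpoint URL and the process identifier are placeholders, not addresses taken from the paper.

```python
# Sketch of querying a WPS (e.g. PyWPS) endpoint of the kind the platform serves.
# The endpoint URL is a placeholder; the request parameters follow the standard
# OGC WPS 1.0.0 key-value encoding.

import requests

WPS_URL = "http://gis.example.org/wps"          # hypothetical GIS.lab endpoint

def get_capabilities() -> str:
    r = requests.get(WPS_URL, params={
        "service": "WPS",
        "version": "1.0.0",
        "request": "GetCapabilities",
    }, timeout=30)
    r.raise_for_status()
    return r.text                                # XML listing the offered processes

def describe_process(identifier: str) -> str:
    r = requests.get(WPS_URL, params={
        "service": "WPS",
        "version": "1.0.0",
        "request": "DescribeProcess",
        "identifier": identifier,                # e.g. a GRASS-based hydrology process
    }, timeout=30)
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    print(get_capabilities()[:500])
```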

  13. Thumbnail Images:Uncertainties, Infrastructures and Search Engines

    OpenAIRE

    Thylstrup, Nanna; Teilmann, Stina

    2017-01-01

    This article argues that thumbnail images are infrastructural images that raise issues of uncertainty in two distinct, but interrelated, areas: a legal question of how to define, understand and govern visual information infrastructures, in particular image search systems in epistemological and strategic terms; and a cultural question of how human-computer interaction design works with navigational uncertainty, both as an experience to be managed and a resource to be exploited. This paper cons...

  14. Doctors' experience with handheld computers in clinical practice: qualitative study.

    Science.gov (United States)

    McAlearney, Ann Scheck; Schweikhart, Sharon B; Medow, Mitchell A

    2004-05-15

    To examine doctors' perspectives about their experiences with handheld computers in clinical practice. Qualitative study of eight focus groups consisting of doctors with diverse training and practice patterns. Six practice settings across the United States and two additional focus group sessions held at a national meeting of general internists. 54 doctors who did or did not use handheld computers. Doctors who used handheld computers in clinical practice seemed generally satisfied with them and reported diverse patterns of use. Users perceived that the devices helped them increase productivity and improve patient care. Barriers to use concerned the device itself and personal and perceptual constraints, with perceptual factors such as comfort with technology, preference for paper, and the impression that the devices are not easy to use somewhat difficult to overcome. Participants suggested that organisations can help promote handheld computers by providing advice on purchase, usage, training, and user support. Participants expressed concern about reliability and security of the device but were particularly concerned about dependency on the device and over-reliance as a substitute for clinical thinking. Doctors expect handheld computers to become more useful, and most seem interested in leveraging (getting the most value from) their use. Key opportunities with handheld computers included their use as a stepping stone to build doctors' comfort with other information technology and ehealth initiatives and providing point of care support that helps improve patient care.

  15. Framework for emotional mobile computation for creating entertainment experience

    Science.gov (United States)

    Lugmayr, Artur R.

    2007-02-01

    Ambient media are media that manifest in the natural environment of the consumer. The perceivable borders between the media and the context where the media are used are getting more and more blurred. The consumer is moving through a digital space of services throughout his daily life. As we are developing towards an experience society, the central point in the development of services is the creation of a consumer experience. This paper reviews possibilities and potentials of the creation of entertainment experiences with mobile phone platforms. It reviews sensor networks capable of acquiring consumer behavior data, interactivity strategies, psychological models for emotional computation on mobile phones, and lays the foundations of a nomadic experience society. The paper rounds up with a presentation of several different possible service scenarios in the field of entertainment and leisure computation on mobiles. The goal of this paper is to present a framework and evaluation of possibilities of applying sensor technology on mobile platforms to create an increasing consumer entertainment experience.

  16. Development of a Data Acquisition Program for the Purpose of Monitoring Processing Statistics Throughout the BaBar Online Computing Infrastructure's Farm Machines

    Energy Technology Data Exchange (ETDEWEB)

    Stonaha, P.

    2004-09-03

    A current shortcoming of the BaBar monitoring system is the lack of systematic gathering, archiving, and access to the running statistics of the BaBar Online Computing Infrastructure's farm machines. Using C, a program has been written to gather the raw data of each machine's running statistics and compute various rates and percentages that can be used for system monitoring. These rates and percentages can then be stored in an EPICS database for graphing, archiving, and future access. Graphical outputs show the reception of the data into the EPICS database. The C program can determine whether the data are 32- or 64-bit and correct for overflows. This program is not exclusive to BaBar and can be easily modified for any system.
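
    The rate computation described above can be sketched as follows: two successive raw counter samples are turned into a per-second rate while correcting for wraparound of a 32- or 64-bit counter. This mirrors the idea only; it is not the original C code and it does not talk to EPICS.

```python
# Sketch of turning raw counter samples into a rate, with overflow correction
# for counters of a given bit width.

def counter_rate(prev: int, curr: int, dt_seconds: float, width_bits: int = 32) -> float:
    """Return counts per second, assuming at most one wraparound between samples."""
    modulus = 1 << width_bits
    delta = (curr - prev) % modulus          # handles curr < prev after an overflow
    return delta / dt_seconds

# Example: a 32-bit packet counter that overflowed between two 10 s samples.
prev_sample = 2**32 - 100
curr_sample = 400
print(counter_rate(prev_sample, curr_sample, 10.0))   # 50.0 packets/s
```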

  17. Multilink manipulator computer control: experience in development and commissioning

    International Nuclear Information System (INIS)

    Holt, J.E.

    1988-11-01

    This report describes development which has been carried out on the multilink manipulator computer control system. The system allows the manipulator to be driven using only two joysticks. The leading link is controlled and the other links follow its path into the reactor, thus avoiding any potential obstacles. The system has been fully commissioned and used with the Sizewell "A" reactor 2 Multilink T.V. manipulator. Experience of the use of the system is presented, together with recommendations for future improvements. (author)

  18. Unsteady Thick Airfoil Aerodynamics: Experiments, Computation, and Theory

    Science.gov (United States)

    Strangfeld, C.; Rumsey, C. L.; Mueller-Vahl, H.; Greenblatt, D.; Nayeri, C. N.; Paschereit, C. O.

    2015-01-01

    An experimental, computational and theoretical investigation was carried out to study the aerodynamic loads acting on a relatively thick NACA 0018 airfoil when subjected to pitching and surging, individually and synchronously. Both pre-stall and post-stall angles of attack were considered. Experiments were carried out in a dedicated unsteady wind tunnel, with large surge amplitudes, and airfoil loads were estimated by means of unsteady surface-mounted pressure measurements. Theoretical predictions were based on Theodorsen's and Isaacs' results as well as on the relatively recent generalizations of van der Wall. Both two- and three-dimensional computations were performed on structured grids employing unsteady Reynolds-averaged Navier-Stokes (URANS). For pure surging at pre-stall angles of attack, the correspondence between experiments and theory was satisfactory; this served as a validation of Isaacs' theory. Discrepancies were traced to dynamic trailing-edge separation, even at low angles of attack. Excellent correspondence was found between experiments and theory for airfoil pitching as well as combined pitching and surging; the latter appears to be the first clear validation of van der Wall's theoretical results. Although qualitatively similar to experiment at low angles of attack, two-dimensional URANS computations yielded notable errors in the unsteady load effects of pitching, surging and their synchronous combination. The main reason is believed to be that the URANS equations do not resolve wake vorticity (explicitly modeled in the theory) or the resulting rolled-up unsteady flow structures because high values of eddy viscosity tend to "smear" the wake. At post-stall angles, three-dimensional computations illustrated the importance of modeling the tunnel side walls.
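
    For reference, the classical Theodorsen result against which such measurements are typically compared gives the unsteady lift per unit span of a flat plate of semichord b, plunging with displacement h and pitching through angle alpha about an axis at x = ab (sign conventions vary between texts; this is quoted here only as the standard form):

```latex
\[
  L \;=\; \pi \rho b^{2}\left(\ddot{h} + U\dot{\alpha} - b a\,\ddot{\alpha}\right)
  \;+\; 2\pi \rho U b\, C(k)\left[\dot{h} + U\alpha + b\left(\tfrac{1}{2} - a\right)\dot{\alpha}\right],
  \qquad k = \frac{\omega b}{U},
\]
```

    where C(k) is the Theodorsen function and k the reduced frequency; Isaacs' and van der Wall's results generalize this to a time-varying free stream U(t).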

  19. INFORMATION INFRASTRUCTURE OF THE EDUCATIONAL ENVIRONMENT WITH VIRTUAL MACHINE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Artem D. Beresnev

    2014-09-01

    Full Text Available Subject of research. An information infrastructure for the training environment with application of virtual computer technology for small pedagogical systems (separate classes, author's courses) is created and investigated. Research technique. The life cycle model of information infrastructure for small pedagogical systems with usage of virtual computers is constructed in the ARIS methodology. A technique of information infrastructure formation with virtual computers on the basis of the process approach is offered. The model of an event chain in combination with the environment chart is used as the basic model. For each function of the event chain the necessary set of means of information and program support is defined. Application of the technique is illustrated on the example of information infrastructure design for the educational environment taking into account the specific character of small pedagogical systems. Advantages of the designed information infrastructure are: the maximum usage of open or free components; the usage of standard protocols (mainly HTTP and HTTPS); the maximum portability (application servers can be started up on any widespread operating system); a uniform interface for management of various virtualization platforms, the possibility of taking inventory of the contents of a virtual computer without starting it, and flexible inventory management of the virtual computer by means of adjustable chains of rules. Approbation. Approbation of the obtained results was carried out on the basis of the training center "Institute of Informatics and Computer Facilities" (Tallinn, Estonia). Application of the technique within the course "Computer and Software Usage" made it possible to halve the number of failures of information infrastructure components requiring intervention of a technical specialist, as well as the time needed to eliminate such malfunctions. Besides, the pupils who gained broader experience with the computer and software showed better results

  20. Expertik: Experience with Artificial Intelligence and Mobile Computing

    Directory of Open Access Journals (Sweden)

    José Edward Beltrán Lozano

    2013-06-01

    Full Text Available This article presents the experience in the development of services based on Artificial Intelligence, Service Oriented Architecture and mobile computing. It aims to combine the technology offered by mobile computing and the techniques of artificial intelligence, through a service, to provide diagnostic solutions to problems in industrial maintenance. For service creation the elements of an expert system are identified: the knowledge base, the inference engine, and the interfaces for knowledge acquisition and consultation. The applications were developed in ASP.NET under a three-layer architecture. The data layer was developed in SQL Server in conjunction with data management classes; the business layer in VB.NET and the presentation layer in ASP.NET with XHTML. Web interfaces for knowledge acquisition and query were developed for Web and Mobile Web. The inference engine was implemented as a web service developed for the fuzzy logic model (initially an exact rule-based logic within this experience) to resolve requests from the knowledge consultation applications. This experience seeks to strengthen a technology-based company to offer services based on AI for service companies in Colombia.

  1. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. A dedicated optical network with the initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  2. On the computer simulation of the EPR-Bohm experiment

    International Nuclear Information System (INIS)

    McGoveran, D.O.; Noyes, H.P.; Manthey, M.J.

    1988-12-01

    We argue that supraluminal correlation without supraluminal signaling is a necessary consequence of any finite and discrete model for physics. Every day, the commercial and military practice of using encrypted communication based on correlated, pseudo-random signals illustrates this possibility. All that is needed are two levels of computational complexity which preclude using a smaller system to detect departures from "randomness" in the larger system. Hence the experimental realizations of the EPR-Bohm experiment leave open the question of whether the world of experience is "random" or pseudo-random. The latter possibility could be demonstrated experimentally if a complexity parameter related to the arm length and switching time in an Aspect-type realization of the EPR-Bohm experiment is sufficiently small compared to the number of reliable total counts which can be obtained in practice. 6 refs
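
    The point about correlated pseudo-random signals can be illustrated with a toy sketch: two stations that share a seed produce identical "random-looking" outputs with no communication at measurement time. This illustrates correlation without signaling only; it does not reproduce the quantum EPR-Bohm statistics.

```python
# Toy illustration of correlated pseudo-random signals: two stations sharing a
# seed agree on every outcome without exchanging any message at "measurement"
# time.  This is NOT a simulation of the quantum EPR-Bohm correlations.

import random

def station(shared_seed: int, n: int) -> list:
    rng = random.Random(shared_seed)        # each station runs this locally
    return [rng.randint(0, 1) for _ in range(n)]

alice = station(shared_seed=1234, n=20)
bob = station(shared_seed=1234, n=20)

matches = sum(a == b for a, b in zip(alice, bob))
print(alice)
print(bob)
print(f"{matches}/20 outcomes agree")        # 20/20: correlated, yet no signal sent
```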

  3. Topographic evolution of sandbars: Flume experiment and computational modeling

    Science.gov (United States)

    Kinzel, Paul J.; Nelson, Jonathan M.; McDonald, Richard R.; Logan, Brandy L.

    2010-01-01

    Measurements of sandbar formation and evolution were carried out in a laboratory flume and the topographic characteristics of these barforms were compared to predictions from a computational flow and sediment transport model with bed evolution. The flume experiment produced sandbars with approximate mode 2, whereas numerical simulations produced a bed morphology better approximated as alternate bars, mode 1. In addition, bar formation occurred more rapidly in the laboratory channel than for the model channel. This paper focuses on a steady-flow laboratory experiment without upstream sediment supply. Future experiments will examine the effects of unsteady flow and sediment supply and the use of numerical models to simulate the response of barform topography to these influences.

  4. LHCb: Performance evaluation and capacity planning for a scalable and highly available virtualization infrastructure for the LHCb experiment

    CERN Multimedia

    Sborzacchi, F; Neufeld, N

    2013-01-01

    Virtual computing is often adopted to satisfy different needs: reducing costs, reducing resources, simplifying maintenance and, last but not least, adding flexibility. The use of virtualization in a complex system such as a farm of PCs that control the hardware of an experiment (PLC, power supplies, gas, magnets...) puts us in a condition where not only High Performance requirements need to be carefully considered, but also a deep analysis of strategies to achieve a certain level of High Availability. We conducted a performance evaluation on different and comparable storage/network/virtualization platforms. The performance is measured using a series of independent benchmarks, testing the speed and the stability of multiple VMs running heavy-load operations on the I/O of virtualized storage and the virtualized network. The results from the benchmark tests allowed us to study and evaluate how the different workloads of the VMs interact with the hardware/software resource layers.

  5. Distributing the computation in combinatorial optimization experiments over the cloud

    Directory of Open Access Journals (Sweden)

    Mario Brcic

    2017-12-01

    Full Text Available Combinatorial optimization is an area of great importance since many of the real-world problems have discrete parameters which are part of the objective function to be optimized. Development of combinatorial optimization algorithms is guided by the empirical study of the candidate ideas and their performance over a wide range of settings or scenarios to infer general conclusions. The number of scenarios can be overwhelming, especially when modeling uncertainty in some of the problem’s parameters. Since the process is also iterative and many ideas and hypotheses may be tested, the execution time of each experiment plays an important role in efficiency and success. The structure of such experiments allows for significant execution time improvement by distributing the computation. We focus on cloud computing as a cost-efficient solution in these circumstances. In this paper we present a system for validating and comparing stochastic combinatorial optimization algorithms. The system also deals with the selection of the optimal settings for computational nodes and the number of nodes in terms of the performance-cost tradeoff. We present applications of the system on a new class of project scheduling problem. We show that we can optimize the selection over cloud service providers as one of the settings and, according to the model, this resulted in substantial cost savings while meeting the deadline.
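
    The two ingredients described above (scenarios that can be evaluated independently, and a node count chosen against a deadline and a price) can be sketched as follows. The prices, timings and the local process pool standing in for cloud workers are invented for illustration; under ideal linear scaling the total cost is flat, so the smallest node count that meets the deadline is also the cheapest.

```python
# Sketch: fan independent scenario evaluations out to workers, and pick a node
# count that meets a deadline.  All numbers are invented example values.

from concurrent.futures import ProcessPoolExecutor

def evaluate_scenario(scenario_id: int) -> float:
    """Stand-in for running one optimization algorithm on one scenario."""
    return sum((scenario_id * k) % 7 for k in range(10_000)) / 10_000

def pick_node_count(n_scenarios: int, sec_per_scenario: float,
                    price_per_node_hour: float, deadline_hours: float) -> int:
    """Smallest node count whose estimated wall-clock time fits the deadline."""
    for nodes in range(1, 1025):
        wall_hours = n_scenarios * sec_per_scenario / 3600 / nodes
        if wall_hours <= deadline_hours:
            cost = nodes * wall_hours * price_per_node_hour
            print(f"{nodes} nodes, ~{wall_hours:.2f} h, ~${cost:.2f}")
            return nodes
    raise RuntimeError("deadline not reachable with 1024 nodes")

if __name__ == "__main__":
    nodes = pick_node_count(n_scenarios=5000, sec_per_scenario=30,
                            price_per_node_hour=0.10, deadline_hours=4.0)
    with ProcessPoolExecutor() as pool:        # local stand-in for cloud workers
        results = list(pool.map(evaluate_scenario, range(100)))
    print("first results:", results[:3])
```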

  6. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Science.gov (United States)

    Service-oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration ...

  7. Reproducible computational biology experiments with SED-ML--the Simulation Experiment Description Markup Language.

    Science.gov (United States)

    Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas

    2011-12-15

    The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research
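
    As an illustration of the kind of description SED-ML captures (which models, which simulation, which task ties them together), the following Python sketch assembles a minimal SED-ML-like document. The element and attribute names follow the structure summarised above but are not guaranteed to be schema-valid Level 1 Version 1; dedicated SED-ML libraries should be used for real work, and the model file name is hypothetical.

      # Sketch: build a minimal SED-ML-like XML document describing a time course.
      # Names mirror the abstract's description; not guaranteed to be schema-valid.
      import xml.etree.ElementTree as ET

      root = ET.Element("sedML", {"xmlns": "http://sed-ml.org/", "level": "1", "version": "1"})

      models = ET.SubElement(root, "listOfModels")
      ET.SubElement(models, "model", {
          "id": "model1",
          "language": "urn:sedml:language:sbml",   # the model language is declared, not fixed
          "source": "oscillator.xml",              # hypothetical model file
      })

      sims = ET.SubElement(root, "listOfSimulations")
      ET.SubElement(sims, "uniformTimeCourse", {
          "id": "sim1", "initialTime": "0", "outputStartTime": "0",
          "outputEndTime": "100", "numberOfPoints": "1000",
      })

      tasks = ET.SubElement(root, "listOfTasks")
      ET.SubElement(tasks, "task", {"id": "task1",
                                    "modelReference": "model1",
                                    "simulationReference": "sim1"})

      ET.ElementTree(root).write("experiment.sedml", xml_declaration=True, encoding="UTF-8")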

  8. Reproducible computational biology experiments with SED-ML - The Simulation Experiment Description Markup Language

    Science.gov (United States)

    2011-01-01

    Background The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. Results In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. Conclusions With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from

  9. Experience building and operating the CMS Tier-1 computing centres

    Science.gov (United States)

    Albert, M.; Bakken, J.; Bonacorsi, D.; Brew, C.; Charlot, C.; Huang, Chih-Hao; Colling, D.; Dumitrescu, C.; Fagan, D.; Fassi, F.; Fisk, I.; Flix, J.; Giacchetti, L.; Gomez-Ceballos, G.; Gowdy, S.; Grandi, C.; Gutsche, O.; Hahn, K.; Holzman, B.; Jackson, J.; Kreuzer, P.; Kuo, C. M.; Mason, D.; Pukhaeva, N.; Qin, G.; Quast, G.; Rossman, P.; Sartirana, A.; Scheurer, A.; Schott, G.; Shih, J.; Tader, P.; Thompson, R.; Tiradani, A.; Trunov, A.

    2010-04-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We will also present the operations experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  10. Experience building and operating the CMS Tier-1 computing centres

    International Nuclear Information System (INIS)

    Albert, M; Bakken, J; Huang, Chih-Hao; Dumitrescu, C; Fagan, D; Fisk, I; Giacchetti, L; Gutsche, O; Holzman, B; Bonacorsi, D; Grandi, C; Brew, C; Jackson, J; Charlot, C; Colling, D; Fassi, F; Flix, J; Gomez-Ceballos, G; Hahn, K; Gowdy, S

    2010-01-01

    The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and primary copy of the simulated data, data serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing these large distributed resources represents a challenge. In this article we will discuss the experience building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operations of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We will also present the operations experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data serving requests, and submitting batch processing requests.

  11. The BaBar experiment's distributed computing model

    International Nuclear Information System (INIS)

    Boutigny, D.

    2001-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously being increased, and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT format and later in Objectivity format. GRID tools will be used for remote job submission
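
    To put the quoted transfer target in perspective, a back-of-the-envelope conversion (an illustration added here, not a figure from the record) shows the sustained bandwidth implied, in LaTeX notation:

      \frac{100\ \mathrm{TB}}{1\ \mathrm{yr}} \;\approx\; \frac{10^{14}\ \mathrm{B}}{3.16\times 10^{7}\ \mathrm{s}} \;\approx\; 3.2\ \mathrm{MB/s} \;\approx\; 25\ \mathrm{Mbit/s}.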

  12. The BaBar Experiment's Distributed Computing Model

    International Nuclear Information System (INIS)

    Gowdy, Stephen J.

    2002-01-01

    In order to face the expected increase in statistics between now and 2005, the BaBar experiment at SLAC is evolving its computing model toward a distributed multi-tier system. It is foreseen that data will be spread among Tier-A centers and deleted from the SLAC center. A uniform computing environment is being deployed in the centers, the network bandwidth is continuously being increased, and data distribution tools have been designed in order to reach a transfer rate of ∼100 TB of data per year. In parallel, smaller Tier-B and C sites receive subsets of data, presently in Kanga-ROOT[1] format and later in Objectivity[2] format. GRID tools will be used for remote job submission

  13. Computer modeling of active experiments in space plasmas

    International Nuclear Information System (INIS)

    Bollens, R.J.

    1993-01-01

    The understanding of space plasmas is expanding rapidly. This is, in large part, due to the ambitious efforts of scientists from around the world who are performing large-scale active experiments in the space plasma surrounding the earth. One such effort was designated the Active Magnetospheric Particle Tracer Explorers (AMPTE) and consisted of a series of plasma releases that were completed during 1984 and 1985. What makes the AMPTE experiments particularly interesting is the occurrence of a dramatic anomaly that was completely unpredicted. During the AMPTE experiment, three satellites traced the solar-wind flow into the earth's magnetosphere. One satellite, built by West Germany, released a series of barium and lithium canisters that were detonated and subsequently photo-ionized via solar radiation, thereby creating an artificial comet. Another satellite, built by Great Britain and in the vicinity during detonation, carried, as did the first satellite, a comprehensive set of magnetic field, particle and wave instruments. Upon detonation, what was observed by the satellites, as well as by aircraft and ground-based observers, was quite unexpected. The initial deflection of the ion clouds was not in the ambient solar wind's flow direction (V) but rather in the direction transverse to the solar wind and the background magnetic field (V x B). This result was not predicted by any existing theories or simulation models; it is the main subject discussed in this dissertation. A large three-dimensional computer simulation was produced to demonstrate that this transverse motion can be explained in terms of a rocket effect. Due to the extensive computer resources utilized in producing this work, the computer methods used to complete the calculation and the visualization techniques used to view the results are also discussed

  14. Infrastructure for genomic interactions: Bioconductor classes for Hi-C, ChIA-PET and related experiments [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Aaron T. L. Lun

    2016-05-01

    Full Text Available The study of genomic interactions has been greatly facilitated by techniques such as chromatin conformation capture with high-throughput sequencing (Hi-C). These genome-wide experiments generate large amounts of data that require careful analysis to obtain useful biological conclusions. However, development of the appropriate software tools is hindered by the lack of basic infrastructure to represent and manipulate genomic interaction data. Here, we present the InteractionSet package that provides classes to represent genomic interactions and store their associated experimental data, along with the methods required for low-level manipulation and processing of those classes. The InteractionSet package exploits existing infrastructure in the open-source Bioconductor project, while in turn being used by Bioconductor packages designed for higher-level analyses. For new packages, use of the functionality in InteractionSet will simplify development, allow access to more features and improve interoperability between packages.

  15. Infrastructure for genomic interactions: Bioconductor classes for Hi-C, ChIA-PET and related experiments [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Aaron T. L. Lun

    2016-06-01

    Full Text Available The study of genomic interactions has been greatly facilitated by techniques such as chromatin conformation capture with high-throughput sequencing (Hi-C). These genome-wide experiments generate large amounts of data that require careful analysis to obtain useful biological conclusions. However, development of the appropriate software tools is hindered by the lack of basic infrastructure to represent and manipulate genomic interaction data. Here, we present the InteractionSet package that provides classes to represent genomic interactions and store their associated experimental data, along with the methods required for low-level manipulation and processing of those classes. The InteractionSet package exploits existing infrastructure in the open-source Bioconductor project, while in turn being used by Bioconductor packages designed for higher-level analyses. For new packages, use of the functionality in InteractionSet will simplify development, allow access to more features and improve interoperability between packages.
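
    InteractionSet itself is an R/Bioconductor package; purely as a conceptual illustration of the container described above (shared anchor regions plus per-interaction experimental data), a minimal Python analogue might look like the sketch below. All class and field names are invented and do not correspond to the Bioconductor API.

      # Conceptual analogue of an "interaction set": pairs of genomic anchor regions
      # plus per-interaction data (e.g. Hi-C read-pair counts). Not the real API.
      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass(frozen=True)
      class Region:
          chrom: str
          start: int   # 0-based, inclusive
          end: int     # exclusive

      @dataclass
      class InteractionSet:
          anchors: List[Region]            # unique genomic regions
          pairs: List[Tuple[int, int]]     # indices into `anchors`
          counts: List[int]                # one count per interaction

          def interactions(self):
              for (i, j), n in zip(self.pairs, self.counts):
                  yield self.anchors[i], self.anchors[j], n

      iset = InteractionSet(
          anchors=[Region("chr1", 10_000, 20_000), Region("chr1", 500_000, 510_000)],
          pairs=[(0, 1)],
          counts=[42],
      )
      for a1, a2, n in iset.interactions():
          print(f"{a1.chrom}:{a1.start}-{a1.end} <-> {a2.chrom}:{a2.start}-{a2.end}: {n} read pairs")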

  16. Fisher information in the design of computer simulation experiments

    Energy Technology Data Exchange (ETDEWEB)

    Stehlík, Milan; Mueller, Werner G [Department of Applied Statistics, Johannes-Kepler-University Linz, Freistaedter Strasse 315, A-4040 Linz (Austria)], E-mail: Milan.Stehlik@jku.at, E-mail: Werner.Mueller@jku.at

    2008-11-01

    The concept of Fisher information is conveniently used as a basis for designing efficient experiments. However, if the outputs stem from computer simulations, they are often approximated as realizations of correlated random fields. Consequently, the conditions under which Fisher information may be suitable must be restated. In the paper we intend to give some simple but illuminating examples for these cases. 'Random phenomena have increasing importance in Engineering and Physics, therefore theoretical results are strongly needed. But there is a gap between the probability theory used by mathematicians and practitioners. Two very different languages have been generated in this way...' (Paul Kree, Paris 1995)
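
    For orientation, when the simulator output is modelled as a Gaussian random field with mean μ(θ) and covariance C(θ), the Fisher information matrix used to rank designs takes the standard form below (quoted here in its textbook version, in LaTeX notation; the record itself gives no formulas). The correlation structure entering C is precisely what changes the design conclusions relative to the independent-error case.

      [M(\theta)]_{ij} \;=\; \frac{\partial \mu^{\top}}{\partial \theta_i}\, C^{-1}\, \frac{\partial \mu}{\partial \theta_j}
      \;+\; \frac{1}{2}\,\operatorname{tr}\!\left( C^{-1}\,\frac{\partial C}{\partial \theta_i}\; C^{-1}\,\frac{\partial C}{\partial \theta_j} \right).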

  17. Fisher information in the design of computer simulation experiments

    International Nuclear Information System (INIS)

    Stehlík, Milan; Mueller, Werner G

    2008-01-01

    The concept of Fisher information is conveniently used as a basis for designing efficient experiments. However, if the outputs stem from computer simulations, they are often approximated as realizations of correlated random fields. Consequently, the conditions under which Fisher information may be suitable must be restated. In the paper we intend to give some simple but illuminating examples for these cases. 'Random phenomena have increasing importance in Engineering and Physics, therefore theoretical results are strongly needed. But there is a gap between the probability theory used by mathematicians and practitioners. Two very different languages have been generated in this way...' (Paul Kree, Paris 1995)

  18. Enabling systematic, harmonised and large-scale biofilms data computation: the Biofilms Experiment Workbench.

    Science.gov (United States)

    Pérez-Rodríguez, Gael; Glez-Peña, Daniel; Azevedo, Nuno F; Pereira, Maria Olívia; Fdez-Riverola, Florentino; Lourenço, Anália

    2015-03-01

    Biofilms are receiving increasing attention from the biomedical community. Biofilm-like growth within the human body is considered one of the key microbial strategies to augment resistance and persistence during infectious processes. The Biofilms Experiment Workbench is a novel software workbench for the operation and analysis of biofilms experimental data. The goal is to promote the interchange and comparison of data among laboratories, providing systematic, harmonised and large-scale data computation. The workbench was developed with AIBench, an open-source Java desktop application framework for scientific software development in the domain of translational biomedicine. The implementation favours free and open-source third-party software, such as the R statistical package, and uses the Web services of the BiofOmics database to enable public experiment deposition. First, we summarise the novel, free, open, XML-based interchange format for encoding biofilms experimental data. Then, we describe the execution of common scenarios of operation with the new workbench, such as the creation of new experiments, the importation of data from Excel spreadsheets, the computation of analytical results, the on-demand and highly customised construction of Web-publishable reports, and the comparison of results between laboratories. A considerable and varied amount of biofilms data is being generated, and there is a critical need to develop bioinformatics tools that expedite the interchange and comparison of microbiological and clinical results among laboratories. We propose a simple, open-source software infrastructure which is effective, extensible and easy to understand. The workbench is freely available for non-commercial use at http://sing.ei.uvigo.es/bew under the LGPL license. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. EXPERIENCE OF THE ORGANIZATION OF VIRTUAL LABORATORIES ON THE BASIS OF TECHNOLOGIES OF CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    V. Oleksyuk

    2014-06-01

    Full Text Available The article investigates the concept of a «virtual laboratory». The paper describes models for deploying cloud technologies in an IT infrastructure. The hybrid model is the most relevant for a higher educational institution. The author suggests private cloud platforms for deploying the virtual laboratory. The paper describes the experience of deploying an enterprise cloud in the IT infrastructure of the Department of Physics and Mathematics of Ternopil V. Hnatyuk National Pedagogical University. The object of the research is virtual laboratories as components of the IT infrastructure of higher education. The subject of the research is clouds as the basis for the deployment of virtual laboratories. Conclusions: the use of cloud technologies in the development of virtual laboratories is a real and pressing need. The hybrid model is the most appropriate for the deployment of the cloud infrastructure of a higher educational institution. It is reasonable to use private cloud platforms (CloudStack, Eucalyptus, OpenStack) in universities.

  20. When STAR meets the Clouds-Virtualization and Cloud Computing Experiences

    International Nuclear Information System (INIS)

    Lauret, J; Hajdu, L; Walker, M; Balewski, J; Goasguen, S; Stout, L; Fenn, M; Keahey, K

    2011-01-01

    In recent years, Cloud computing has become a very attractive paradigm and popular model for accessing distributed resources. The Cloud has emerged as the next big trend. The burst of platforms and projects providing Cloud resources and interfaces, at the very same time that Grid projects are entering a production phase in their life cycle, has however raised the question of the best approach to handling distributed resources. In particular, are Cloud resources scaling at the levels shown by Grids? Are they performing at the same level? What is their overhead on the IT teams and infrastructure? Rather than seeing the two as orthogonal, the STAR experiment has viewed them as complementary and has studied merging the best of the two worlds, with Grid middleware providing the aggregation of both Cloud and traditional resources. Since its first use of Cloud resources on Amazon EC2 in 2008/2009 using a Nimbus/EC2 interface, the STAR software team has tested and experimented with many novel approaches: from a traditional, native EC2 approach to the Virtual Organization Cluster (VOC) at Clemson University and Condor/VM on the GLOW resources at the University of Wisconsin. The STAR team is also planning to run as part of the DOE/Magellan project. In this paper, we will present an overview of our findings from using truly opportunistic resources and scaling out two orders of magnitude in both tests and practical usage.

  1. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James eGoscinski

    2014-03-01

    Full Text Available The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  2. Tactile Radar: experimenting a computer game with visually disabled.

    Science.gov (United States)

    Kastrup, Virgínia; Cassinelli, Alvaro; Quérette, Paulo; Bergstrom, Niklas; Sampaio, Eliana

    2017-09-18

    Visually disabled people increasingly use computers in everyday life, thanks to novel assistive technologies better tailored to their cognitive functioning. Like sighted people, many are interested in computer games - videogames and audio-games. Tactile games are beginning to emerge. The Tactile Radar is a device through which a visually disabled person is able to detect distal obstacles. In this study, it is connected to a computer running a tactile game. The game consists in finding and collecting randomly arranged coins in a virtual room. The study was conducted with nine congenitally blind people of both sexes, aged 20-64 years. Complementary first- and third-person methods were used: the debriefing interview and the quasi-experimental design. The results indicate that the Tactile Radar is suitable for the creation of computer games specifically tailored for visually disabled people. Furthermore, the device seems capable of eliciting a powerful immersive experience. Methodologically speaking, this research contributes to the consolidation and development of complementary first- and third-person methods, which are particularly useful in research with disabled people, including users' evaluation of the Tactile Radar's effectiveness in a virtual reality context. Implications for rehabilitation: Despite the growing interest in virtual games for visually disabled people, they still find barriers to accessing such games. Through the development of assistive technologies such as the Tactile Radar, applied in virtual games, we can create new opportunities for leisure, socialization and education for visually disabled people. The results of our study indicate that the Tactile Radar is adapted to the creation of video games for visually disabled people, providing a playful interaction with the players.

  3. A benchmark on computational simulation of a CT fracture experiment

    International Nuclear Information System (INIS)

    Franco, C.; Brochard, J.; Ignaccolo, S.; Eripret, C.

    1992-01-01

    For a better understanding of the fracture behavior of cracked welds in piping, FRAMATOME, EDF and CEA have launched an important analytical research program. This program is mainly based on the analysis of the effects of the geometrical parameters (the crack size and the welded joint dimensions) and the yield strength ratio on the fracture behavior of several cracked configurations. Two approaches have been selected for the fracture analyses: on the one hand, the global approach based on the concept of the crack driving force J and, on the other hand, a local approach to ductile fracture. In this approach the crack initiation and growth are modelled by the nucleation, growth and coalescence of cavities in front of the crack tip. The model selected in this study estimates only the growth of the cavities, using the Rice and Tracey relationship. The present study deals with a benchmark on computational simulation of CT fracture experiments using three computer codes: ALIBABA, developed by EDF; the CEA's code CASTEM 2000; and FRAMATOME's code SYSTUS. The paper is split into three parts. First, the authors present the experimental procedure for high-temperature toughness testing of two CT specimens taken from a welded pipe characteristic of pressurized water reactor primary piping. Secondly, considerations are outlined about the finite element analysis and the application procedure. A detailed description is given of boundary and loading conditions, of the mesh characteristics, of the numerical scheme involved and of the void growth computation. Finally, the comparisons between numerical and experimental results are presented up to crack initiation, the tearing process not being taken into account in the present study. The variations of J and of the local variables used to estimate the damage around the crack tip (triaxiality and hydrostatic stresses, plastic deformations, void growth ...) are computed as a function of the increasing load
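
    For reference, the Rice and Tracey relationship mentioned above is usually quoted in the textbook form below (in LaTeX notation; the exact constants used in the benchmark are not stated in this record), where R is the mean cavity radius, σ_m the hydrostatic (mean) stress, σ_eq the von Mises equivalent stress and the last factor the equivalent plastic strain rate:

      \frac{\dot R}{R} \;=\; 0.283\, \exp\!\left( \frac{3\,\sigma_m}{2\,\sigma_{eq}} \right) \dot{\bar\varepsilon}^{\,p}.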

  4. Computational methods for predicting the response of critical as-built infrastructure to dynamic loads (architectural surety)

    Energy Technology Data Exchange (ETDEWEB)

    Preece, D.S.; Weatherby, J.R.; Attaway, S.W.; Swegle, J.W.; Matalucci, R.V.

    1998-06-01

    Coupled blast-structural computational simulations using supercomputer capabilities will significantly advance the understanding of how complex structures respond under dynamic loads caused by explosives and earthquakes, an understanding with application to the surety of both federal and nonfederal buildings. Simulation of the effects of explosives on structures is a challenge because the explosive response can best be simulated using Eulerian computational techniques, while structural behavior is best modeled using Lagrangian methods. Due to the different methodologies of the two computational techniques and code architecture requirements, they are usually implemented in different computer programs. Modeling the explosive and the structure in two different codes makes it difficult or next to impossible to do coupled explosive/structure interaction simulations. Sandia National Laboratories has developed two techniques for solving this problem. The first is called Smoothed Particle Hydrodynamics (SPH), a relatively new gridless method, comparable to Eulerian techniques, that is especially suited for treating liquids and gases such as those produced by an explosive. The SPH capability has been fully implemented into the transient dynamics finite element (Lagrangian) codes PRONTO-2D and -3D. A PRONTO-3D/SPH simulation of the effect of a blast on a protective-wall barrier is presented in this paper. The second technique employed at Sandia National Laboratories uses a relatively new code called ALEGRA, which is an ALE (Arbitrary Lagrangian-Eulerian) wave code with specific emphasis on large deformation and shock propagation. ALEGRA is capable of solving many shock-wave physics problems, but it is especially suited for modeling problems involving the interaction of decoupled explosives with structures.
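
    For readers unfamiliar with SPH, the "gridless" character comes from approximating field values by kernel-weighted sums over neighbouring particles; the generic interpolation (standard SPH, not specific to the PRONTO implementation) reads, in LaTeX notation,

      f(\mathbf{r}_i) \;\approx\; \sum_j \frac{m_j}{\rho_j}\, f_j\, W\!\big( |\mathbf{r}_i - \mathbf{r}_j|,\, h \big),

    where m_j, ρ_j and f_j are the mass, density and field value carried by particle j, W is a smoothing kernel and h the smoothing length.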

  5. The machine in the market: Computers and the infrastructure of price at the New York Stock Exchange, 1965-1975.

    Science.gov (United States)

    Kennedy, Devin

    2017-12-01

    This article traces the development and expansion of early computer systems for managing and disseminating 'real-time' market data at the most influential stock market in the United States, the New York Stock Exchange (NYSE). It follows electronic media at the NYSE over a roughly ten-year period, from the time of the deployment of a computer called the Market Data System (MDS) through debates surrounding the National Market System and the passage of the 1975 Securities Acts Amendments. Building on research at the archives of the NYSE and the Securities and Exchange Commission (SEC), this history emphasizes the regulatory and managerial contexts in which market data became computerized. The SEC viewed market automation as both necessary for the viability of the securities industry and a mechanism for expanding regulatory oversight over the venues of stock trading. Moving from the MDS to later technical projects in the late 1960s and early 1970s, this article charts the changing meaning of electronic governance in a market increasingly conceptualized as a technical object. Adding to recent work in the social studies of finance and financial technologies, this history sites early NYSE computerization programs within managerial efforts to consolidate control over the clerical labor of financial markets, and in contests between regulatory and market institutions. It concludes by exploring the differing forms of electronic governance activated in these efforts to bring computers into the market.

  6. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  7. Infrastructural Fractals

    DEFF Research Database (Denmark)

    Bruun Jensen, Casper

    2007-01-01

    . Instead, I outline a fractal approach to the study of space, society, and infrastructure. A fractal orientation requires a number of related conceptual reorientations. It has implications for thinking about scale and perspective, and (sociotechnical) relations, and for considering the role of the social...... and a fractal social theory....

  8. Electricity Infrastructure Operations Center (EIOC)

    Data.gov (United States)

    Federal Laboratory Consortium — The Electricity Infrastructure Operations Center (EIOC) at PNNL brings together industry-leading software, real-time grid data, and advanced computation into a fully...

  9. An in-situ stimulation experiment in crystalline rock - assessment of induced seismicity levels during stimulation and related hazard for nearby infrastructure

    Science.gov (United States)

    Gischig, Valentin; Broccardo, Marco; Amann, Florian; Jalali, Mohammadreza; Esposito, Simona; Krietsch, Hannes; Doetsch, Joseph; Madonna, Claudio; Wiemer, Stefan; Loew, Simon; Giardini, Domenico

    2016-04-01

    A decameter in-situ stimulation experiment is currently being performed at the Grimsel Test Site in Switzerland by the Swiss Competence Center for Energy Research - Supply of Electricity (SCCER-SoE). The underground research laboratory lies in crystalline rock at a depth of 480 m and exhibits well-documented geology that presents some analogies with the crystalline basement targeted for the exploitation of deep geothermal energy resources in Switzerland. The goal is to perform a series of stimulation experiments spanning from hydraulic fracturing to controlled fault-slip experiments in an experimental volume approximately 30 m in diameter. The experiments will contribute to a better understanding of hydro-mechanical phenomena and induced seismicity associated with high-pressure fluid injections. Comprehensive monitoring during stimulation will include observation of injection rate and pressure, pressure propagation in the reservoir, permeability enhancement, 3D dislocation along the faults, rock mass deformation near the fault zone, as well as micro-seismicity. The experimental volume is surrounded by other in-situ experiments (at 50 to 500 m distance) and by infrastructure of the local hydropower company (at ~100 m to several kilometres distance). Although it is generally agreed among stakeholders related to the experiments that levels of induced seismicity may be low given the small total injection volumes of less than 1 m³, detailed analysis of the potential impact of the stimulation on other experiments and surrounding infrastructure is essential to ensure operational safety. In this contribution, we present a procedure for how induced seismic hazard can be estimated for an experimental situation that is atypical of injection-induced seismicity in terms of injection volumes, injection depths and proximity to affected objects. Both deterministic and probabilistic methods are employed to estimate the maximum possible and the maximum expected induced

  10. Developments of multibody system dynamics: computer simulations and experiments

    International Nuclear Information System (INIS)

    Yoo, Wan-Suk; Kim, Kee-Nam; Kim, Hyun-Woo; Sohn, Jeong-Hyun

    2007-01-01

    It is an exceptional success that multibody dynamics researchers have made the Multibody System Dynamics journal one of the most highly ranked journals over the last 10 years. In the inaugural issue, Professor Schiehlen wrote an interesting article explaining the roots and perspectives of multibody system dynamics. Professor Shabana also wrote an interesting article reviewing developments in flexible multibody dynamics. The application possibilities of multibody system dynamics have grown wider and deeper, with many application examples being introduced with multibody techniques in the past 10 years. In this paper, the development of multibody dynamics is briefly reviewed and several applications of multibody dynamics are described according to the author's research results. Simulation examples are compared to physical experiments, which shows the reasonableness and accuracy of the multibody formulation applied to real problems. Computer simulations using the absolute nodal coordinate formulation (ANCF) were also compared to physical experiments; therefore, the validity of ANCF for large-displacement and large-deformation problems was shown. Physical experiments for large-deformation problems include a beam, plate, chain, and strip. Other research topics currently being carried out in the author's laboratory are also briefly explained

  11. Computer-generated ovaries to assist follicle counting experiments.

    Directory of Open Access Journals (Sweden)

    Angelos Skodras

    Full Text Available Precise estimation of the number of follicles in ovaries is of key importance in the field of reproductive biology, both from a developmental point of view, where follicle numbers are determined at specific time points, as well as from a therapeutic perspective, determining the adverse effects of environmental toxins and cancer chemotherapeutics on the reproductive system. The two main factors affecting follicle number estimates are the sampling method and the variation in follicle numbers within animals of the same strain, due to biological variability. This study aims at assessing the effect of these two factors when estimating ovarian follicle numbers of neonatal mice. We developed computer algorithms, which generate models of neonatal mouse ovaries (simulated ovaries) with characteristics derived from experimental measurements already available in the published literature. The simulated ovaries are used to reproduce in-silico counting experiments based on unbiased stereological techniques; the proposed approach provides the necessary number of ovaries and sampling frequency to be used in the experiments given a specific biological variability and a desirable degree of accuracy. The simulated ovary is a novel, versatile tool which can be used in the planning phase of experiments to estimate the expected number of animals and workload, ensuring appropriate statistical power of the resulting measurements. Moreover, the idea of the simulated ovary can be applied to other organs made up of large numbers of individual functional units.
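
    As a rough, hypothetical illustration of such an in-silico counting experiment (simulated organs with biological variability, systematic sampling of sections, error of the scaled-up estimate), consider the Python sketch below; none of the parameters or function names are taken from the study.

      # Illustrative sketch: simulate ovaries with variable follicle numbers, count
      # every k-th section (fractionator-style sampling), and report the relative
      # error of the scaled estimate. All parameters are invented.
      import random

      def simulate_ovary(mean_follicles=2000, cv=0.25, n_sections=100, rng=random):
          """Return per-section follicle counts for one simulated ovary."""
          total = max(0, int(rng.gauss(mean_follicles, cv * mean_follicles)))
          counts = [0] * n_sections
          for _ in range(total):                     # scatter follicles over sections
              counts[rng.randrange(n_sections)] += 1
          return counts

      def fractionator_estimate(section_counts, period=5, offset=0):
          """Count every `period`-th section and scale up."""
          return period * sum(section_counts[offset::period])

      rng = random.Random(1)
      errors = []
      for _ in range(50):                            # 50 simulated ovaries
          sections = simulate_ovary(rng=rng)
          true_n = sum(sections)
          est = fractionator_estimate(sections, period=5, offset=rng.randrange(5))
          errors.append(abs(est - true_n) / true_n)
      print(f"mean relative error with 1-in-5 sampling: {sum(errors)/len(errors):.1%}")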

  12. Experience with the WIMS computer code at Skoda Plzen

    International Nuclear Information System (INIS)

    Vacek, J.; Mikolas, P.

    1991-01-01

    Validation of the program for neutronics analysis is described. Computational results are compared with results of experiments on critical assemblies and with results of other codes for different types of lattices. Included are the results for lattices containing Gd as burnable absorber. With minor exceptions, the results of benchmarking were quite satisfactory and justified the inclusion of WIMS in the production system of codes for WWER analysis. The first practical application was the adjustment of the WWER-440 few-group diffusion constants library of the three-dimensional diffusion code MOBY-DICK, which led to a remarkable improvement of results for operational states. Then a new library for the analysis of WWER-440 start-up was generated and tested and at present a new library for the analysis of WWER-440 operational states is being tested. Preparation of the library for WWER-1000 is in progress. (author). 19 refs

  13. Test experience on an ultrareliable computer communication network

    Science.gov (United States)

    Abbott, L. W.

    1984-01-01

    The dispersed sensor processing mesh (DSPM) is an experimental, ultra-reliable, fault-tolerant computer communications network that exhibits an organic-like ability to regenerate itself after suffering damage. The regeneration is accomplished by two routines - grow and repair. This paper discusses the DSPM concept for achieving fault tolerance and provides a brief description of the mechanization of both the experiment and the six-node experimental network. The main topic of this paper is the system performance of the growth algorithm contained in the grow routine. The characteristics imbued to DSPM by the growth algorithm are also discussed. Data from an experimental DSPM network and software simulation of larger DSPM-type networks are used to examine the inherent limitation on growth time by the growth algorithm and the relationship of growth time to network size and topology.

  14. A Rural South African Experience of an ESL Computer Program

    Directory of Open Access Journals (Sweden)

    Marius Dieperink

    2008-12-01

    Full Text Available This article reports on a case study that explored the effect of an English-as-Second-Language (ESL) computer program at Tshwane University of Technology (TUT), South Africa. The case study explored participants' perceptions, attitudes and beliefs regarding the ESL reading enhancement program, Reading Excellence™. The study found that participants experienced the program in a positive light. They experienced improved ESL reading as well as listening and writing proficiency. In addition, they experienced improved affective well-being in the sense that they generally felt more comfortable using ESL. This included feeling more self-confident in their experience of their academic environment. Interviews as well as document review resulted in dissonance, however: data pointed towards poor class attendance as well as a perturbing lack of progress in terms of reading comprehension and speed.

  15. Experiences using DAKOTA stochastic expansion methods in computational simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Templeton, Jeremy Alan; Ruthruff, Joseph R.

    2012-01-01

    Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
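
    For orientation only, a one-dimensional polynomial chaos expansion fitted by least squares is the simplest member of the "stochastic expansion" family of surrogates; the Python sketch below is a generic illustration, not DAKOTA's implementation, and the model function is invented.

      # Minimal 1-D polynomial chaos sketch: fit probabilists' Hermite coefficients
      # to samples of a model with a standard-normal input, then read off mean and
      # variance from the coefficients. Generic illustration, not DAKOTA code.
      import numpy as np
      from numpy.polynomial.hermite_e import hermevander
      from math import factorial

      def model(x):                      # stand-in for an expensive simulation
          return np.exp(0.3 * x) + 0.1 * x**2

      rng = np.random.default_rng(0)
      xi = rng.standard_normal(200)      # samples of the standard-normal input
      y = model(xi)

      degree = 6
      A = hermevander(xi, degree)        # basis He_0 .. He_6 evaluated at samples
      coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

      mean = coeffs[0]                                        # E[He_k] = 0 for k >= 1
      var = sum(c**2 * factorial(k) for k, c in enumerate(coeffs) if k > 0)
      print(f"PCE mean ~ {mean:.4f}, variance ~ {var:.4f}")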

  16. Studies on defect evolution in steels: experiments and computer simulations

    International Nuclear Information System (INIS)

    Sundar, C.S.

    2011-01-01

    In this paper, we present the results of our on-going studies on steels that are being carried out with a view to develop radiation resistant steels. The focus is on the use of nano-dispersoids in alloys towards the suppression of void formation and eventual swelling under irradiation. Results on the nucleation and growth of TiC precipitates in Ti modified austenitic steels and investigations on nano Yttria particles in Fe - a model oxide dispersion ferritic steel will be presented. The experimental methods of ion beam irradiation and positron annihilation spectroscopy have been used to elucidate the role of minor alloying elements on swelling behaviour. Computer simulation of defect processes have been carried out using ab-initio methods, molecular dynamics and Monte Carlo simulations. Our perspectives on addressing the multi-scale phenomena of defect processes leading to radiation damage, through a judicious combination of experiments and simulations, would be presented. (author)

  17. Alkali Rydberg states in electromagnetic fields: computational physics meets experiment

    International Nuclear Information System (INIS)

    Krug, A.

    2001-11-01

    We study highly excited hydrogen and alkali atoms ('Rydberg states') under the influence of a strong microwave field. As the external frequency is comparable to the highly excited electron's classical Kepler frequency, the external field induces a strong coupling of many different quantum mechanical energy levels and finally leads to the ionization of the outer electron. While periodically driven atomic hydrogen can be seen as a paradigm of quantum chaotic motion in an open (decaying) quantum system, the presence of the non-hydrogenic atomic core - which unavoidably has to be treated quantum mechanically - entails some complications. Indeed, laboratory experiments show clear differences in the ionization dynamics of microwave driven hydrogen and non-hydrogenic Rydberg states. In the first part of this thesis, a machinery is developed that allows for numerical experiments on alkali and hydrogen atoms under precisely identical laboratory conditions. Due to the high density of states in the parameter regime typically explored in laboratory experiments, such simulations are only possible with the most advanced parallel computing facilities, in combination with an efficient parallel implementation of the numerical approach. The second part of the thesis is devoted to the results of the numerical experiment. We identify and describe significant differences and surprising similarities in the ionization dynamics of atomic hydrogen as compared to alkali atoms, and give account of the relevant frequency scales that distinguish hydrogenic from non-hydrogenic ionization behavior. Our results necessitate a reinterpretation of the experimental results so far available, and solve the puzzle of a distinct ionization behavior of periodically driven hydrogen and non-hydrogenic Rydberg atoms - an unresolved question for about one decade. Finally, microwave-driven Rydberg states will be considered as prototypes of open, complex quantum systems that exhibit a complicated temporal decay

  18. Interdisciplinary Team-Teaching Experience for a Computer and Nuclear Energy Course for Electrical and Computer Engineering Students

    Science.gov (United States)

    Kim, Charles; Jackson, Deborah; Keiller, Peter

    2016-01-01

    A new, interdisciplinary, team-taught course has been designed to educate students in Electrical and Computer Engineering (ECE) so that they can respond to global and urgent issues concerning computer control systems in nuclear power plants. This paper discusses our experience and assessment of the interdisciplinary computer and nuclear energy…

  19. Gravitational Acceleration Effects on Macrosegregation: Experiment and Computational Modeling

    Science.gov (United States)

    Leon-Torres, J.; Curreri, P. A.; Stefanescu, D. M.; Sen, S.

    1999-01-01

    Experiments were performed under terrestrial gravity (1 g) and during parabolic flights (10^-2 g) to study the solidification and macrosegregation patterns of Al-Cu alloys. Alloys having 2% and 5% Cu were solidified against a chill at two different cooling rates. Microscopic and electron microprobe characterization was used to produce microstructural and macrosegregation maps. In all cases positive segregation occurred next to the chill because of shrinkage flow, as expected. This positive segregation was higher in the low-g samples, apparently because of the higher heat transfer coefficient. A 2-D computational model was used to explain the experimental results. The continuum formulation was employed to describe the macroscopic transport of mass, energy, and momentum associated with the solidification phenomena for a two-phase system. The model considers that liquid flow is driven by thermal and solutal buoyancy, and by solidification shrinkage. The solidification event was divided into two stages. In the first one, the liquid containing freely moving equiaxed grains was described through the relative viscosity concept. In the second stage, when a fixed dendritic network was formed after dendritic coherency, the mushy zone was treated as a porous medium. The macrosegregation maps and the cooling curves obtained during the experiments were used for validation of the solidification and segregation model. The model can explain the solidification and macrosegregation patterns and the differences between low- and high-gravity results.

  20. Access control infrastructure for on-demand provisioned virtualised infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2011-01-01

    Cloud technologies are emerging as a new way of provisioning virtualised computing and infrastructure services on-demand for collaborative projects and groups. Security in provisioning virtual infrastructure services should address two general aspects: supporting secure operation of the provisioning

  1. Computationally mediated experiments: the next frontier in microscopy

    International Nuclear Information System (INIS)

    Zaluzec, N.J.

    2002-01-01

    Full text: It is reasonably safe to say that most of the simple experimental techniques that can be employed in microscopy have been well documented and exploited over the last 20 years. Thus, if we are interested in extending the range and diversity of problems that we will be dealing with in the next decade, then we will have to take up challenges which heretofore were considered beyond the realm of routine work. Given the ever-growing tendency to add computational resources to our instruments, it is clear that the next breakthrough will be directly tied to how well we can effectively tie these two realms together. In the past we have used computers simply to speed up our experiments, but in the coming decade the key will be to realize that once an effective interface of instrumentation and computational tools is developed, we must change the way in which we design our experiments. This means re-examining how we do experiments so that measurements are made not just quickly, but precisely, and so that the information measured is maximized and the data therein can be 'mined' for content which might have been missed in the past. As an example of this, consider the experimental technique of position-resolved diffraction, which is currently being developed for the study of nanoscale magnetic structures using ANL's Advanced Analytical Electron Microscope. Here a focused electron probe is sequentially scanned across a two-dimensional field of view of a thin specimen and at each point on the specimen a two-dimensional electron diffraction pattern is acquired and stored. Analysis of the spatial variation in the electron diffraction pattern allows a researcher to study the subtle changes resulting from microstructural differences such as ferro- and electro-magnetic domain formation and motion. There is, however, a severe limitation in this technique - namely, its need to store and dynamically process large data sets, preferably in near real time. A minimal scoping measurement would involve

  2. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi

    2017-08-01

    using faulty components as the infrastructure expands or contracts. To demonstrate the feasibility of such a theoretical SCS, an organized suite of experiments was conducted comparing two SMC prototypes against MPI (Message Passing Interface) using a naive dense matrix multiplication application. Consistently better SMC performance results are reported.

  3. Railway infrastructure security

    CERN Document Server

    Sforza, Antonio; Vittorini, Valeria; Pragliola, Concetta

    2015-01-01

    This comprehensive monograph addresses crucial issues in the protection of railway systems, with the objective of enhancing the understanding of railway infrastructure security. Based on analyses by academics, technology providers, and railway operators, it explains how to assess terrorist and criminal threats, design countermeasures, and implement effective security strategies. In so doing, it draws upon a range of experiences from different countries in Europe and beyond. The book is the first to be devoted entirely to this subject. It will serve as a timely reminder of the attractiveness of the railway infrastructure system as a target for criminals and terrorists and, more importantly, as a valuable resource for stakeholders and professionals in the railway security field aiming to develop effective security based on a mix of methodological, technological, and organizational tools. Besides researchers and decision makers in the field, the book will appeal to students interested in critical infrastructur...

  4. [The significance of the experience in organizing medical support for the troops during the war years for the development of the modern military medical infrastructure].

    Science.gov (United States)

    Pogodin, Iu I; Gurov, A N

    1995-05-01

    At present, when combat activities are being carried out on the territory of Russia, namely in Chechnya, it is very important to solve the problem of improving the infrastructure of the medical service as the basis of a territorial system of medical support of troops. That is why we are looking at the experience of medical support of troops in the period of the Great Patriotic War in order to determine the basic characteristic features of the military medical infrastructure (MMI) of that time. Using the experience of medical support in the period of the Great Patriotic War, it is necessary to pay the main attention to studying the medico-geographical aspects of the deployment of the Armed Forces over the whole territory of the country, the state of the health service system (taking into account its reformation), the influence of natural, socio-economic and ecological factors of different regions upon the health of servicemen, the organization of medical support of troops, the proliferation of infectious and parasitic diseases, local resources and the availability of medication materials, medical supplies, equipment and technique, as well as other indices which must be taken into consideration in routine situations or during disaster relief. All this information is very valuable for the process of the formation of an adequate MMI in the zone of responsibility of medical support of troops.

  5. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  6. Cloud Infrastructure Security

    OpenAIRE

    Velev , Dimiter; Zlateva , Plamena

    2010-01-01

    Part 4: Security for Clouds; International audience; Cloud computing can help companies accomplish more by eliminating the physical bonds between an IT infrastructure and its users. Users can purchase services from a cloud environment that could allow them to save money and focus on their core business. At the same time certain concerns have emerged as potential barriers to rapid adoption of cloud services such as security, privacy and reliability. Usually the information security professiona...

  7. Evaluative Infrastructures

    DEFF Research Database (Denmark)

    Kornberger, Martin; Pflueger, Dane; Mouritsen, Jan

    2017-01-01

    Platform organizations such as Uber, eBay and Airbnb represent a growing disruptive phenomenon in contemporary capitalism, transforming economic organization, the nature of work, and the distribution of wealth. This paper investigates the accounting practices that underpin this new form of organizing, and in doing so confronts a significant challenge within the accounting literature: the need to escape what Hopwood (1996) describes as its “hierarchical consciousness”. In order to do so, this paper develops the concept of evaluative infrastructure, which describes accounting practices...

  8. Ritual Infrastructure

    DEFF Research Database (Denmark)

    Sjørslev, Inger

    2017-01-01

    This article compares the ways in which two different religions in Brazil generate roads to certainty through objectification, one through gods, the other through banknotes. The Afro-Brazilian religion Candomblé provides a road to certainty based on cosmological ideas about gods whose presence ... within urban life. There is a certain parallel between these different locations and the difference in ritual roads to certainty in the two religions. The article draws out connections between different levels of infrastructure – material, spatial and ritual. The comparison between the two religions ...

  9. DIRAC distributed computing services

    International Nuclear Information System (INIS)

    Tsaregorodtsev, A

    2014-01-01

    The DIRAC Project provides a general-purpose framework for building distributed computing systems. It is now used in several HEP and astrophysics experiments as well as by user communities in other scientific domains. There is large interest from smaller user communities in having a simple tool like DIRAC for accessing grid and other types of distributed computing resources. However, small experiments cannot afford to install and maintain dedicated services. Therefore, several grid infrastructure projects are providing DIRAC services for their respective user communities. These services are used for user tutorials as well as to help port applications to the grid for practical day-to-day work. The services typically give access to several grid infrastructures as well as to standalone computing clusters accessible to the target user communities. In the paper we present the experience of running DIRAC services provided by the France-Grilles NGI and other national grid infrastructure projects.

  10. Building laboratory infrastructure to support scale-up of HIV/AIDS treatment, care, and prevention: in-country experience.

    Science.gov (United States)

    Abimiku, Alash'le G

    2009-06-01

    An unprecedented influx of funds and support through large programs such as the Global Fund for AIDS, Malaria and Tuberculosis and the World Health Organization's and President's Emergency Plan for AIDS Relief (PEPFAR) has made it possible for more than 1 million persons in resource-limited settings to access AIDS treatment and several million more to be in care and prevention programs. Nevertheless, there remain major challenges that prevent AIDS drugs and care from reaching many more in need, especially in rural settings. The roll-out of a high-quality treatment, care, and prevention program depends on an effective and reliable laboratory infrastructure. This article presents a strategy used by the Institute of Human Virology (IHV)-University of Maryland and its affiliate IHV-Nigeria to establish a multifaceted, integrated tier laboratory program to support a PEPFAR-funded scale-up of its AIDS Care Treatment in Nigeria program, in collaboration with the Centers for Disease Control and Prevention and the Nigerian government, as a possible model for overcoming a key challenge that faces several resource-limited countries trying to roll out and scale-up their HIV/AIDS treatment, care, and prevention program.

  11. Interactive Quantum Mechanics Quantum Experiments on the Computer

    CERN Document Server

    Brandt, S; Dahmen, H.D

    2011-01-01

    Extra materials are available on extras.springer.com. INTERACTIVE QUANTUM MECHANICS allows students to perform their own quantum-physics experiments on their computer, in vivid 3D color graphics. Topics covered include: harmonic waves and wave packets; free particles as well as bound states and scattering in various potentials in one and three dimensions (both stationary and time dependent); two-particle systems and coupled harmonic oscillators; distinguishable and indistinguishable particles; coherent and squeezed states in time-dependent motion; quantized angular momentum; spin and magnetic resonance; hybridization. For the present edition the physics scope has been widened appreciably. Moreover, INTERQUANTA can now produce user-defined movies of quantum-mechanical situations. Movies can be viewed directly and also be saved to be shown later in any browser. Sections on spec...

  12. CERN Infrastructure Evolution

    CERN Document Server

    Bell, Tim

    2012-01-01

    The CERN Computer Centre is reviewing strategies for optimizing the use of the existing infrastructure in the future, in the likely scenario that any extension will be remote from CERN, and in the light of the way other large facilities are operated today. Over the past six months, CERN has been investigating modern and widely used tools and procedures for virtualisation, clouds and fabric management in order to reduce operational effort, increase agility and support unattended remote computer centres. This presentation will give details on the project’s motivations, current status and areas for future investigation.

  13. Computer-simulated experiments and computer games: a method of design analysis

    Directory of Open Access Journals (Sweden)

    Jerome J. Leary

    1995-12-01

    Full Text Available Through the new modularization of the undergraduate science degree at the University of Brighton, larger numbers of students are choosing to take some science modules which include an amount of laboratory practical work. Indeed, within energy studies, the fuels and combustion module, for which the computer simulations were written, has seen a fourfold increase in student numbers from twelve to around fifty. Fitting out additional laboratories with new equipment to accommodate this increase presented problems: the laboratory space did not exist; fitting out the laboratories with new equipment would involve a relatively large capital spend per student for equipment that would be used infrequently; and, because some of the experiments use inflammable liquids and gases, additional staff would be needed for laboratory supervision.

  14. CERN printing infrastructure

    International Nuclear Information System (INIS)

    Otto, R; Sucik, J

    2008-01-01

    For many years CERN had a very sophisticated print server infrastructure [13] which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation is quite different: we have a much more homogeneous network infrastructure, where TCP/IP is used everywhere, and we have fewer printer models, almost all of which work using current standards (i.e. they all provide PostScript drivers). This change gave us the opportunity to review the printing architecture with the aim of simplifying the infrastructure in order to achieve full automation of the service. The new infrastructure offers both an LPD service exposing print queues to Linux and Mac OS X computers and native printing for Windows-based clients. Printer driver distribution is automatic and native on Windows and automated by custom mechanisms on Linux, where the appropriate Foomatic drivers are configured. The process of printer registration and queue creation is also completely automated, following the printer registration in the network database. At the end of 2006 we moved all (∼1200) CERN printers and all users' connections at CERN to the new service. This paper describes the new architecture and summarizes the process of migration
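
    As a rough illustration of the kind of automation described above (not CERN's actual implementation), the sketch below reconciles print queues against a printer registration database; the database query, queue listing and queue management calls are hypothetical stand-ins for the site-specific pieces.

# Hedged sketch: a reconciliation loop that keeps print queues in step with a
# printer registration database. `fetch_registered_printers`, `list_queues`,
# `create_queue` and `remove_queue` are invented placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Printer:
    name: str
    model: str
    driver: str  # e.g. a Foomatic/PostScript driver identifier

def fetch_registered_printers() -> dict:
    """Stand-in for a query against the network registration database."""
    return {
        "bldg40-2-corridor": Printer("bldg40-2-corridor", "LaserJet", "Postscript-HP"),
        "bldg513-cc": Printer("bldg513-cc", "OfficeJet", "Postscript-Generic"),
    }

def list_queues() -> set:
    """Stand-in for asking the print server which queues already exist."""
    return {"bldg40-2-corridor", "old-retired-printer"}

def create_queue(p: Printer) -> None:
    print(f"creating queue {p.name} with driver {p.driver}")

def remove_queue(name: str) -> None:
    print(f"removing stale queue {name}")

def reconcile() -> None:
    registered = fetch_registered_printers()
    existing = list_queues()
    for name in registered.keys() - existing:   # registered but not yet served
        create_queue(registered[name])
    for name in existing - registered.keys():   # served but no longer registered
        remove_queue(name)

if __name__ == "__main__":
    reconcile()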

  15. CERN printing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Otto, R; Sucik, J [CERN, Geneva (Switzerland)], E-mail: Rafal.Otto@cern.ch, E-mail: Juraj.Sucik@cern.ch

    2008-07-15

    For many years CERN had a very sophisticated print server infrastructure [13] which supported several different protocols (AppleTalk, IPX and TCP/IP) and many different printing standards. Today's situation is quite different: we have a much more homogeneous network infrastructure, where TCP/IP is used everywhere, and we have fewer printer models, almost all of which work using current standards (i.e. they all provide PostScript drivers). This change gave us the opportunity to review the printing architecture with the aim of simplifying the infrastructure in order to achieve full automation of the service. The new infrastructure offers both an LPD service exposing print queues to Linux and Mac OS X computers and native printing for Windows-based clients. Printer driver distribution is automatic and native on Windows and automated by custom mechanisms on Linux, where the appropriate Foomatic drivers are configured. The process of printer registration and queue creation is also completely automated, following the printer registration in the network database. At the end of 2006 we moved all (∼1200) CERN printers and all users' connections at CERN to the new service. This paper describes the new architecture and summarizes the process of migration.

  16. Making green infrastructure healthier infrastructure

    Directory of Open Access Journals (Sweden)

    Mare Lõhmus

    2015-11-01

    Full Text Available Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens’ quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable, side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help insure that green and blue infrastructure achieves full potential for health promotion.

  17. Making green infrastructure healthier infrastructure.

    Science.gov (United States)

    Lõhmus, Mare; Balbus, John

    2015-01-01

    Increasing urban green and blue structure is often pointed out to be critical for sustainable development and climate change adaptation, which has led to the rapid expansion of greening activities in cities throughout the world. This process is likely to have a direct impact on the citizens' quality of life and public health. However, alongside numerous benefits, green and blue infrastructure also has the potential to create unexpected, undesirable, side-effects for health. This paper considers several potential harmful public health effects that might result from increased urban biodiversity, urban bodies of water, and urban tree cover projects. It does so with the intent of improving awareness and motivating preventive measures when designing and initiating such projects. Although biodiversity has been found to be associated with physiological benefits for humans in several studies, efforts to increase the biodiversity of urban environments may also promote the introduction and survival of vector or host organisms for infectious pathogens with resulting spread of a variety of diseases. In addition, more green connectivity in urban areas may potentiate the role of rats and ticks in the spread of infectious diseases. Bodies of water and wetlands play a crucial role in the urban climate adaptation and mitigation process. However, they also provide habitats for mosquitoes and toxic algal blooms. Finally, increasing urban green space may also adversely affect citizens allergic to pollen. Increased awareness of the potential hazards of urban green and blue infrastructure should not be a reason to stop or scale back projects. Instead, incorporating public health awareness and interventions into urban planning at the earliest stages can help insure that green and blue infrastructure achieves full potential for health promotion.

  18. The Fermilab data storage infrastructure

    International Nuclear Information System (INIS)

    Jon A Bakken et al.

    2003-01-01

    Fermilab, in collaboration with the DESY laboratory in Hamburg, Germany, has created a petabyte-scale data storage infrastructure to meet the requirements of experiments to store and access large data sets. The Fermilab data storage infrastructure consists of the following major storage and data transfer components: the Enstore mass storage system, the dCache distributed data cache, and FTP and GridFTP for primarily external data transfers. This infrastructure provides a data throughput sufficient for transferring data from the experiments' data acquisition systems. It also allows access to data in the Grid framework

  19. On-Line Digital Computer Applications in Gas Chromatography, An Undergraduate Analytical Experiment

    Science.gov (United States)

    Perone, S. P.; Eagleston, J. F.

    1971-01-01

    Presented are some descriptive background materials and the directions for an experiment which provides an introduction to on-line computer instrumentation. Assumes students are familiar with the Purdue Real-Time Basic (PRTB) laboratory computer system. (PR)

  20. Students experiences with collaborative learning in asynchronous computer-supported collaborative learning environments.

    NARCIS (Netherlands)

    Dewiyanti, Silvia; Brand-Gruwel, Saskia; Jochems, Wim; Broers, Nick

    2008-01-01

    Dewiyanti, S., Brand-Gruwel, S., Jochems, W., & Broers, N. (2007). Students experiences with collaborative learning in asynchronous computer-supported collaborative learning environments. Computers in Human Behavior, 23, 496-514.

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...

  2. An Experiment Support Computer for Externally-Based ISS Payloads

    Science.gov (United States)

    Sell, S. W.; Chen, S. E.

    2002-01-01

    The Experiment Support Facility - External (ESF-X) is a computer designed for general experiment use aboard the International Space Station (ISS) Truss Site locations. The ESF-X design is highly modular and uses commercial off-the-shelf (COTS) components wherever possible to allow for maximum reconfigurability to meet the needs of almost any payload. The ESF-X design has been developed with the EXPRESS Pallet as the target location and the University of Colorado's Micron Accuracy Deployment Experiment (MADE) as the anticipated first payload and capability driver. Thus the design presented here is configured for structural dynamics and control as well as optics experiments. The ESF-X is a small (58.4 x 48.3 x 17.8") steel and copper enclosure which houses a 14 slot VME card chassis and power supply. All power and data connections are made through a single panel on the enclosure so that only one side of the enclosure must be accessed for nominal operation and servicing activities. This feature also allows convenient access during integration and checkout activities. Because it utilizes a standard VME backplane, ESF-X can make use of the many commercial boards already in production for this standard. Since the VME standard is also heavily used in industrial and military applications, many ruggedized components are readily available. The baseline design includes commercial processors, Ethernet, MIL-STD-1553, and mass storage devices. The main processor board contains four TI 6701 DSPs with a PowerPC based controller. Other standard functions, such as analog-to-digital, digital-to-analog, motor driver, temperature readings, etc., are handled on industry-standard IP modules. Carrier cards, which hold 4 IP modules each, are placed in slots in the VME backplane. A unique, custom IP carrier board with radiation event detectors allows non RAD-hard components to be used in an extended exposure environment. Thermal control is maintained by conductive cooling through the copper

  3. The co-evolution of alternative fuel infrastructure and vehicles. A study of the experience of Argentina with compressed natural gas

    International Nuclear Information System (INIS)

    Collantes, Gustavo; Melaina, Marc W.

    2011-01-01

    In a quest for strategic and environmental benefits, developed countries have for many years been trying to increase the share of alternative fuels in their transportation fuel mixes, though with very little success. In this paper, we examine the experience of Argentina with compressed natural gas. We conducted interviews with a wide range of stakeholders and econometrically analyzed data collected in Argentina to investigate the economic, political, and other factors that determined the high rate of adoption of this fuel. A central objective of this research was to identify lessons that could be useful to developed countries in their efforts to deploy alternative fuel vehicles. We find that fuel price regulation was a significant determinant of the adoption of compressed natural gas, while, contrary to expectations, government financing of refueling infrastructure was minimal. (author)

  4. Smart Cyber Infrastructure for Big Data processing

    NARCIS (Netherlands)

    Makkes, M.X.; Cushing, R.; Oprescu, A.M.; Koning, R.; Grosso, P.; Meijer, R.J.; Laat, C. de

    2014-01-01

    The landscape of research cyber infrastructure is rapidly changing. There is a move towards virtualized and programmable infrastructure. The cloud paradigm enables the use of computing resources in different places and allows for optimizing workflows in either bringing computing to the data or the

  5. Experience of computed tomographic myelography and discography in cervical problem

    Energy Technology Data Exchange (ETDEWEB)

    Nakatani, Shigeru; Yamamoto, Masayuki; Uratsuji, Masaaki; Suzuki, Kunio; Matsui, Eigo [Hyogo Prefectural Awaji Hospital, Sumoto, Hyogo (Japan); Kurihara, Akira

    1983-06-01

    CTM (computed tomographic myelography) was performed on 15 cases of cervical lesions, and in 5 of them CTD (computed tomographic discography) was also performed. CTM revealed the intervertebral state and, in combination with CTD, provided more accurate information. The combined method of CTM and CTD was useful for soft disc herniation.

  6. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general-purpose computer system, THESEUS, is described; its initial use has been for magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted to a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  7. Monte Carlo in radiotherapy: experience in a distributed computational environment

    Science.gov (United States)

    Caccia, B.; Mattia, M.; Amati, G.; Andenna, C.; Benassi, M.; D'Angelo, A.; Frustagli, G.; Iaccarino, G.; Occhigrossi, A.; Valentini, S.

    2007-06-01

    New technologies in cancer radiotherapy require a more accurate computation of the dose delivered in the radiotherapy treatment plan, and it is important to integrate sophisticated mathematical models and advanced computing knowledge into the treatment planning (TP) process. We present some results on using Monte Carlo (MC) codes in dose calculation for treatment planning. A distributed computing resource located in the Technologies and Health Department of the Italian National Institute of Health (ISS), along with other computer facilities (CASPUR - Inter-University Consortium for the Application of Super-Computing for Universities and Research), has been used to perform a complete MC simulation to compute the dose distribution on phantoms irradiated with a radiotherapy accelerator. Using the BEAMnrc and GEANT4 MC-based codes we calculated dose distributions on a plain water phantom and an air/water phantom. Experimental and calculated dose values agreed to within ±2% (for depths between 5 mm and 130 mm), both in PDD (percentage depth dose) curves and in transverse sections of the phantom. We consider these results a first step towards a system suitable for medical physics departments to simulate a complete treatment plan using remote computing facilities for MC simulations.
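
    A minimal sketch of the kind of comparison reported above: deriving a percentage depth dose curve from a simulated depth-dose profile and checking it against measured values with a 2% tolerance. The numbers are invented and the code is not taken from the ISS/CASPUR work.

# Illustrative sketch only: compute a percentage depth dose (PDD) curve from a
# simulated depth-dose profile and compare it point-by-point against
# measurements with a 2% tolerance. The data below are made up.
def pdd(depth_dose):
    """Normalise a depth-dose profile to its maximum, in percent."""
    dmax = max(depth_dose.values())
    return {z: 100.0 * d / dmax for z, d in depth_dose.items()}

def out_of_tolerance(calculated, measured, tol_percent=2.0):
    """Return the depths (mm) where |calculated - measured| exceeds tol_percent."""
    return [z for z in measured if abs(calculated[z] - measured[z]) > tol_percent]

if __name__ == "__main__":
    simulated = {5: 0.97, 15: 1.00, 50: 0.86, 100: 0.65, 130: 0.55}      # arbitrary units
    measured_pdd = {5: 96.8, 15: 100.0, 50: 86.4, 100: 64.7, 130: 55.3}  # percent
    failures = out_of_tolerance(pdd(simulated), measured_pdd)
    print("all points within 2%" if not failures else f"out of tolerance at {failures} mm")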

  8. Measures of agreement between computation and experiment: validation metrics.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Oberkampf, William Louis

    2005-08-01

    With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables and sharpen assessment of computational accuracy. This type of measure has been recently referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric and also features that should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
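
    The following sketch illustrates the basic idea of a confidence-interval-based validation metric, not the specific metrics constructed in the report: it estimates the mean model-minus-experiment error over a set of control points and attaches an interval to it. For brevity a normal (large-sample) interval is used where the paper's metrics rely on t-based intervals and on interpolation or regression of the experimental data.

# Hedged sketch of a confidence-interval-based comparison between model
# predictions and replicated experimental measurements. The data are invented.
from statistics import NormalDist, mean, stdev

def validation_interval(predictions, experiments, confidence=0.95):
    """Confidence interval on the mean model-minus-experiment error.

    predictions[i] is the model output at control point i;
    experiments[i] is a list of replicate measurements at that point.
    """
    errors = [p - mean(reps) for p, reps in zip(predictions, experiments)]
    e_bar = mean(errors)
    half_width = NormalDist().inv_cdf(0.5 + confidence / 2) * stdev(errors) / len(errors) ** 0.5
    return e_bar - half_width, e_bar + half_width

if __name__ == "__main__":
    model = [10.2, 11.9, 14.1, 16.3]                                      # predictions at 4 settings
    data = [[10.0, 10.3], [11.5, 11.8], [13.8, 14.0], [15.9, 16.1]]       # replicate measurements
    lo, hi = validation_interval(model, data)
    print(f"mean error in [{lo:.3f}, {hi:.3f}] at 95% confidence")
    # If the interval excludes 0, the model-experiment disagreement is
    # resolvable above the scatter of the measurements (illustrative reading).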

  9. Technology Trends in Cloud Infrastructure

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Cloud computing is growing at an exponential pace with an increasing number of workloads being hosted in mega-scale public clouds such as Microsoft Azure. Designing and operating such large infrastructures requires not only a significant capital spend for provisioning datacenters, servers, networking and operating systems, but also R&D investments to capitalize on disruptive technology trends and emerging workloads such as AI/ML. This talk will cover the various infrastructure innovations being implemented in large scale public clouds and opportunities/challenges ahead to deliver the next generation of scale computing. About the speaker Kushagra Vaid is the general manager and distinguished engineer for Hardware Infrastructure in the Microsoft Azure division. He is accountable for the architecture and design of compute and storage platforms, which are the foundation for Microsoft’s global cloud-scale services. He and his team have successfully delivered four generations of hyperscale cloud hardwar...

  10. Assessing Pre-Service Teachers' Computer Phobia Levels in Terms of Gender and Experience, Turkish Sample

    Science.gov (United States)

    Ursavas, Omer Faruk; Karal, Hasan

    2009-01-01

    In this study it is aimed to determine the level of pre-service teachers' computer phobia. Whether or not computer phobia meaningfully varies statistically according to gender and computer experience has been tested in the study. The study was performed on 430 pre-service teachers at the Education Faculty in Rize/Turkey. Data in the study were…

  11. Educational Computer Use in Leisure Contexts: A Phenomenological Study of Adolescents' Experiences at Internet Cafes

    Science.gov (United States)

    Cilesiz, Sebnem

    2009-01-01

    Computer use is a widespread leisure activity for adolescents. Leisure contexts, such as Internet cafes, constitute specific social environments for computer use and may hold significant educational potential. This article reports a phenomenological study of adolescents' experiences of educational computer use at Internet cafes in Turkey. The…

  12. Dynamic Collaboration Infrastructure for Hydrologic Science

    Science.gov (United States)

    Tarboton, D. G.; Idaszak, R.; Castillo, C.; Yi, H.; Jiang, F.; Jones, N.; Goodall, J. L.

    2016-12-01

    Data and modeling infrastructure is becoming increasingly accessible to water scientists. HydroShare is a collaborative environment that currently offers water scientists the ability to access modeling and data infrastructure in support of data intensive modeling and analysis. It supports the sharing of and collaboration around "resources" which are social objects defined to include both data and models in a structured standardized format. Users collaborate around these objects via comments, ratings, and groups. HydroShare also supports web services and cloud based computation for the execution of hydrologic models and analysis and visualization of hydrologic data. However, the quantity and variety of data and modeling infrastructure available that can be accessed from environments like HydroShare is increasing. Storage infrastructure can range from one's local PC to campus or organizational storage to storage in the cloud. Modeling or computing infrastructure can range from one's desktop to departmental clusters to national HPC resources to grid and cloud computing resources. How does one orchestrate this vast number of data and computing infrastructure without needing to correspondingly learn each new system? A common limitation across these systems is the lack of efficient integration between data transport mechanisms and the corresponding high-level services to support large distributed data and compute operations. A scientist running a hydrology model from their desktop may require processing a large collection of files across the aforementioned storage and compute resources and various national databases. To address these community challenges a proof-of-concept prototype was created integrating HydroShare with RADII (Resource Aware Data-centric collaboration Infrastructure) to provide software infrastructure to enable the comprehensive and rapid dynamic deployment of what we refer to as "collaborative infrastructure." In this presentation we discuss the

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  14. Application verification research of cloud computing technology in the field of real time aerospace experiment

    Science.gov (United States)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    According to the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: the cloud computing platform can be applied to compute-intensive aerospace experiment workloads, while for I/O-intensive workloads the traditional physical machine is recommended.

  15. Comparing Computer Game and Traditional Lecture Using Experience Ratings from High and Low Achieving Students

    Science.gov (United States)

    Grimley, Michael; Green, Richard; Nilsen, Trond; Thompson, David

    2012-01-01

    Computer games are purported to be effective instructional tools that enhance motivation and improve engagement. The aim of this study was to investigate how tertiary student experiences change when instruction was computer game based compared to lecture based, and whether experiences differed between high and low achieving students. Participants…

  16. Agile infrastructure monitoring

    International Nuclear Information System (INIS)

    Andrade, P; Ascenso, J; Fedorko, I; Fiorini, B; Paladin, M; Pigueiras, L; Santos, M

    2014-01-01

    At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists in a new 'shared monitoring architecture' which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.

  17. Infrastructuring for Quality

    DEFF Research Database (Denmark)

    Bossen, Claus; Danholt, Peter; Ubbesen, Morten Bonde

    2015-01-01

    Reimbursement and budgeting constitute a central infrastructural element in most secondary healthcare sectors. In Denmark, Diagnosis-Related Groups (DRG) function as the core element for budgeting and for encouraging increases in activity and effectivity. However, DRG is known to potentially have adverse effects by encouraging hospitals to maximize reimbursement at the expense of patients. To counter this, one Danish region has initiated an experiment involving nine hospital departments whose normal budgeting and reimbursement based on DRG is put on hold. Instead, they have been asked to develop indicators for quality in treatment to guide and govern their performance, in order to investigate whether this may generate a new performance measurement infrastructure that will improve the quality of healthcare. The project is entitled “New governance in the patient’s perspective”.

  18. California Hydrogen Infrastructure Project

    Energy Technology Data Exchange (ETDEWEB)

    Heydorn, Edward C

    2013-03-12

    Air Products and Chemicals, Inc. has completed a comprehensive, multiyear project to demonstrate a hydrogen infrastructure in California. The specific primary objective of the project was to demonstrate a model of a real-world retail hydrogen infrastructure and acquire sufficient data within the project to assess the feasibility of achieving the nation's hydrogen infrastructure goals. The project helped to advance hydrogen station technology, including the vehicle-to-station fueling interface, through consumer experiences and feedback. By encompassing a variety of fuel cell vehicles, customer profiles and fueling experiences, this project was able to obtain a complete portrait of real market needs. The project also opened its stations to other qualified vehicle providers at the appropriate time to promote widespread use and gain even broader public understanding of a hydrogen infrastructure. The project engaged major energy companies to provide a fueling experience similar to traditional gasoline station sites to foster public acceptance of hydrogen. Work over the course of the project was focused in multiple areas. With respect to the equipment needed, technical design specifications (including both safety and operational considerations) were written, reviewed, and finalized. After finalizing individual equipment designs, complete station designs were started including process flow diagrams and systems safety reviews. Material quotes were obtained, and in some cases, depending on the project status and the lead time, equipment was placed on order and fabrication began. Consideration was given for expected vehicle usage and station capacity, standard features needed, and the ability to upgrade the station at a later date. In parallel with work on the equipment, discussions were started with various vehicle manufacturers to identify vehicle demand (short- and long-term needs). Discussions included identifying potential areas most suited for hydrogen fueling

  19. Computations, Complexity, Experiments, and the World Outside Physics

    International Nuclear Information System (INIS)

    Kadanoff, L.P

    2009-01-01

    Computer Models in the Sciences and Social Sciences. 1. Simulation and Prediction in Complex Systems: the Good, the Bad and the Awful. This lecture deals with the history of large-scale computer modeling, mostly in the context of the U.S. Department of Energy's sponsorship of modeling for weapons development and innovation in energy sources. 2. Complexity: Making a Splash, Breaking a Neck - The Making of Complexity in Physical Systems. For ages thinkers have been asking how complexity arises. The laws of physics are very simple; how come we are so complex? This lecture tries to approach this question by asking how complexity arises in physical fluids. 3. Forrester et al.: Social and Biological Model-Making. The partial collapse of the world's economy has raised the question of whether we could improve the performance of economic and social systems by a major effort on creating understanding via large-scale computer models. (author)

  20. Blueprint and First Experiences Bridging Hardware Virtualization and Global Grids for Advanced Scientific Computing: Designing and Building a Global Edge Services Framework (ESF) for OSG, EGEE, and LCG

    CERN Document Server

    Rana, A S; Vaniachine, A; Wurthwein, F; Foster, I; Sotomayor, B; Freeman, T

    2006-01-01

    We report on first experiences with building and operating an edge services framework (ESF) based on Xen virtual machines instantiated via the workspace service in Globus toolkit, and developed as a joint project between EGEE, LCG, and OSG. Many computing facilities are architected with their compute and storage clusters behind firewalls. Edge services (ES) are instantiated on a small set of gateways to provide access to these clusters via standard grid interfaces. Experience on EGEE, LCG, and OSG has shown that at least two issues are of critical importance when designing an infrastructure in support of ES. The first concerns ES configuration. It is impractical to assume that each virtual organization (VO) using a facility will employ the same ES configuration, or that different configurations will coexist easily. Even within a VO, it should be possible to run different versions of the same ES simultaneously. The second issue concerns resource allocation: it is essential that an ESF be able to effectively gu...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  3. Investigation of the computer experiences and attitudes of pre-service mathematics teachers: new evidence from Turkey.

    Science.gov (United States)

    Birgin, Osman; Catlioğlu, Hakan; Gürbüz, Ramazan; Aydin, Serhat

    2010-10-01

    This study aimed to investigate the experiences of pre-service mathematics (PSM) teachers with computers and their attitudes toward them. The Computer Attitude Scale, Computer Competency Survey, and Computer Use Information Form were administered to 180 Turkish PSM teachers. Results revealed that most PSM teachers used computers at home and at Internet cafes, and that their competency was generally intermediate and upper level. The study concludes that PSM teachers' attitudes about computers differ according to their years of study, computer ownership, level of computer competency, frequency of computer use, computer experience, and whether they had attended a computer-aided instruction course. However, computer attitudes were not affected by gender.

  4. Status of the Grid Computing for the ALICE Experiment in the Czech Republic

    International Nuclear Information System (INIS)

    Adamova, D; Hampl, J; Chudoba, J; Kouba, T; Svec, J; Mendez, Lorenzo P; Saiz, P

    2010-01-01

    The Czech Republic (CR) has been participating in the LHC Computing Grid project (LCG) since 2003, and a medium-sized Tier-2 centre has gradually been built in Prague, delivering computing services for national HEP experiment groups, including the ALICE project at the LHC. We present a brief overview of the computing activities and services being performed in the CR for the ALICE experiment.

  5. Computing Activities for the PANDA Experiment at FAIR

    NARCIS (Netherlands)

    Messchendorp, Johan; Gruntorad, J; Lokajicek, M

    2010-01-01

    The PANDA experiment at the future facility FAIR will provide valuable data for our present understanding of the strong interaction. In preparation for the experiments, large-scale simulations for design and feasibility studies are performed exploiting a new software framework, PandaROOT, which is

  6. Computer Simulation of Einstein-Podolsky-Rosen-Bohm Experiments

    NARCIS (Netherlands)

    De Raedt, H.; Michielsen, K.

    We review an event-based simulation approach which reproduces the statistical distributions of quantum physics experiments by generating detection events one-by-one according to an unknown distribution and without solving a wave equation. Einstein-Podolsky-Rosen-Bohm laboratory experiments are used

  7. Computer control and monitoring of neutral beam injectors on the 2XIIB CTR experiment at LLL

    International Nuclear Information System (INIS)

    Pollock, G.G.

    1975-01-01

    The original manual control system for the 12 neutral beam injectors on the 2XIIB machine is being integrated with a computer control system. This, in turn, is part of a multiple-computer network comprising the three computers involved in the operation and instrumentation of the 2XIIB experiment. The computer control system simplifies neutral beam operation and centralizes it at a single operating position. A special-purpose console utilizes computer-generated graphics and interactive function entry buttons to optimize the human/machine interface. Through the facilities of the computer network, a high-level control function will be implemented for the use of the experimenter in a remotely located experiment diagnostics area. In addition to controlling the injectors in normal operation, the computer system provides automatic conditioning of the injectors, bringing rebuilt units back to full energy output with minimum loss of useful life. The computer system also provides detailed archive data recording

  8. Critical infrastructure protection

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, F. [Canadian Electricity Association, Toronto, ON (Canada)

    2003-04-01

    The need to protect critical electrical infrastructure from terrorist attacks or other physical damage, including weather-related events, and the potential impact of computer viruses and other attacks on IT resources are discussed. Activities of the North American Electric Reliability Council (NERC) are highlighted, which seek to safeguard the North American bulk electric power system principally through the Information Sharing and Analysis Sector (ES-ISAC). ES-ISAC serves the electricity sector by facilitating communication between electric sector participants, the federal government and other critical infrastructure industries, disseminating threat indications, analyses and warnings, together with interpretations, to assist the industry in taking infrastructure protection actions. Attention is drawn to the numerous cyber incidents in recent years which, although they have resulted in no loss of service to electricity customers so far, in at least one instance (the January 25th SQL Slammer worm incident) resulted in degradation of service in a number of sectors, including financial, transportation and telecommunication services. The increasing frequency of cyber-based attacks, coupled with the industry's growing dependence on e-commerce and electronic controls, are good reasons to believe that critical infrastructure protection (CIP) poses a serious challenge to the industry's risk management practices. The Canadian Electricity Association (CEA) is an active participant in ES-ISAC and works cooperatively with a range of partners, such as the Edison Electric Institute and the American Public Power Association, to ensure coordination and effective protection program delivery for the electric power sector. The Early Warning System (EWS) developed by the CIP Working Group is one of the results of this cooperation. EWS uses the Internet, e-mail, web-enabled cell phones and Blackberry hand-held devices to deliver real-time threat information to members on a 24/7 basis. EWS

  9. Analysis of Computer Experiments with Multiple Noise Sources

    DEFF Research Database (Denmark)

    Dehlendorff, Christian; Kulahci, Murat; Andersen, Klaus Kaae

    2010-01-01

    In this paper we present a modeling framework for analyzing computer models with two types of variations. The paper is based on a case study of an orthopedic surgical unit, which has both controllable and uncontrollable factors. Our results show that this structure of variation can be modeled...
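
    A minimal sketch, not the authors' framework: with replicated runs of a stochastic simulation, between-run (uncontrollable) and within-run variation can be separated with a one-way variance-components calculation, as below. The toy "surgical unit" run and its noise magnitudes are invented for illustration.

# Minimal sketch: separate two noise sources in replicated runs of a
# stochastic simulation (between-run vs within-run variation) using a
# one-way variance-components calculation.
import random
from statistics import mean

def simulate_run(setting: float, n_patients: int = 50) -> list:
    """Toy stand-in for a discrete-event run: a run-level random effect
    plus patient-level noise around a controllable setting."""
    run_effect = random.gauss(0.0, 2.0)        # uncontrollable, per run
    return [setting + run_effect + random.gauss(0.0, 5.0) for _ in range(n_patients)]

def variance_components(runs):
    k, n = len(runs), len(runs[0])
    run_means = [mean(r) for r in runs]
    grand = mean(run_means)
    ms_between = n * sum((m - grand) ** 2 for m in run_means) / (k - 1)
    ms_within = sum((x - m) ** 2 for r, m in zip(runs, run_means) for x in r) / (k * (n - 1))
    return max((ms_between - ms_within) / n, 0.0), ms_within   # (between-run, within-run)

if __name__ == "__main__":
    random.seed(1)
    runs = [simulate_run(setting=30.0) for _ in range(20)]
    between, within = variance_components(runs)
    print(f"between-run variance ~ {between:.1f}, within-run variance ~ {within:.1f}")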

  10. Power-Efficient Computing: Experiences from the COSA Project

    Directory of Open Access Journals (Sweden)

    Daniele Cesini

    2017-01-01

    Full Text Available Energy consumption is today one of the most relevant issues in operating HPC systems for scientific applications. The use of unconventional computing systems is therefore of great interest for several scientific communities looking for a better tradeoff between time-to-solution and energy-to-solution. In this context, the performance assessment of processors with a high ratio of performance per watt is necessary to understand how to realize energy-efficient computing systems for scientific applications, using this class of processors. Computing On SOC Architecture (COSA) is a three-year project (2015–2017) funded by the Scientific Commission V of the Italian Institute for Nuclear Physics (INFN), which aims to investigate the performance and the total cost of ownership offered by computing systems based on commodity low-power Systems on Chip (SoCs) and high energy-efficient systems based on GP-GPUs. In this work, we present the results of the project, analyzing the performance of several scientific applications on several GPU- and SoC-based systems. We also describe the methodology we have used to measure energy performance and the tools we have implemented to monitor the power drained by applications while running.
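
    As a hedged illustration of the measurement idea (not the COSA tooling), the sketch below turns a trace of power samples into the energy-to-solution and mean-power figures used to compare platforms; the sample trace is invented.

# Illustrative sketch: integrate (time, power) samples to obtain
# energy-to-solution, and report time-to-solution and mean power.
def energy_to_solution(samples):
    """Trapezoidal integration of (time_s, power_W) samples, in joules."""
    energy = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        energy += 0.5 * (p0 + p1) * (t1 - t0)
    return energy

if __name__ == "__main__":
    trace = [(0, 8.0), (10, 12.5), (20, 12.4), (30, 12.6), (40, 8.1)]  # made-up SoC-like board
    e = energy_to_solution(trace)              # joules
    t = trace[-1][0] - trace[0][0]             # seconds
    print(f"time-to-solution: {t:.0f} s, energy-to-solution: {e:.0f} J, "
          f"mean power: {e / t:.1f} W")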

  11. Trainee Teachers' e-Learning Experiences of Computer Play

    Science.gov (United States)

    Wright, Pam

    2009-01-01

    Pam Wright highlights the role of technology in providing situated learning opportunities for preservice teachers to explore the role commercial computer games may have in primary education. In a study designed to assess the effectiveness of an online unit on gaming incorporated into a course on learning technologies, Wright found that thoughtful…

  12. COMPUTER-AIDED DATA ACQUISITION FOR COMBUSTION EXPERIMENTS

    Science.gov (United States)

    The article describes the use of computer-aided data acquisition techniques to aid the research program of the Combustion Research Branch (CRB) of the U.S. EPA's Air and Energy Engineering Research Laboratory (AEERL) in Research Triangle Park, NC, in particular on CRB's bench-sca...

  13. Music Teachers' Experiences in One-to-One Computing Environments

    Science.gov (United States)

    Dorfman, Jay

    2016-01-01

    Ubiquitous computing scenarios such as the one-to-one model, in which every student is issued a device that is to be used across all subjects, have increased in popularity and have shown both positive and negative influences on education. Music teachers in schools that adopt one-to-one models may be inadequately equipped to integrate this kind of…

  14. Manganese Catalyzed Regioselective C–H Alkylation: Experiment and Computation

    KAUST Repository

    Wang, Chengming

    2018-05-08

    A new efficient manganese-catalyzed selective C2-alkylation of indoles via carbenoid insertion has been achieved. The newly developed C-H functionalization protocol provides access to diverse products and shows good functional group tolerance. Mechanistic and computational studies support the formation of a Mn(CO)3 acetate complex as the catalytically active species.

  15. Manganese Catalyzed Regioselective C–H Alkylation: Experiment and Computation

    KAUST Repository

    Wang, Chengming; Maity, Bholanath; Cavallo, Luigi; Rueping, Magnus

    2018-01-01

    A new efficient manganese-catalyzed selective C2-alkylation of indoles via carbenoid insertion has been achieved. The newly developed C-H functionalization protocol provides access to diverse products and shows good functional group tolerance. Mechanistic and computational studies support the formation of a Mn(CO)3 acetate complex as the catalytically active species.

  16. The Evolution of Computer Based Learning Software Design: Computer Assisted Teaching Unit Experience.

    Science.gov (United States)

    Blandford, A. E.; Smith, P. R.

    1986-01-01

    Describes the style of design of computer simulations developed by Computer Assisted Teaching Unit at Queen Mary College with reference to user interface, input and initialization, input data vetting, effective display screen use, graphical results presentation, and need for hard copy. Procedures and problems relating to academic involvement are…

  17. Central Region Green Infrastructure

    Data.gov (United States)

    Minnesota Department of Natural Resources — This Green Infrastructure data is comprised of 3 similar ecological corridor data layers: Metro Conservation Corridors, green infrastructure analysis in counties...

  18. Armenia - Irrigation Infrastructure

    Data.gov (United States)

    Millennium Challenge Corporation — This study evaluates irrigation infrastructure rehabilitation in Armenia. The study separately examines the impacts of tertiary canals and other large infrastructure...

  19. First experience with a mobile computed tomograph in the USSR

    International Nuclear Information System (INIS)

    Portnoj, L.M.

    1989-01-01

    Experience with the use of a mobile computed tomography scanner mounted in a bus is presented. Problems concerning staff, the selection of base medical institutions, etc. are considered. The efficiency of mobile computed tomography scanners in revealing various diseases is pointed out

  20. Ioversol 350: clinical experience in cranial computed tomography

    International Nuclear Information System (INIS)

    Theron, J.; Paugam, J.P.; Courtheoux, P.

    1991-01-01

    A single, open trial was conducted in 40 patients to evaluate the diagnostic efficacy and safety, in cranial computed tomography, of ioversol (350 mgI/ml), a new nonionic, monomeric, low-osmolality contrast medium. Ioversol is characterized by a hydrophilicity which is not only the highest of all nonionic agents available to date, but also evenly distributed among the various sides of the benzene ring. Diagnosis was possible in 100% of cases with a mean degree of certainty of 90.8%. Six minor adverse reactions requiring no treatment were recorded, of which two were observed by the investigator and four reported by the patients. No pain sensation was found and heat sensations were of minor intensity. Ioversol 350, which showed good diagnostic efficacy and proved to be well tolerated, is therefore suitable for cranial computed tomography at a mean dose of 1 ml/kg

  1. Assessing computer skills in Tanzanian medical students: an elective experience

    Directory of Open Access Journals (Sweden)

    Melvin Rob

    2004-08-01

    Full Text Available Abstract Background One estimate suggests that by 2010 more than 30% of a physician's time will be spent using information technology tools. The aim of this study is to assess the information and communication technologies (ICT) skills of medical students in Tanzania. We also report a pilot intervention of peer mentoring training in ICT by medical students from the UK tutoring students in Tanzania. Methods Design: Cross-sectional study and pilot intervention study. Participants: Fourth year medical students (n = 92) attending Muhimbili University College of Health Sciences, Dar es Salaam, Tanzania. Main outcome measures: Self-reported assessment of competence on ICT-related topics and ability to perform specific ICT tasks. Further information related to frequency of computer use (hours per week), years of computer use, reasons for use and access to computers. Skills at specific tasks were reassessed for 12 students following 4 to 6 hours of peer mentoring training. Results The highest levels of competence in generic ICT areas were for email, Internet and file management. For other skills such as word processing most respondents reported low levels of competence. The abilities to perform specific ICT skills were low – less than 60% of the participants were able to perform the core specific skills assessed. A period of approximately 5 hours of peer mentoring training produced an approximate doubling of competence scores for these skills. Conclusion Our study has found a low level of ability to use ICT facilities among medical students in a leading university in sub-Saharan Africa. A pilot scheme utilising UK elective students to tutor basic skills showed potential. Attention is required to develop interventions that can improve ICT skills, as well as computer access, in order to bridge the digital divide.

  2. D0 experiment: its trigger, data acquisition, and computers

    International Nuclear Information System (INIS)

    Cutts, D.; Zeller, R.; Schamberger, D.; Van Berg, R.

    1984-05-01

    The new collider facility to be built at Fermilab's Tevatron-I D0 region is described. The data acquisition requirements are discussed, as well as the hardware and software triggers designed to meet these needs. An array of MicroVAX computers running VAXELN will filter in parallel (a complete event in each microcomputer) and transmit accepted events via Ethernet to a host. This system, together with its subsequent offline needs, is briefly presented
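
    The "one complete event per processor" filtering pattern described above can be mimicked in a few lines; this is only an illustrative sketch with an invented trigger condition and event model, not the D0 software.

# Sketch only: parallel event filtering, one whole event per worker process.
from multiprocessing import Pool
import random

def software_trigger(event):
    """Accept an event if its total 'energy' passes a (made-up) threshold."""
    return sum(event) > 55.0

def build_event(seed):
    rng = random.Random(seed)
    return [rng.uniform(0, 10) for _ in range(10)]

if __name__ == "__main__":
    events = [build_event(i) for i in range(1000)]
    with Pool(processes=4) as pool:            # each worker filters complete events
        decisions = pool.map(software_trigger, events)
    accepted = [e for e, keep in zip(events, decisions) if keep]
    print(f"accepted {len(accepted)} of {len(events)} events")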

  3. Simulation in computer forensics teaching: the student experience

    OpenAIRE

    Crellin, Jonathan; Adda, Mo; Duke-Williams, Emma; Chandler, Jane

    2011-01-01

    The use of simulation in teaching computing is well established, with digital forensic investigation being a subject area where the range of simulation required is both wide and varied demanding a corresponding breadth of fidelity. Each type of simulation can be complex and expensive to set up resulting in students having only limited opportunities to participate and learn from the simulation. For example students' participation in mock trials in the University mock courtroom or in simulation...

  4. Understanding the infrastructure of European Research Infrastructures

    DEFF Research Database (Denmark)

    Lindstrøm, Maria Duclos; Kropp, Kristoffer

    2017-01-01

    European Research Infrastructure Consortia (ERIC) are a new form of legal and financial framework for the establishment and operation of research infrastructures in Europe. Despite their scope, ambition, and novelty, the topic has received limited scholarly attention. This article analyses how one research infrastructure became an ERIC, using Bowker and Star's sociology of infrastructures. We conclude that focusing on ERICs as a European standard for organising and funding research collaboration gives new insights into the problems of membership, durability, and standardisation faced by research infrastructures. It is also a promising theoretical framework for addressing the relationship between the ERIC construct and the large diversity of European Research Infrastructures.

  5. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which is principally concerned with time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. With respect to problems in which the time-dependent behavior is significant, it is desirable to incorporate a procedure which is workable with the mechanical model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent micro-structural changes which often occur during the operation of structural components at increasingly high temperature over long periods of time. Special considerations are crucial if the analysis is to be extended to the large strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  6. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, testing and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only 8 worldwide Tier-1 computing centres, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  7. The practical experience with assistance programs: view from a non-nuclear weapons-state with a significant nuclear infrastructure

    International Nuclear Information System (INIS)

    Chetvergov, S.

    2002-01-01

    /or nuclear shell device. In May 2000, with the direct participation of the German Society of Nuclear Reactors and Facilities Security (GRS), a training seminar was held on establishing design threats for hypothetical research reactors and on the creation of a physical protection concept. With the direct participation of the IAEA, a training seminar was held in Almaty in December 2002 to exchange observed experiences, and concrete pathways for establishing the DBT in Kazakhstan were determined. This support from the IAEA and donor countries has allowed Kazakhstan to create an ideology and a set of regulatory documents for the physical protection of nuclear objects and for design threat assessment at the level of international requirements. The relevant authorities of Kazakhstan now investigate the issue of the design threat. This is mostly done by the Committee of National Security jointly with the Kazakhstan Atomic Energy Committee and the administrations of nuclear facilities. (author)

  8. Parallel Computational Fluid Dynamics 2007 : Implementations and Experiences on Large Scale and Grid Computing

    CERN Document Server

    2009-01-01

    At the 19th Annual Conference on Parallel Computational Fluid Dynamics held in Antalya, Turkey, in May 2007, the most recent developments and implementations of large-scale and grid computing were presented. This book, comprised of the invited and selected papers of this conference, details those advances, which are of particular interest to CFD and CFD-related communities. It also offers the results related to applications of various scientific and engineering problems involving flows and flow-related topics. Intended for CFD researchers and graduate students, this book is a state-of-the-art presentation of the relevant methodology and implementation techniques of large-scale computing.

  9. TRANSFORMING RURAL SECONDARY SCHOOLS IN ZIMBABWE THROUGH TECHNOLOGY: LIVED EXPERIENCES OF STUDENT COMPUTER USERS

    Directory of Open Access Journals (Sweden)

    Gomba Clifford

    2016-04-01

    A technological divide exists in Zimbabwe between urban and rural schools that puts rural-based students at a disadvantage. In Zimbabwe, the government, through the president, donated computers to most rural schools in a bid to bridge the digital divide between rural and urban schools. The purpose of this phenomenological study was to understand the experiences of Advanced Level students using computers at two rural boarding Catholic high schools in Zimbabwe. The study was guided by two research questions: (1) How do Advanced Level students in the rural areas use computers at their school? and (2) What is the experience of using computers for Advanced Level students in the rural areas of Zimbabwe? By performing this study, it was possible to understand from the students' experiences whether computer usage was for educational learning or not. The results of the phenomenological study showed that students' experiences can be broadly classified into five themes, namely worthwhile (interesting) experience, accessibility issues, teachers' monopoly, research and social use, and Internet availability. The participants proposed that teachers use computers, but not monopolize computer usage. The computer shortage may be addressed by having donors and the government help in the acquisition of more computers.

  10. File management for experiment control parameters within a distributed function computer network

    International Nuclear Information System (INIS)

    Stubblefield, F.W.

    1976-10-01

    An attempt to design and implement a computer system for control of and data collection from a set of laboratory experiments reveals that many of the experiments in the set require an extensive collection of parameters for their control. The operation of the experiments can be greatly simplified if a means can be found for storing these parameters between experiments and automatically accessing them as they are required. A subsystem for managing files of such experiment control parameters is discussed. 3 figures
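
    The record above describes a subsystem for storing experiment control parameters between runs and retrieving them automatically when needed. As a purely illustrative, hedged sketch of that idea (not the 1976 subsystem itself; the storage location, file layout and parameter names below are hypothetical), a modern equivalent can be written in a few lines:

```python
# Minimal sketch (not the original 1976 subsystem): persist experiment
# control parameters between runs so they can be reloaded automatically.
import json
from pathlib import Path

PARAM_DIR = Path("experiment_params")  # hypothetical storage location


def save_parameters(experiment: str, params: dict) -> None:
    """Store the control parameters for one experiment as a JSON file."""
    PARAM_DIR.mkdir(exist_ok=True)
    (PARAM_DIR / f"{experiment}.json").write_text(json.dumps(params, indent=2))


def load_parameters(experiment: str, defaults: dict | None = None) -> dict:
    """Fetch previously stored parameters, falling back to defaults."""
    path = PARAM_DIR / f"{experiment}.json"
    if path.exists():
        return json.loads(path.read_text())
    return dict(defaults or {})


if __name__ == "__main__":
    # Hypothetical experiment name and parameters, for illustration only.
    save_parameters("neutron_scan_01", {"hv_kV": 45.0, "dwell_ms": 200, "steps": 128})
    print(load_parameters("neutron_scan_01"))
```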

  11. Computation for LHC experiments: a worldwide computing grid; Le calcul scientifique des experiences LHC: une grille de production mondiale

    Energy Technology Data Exchange (ETDEWEB)

    Fairouz, Malek [Universite Joseph-Fourier, LPSC, CNRS-IN2P3, Grenoble I, 38 (France)

    2010-08-15

    In normal operating conditions the LHC detectors are expected to record about 10^10 collisions each year. The processing of all the resulting experimental data is a real computing challenge in terms of equipment, software and organization: it requires sustaining data flows of a few 10^9 bytes per second and a recording capacity of a few tens of 10^15 bytes each year. In order to meet this challenge a computing network implying the dispatch and sharing of tasks has been set up. The W-LCG grid (Worldwide LHC Computing Grid) is made up of 4 tiers. Tier 0 is the computer centre at CERN; it is responsible for collecting and recording the raw data from the LHC detectors and for dispatching it to the 11 Tier 1 centres. A Tier 1 is typically a national centre; it is responsible for making a copy of the raw data and for processing it in order to recover relevant data with a physical meaning and to transfer the results to the 150 Tier 2 centres. A Tier 2 is at the level of an institute or laboratory; it is in charge of the final analysis of the data and of the production of the simulations. Tier 3 centres are at the level of the laboratories; they provide a complementary and local resource to Tier 2 in terms of data analysis. (A.C.)

  12. IrLaW an OGC compliant infrared thermography measurement system developed on mini PC with real time computing capabilities for long term monitoring of transport infrastructures

    Science.gov (United States)

    Dumoulin, J.; Averty, R.

    2012-04-01

    One of the objectives of the ISTIMES project is to evaluate the potential offered by the integration of different electromagnetic techniques able to perform non-invasive diagnostics for the surveillance and monitoring of transport infrastructures. Among the EM methods investigated, the uncooled infrared camera is a promising technique owing to its dissemination potential, given its relatively low cost on the market. Infrared thermography, when used in quantitative mode (not in laboratory conditions) rather than in qualitative mode (vision applied to survey), requires real-time thermal radiative corrections of the raw acquired data to take into account the influence of the natural environment as it evolves with time. The camera sensor therefore has to be smart enough to apply the calibration law and radiometric corrections in real time in a varying atmosphere. A complete measurement system was therefore studied and developed with low-cost infrared cameras available on the market. In the system developed, the infrared camera is coupled with other sensors that feed simplified radiative models running, in real time, on the GPU of a small PC. The system uses a fast Ethernet camera FLIR A320 [1] coupled with a VAISALA WXT520 [2] weather station and a light GPS unit [3] for positioning and dating. It can be used with other Ethernet cameras (e.g. visible ones) but requires access to the measured data at raw level; in the present study, this was made possible thanks to a specific agreement signed with the FLIR Company. The prototype system is implemented on a low-cost small computer that integrates a GPU card to allow real-time parallel computing [4] of a simplified radiometric [5] heat balance using information measured with the weather station. An HMI was developed under Linux using open-source software and complementary pieces of software developed at IFSTTAR. This new HMI, called "IrLaW", has various functionalities that make it suitable for use in

  13. Defense of Cyber Infrastructures Against Cyber-Physical Attacks Using Game-Theoretic Models.

    Science.gov (United States)

    Rao, Nageswara S V; Poole, Stephen W; Ma, Chris Y T; He, Fei; Zhuang, Jun; Yau, David K Y

    2016-04-01

    The operation of cyber infrastructures relies on both cyber and physical components, which are subject to incidental and intentional degradations of different kinds. Within the context of network and computing infrastructures, we study the strategic interactions between an attacker and a defender using game-theoretic models that take into account both cyber and physical components. The attacker and defender optimize their individual utilities, expressed as sums of cost and system terms. First, we consider a Boolean attack-defense model, wherein the cyber and physical subinfrastructures may be attacked and reinforced as individual units. Second, we consider a component attack-defense model wherein their components may be attacked and defended, and the infrastructure requires minimum numbers of both to function. We show that the Nash equilibrium under uniform costs in both cases is computable in polynomial time, and it provides high-level deterministic conditions for the infrastructure survival. When probabilities of successful attack and defense, and of incidental failures, are incorporated into the models, the results favor the attacker but otherwise remain qualitatively similar. This approach has been motivated and validated by our experiences with UltraScience Net infrastructure, which was built to support high-performance network experiments. The analytical results, however, are more general, and we apply them to simplified models of cloud and high-performance computing infrastructures. © 2015 Society for Risk Analysis.
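
    The record above reports that the Nash equilibrium of the Boolean attack-defence model is computable in polynomial time. The following sketch is only a toy, single-component version under assumed costs and payoffs (it is not the paper's actual model); it shows the kind of equilibrium calculation involved, with each player's mixed strategy fixed by the opponent's indifference condition.

```python
# Toy, hedged illustration (not the paper's exact model): a single
# sub-infrastructure can be attacked and reinforced.  The mixed-strategy Nash
# equilibrium of this 2x2 "inspection-style" game follows from the players'
# indifference conditions; all numbers below are assumed for illustration.
V = 10.0   # value of a functioning infrastructure (assumed)
c_a = 2.0  # attacker's cost of launching an attack (assumed)
c_d = 3.0  # defender's cost of reinforcing (assumed)

# Attacker payoff: V - c_a if the attack hits an un-reinforced unit, -c_a otherwise.
# Defender payoff: V - c_d if reinforced; V if unreinforced and not attacked;
# 0 if unreinforced and attacked.
q = 1.0 - c_a / V   # P(defender reinforces), from the attacker's indifference
p = c_d / V         # P(attacker attacks), from the defender's indifference


def attacker_payoff(attack: bool, defend_prob: float) -> float:
    if not attack:
        return 0.0
    return defend_prob * (-c_a) + (1.0 - defend_prob) * (V - c_a)


def defender_payoff(defend: bool, attack_prob: float) -> float:
    if defend:
        return V - c_d
    return (1.0 - attack_prob) * V


print(f"equilibrium: attack prob p = {p:.2f}, reinforce prob q = {q:.2f}")
# At equilibrium each player is indifferent between its two pure actions:
print(attacker_payoff(True, q), attacker_payoff(False, q))   # both ~0.0
print(defender_payoff(True, p), defender_payoff(False, p))   # both ~7.0
```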

  14. Computer-assisted experiments with a laser diode

    Energy Technology Data Exchange (ETDEWEB)

    Kraftmakher, Yaakov, E-mail: krafty@mail.biu.ac.il [Department of Physics, Bar-Ilan University, Ramat-Gan 52900 (Israel)

    2011-05-15

    A laser diode from an inexpensive laser pen (laser pointer) is used in simple experiments. The radiant output power and efficiency of the laser are measured, and polarization of the light beam is shown. The h/e ratio is available from the threshold of spontaneous emission. The lasing threshold is found using several methods. With a data-acquisition system, the measurements are possible in a short time. The frequency response of the laser diode is determined in the range 10–10^7 Hz. The experiments are suitable for undergraduate laboratories and for classroom demonstrations on semiconductors.
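
    As a hedged back-of-the-envelope illustration of the h/e estimate mentioned above (the wavelength and threshold voltage below are assumed example values, not data from the experiment): near the onset of emission the electrical energy per carrier roughly equals the photon energy, eU0 ≈ hc/λ, so h/e ≈ U0·λ/c.

```python
# Rough h/e estimate from the emission threshold of a laser diode.
# All numerical inputs are assumed example values, not measured data.
c = 2.998e8          # speed of light, m/s
wavelength = 650e-9  # red laser-pointer diode, m (assumed)
U0 = 1.9             # forward voltage at the onset of emission, V (assumed)

h_over_e = U0 * wavelength / c
print(f"h/e ≈ {h_over_e:.2e} V·s (accepted value ≈ 4.14e-15 V·s)")
```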

  15. Computer-assisted experiments with a laser diode

    International Nuclear Information System (INIS)

    Kraftmakher, Yaakov

    2011-01-01

    A laser diode from an inexpensive laser pen (laser pointer) is used in simple experiments. The radiant output power and efficiency of the laser are measured, and polarization of the light beam is shown. The h/e ratio is available from the threshold of spontaneous emission. The lasing threshold is found using several methods. With a data-acquisition system, the measurements are possible in a short time. The frequency response of the laser diode is determined in the range 10–10^7 Hz. The experiments are suitable for undergraduate laboratories and for classroom demonstrations on semiconductors.

  16. Research and development of fusion grid infrastructure based on atomic energy grid infrastructure (AEGIS)

    International Nuclear Information System (INIS)

    Suzuki, Y.; Nakajima, K.; Kushida, N.; Kino, C.; Aoyagi, T.; Nakajima, N.; Iba, K.; Hayashi, N.; Ozeki, T.; Totsuka, T.; Nakanishi, H.; Nagayama, Y.

    2008-01-01

    In collaboration with the Naka Fusion Institute of the Japan Atomic Energy Agency (NFI/JAEA) and the National Institute for Fusion Science of the National Institutes of Natural Sciences (NIFS/NINS), the Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) aims at establishing an integrated framework for experiments and analyses in nuclear fusion research based on the Atomic Energy Grid Infrastructure (AEGIS). AEGIS has been developed by CCSE/JAEA with the aim of providing an infrastructure that enables atomic energy researchers in remote locations to carry out R and D efficiently and collaboratively through the Internet. Toward establishing the integrated framework, we have been applying AEGIS to three pre-existing systems: the experiment system, the remote data acquisition system, and the integrated analysis system. For the experiment system, secure remote experiments with JT-60 have been successfully accomplished. For the remote data acquisition system, it will be possible to handle experimental data obtained from the LHD data acquisition and management system (LABCOM system) and the JT-60 Data System in an equivalent way. The integrated analysis system has been extended to a system executable on heterogeneous computers across institutes

  17. COMPUTER EXPERIMENTS WITH FINITE ELEMENTS OF HIGHER ORDER

    Directory of Open Access Journals (Sweden)

    Khomchenko A.

    2017-12-01

    The paper deals with the problem of constructing the basis functions of a quadrilateral finite element of the fifth order by means of the computer algebra system Maple. The Lagrangian approximation of such a finite element contains 36 nodes: 20 nodes on the perimeter and 16 internal nodes. Alternative models with a reduced number of internal nodes are considered. Graphs of the basis functions and cognitive portraits of their zero-level lines are presented. The work is aimed at studying the possibilities of using modern information technologies in the teaching of individual mathematical disciplines.
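
    A 36-node fifth-order Lagrangian quadrilateral is the tensor product of six-point 1D Lagrange bases, which gives exactly 20 perimeter and 16 interior nodes. The following sketch reproduces that kind of construction in SymPy rather than Maple, assuming equally spaced nodes on the reference square [-1, 1]^2 (an assumption, since the paper's node layout is not given here).

```python
# Hedged sketch of building fifth-order Lagrangian basis functions on a
# quadrilateral reference element; equispaced nodes on [-1, 1] are assumed.
import sympy as sp

xi, eta = sp.symbols("xi eta")
nodes_1d = [sp.Rational(-1) + sp.Rational(2 * k, 5) for k in range(6)]  # 6 points on [-1, 1]


def lagrange_1d(k, var):
    """1D Lagrange polynomial equal to 1 at node k and 0 at the other nodes."""
    poly = sp.Integer(1)
    for m, xm in enumerate(nodes_1d):
        if m != k:
            poly *= (var - xm) / (nodes_1d[k] - xm)
    return sp.expand(poly)


def basis_2d(i, j):
    """2D basis function attached to node (i, j) of the 6x6 grid (36 nodes)."""
    return sp.expand(lagrange_1d(i, xi) * lagrange_1d(j, eta))


# Quick check: the basis function of node (0, 0) equals 1 at its own node
# and vanishes at a different node.
N00 = basis_2d(0, 0)
print(N00.subs({xi: nodes_1d[0], eta: nodes_1d[0]}))  # -> 1
print(N00.subs({xi: nodes_1d[3], eta: nodes_1d[2]}))  # -> 0
```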

  18. Experiments and computation of onshore breaking solitary waves

    DEFF Research Database (Denmark)

    Jensen, A.; Mayer, Stefan; Pedersen, G.K.

    2005-01-01

    This is a combined experimental and computational study of solitary waves that break onshore. Velocities and accelerations are measured by a two-camera PIV technique and compared to theoretical values from an Euler model with a VOF method for the free surface. In particular, the dynamics of a so-called collapsing breaker is scrutinized and the closure between the breaker and the beach is found to be akin to slamming. To the knowledge of the authors, no velocity measurements for this kind of breaker have been previously reported.

  19. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier-1 and Tier-2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  20. Computer-Assisted Experiments with a Laser Diode

    Science.gov (United States)

    Kraftmakher, Yaakov

    2011-01-01

    A laser diode from an inexpensive laser pen (laser pointer) is used in simple experiments. The radiant output power and efficiency of the laser are measured, and polarization of the light beam is shown. The "h/e" ratio is available from the threshold of spontaneous emission. The lasing threshold is found using several methods. With a…

  1. Experience with computed transmission tomography of the heart in vivo

    International Nuclear Information System (INIS)

    Carlsson, E.; Lipton, M.J.; Skioeldebrand, C.G.; Berninger, W.H.; Redington, R.W.

    1980-01-01

    Cardiac computed tomography in its present form provides useful information about the heart for clinical use in patients with heart disease and for investigative work in such patients and in living animals. Its great reconstructive power and unmatched density resolution are particularly advantageous in the study of ischemic heart disease. Because of its non-invasive character, cardiac computed tomography has the potential of becoming an effective screening tool for large numbers of patients with suspected or known coronary heart disease. Other cardiac conditions such as valve disease and congenital lesions can also be examined with high diagnostic yield. However, presently available scanners suffer from a low repetition rate, long scan times and the fact that only one transverse cardiac level at a time can be obtained. The development which must be accomplished in order to eliminate these weaknesses is technically feasible. The availability of a dynamic cardiac scanner would greatly benefit the treatment of patients with heart disease and facilitate the inquiry into the pathophysiology of such diseases. (orig.) [de

  2. Fractal actors and infrastructures

    DEFF Research Database (Denmark)

    Bøge, Ask Risom

    2011-01-01

    …actor-network-theory (ANT) into surveillance studies (Ball 2002, Adey 2004, Gad & Lauritsen 2009). In this paper, I further explore the potential of this connection by experimenting with Marilyn Strathern’s concept of the fractal (1991), which has been discussed in newer ANT literature (Law 2002; Law 2004; Jensen 2007). I … under surveillance. Based on fieldwork conducted in 2008 and 2011 in relation to my Master’s thesis and PhD respectively, I illustrate fractal concepts by describing the acts, actors and infrastructure that make up the ‘DNA surveillance’ conducted by the Danish police.

  3. The National Information Infrastructure: Agenda for Action.

    Science.gov (United States)

    Department of Commerce, Washington, DC. Information Infrastructure Task Force.

    The National Information Infrastructure (NII) is planned as a web of communications networks, computers, databases, and consumer electronics that will put vast amounts of information at the users' fingertips. Private sector firms are beginning to develop this infrastructure, but essential roles remain for the Federal Government. The National…

  4. INFRASTRUCTURING DESIGN

    DEFF Research Database (Denmark)

    Ertner, Sara Marie

    The fact that the average citizen in Western societies is aging has significant implications for national welfare models. What some call ’the grey tsunami’ has resulted in suggestions for, and experiments in, re-designing healthcare systems and elderly care. In Denmark, one attempted solution … that are imagined as the target group for welfare technology, and where are they located? Based on ethnographic explorations of ’welfare technology’ and related figures that include not only ’the elderly’, but also ’prototypes’ and ’partnership’, the dissertation analyses the processes and socio… for collaborative design to happen. The implication of this is that technological design should not be imagined as the foundation for shaping more effective health care practices and better welfare. Instead, possibilities for improving practices through welfare technology emerge out of heterogeneous assemblages…

  5. EXPERIMENTS AND COMPUTATIONAL MODELING OF PULVERIZED-COAL IGNITION; FINAL

    International Nuclear Information System (INIS)

    Samuel Owusu-Ofori; John C. Chen

    1999-01-01

    Under typical conditions of pulverized-coal combustion, which is characterized by fine particles heated at very high rates, there is currently a lack of certainty regarding the ignition mechanism of bituminous and lower-rank coals as well as the ignition rate of reaction. Furthermore, there have been no previous studies aimed at examining these factors under various experimental conditions, such as particle size, oxygen concentration, and heating rate. Finally, there is a need to improve current mathematical models of ignition to realistically and accurately depict the particle-to-particle variations that exist within a coal sample. Such a model is needed to extract useful reaction parameters from ignition studies, and to interpret ignition data in a more meaningful way. The authors propose to examine fundamental aspects of coal ignition through (1) experiments to determine the ignition temperature of various coals by direct measurement, and (2) modeling of the ignition process to derive rate constants and to provide a more insightful interpretation of data from ignition experiments. The authors propose to use a novel laser-based ignition experiment to achieve their first objective. Laser-ignition experiments offer the distinct advantage of easy optical access to the particles because of the absence of a furnace or radiating walls, and thus permit direct observation and particle temperature measurement. The ignition temperature of different coals under various experimental conditions can therefore be easily determined by direct measurement using two-color pyrometry. The ignition rate constants, when ignition occurs heterogeneously, and the particle heating rates will both be determined from analyses based on these measurements

  6. Computer simulation of FT-NMR multiple pulse experiment

    Science.gov (United States)

    Allouche, A.; Pouzard, G.

    1989-04-01

    Using the product operator formalism in its real form, SIMULDENS expands the density matrix of a scalar coupled nuclear spin system and simulates analytically a large variety of FT-NMR multiple pulse experiments. The observable transverse magnetizations are stored and can be combined to represent signal accumulation. The programming language is VAX PASCAL, but a Macintosh Turbo Pascal version is also available.

  7. Computer simulation of FT-NMR multiple pulse experiment

    International Nuclear Information System (INIS)

    Allouche, A.; Pouzard, G.

    1989-01-01

    Using the product operator formalism in its real form, SIMULDENS expands the density matrix of a scalar coupled nuclear spin system and simulates analytically a large variety of FT-NMR multiple pulse experiments. The observable transverse magnetizations are stored and can be combined to represent signal accumulation. The programming language is VAX PASCAL, but a Macintosh Turbo Pascal version is also available. (orig.)
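
    The following is a hedged, single-spin illustration of the kind of density-matrix pulse simulation described above (it is not SIMULDENS, and the offset, dwell time and number of points are assumed): a 90° pulse is applied to the equilibrium magnetization and the transverse components are recorded while the spin precesses, giving a free induction decay that can be Fourier transformed.

```python
# Hedged pulse-acquire sketch for a single spin-1/2 (not SIMULDENS).
import numpy as np
from scipy.linalg import expm

# Spin-1/2 operators.
Ix = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Iy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Iz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

rho = Iz.copy()                        # thermal equilibrium (up to constants)
pulse = expm(-1j * (np.pi / 2) * Ix)   # 90-degree pulse about x
rho = pulse @ rho @ pulse.conj().T     # density matrix after the pulse (-> -Iy)

offset = 2 * np.pi * 100.0             # 100 Hz chemical-shift offset (assumed)
dt = 1e-4                              # dwell time, s (assumed)
U = expm(-1j * offset * Iz * dt)       # free evolution over one dwell time

fid = []
for _ in range(512):                   # acquire 512 complex points
    mx = np.trace(rho @ Ix).real
    my = np.trace(rho @ Iy).real
    fid.append(mx + 1j * my)
    rho = U @ rho @ U.conj().T

spectrum = np.fft.fftshift(np.fft.fft(np.array(fid)))
print("peak index:", int(np.argmax(np.abs(spectrum))))
```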

  8. Operational experience with the Sizewell B integrated plant computer system

    International Nuclear Information System (INIS)

    Ladner, J.E.J.; Alexander, N.C.; Fitzpatrick, J.A.

    1997-01-01

    The Westinghouse Integrated System for Centralised Operation (WISCO) is the primary plant control system at the Sizewell B Power Station. It comprises three subsystems; the High Integrity Control System (HICS), the Process Control System (PCS) and the Distributed Computer system (DCS). The HICS performs the control and data acquisition of nuclear safety significant plant systems. The PCS uses redundant data processing unit pairs. The workstations and servers of the DCS communicate with each other over a standard ethernet. The maintenance requirements for every plant system are covered by a Maintenance Strategy Report. The breakdown of these reports is listed. The WISCO system has performed exceptionally well. Due to the diagnostic information presented by the HICS, problems could normally be resolved within 24 hours. There have been some 200 outstanding modifications to the system. The procedure of modification is briefly described. (A.K.)

  9. A model ecosystem experiment and its computational simulation studies

    International Nuclear Information System (INIS)

    Doi, M.

    2002-01-01

    A simplified microbial model ecosystem and its computer simulation model are introduced as an eco-toxicity test for the assessment of environmental responses to environmental impacts. To take the effects on the interactions between species and the environment into account, one option is to select the keystone species on the basis of ecological knowledge and to put it in a single-species toxicity test. Another option proposed is to frame the eco-toxicity tests as an experimental micro-ecosystem study and a theoretical model ecosystem analysis. With these tests, the stressors which are more harmful to ecosystems should be replaced with less harmful ones on the basis of unified measures. Management of radioactive materials, chemicals, hyper-eutrophication, and other artificial disturbances of the ecosystem should be discussed consistently from the unified viewpoint of environmental protection. (N.C.)

  10. Computer experiments with a coarse-grid hydrodynamic climate model

    International Nuclear Information System (INIS)

    Stenchikov, G.L.

    1990-01-01

    A climate model is developed on the basis of the two-level Mintz-Arakawa general circulation model of the atmosphere and a bulk model of the upper layer of the ocean. A detailed model of the spectral transport of shortwave and longwave radiation is used to investigate the radiative effects of greenhouse gases. The radiative fluxes are calculated at the boundaries of five layers, each with a pressure thickness of about 200 mb. The results of the climate sensitivity calculations for mean-annual and perpetual seasonal regimes are discussed. The CCAS (Computer Center of the Academy of Sciences) climate model is used to investigate the climatic effects of anthropogenic changes of the optical properties of the atmosphere due to increasing CO2 content and aerosol pollution, and to calculate the sensitivity to changes of land surface albedo and humidity

  11. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at Fermilab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  12. Federated data storage and management infrastructure

    International Nuclear Information System (INIS)

    Zarochentsev, A; Kiryanov, A; Klimentov, A; Krasnopevtsev, D; Hristov, P

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. Computing models for the High Luminosity LHC era anticipate a growth of storage needs by orders of magnitude; this will require new approaches to data storage organization and data handling. In our project we address the fundamental problem of designing an architecture to integrate distributed heterogeneous disk resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. We have prototyped a federated storage for the Russian T1 and T2 centers located in Moscow, St. Petersburg and Gatchina, as well as a Russian/CERN federation. We have conducted extensive tests of the underlying network infrastructure and storage endpoints with synthetic performance measurement tools as well as with HENP-specific workloads, including ones running on supercomputing platforms, cloud computing and Grid for the ALICE and ATLAS experiments. We present our current accomplishments with running LHC data analysis remotely and locally to demonstrate our ability to efficiently use federated data storage experiment-wide within national academic facilities for high energy and nuclear physics as well as for other data-intensive science applications, such as bioinformatics. (paper)

  13. Computing strategy of Alpha-Magnetic Spectrometer experiment

    International Nuclear Information System (INIS)

    Choutko, V.; Klimentov, A.

    2003-01-01

    The Alpha-Magnetic Spectrometer (AMS) is an experiment to search in space for dark matter, missing matter, and antimatter, scheduled to be flown on the International Space Station in the fall of the year 2005 for at least 3 consecutive years. This paper gives an overview of the AMS software with emphasis on the distributed production system based on a client/server approach. We also describe our choice of hardware components to build a processing farm with TByte RAID arrays of IDE disks and highlight the strategies that make our system different from many other experimental systems

  14. Predictive modeling of liquid-sodium thermal–hydraulics experiments and computations

    International Nuclear Information System (INIS)

    Arslan, Erkan; Cacuci, Dan G.

    2014-01-01

    Highlights: • We applied the predictive modeling method of Cacuci and Ionescu-Bujor (2010). • We assimilated data from sodium flow experiments. • We used computational fluid dynamics simulations of sodium experiments. • The predictive modeling method greatly reduced uncertainties in predicted results. - Abstract: This work applies the predictive modeling procedure formulated by Cacuci and Ionescu-Bujor (2010) to assimilate data from liquid-sodium thermal–hydraulics experiments in order to reduce systematically the uncertainties in the predictions of computational fluid dynamics (CFD) simulations. The predicted CFD-results for the best-estimate model parameters and results describing sodium-flow velocities and temperature distributions are shown to be significantly more precise than the original computations and experiments, in that the predicted uncertainties for the best-estimate results and model parameters are significantly smaller than both the originally computed and the experimental uncertainties
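
    A hedged scalar illustration of the assimilation idea summarized above (not the full Cacuci/Ionescu-Bujor formalism; the temperatures and uncertainties below are assumed, not taken from the paper): combining a computed and a measured value by inverse-variance weighting yields a best estimate whose uncertainty is smaller than either input, which is the qualitative behaviour reported in the abstract.

```python
# Toy data-assimilation example with assumed numbers (not from the paper).
import math

t_computed, sigma_c = 540.0, 8.0   # CFD sodium outlet temperature, K (assumed)
t_measured, sigma_m = 532.0, 5.0   # experimental value, K (assumed)

w_c = 1.0 / sigma_c**2
w_m = 1.0 / sigma_m**2
t_best = (w_c * t_computed + w_m * t_measured) / (w_c + w_m)
sigma_best = math.sqrt(1.0 / (w_c + w_m))

print(f"best estimate: {t_best:.1f} K +/- {sigma_best:.1f} K")
# sigma_best < min(sigma_c, sigma_m): the assimilation reduces the uncertainty.
```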

  15. Heterogeneous computation tests of both substitution and reactivity worth experiments in the RB-3 reactor

    International Nuclear Information System (INIS)

    Broccoli, U.; Cambi, G.; Vanossi, A.; Zapellini, G.

    1977-01-01

    This report presents the results of several experiments carried out in the D2O-moderated RB-3 reactor at the CNEN Laboratory of Montecuccolino, Bologna. The experiments referred to are either fuel-element substitution experiments or interstitial absorber experiments and were performed during the period 1972-1974. The results of the measurements are compared with those obtained by means of a computational procedure based on some ''cell'' codes coupled with heterogeneous codes. (authors)

  16. Global information infrastructure.

    Science.gov (United States)

    Lindberg, D A

    1994-01-01

    The High Performance Computing and Communications Program (HPCC) is a multiagency federal initiative under the leadership of the White House Office of Science and Technology Policy, established by the High Performance Computing Act of 1991. It has been assigned a critical role in supporting the international collaboration essential to science and to health care. Goals of the HPCC are to extend USA leadership in high performance computing and networking technologies; to improve technology transfer for economic competitiveness, education, and national security; and to provide a key part of the foundation for the National Information Infrastructure. The first component of the National Institutes of Health to participate in the HPCC, the National Library of Medicine (NLM), recently issued a solicitation for proposals to address a range of issues, from privacy to 'testbed' networks, 'virtual reality,' and more. These efforts will build upon the NLM's extensive outreach program and other initiatives, including the Unified Medical Language System (UMLS), MEDLARS, and Grateful Med. New Internet search tools are emerging, such as Gopher and 'Knowbots'. Medicine will succeed in developing future intelligent agents to assist in utilizing computer networks. Our ability to serve patients is so often restricted by lack of information and knowledge at the time and place of medical decision-making. The new technologies, properly employed, will also greatly enhance our ability to serve the patient.

  17. [Brain-Computer Interface: the First Clinical Experience in Russia].

    Science.gov (United States)

    Mokienko, O A; Lyukmanov, R Kh; Chernikova, L A; Suponeva, N A; Piradov, M A; Frolov, A A

    2016-01-01

    Motor imagery is suggested to stimulate the same plastic mechanisms in the brain as a real movement. The brain-computer interface (BCI) controls motor imagery by converting the EEG recorded during this process into commands for an external device. This article presents the results of a two-stage study of the clinical use of a non-invasive BCI in the rehabilitation of patients with severe hemiparesis caused by focal brain damage. It was found that the ability to control the BCI did not depend on the duration of the disease, brain lesion localization or the degree of neurological deficit. The first step of the study involved 36 patients; it showed that the efficacy of rehabilitation was higher in the group with the use of BCI (the score on the Action Research Arm Test (ARAT) improved from 1 [0; 2] to 5 [0; 16] points, p = 0.012; no significant improvement was observed in the control group). The second step of the study involved 19 patients; the complex BCI-exoskeleton (i.e. with kinesthetic feedback) was used for motor imagery training. The improvement of the motor function of the hands was demonstrated by the ARAT (the score improved from 2 [0; 37] to 4 [1; 45.5] points, p = 0.005) and the Fugl-Meyer scale (from 72 [63; 110] to 79 [68; 115] points, p = 0.005).

  18. A review of experiments and computer analyses on RIAs

    International Nuclear Information System (INIS)

    Jernkvist, L.O.; Massih, A.R.; In de Betou, J.

    2010-01-01

    Reactivity initiated accidents (RIAs) are nuclear reactor accidents that involve an unwanted increase in fission rate and reactor power. Reactivity initiated accidents in power reactors may occur as a result of reactor control system failures, control element ejections or events caused by rapid changes in temperature or pressure of the coolant/moderator. Our current understanding of reactivity initiated accidents and their consequences is based largely on three sources of information: 1) best-estimate computer analyses of the reactor response to postulated accident scenarios, 2) pulse-irradiation tests on instrumented fuel rodlets, carried out in research reactors, and 3) out-of-pile separate-effect tests, targeted to explore key phenomena under RIA conditions. In recent years, we have reviewed, compiled and analysed these three categories of data. The result is a state-of-the-art report on fuel behaviour under RIA conditions, which is currently being published by the OECD Nuclear Energy Agency. The purpose of this paper is to give a brief summary of this report

  19. Experiences of Using Automated Assessment in Computer Science Courses

    Directory of Open Access Journals (Sweden)

    John English

    2015-10-01

    In this paper we discuss the use of automated assessment in a variety of computer science courses that have been taught at Israel Academic College by the authors. The course assignments were assessed entirely automatically using Checkpoint, a web-based automated assessment framework. The assignments all used free-text questions (where the students type in their own answers). Students were allowed to correct errors based on feedback provided by the system and resubmit their answers. A total of 141 students were surveyed to assess their opinions of this approach, and we analysed their responses. Analysis of the questionnaire showed a low correlation between questions, indicating the statistical independence of the individual questions. As a whole, student feedback on using Checkpoint was very positive, emphasizing the benefits of multiple attempts, impartial marking, and a quick turnaround time for submissions. Many students said that Checkpoint gave them confidence in learning and motivation to practise. Students also said that the detailed feedback that Checkpoint generated when their programs failed helped them understand their mistakes and how to correct them.

  20. Multi-fidelity Gaussian process regression for computer experiments

    International Nuclear Information System (INIS)

    Le-Gratiet, Loic

    2013-01-01

    This work is on Gaussian-process based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging based method has been proposed. In particular this formulation allows for fast implementation and for closed-form expressions of the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it really allows for the practical application of such a method in real cases. Furthermore, fast cross-validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e. the decay rate of the mean square error) on the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process based meta-models with stationary covariance functions) has been obtained, whereas the previous proofs hold only for degenerate kernels (i.e. when the process is in fact finite-dimensional). This result allows for addressing rigorously practical questions such as the optimal allocation of the budget between different levels of codes in the multi-fidelity framework. (author) [fr
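
    A hedged two-level sketch of the recursive co-kriging idea follows (a simplification, not the thesis' closed-form universal co-kriging): a Gaussian process fitted to many cheap-code runs is scaled and corrected by a second process fitted to the discrepancy observed at a few expensive-code runs. The test functions, designs and kernel length scales below are assumed for illustration.

```python
# Simplified two-level multi-fidelity surrogate (assumed test functions).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def cheap(x):       # low-fidelity code: fast but biased
    return 0.5 * (6 * x - 2) ** 2 * np.sin(12 * x - 4) + 10 * (x - 0.5) - 5


def expensive(x):   # high-fidelity code: slow but accurate
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)


x_lo = np.linspace(0, 1, 11).reshape(-1, 1)      # many cheap runs
x_hi = np.array([[0.0], [0.4], [0.6], [1.0]])    # few expensive runs

y_lo = cheap(x_lo).ravel()
gp_lo = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(x_lo, y_lo)

# Scaling factor rho between fidelities, estimated by least squares, then a
# GP fitted to the remaining discrepancy at the expensive design points.
mu_lo_at_hi = gp_lo.predict(x_hi)
y_hi = expensive(x_hi).ravel()
rho = np.linalg.lstsq(mu_lo_at_hi.reshape(-1, 1), y_hi, rcond=None)[0].item()
gp_delta = GaussianProcessRegressor(kernel=RBF(0.2), normalize_y=True).fit(
    x_hi, y_hi - rho * mu_lo_at_hi)

x_test = np.linspace(0, 1, 5).reshape(-1, 1)
y_pred = rho * gp_lo.predict(x_test) + gp_delta.predict(x_test)
print(np.c_[x_test.ravel(), y_pred, expensive(x_test).ravel()])
```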

  1. Multislice computed tomographic coronary angiography: experience in a UK centre

    International Nuclear Information System (INIS)

    Morgan-Hughes, G.J.; Marshall, A.J.; Roobottom, C.A.

    2003-01-01

    AIM: To evaluate the technique of coronary angiography with retrospectively electrocardiogram (ECG)-gated four-slice helical computed tomography (CT). MATERIALS AND METHODS: Within 1 month of undergoing routine day-case diagnostic coronary angiography, 30 consecutive patients also underwent retrospectively ECG-gated multislice CT coronary angiography. This enabled direct comparison of seven segments of proximal and mid-coronary artery for each patient by two blinded assessors. Each segment of coronary artery from the multislice CT image was evaluated initially for 'assessability' and those segments deemed assessable were subsequently investigated for the presence or absence of a significantly (≥70%) stenotic lesion. RESULTS: Overall 68% of proximal and mid-coronary artery segments were assessable. The sensitivity and specificity of four-slice CT coronary angiography in assessable segments for detecting the presence or absence of significant (≥70%) stenoses were 72 and 86%, respectively. These results correspond to a positive predictive value of 53% and a 93% negative predictive value. If the 32% of non-assessable segments are added into the calculation then the sensitivity and specificity fall to 49 and 66%, respectively. CONCLUSION: Although multislice CT coronary angiography is a promising technique, the overall assessability and diagnostic accuracy of four-slice CT acquisition is not sufficient to justify routine clinical use. Further evaluation should investigate the benefit of the reduction in temporal and spatial resolution offered by 16 and 32 slice acquisition

  2. Computer-Adaptive Testing: Implications for Students' Achievement, Motivation, Engagement, and Subjective Test Experience

    Science.gov (United States)

    Martin, Andrew J.; Lazendic, Goran

    2018-01-01

    The present study investigated the implications of computer-adaptive testing (operationalized by way of multistage adaptive testing; MAT) and "conventional" fixed order computer testing for various test-relevant outcomes in numeracy, including achievement, test-relevant motivation and engagement, and subjective test experience. It did so…

  3. Using Educational Computer Games in the Classroom: Science Teachers' Experiences, Attitudes, Perceptions, Concerns, and Support Needs

    Science.gov (United States)

    An, Yun-Jo; Haynes, Linda; D'Alba, Adriana; Chumney, Frances

    2016-01-01

    Science teachers' experiences, attitudes, perceptions, concerns, and support needs related to the use of educational computer games were investigated in this study. Data were collected from an online survey, which was completed by 111 science teachers. The results showed that 73% of participants had used computer games in teaching. Participants…

  4. Computer based workstation for development of software for high energy physics experiments

    International Nuclear Information System (INIS)

    Ivanchenko, I.M.; Sedykh, Yu.V.

    1987-01-01

    Methodical principles and results of a successful attempt to create, on the basis of the IBM-PC/AT personal computer, effective means for the development of programs for high energy physics experiments are analysed. The results obtained make it possible to combine the best properties and the positive experience accumulated on existing time-sharing collective systems with the high quality of data representation, reliability and convenience of personal computer applications

  5. Coupling between eddy currents and rigid body rotation: analysis, computation, and experiments

    International Nuclear Information System (INIS)

    Hua, T.Q.; Turner, L.R.

    1985-01-01

    Computation and experiment show that the coupling between eddy currents and the angular deflections resulting from those eddy currents can reduce electromagnetic effects such as forces, torques, and power dissipation to levels far less severe than would be predicted without regard for the coupling. This paper explores the coupling effects beyond the parameter range that has been explored experimentally, using analytical means and the eddy-current computer code EDDYNET. The paper also describes upcoming FELIX experiments with cantilevered beams

  6. Computer-assisted training experiment used in the field of thermal energy production (EDF)

    International Nuclear Information System (INIS)

    Felgines, R.

    1982-01-01

    In 1981, EDF carried out an experiment with computer-assisted training (EAO). This new approach, which continued until June 1982, involved about 700 employees, all of whom operated nuclear power stations. The different stages of this experiment and the lessons which can be drawn from it are given. The lessons were of a positive nature and make it possible to envisage complete coverage of all nuclear power stations by computer-assisted training within a very short space of time [fr

  7. Cyber Attacks and Energy Infrastructures: Anticipating Risks

    International Nuclear Information System (INIS)

    Desarnaud, Gabrielle

    2017-01-01

    This study analyses the likelihood of cyber-attacks against European energy infrastructures and their potential consequences, particularly on the electricity grid. It also delivers a comparative analysis of measures taken by different European countries to protect their industries and collaborate within the European Union. The energy sector is experiencing an unprecedented digital transformation that is upsetting its activities and business models. Our energy infrastructures, sometimes more than a decade old and designed to remain functional for many years to come, now constantly interact with light digital components. The convergence of the global industrial system with the power of advanced computing and analytics reveals untapped opportunities at every step of the energy value chain. However, the introduction of digital elements in old and unprotected industrial equipment also exposes the energy industry to cyber risk. One of the most compelling examples of the type of threat the industry is facing is the 2015 cyber-attack on the Ukrainian power grid, which deprived about 200,000 people of electricity in the middle of winter. The number and the level of technical expertise of cyber-attacks rose significantly after the discovery of the Stuxnet worm in the network of the Natanz uranium enrichment site in 2010. Energy transition policies and the growing integration of renewable sources of energy will intensify this tendency if cyber security measures are not part of the design of our future energy infrastructures. Regulators try to catch up and adapt, as in France where the authorities collaborate closely with the energy industry to set up a strict and efficient regulatory framework and protect critical operators. This approach is adopted elsewhere in Europe, but common measures applicable to the whole European Union are essential to protect strongly interconnected energy infrastructures against a multiform threat that defies frontiers

  8. Experiences using SciPy for computer vision research

    Energy Technology Data Exchange (ETDEWEB)

    Eads, Damian R [Los Alamos National Laboratory; Rosten, Edward J [Los Alamos National Laboratory

    2008-01-01

    SciPy is an effective tool suite for prototyping new algorithms. We share some of our experiences using it for the first time to support our research in object detection. SciPy makes it easy to integrate C code, which is essential when algorithms operating on large data sets cannot be vectorized. The universality of Python, the language in which SciPy was written, gives the researcher access to a broader set of non-numerical libraries to support GUI development, interface with databases, manipulate graph structures, render 3D graphics, unpack binary files, etc. Python's extensive support for operator overloading makes SciPy's syntax as succinct as that of its competitors, MATLAB, Octave, and R. More profoundly, we found it easy to rework research code written with SciPy into a production application, deployable on numerous platforms.
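
    As a small, hedged example of the SciPy-style prototyping described above (this is not the authors' detection code; the synthetic image, filter widths and threshold are assumed), a vectorized edge map can be computed entirely with NumPy/SciPy calls:

```python
# Illustrative SciPy prototyping sketch: smooth a synthetic image and
# compute a gradient-magnitude edge map with vectorized calls.
import numpy as np
from scipy import ndimage

# Synthetic 128x128 image: a bright square on a noisy background (assumed).
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, size=(128, 128))
img[40:90, 50:100] += 1.0

smoothed = ndimage.gaussian_filter(img, sigma=2.0)
gx = ndimage.sobel(smoothed, axis=1)
gy = ndimage.sobel(smoothed, axis=0)
edges = np.hypot(gx, gy)

# Keep the strongest 5% of responses as candidate edge pixels.
mask = edges > np.percentile(edges, 95)
print("edge pixels found:", int(mask.sum()))
```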

  9. Monitoring of computing resource utilization of the ATLAS experiment

    International Nuclear Information System (INIS)

    Rousseau, David; Vukotic, Ilija; Schaffer, RD; Dimitrov, Gancho; Aidel, Osman; Albrand, Solveig

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  10. The TESS [Tandem Experiment Simulation Studies] computer code user's manual

    International Nuclear Information System (INIS)

    Procassini, R.J.

    1990-01-01

    TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs
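
    A hedged, generic sketch of the particle-in-cell cycle mentioned above follows (periodic, electrostatic, nearest-grid-point weighting, normalized units; this is not the bounded mirror geometry or implicit scheme of TESS): deposit charge on the grid, solve Poisson's equation in Fourier space, gather the field at the particles, and push them with a leapfrog step.

```python
# Minimal periodic 1D electrostatic PIC sketch (assumed normalized units).
import numpy as np

L, ng, npart = 2 * np.pi, 64, 10000      # domain length, grid cells, particles
dx, dt, steps = L / ng, 0.05, 200
q_over_m = -1.0                          # electrons, normalized units
rho_back = npart / L                     # neutralizing ion background density

rng = np.random.default_rng(1)
x = rng.uniform(0, L, npart)
x += 0.01 * np.cos(2 * np.pi * x / L)    # small density perturbation
x %= L
v = np.zeros(npart)

k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
k[0] = 1.0                               # avoid division by zero for the mean mode

for step in range(steps):
    # Charge deposition (nearest grid point): ions (+1) minus electrons (-1).
    cells = (x / dx).astype(int) % ng
    n_e = np.bincount(cells, minlength=ng) / dx
    rho = rho_back - n_e
    # Poisson solve in Fourier space: i k E_k = rho_k  ->  E_k = -i rho_k / k.
    rho_k = np.fft.fft(rho)
    E_k = -1j * rho_k / k
    E_k[0] = 0.0
    E = np.fft.ifft(E_k).real
    # Gather the field at particle positions and leapfrog push.
    v += q_over_m * E[cells] * dt
    x = (x + v * dt) % L

print("mean kinetic energy after run:", 0.5 * np.mean(v**2))
```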

  11. Climate simulations and services on HPC, Cloud and Grid infrastructures

    Science.gov (United States)

    Cofino, Antonio S.; Blanco, Carlos; Minondo Tshuma, Antonio

    2017-04-01

    Cloud, Grid and High Performance Computing have changed the accessibility and availability of computing resources for Earth Science research communities, especially for the climate community. These paradigms are modifying the way climate applications are being executed. By using these technologies the number, variety and complexity of experiments and resources are increasing substantially. But, although computational capacity is increasing, the traditional applications and tools used by the community are not good enough to manage this large volume and variety of experiments and computing resources. In this contribution, we evaluate the challenges of running climate simulations and services on Grid, Cloud and HPC infrastructures and how to tackle them. The Grid and Cloud infrastructures provided by EGI's VOs (esr, earth.vo.ibergrid and fedcloud.egi.eu) will be evaluated, as well as HPC resources from the PRACE infrastructure and institutional clusters. To solve those challenges, solutions using the DRM4G framework will be shown. DRM4G provides a good framework to manage a large volume and variety of computing resources for climate experiments. This work has been supported by the Spanish National R&D Plan under projects WRF4G (CGL2011-28864), INSIGNIA (CGL2016-79210-R) and MULTI-SDM (CGL2015-66583-R); the IS-ENES2 project from the 7FP of the European Commission (grant agreement no. 312979); the European Regional Development Fund—ERDF and the Programa de Personal Investigador en Formación Predoctoral from Universidad de Cantabria and Government of Cantabria.

  12. Coordinated Use of Heterogeneous Infrastructures for Scientific Computing at CIEMAT by means of Grid Technologies; Aprovechamiento Coordinado de las Infraestructuras Heterogeneas para Calculo Cientifico Participadas por el CIEMAT por medio de Tecnologias Grid

    Energy Technology Data Exchange (ETDEWEB)

    Rubio-Montero, A. J.

    2008-08-06

    Usually, research data centres maintain platforms from a wide range of architectures to cover the computational needs of their scientists. These centres are also frequently involved in diverse national and international Grid projects. Besides, it is very difficult to achieve a complete and efficient utilization of these resources, due to the heterogeneity in their hardware and software configurations and their unequal use over time. This report offers a solution to the problem of enabling simultaneous and coordinated access to the variety of computing infrastructures and platforms available in large research organisations such as CIEMAT. For this purpose, new Grid technologies have been deployed in order to provide a common interface which enables the final user to access the internal and external resources. The previous computing infrastructure has not been modified and the independence of its administration has been guaranteed. For the sake of comparison, a feasibility study has been performed with the execution of the Drift Kinetic Equation solver (Dikes) tool, a high-throughput scientific application used in the TJ-II Flexible Heliac at the National Fusion Laboratory. (Author) 35 refs.

  13. A cerebellar neuroprosthetic system: computational architecture and in vivo experiments

    Directory of Open Access Journals (Sweden)

    Ivan Herreros Alonso

    2014-05-01

    Full Text Available Emulating the input-output functions performed by a brain structure opens the possibility for developing neuro-prosthetic systems that replace damaged neuronal circuits. Here, we demonstrate the feasibility of this approach by replacing the cerebellar circuit responsible for the acquisition and extinction of motor memories. Specifically, we show that a rat can undergo acquisition, retention and extinction of the eye-blink reflex even though the biological circuit responsible for this task has been chemically inactivated via anesthesia. This is achieved by first developing a computational model of the cerebellar microcircuit involved in the acquisition of conditioned reflexes and training it with synthetic data generated based on physiological recordings. Secondly, the cerebellar model is interfaced with the brain of an anesthetized rat, connecting the model's inputs and outputs to afferent and efferent cerebellar structures. As a result, we show that the anesthetized rat, equipped with our neuro-prosthetic system, can be classically conditioned to the acquisition of an eye-blink response. However, non-stationarities in the recorded biological signals limit the performance of the cerebellar model. Thus, we introduce an updated cerebellar model and validate it with physiological recordings showing that learning becomes stable and reliable. The resulting system represents an important step towards replacing lost functions of the central nervous system via neuro-prosthetics, obtained by integrating a synthetic circuit with the afferent and efferent pathways of a damaged brain region. These results also embody an early example of science-based medicine, where on the one hand the neuro-prosthetic system directly validates a theory of cerebellar learning that informed the design of the system, and on the other one it takes a step towards the development of neuro-prostheses that could recover lost learning functions in animals and, in the longer term

  14. The Czech National Grid Infrastructure

    Science.gov (United States)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe the computing infrastructure in more detail; it is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest share of CPUs is still accessible via distributed Torque servers, which provide an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and a Hadoop cluster is offered by the same e-infrastructure. A broad spectrum of computing servers is available; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM, as well as servers with GPU cards. Different groups have different priorities on the various resources, and resource owners can even have exclusive access. Software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFSv4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics is given.
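    As an illustration of the batch-oriented part of such an infrastructure, the short sketch below shows how a long-running job might be submitted to a Torque server from Python. The queue name, resource requests and the payload command are hypothetical placeholders and do not describe the actual MetaCentrum configuration.

```python
# Minimal sketch: submit a long batch job to a Torque server with qsub.
# Queue name, resource requests and the payload command are placeholders.
import subprocess

JOB_SCRIPT = """#!/bin/bash
#PBS -N climate_run
#PBS -q default
#PBS -l nodes=1:ppn=16
#PBS -l walltime=24:00:00
#PBS -l mem=32gb
cd "$PBS_O_WORKDIR"
./run_simulation --config config.yaml
"""


def submit(script_text: str) -> str:
    """Pipe the job script to qsub on stdin and return the assigned job id."""
    result = subprocess.run(
        ["qsub"], input=script_text, capture_output=True, text=True, check=True
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print("Submitted job:", submit(JOB_SCRIPT))
```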

  15. Automatization of physical experiments on-line with the MINSK-32 computer

    International Nuclear Information System (INIS)

    Fefilov, B.V.; Mikhushkin, A.V.; Morozov, V.M.; Sukhov, A.M.; Chelnokov, L.P.

    1978-01-01

    A system for the acquisition and processing of data from complex multi-dimensional experiments is described. The system includes autonomous modules in the CAMAC standard, the NAIRI-4 small computer and the MINSK-32 base computer. The NAIRI-4 computer performs preliminary storage, data processing and experiment control. Its software comprises the microprogram software of the NAIRI-4 computer, the software of the NAIRI-2 computer, the software of the PDP-11 computer, and the technological software of the ES computers. A crate controller and a display driver are connected to the main channel so that the NAIRI-4 computer can operate on-line with the experimental devices. An input-output channel commutator, which converts MINSK-32 signal levels to TTL levels and vice versa, was developed to extend the options for connecting measurement modules to the MINSK-32 computer. A graphic display based on the HP-1300A monitor with a light pen is used for highly effective spectrum processing

  16. Development of Best Practices for Large-scale Data Management Infrastructure

    NARCIS (Netherlands)

    S. Stadtmüller; H.F. Mühleisen (Hannes); C. Bizer; M.L. Kersten (Martin); J.A. de Rijke (Arjen); F.E. Groffen (Fabian); Y. Zhang (Ying); G. Ladwig; A. Harth; M Trampus

    2012-01-01

    The amount of data available for processing is constantly increasing and becoming more diverse. We collect our experiences of deploying large-scale data management tools on local-area clusters or cloud infrastructures and provide guidance on using these computing and storage

  17. Computer network that assists in the planning, execution and evaluation of in-reactor experiments

    International Nuclear Information System (INIS)

    Bauer, T.H.; Froehle, P.H.; August, C.; Baldwin, R.D.; Johanson, E.W.; Kraimer, M.R.; Simms, R.; Klickman, A.E.

    1985-01-01

    For over 20 years, complex in-reactor experiments have been performed at Argonne National Laboratory (ANL) to investigate the performance of nuclear reactor fuel and to support the development of large computer codes that address questions of reactor safety in full-scale plants. Not only are computer codes an important end-product of the research, but computer analysis is also involved intimately at most stages of experiment planning, data reduction, and evaluation. For instance, many experiments are of sufficiently long duration, or, if they are brief, occur in such a purposeful sequence, that the need for speedy availability of on-line data is paramount. This is made possible most efficiently by computer-assisted displays and evaluation. A purposeful linking of mainframe, mini, and micro computers has been effected over the past eight years which greatly enhances the speed with which experimental data are reduced to useful forms and applied to the relevant technological issues. This greater efficiency in data management has also led to improvements in the planning and execution of subsequent experiments. Raw data from experiments performed at INEL are stored directly on disk and tape with the aid of minicomputers. Either during or shortly after an experiment, data may be transferred, via a direct link, to the Illinois offices of ANL, where the database is stored on a minicomputer system. This Idaho-to-Illinois link has both enhanced experiment performance and allowed rapid dissemination of results

  18. Hardware for dynamic quantum computing experiments: Part I

    Science.gov (United States)

    Johnson, Blake; Ryan, Colm; Riste, Diego; Donovan, Brian; Ohki, Thomas

    Static, pre-defined control sequences routinely achieve high-fidelity operation on superconducting quantum processors. Efforts toward dynamic experiments depending on real-time information have mostly proceeded through hardware duplication and triggers, requiring a combinatorial explosion in the number of channels. We provide a hardware efficient solution to dynamic control with a complete platform of specialized FPGA-based control and readout electronics; these components enable arbitrary control flow, low-latency feedback and/or feedforward, and scale far beyond single-qubit control and measurement. We will introduce the BBN Arbitrary Pulse Sequencer 2 (APS2) control system and the X6 QDSP readout platform. The BBN APS2 features: a sequencer built around implementing short quantum gates, a sequence cache to allow long sequences with branching structures, subroutines for code re-use, and a trigger distribution module to capture and distribute steering information. The X6 QDSP features a single-stage DSP pipeline that combines demodulation with arbitrary integration kernels, and multiple taps to inspect data flow for debugging and calibration. We will show system performance when putting it all together, including a latency budget for feedforward operations. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office Contract No. W911NF-10-1-0324.
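    To make the idea of hardware-level control flow concrete, the sketch below builds a toy representation of a feedback sequence (active qubit reset: measure, then apply a corrective pulse only if the qubit was found in state 1). It is purely an illustration of branching on real-time measurement results; the class and instruction names are invented for the example and are not the APS2/QDSP programming interface.

```python
# Toy model of a branching control sequence of the kind a dynamic
# controller must support (active reset). All names are hypothetical;
# this is not the BBN APS2 instruction set.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Instruction:
    op: str                      # "pulse", "measure", or "branch"
    target: str = ""             # channel / qubit label
    args: dict = field(default_factory=dict)


def active_reset(qubit: str) -> List[Instruction]:
    """Measure the qubit; if the result is 1, apply an X pulse to reset it."""
    return [
        Instruction("measure", qubit, {"result_reg": "m0"}),
        # Branch in real time on the measurement register: the controller
        # skips the correction pulse when m0 == 0.
        Instruction("branch", args={"if_reg": "m0", "equals": 0, "skip": 1}),
        Instruction("pulse", qubit, {"gate": "X", "length_ns": 20}),
    ]


if __name__ == "__main__":
    for instr in active_reset("q0"):
        print(instr)
```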

  19. Cooperation of experts' opinion, experiment and computer code development

    International Nuclear Information System (INIS)

    Wolfert, K.; Hicken, E.

    The connection between code development, code assessment and confidence in the analysis of transients will be discussed. In this manner, the major sources of errors in the codes and errors in applications of the codes will be shown. Standard problem results emphasize that, in order to have confidence in licensing statements, the codes must be physically realistic and the code user must be qualified and experienced. We will discuss why there is disagreement between the licensing authority and vendor concerning assessment of the fullfillment of safety goal requirements. The answer to the question lies in the different confidence levels of the assessment of transient analysis. It is expected that a decrease in the disagreement will result from an increased confidence level. Strong efforts will be made to increase this confidence level through improvements in the codes, experiments and related organizational strcutures. Because of the low probability for loss-of-coolant-accidents in the nuclear industry, assessment must rely on analytical techniques and experimental investigations. (orig./HP) [de

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. Site Availability Monitoring (SAM) and Job Robot submission have been instrumental in site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. The visualization, presentation and summarizing of SAM tests for sites has recently been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a fourfold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission, and keep exercised, all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  1. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    Uram, Thomas D; LeCompte, Thomas J; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  2. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  3. E-Infrastructure Concertation Meeting

    CERN Multimedia

    Katarina Anthony

    2010-01-01

    The 8th e-Infrastructure Concertation Meeting was held in the Globe from 4 to 5 November to discuss the development of Europe’s distributed computing and storage resources.   Project leaders attend the E-Concertation Meeting at the Globe on 5 November 2010. © Corentin Chevalier E-Infrastructures have become an indispensable tool for scientific research, linking researchers to virtually unlimited e-resources like the grid. The recent e-Infrastructure Concertation Meeting brought together e-Science project leaders to discuss the development of this tool in the European context. The meeting was part of an ongoing initiative to develop a world-class e-infrastructure resource that would establish European leadership in e-Science. The e-Infrastructure Concertation Meeting was organised by the Commission Services (EC) with the support of e-ScienceTalk. “The Concertation meeting at CERN has been a great opportunity for e-ScienceTalk to meet many of the 38 new proje...

  4. A multi VO Grid infrastructure at DESY

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2010-01-01

    As a centre for research with particle accelerators and synchrotron light, DESY operates a Grid infrastructure in the context of the EU-project EGEE and the national Grid initiative D-GRID. All computing and storage resources are located in one Grid infrastructure which supports a number of Virtual Organizations of different disciplines, including non-HEP groups such as the Photon Science community. Resource distribution is based on fair share methods without dedicating hardware to user groups. Production quality of the infrastructure is guaranteed by embedding it into the DESY computer centre.

  5. Participatory Infrastructuring of Community Energy

    DEFF Research Database (Denmark)

    Capaccioli, Andrea; Poderi, Giacomo; Bettega, Mela

    2016-01-01

    Thanks to renewable energies the decentralized energy system model is becoming more relevant in the production and distribution of energy. The scenario is important in order to achieve a successful energy transition. This paper presents a reflection on the ongoing experience of infrastructuring a...

  6. Sustainable Water Infrastructure

    Science.gov (United States)

    Resources for state and local environmental and public health officials, and water, infrastructure and utility professionals to learn about sustainable water infrastructure, sustainable water and energy practices, and their role.

  7. The Importance of Business Model Factors for Cloud Computing Adoption: Role of Previous Experiences

    Directory of Open Access Journals (Sweden)

    Bogataj Habjan Kristina

    2017-08-01

    Full Text Available Background and Purpose: Bringing several opportunities for more effective and efficient IT governance and service exploitation, cloud computing is expected to impact the European and global economies significantly. Market data show that despite many advantages and promised benefits the adoption of cloud computing is not as fast and widespread as foreseen. This situation shows the need for further exploration of the potentials of cloud computing and its implementation on the market. The purpose of this research was to identify individual business model factors with the highest impact on cloud computing adoption. In addition, the aim was to identify the differences in opinion regarding the importance of business model factors on cloud computing adoption according to companies’ previous experiences with cloud computing services.

  8. Une Experience d'enseignement du francais par ordinateur (An Experiment in Teaching French by Computer).

    Science.gov (United States)

    Bougaieff, Andre; Lefebvre, France

    1986-01-01

    An experimental program for university summer students of French as a second language that provided a computer resource center and a variety of courseware, authoring aids, and other software for student use is described and the problems and advantages are discussed. (MSE)

  9. Green(ing) infrastructure

    CSIR Research Space (South Africa)

    Van Wyk, Llewellyn V

    2014-03-01

    Full Text Available the generation of electricity from renewable sources such as wind, water and solar. Grey infrastructure – In the context of storm water management, grey infrastructure can be thought of as the hard, engineered systems to capture and convey runoff..., pumps, and treatment plants.  Green infrastructure reduces energy demand by reducing the need to collect and transport storm water to a suitable discharge location. In addition, green infrastructure such as green roofs, street trees and increased...

  10. The TENCompetence Infrastructure: A Learning Network Implementation

    Science.gov (United States)

    Vogten, Hubert; Martens, Harrie; Lemmers, Ruud

    The TENCompetence project developed a first release of a Learning Network infrastructure to support individuals, groups and organisations in professional competence development. This Learning Network infrastructure was released as open source to the community, thereby allowing users and organisations to use and contribute to the development as they see fit. The infrastructure consists of client applications providing the user experience and server components that provide the services to these clients. These services implement the domain model (Koper 2006) by provisioning the entities of the domain model (see also Sect. 18.4) and will henceforth be referred to as domain entity services.

  11. The infrastructure of telecare

    DEFF Research Database (Denmark)

    Nickelsen, Niels Christian Mossfeldt

    2018-01-01

    Telecare can offer a unique experience of trust in patient-nurse relationships, embracing new standards for professional discretion among nurses, but it also reflects an increasingly complicated relationship between nurses and doctors. The study uses ethnographic methodology in relation to a large 5 million euro project at four hospitals caring for 120 patients with COPD. Twenty screen-mediated conferences were observed and two workshops, centring on nurses’ photo elucidation of the practice of telecare, were conducted with a focus on shifting tasks, professional discretion, responsibility... The analysis demonstrates and proposes that, in telecare, greater accountability, discretion and responsibility are imposed on the nurse, but that nurses also have less access to the means of clinical decision-making, i.e. doctors. The article explores how relational infrastructures ascribe the professions...

  12. EDUCATIONAL COMPUTER SIMULATION EXPERIMENT «REAL-TIME SINGLE-MOLECULE IMAGING OF QUANTUM INTERFERENCE»

    Directory of Open Access Journals (Sweden)

    Alexander V. Baranov

    2015-01-01

    Full Text Available Taking part in organized project activities, students of the technical university create virtual physics laboratories. The article gives an example of a student project: computer modelling and visualization of one of the most remarkable manifestations of reality, the quantum interference of particles. A real experiment with heavy organic fluorescent molecules is used as the prototype for this computer simulation. The students' software product can be used in the information space of an open education system.
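    As a flavour of what such a virtual laboratory might compute, the sketch below builds up a two-slit interference pattern from individual, randomly placed detection events, which is the signature the real single-molecule experiment reveals. All parameter values are illustrative assumptions, not numbers from the article.

```python
# Monte Carlo build-up of a two-slit pattern from individual detection events,
# mimicking how an interference pattern emerges molecule by molecule.
# Wavelength, slit separation and screen distance are illustrative values only.
import numpy as np

WAVELENGTH = 5e-12     # de Broglie wavelength of a heavy molecule, m (assumed)
SLIT_SEP = 100e-9      # slit separation d, m (assumed)
DISTANCE = 0.5         # slit-to-screen distance L, m (assumed)
N_EVENTS = 20000       # number of detected molecules

x = np.linspace(-1.5e-4, 1.5e-4, 3000)                 # screen coordinate, m
phase = np.pi * SLIT_SEP * x / (WAVELENGTH * DISTANCE)
prob = np.cos(phase) ** 2                              # ideal two-slit intensity
prob /= prob.sum()

rng = np.random.default_rng(seed=0)
hits = rng.choice(x, size=N_EVENTS, p=prob)            # one entry per detection

counts, edges = np.histogram(hits, bins=60)
for count, left in zip(counts, edges[:-1]):            # crude text rendering
    print(f"{left * 1e6:+8.1f} um |{'#' * (count // 20)}")
```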

  13. Grid computing in pakistan and: opening to large hadron collider experiments

    International Nuclear Information System (INIS)

    Batool, N.; Osman, A.; Mahmood, A.; Rana, M.A.

    2009-01-01

    A grid computing facility was developed at the sister institutes Pakistan Institute of Nuclear Science and Technology (PINSTECH) and Pakistan Institute of Engineering and Applied Sciences (PIEAS) in collaboration with the Large Hadron Collider (LHC) Computing Grid during the early years of the present decade. The Grid facility PAKGRID-LCG2, one of the grid nodes in Pakistan, was developed mainly using local means and is capable of supporting local and international research and computational tasks in the domain of the LHC Computing Grid. The functional status of the facility is presented in terms of the number of jobs performed. The facility provides a forum for local researchers in the field of high energy physics to participate in the LHC experiments and related activities at the European particle physics research laboratory (CERN), one of the best physics laboratories in the world. It also provides a platform for an emerging computing technology (CT). (author)

  14. Computer-controlled back scattering and sputtering-experiment using a heavy-ion-accelerator

    International Nuclear Information System (INIS)

    Becker, H.; Birnbaum, M.; Degenhardt, K.H.; Mertens, P.; Tschammer, V.

    1978-12-01

    Control and data acquisition with a PDP 11/40 computer and CAMAC instrumentation are reported for an experiment developed to measure sputtering yields and energy losses for heavy 100-300 keV ions in thin metal foils. Besides a quadrupole mass filter or a bending magnet, a multichannel analyser is coupled to the computer, so that pulse-height analysis can also be performed under computer control. The CAMAC instrumentation and measuring programs are built in modular form to enable easy application to other experimental problems. (orig.)

  15. Computer assisted treatments for image pattern data of laser plasma experiments

    International Nuclear Information System (INIS)

    Yaoita, Akira; Matsushima, Isao

    1987-01-01

    An image data processing system for laser-plasma experiments has been constructed. The image data are two-dimensional images taken by X-ray, UV, infrared and visible-light television cameras, and also by streak cameras. They are digitized by frame memories. The digitized image data are stored in disk memories with the aid of a microcomputer. The data are processed by a host computer and stored in files on the host computer and on magnetic tape. In this paper, an overview of the image data processing system and some software for data handling on the host computer are reported. (author)

  16. Deploying and managing a cloud infrastructure real-world skills for the Comptia cloud+ certification and beyond exam CV0-001

    CERN Document Server

    Salam, Abdul; Ul Haq, Salman

    2015-01-01

    Learn in-demand cloud computing skills from industry experts. Deploying and Managing a Cloud Infrastructure is an excellent resource for IT professionals seeking to tap into the demand for cloud administrators. This book helps prepare candidates for the CompTIA Cloud+ Certification (CV0-001) cloud computing certification exam. Designed for IT professionals with 2-3 years of networking experience, this certification provides validation of your cloud infrastructure knowledge. With over 30 years of combined experience in cloud computing, the author team provides the latest expert perspectives on

  17. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  18. PRACE - The European HPC Infrastructure

    Science.gov (United States)

    Stadelmeyer, Peter

    2014-05-01

    The mission of PRACE (Partnership for Advanced Computing in Europe) is to enable high-impact scientific discovery and engineering research and development across all disciplines to enhance European competitiveness for the benefit of society. PRACE seeks to realize this mission by offering world-class computing and data management resources and services through a peer review process. This talk gives a general overview of PRACE and the PRACE research infrastructure (RI). PRACE is established as an international not-for-profit association, and the PRACE RI is a pan-European supercomputing infrastructure which offers access to computing and data management resources at partner sites distributed throughout Europe. Besides a short summary of the organization, history, and activities of PRACE, it is explained how scientists and researchers from academia and industry around the world can access PRACE systems and which education and training activities are offered by PRACE. The overview also contains a selection of PRACE contributions to societal challenges and ongoing activities. Examples of the latter include, among others, petascaling, an application benchmark suite, best-practice guides for the efficient use of key architectures, application enabling and scaling, new programming models, and industrial applications. The Partnership for Advanced Computing in Europe (PRACE) is an international non-profit association with its seat in Brussels. The PRACE Research Infrastructure provides a persistent world-class high performance computing service for scientists and researchers from academia and industry in Europe. The computer systems and their operations accessible through PRACE are provided by 4 PRACE members (BSC representing Spain, CINECA representing Italy, GCS representing Germany and GENCI representing France). The Implementation Phase of PRACE receives funding from the EU's Seventh Framework Programme (FP7/2007-2013) under grant agreements RI-261557, RI-283493 and RI

  19. Management of virtualized infrastructure for physics databases

    International Nuclear Information System (INIS)

    Topurov, Anton; Gallerani, Luigi; Chatal, Francois; Piorkowski, Mariusz

    2012-01-01

    Demands for information storage of physics metadata are rapidly increasing, together with the requirements for its high availability. Most HEP laboratories are struggling to squeeze more from their computer centers and thus focus on virtualizing available resources. CERN started investigating database virtualization in early 2006, first by testing database performance and stability on native Xen. Since then we have been closely evaluating the constantly evolving functionality of virtualisation solutions for the database and middle tier, together with the associated management applications, Oracle's Enterprise Manager and VM Manager. This session will detail our long experience in dealing with virtualized environments, focusing on the newest Oracle OVM 3.0 for x86 and on Oracle Enterprise Manager functionality for efficiently managing a virtualized database infrastructure.

  20. A Provenance-Based Infrastructure to Support the Life Cycle of Executable Papers

    DEFF Research Database (Denmark)

    2011-01-01

As publishers establish a greater online presence as well as infrastructure to support the distribution of more varied information, the idea of an executable paper that enables greater interaction has developed. An executable paper provides more information for computational experiments and results than the text, tables, and figures of standard papers. Executable papers can bundle computational content that allows readers and reviewers to interact with, validate, and explore experiments. By including such content, authors facilitate future discoveries by lowering the barrier to reproducing and extending results. We present an infrastructure for creating, disseminating, and maintaining executable papers. Our approach is rooted in provenance, the documentation of exactly how data, experiments, and results were generated. We seek to improve the experience for everyone involved in the life cycle
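    To illustrate the kind of information a provenance-rooted infrastructure must capture, the sketch below records a minimal provenance entry for one computational result: the inputs, the code version, the parameters and the produced artifacts. The field names and structure are invented for the example and do not reflect the authors' actual system.

```python
# Minimal, hypothetical provenance record for one computational result.
# Field names are illustrative; real systems capture far more detail.
import hashlib
import json
import subprocess
from datetime import datetime, timezone


def file_digest(path: str) -> str:
    """Content hash of an input or output file, so results can be matched exactly."""
    with open(path, "rb") as handle:
        return hashlib.sha256(handle.read()).hexdigest()


def make_provenance(inputs, parameters, outputs):
    """Bundle everything needed to reproduce the run into one JSON document."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "code_version": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        "inputs": {path: file_digest(path) for path in inputs},
        "parameters": parameters,
        "outputs": {path: file_digest(path) for path in outputs},
    }
    return json.dumps(record, indent=2)
```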

  1. DABIE: a data banking system of integral experiments for reactor core characteristics computer codes

    International Nuclear Information System (INIS)

    Matsumoto, Kiyoshi; Naito, Yoshitaka; Ohkubo, Shuji; Aoyanagi, Hideo.

    1987-05-01

    A data banking system of integral experiments for reactor core characteristics computer codes, DABIE, has been developed to lighten the burden of searching through many documents to obtain the experimental data required for the verification of reactor core characteristics computer codes. This data banking system, DABIE, provides systematic classification, registration and easy retrieval of experiment data. DABIE consists of a data bank and supporting programs; the supporting programs are a data registration program, a data reference program and a maintenance program. The system is designed so that users can easily register information about experiment systems, including figures as well as geometry data and measured data, or retrieve those data interactively through a TSS terminal. This manual describes the system structure, how to use it, and sample uses of this code system. (author)

  2. The Gerici project: management of risks related to climate change for infrastructures. First lessons of three years of vulnerability study experience

    International Nuclear Information System (INIS)

    Guerard, H.; Ray, M.

    2007-01-01

    Climate change considerably modifies the vulnerability of infrastructures, and such concepts as the 'hundred-year flood' can even become dangerous in this new context. Interesting conclusions were reached for contracting authorities, and a specific tool was developed for infrastructure operators, as a result of three years of research carried out after labelling by the RGCU (civil engineering and urban network) and with co-financing by the public works ministry. The project, managed by Egis (Scetauroute and Bceom), brings together Sanef, ASF, Meteo-France, LCPC and Esri France. The article describes the stages in the procedure and the geographical information system (GIS), a user-friendly and transposable support tool for technical and strategic investigations. (authors)

  3. Computational methods for fracture analysis of heavy-section steel technology (HSST) pressure vessel experiments

    International Nuclear Information System (INIS)

    Bass, B.R.; Bryan, R.H.; Bryson, J.W.; Merkle, J.G.

    1983-01-01

    This paper summarizes the capabilities and applications of the general-purpose and special-purpose computer programs that have been developed for use in fracture mechanics analyses of HSST pressure vessel experiments. Emphasis is placed on the OCA/USA code, which is designed for analysis of pressurized-thermal-shock (PTS) conditions, and on the ORMGEN/ADINA/ORVIRT system which is used for more general analysis. Fundamental features of these programs are discussed, along with applications to pressure vessel experiments

  4. Permafrost Hazards and Linear Infrastructure

    Science.gov (United States)

    Stanilovskaya, Julia; Sergeev, Dmitry

    2014-05-01

    International experience of linear infrastructure planning, construction and operation in the permafrost zone is directly tied to permafrost hazard assessment, a procedure that should also consider climate impact and infrastructure protection. The current global climate change hotspots are polar and mountain areas: temperatures are rising, precipitation and land-ice conditions are changing, and early springs occur more often. Large linear infrastructure objects cross territories with different permafrost conditions, which are sensitive to changes in air temperature, hydrology and snow accumulation connected to climatic dynamics. Among the most extensive linear structures built on permafrost worldwide are the Trans-Alaska Pipeline (USA), the Alaska Highway (Canada), the Qinghai-Xizang Railway (China) and the Eastern Siberia - Pacific Ocean oil pipeline (Russia). These are currently influenced by regional climate change and permafrost impacts, which may act differently from place to place. Thermokarst is deemed to be the most dangerous process for linear engineering structures; its formation and development depend on the type of linear structure (road or pipeline, elevated or buried), and zonal climate and geocryological conditions are also of determining importance. The projects are of different ages and some were implemented under different climatic conditions, and the effects of permafrost thawing have been recorded every year since their construction. Exploration and transportation companies in different countries protect their linear infrastructure from permafrost degradation in different ways: the highways in Alaska are in good condition thanks to government spending on annual reconstruction, while the Chara-China Railroad in Russia is in a non-standard condition due to an intensive permafrost response. Standards for engineering and construction should be reviewed and updated to account for permafrost hazards caused by the

  5. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase in data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which must be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging task, and several compute-accelerator technologies are being considered.    In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...
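    As a rough consistency check of the quoted figures, dividing the aggregate bandwidth by the readout rate gives the implied average event size; the short sketch below does this arithmetic, assuming only the 40 Tbit/s and 40 MHz numbers above (the event size itself is derived, not quoted).

```python
# Back-of-the-envelope check of the LHCb upgrade readout numbers quoted above.
# The event size is derived from the quoted figures; treat it as an implied average.

readout_rate_hz = 40e6          # 40 MHz readout rate
bandwidth_bit_s = 40e12         # 40 Tbit/s aggregate bandwidth to the filter farm

bits_per_event = bandwidth_bit_s / readout_rate_hz
bytes_per_event = bits_per_event / 8

print(f"Implied average event size: {bits_per_event:.0f} bits "
      f"(~{bytes_per_event / 1e3:.0f} kB per event)")
# -> roughly 1e6 bits, i.e. about 125 kB per proton-proton collision event
```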

  6. Use of Tablet Computers to Promote Physical Therapy Students' Engagement in Knowledge Translation During Clinical Experiences

    Science.gov (United States)

    Loeb, Kathryn; Barbosa, Sabrina; Jiang, Fei; Lee, Karin T.

    2016-01-01

    Background and Purpose: Physical therapists strive to integrate research into daily practice. The tablet computer is a potentially transformational tool for accessing information within the clinical practice environment. The purpose of this study was to measure and describe patterns of tablet computer use among physical therapy students during clinical rotation experiences. Methods: Doctor of physical therapy students (n = 13 users) tracked their use of tablet computers (iPad), loaded with commercially available apps, during 16 clinical experiences (6-16 weeks in duration). Results: The tablets were used on 70% of 691 clinic days, averaging 1.3 uses per day. Information seeking represented 48% of uses; 33% of those were foreground searches for research articles and syntheses and 66% were for background medical information. Other common uses included patient education (19%), medical record documentation (13%), and professional communication (9%). The most frequently used app was Safari, the preloaded web browser (representing 281 [36.5%] incidents of use). Users accessed 56 total apps to support clinical practice. Discussion and Conclusions: Physical therapy students successfully integrated use of a tablet computer into their clinical experiences including regular activities of information seeking. Our findings suggest that the tablet computer represents a potentially transformational tool for promoting knowledge translation in the clinical practice environment. Video Abstract available for more insights from the authors (see Supplemental Digital Content 1, http://links.lww.com/JNPT/A127). PMID:26945431

  7. CMS computing support at JINR

    International Nuclear Information System (INIS)

    Golutvin, I.; Koren'kov, V.; Lavrent'ev, A.; Pose, R.; Tikhonenko, E.

    1998-01-01

    Participation of JINR specialists in the CMS experiment at the LHC requires extensive use of computer resources. In the context of JINR activities in the CMS Project, hardware and software resources have been provided for the full participation of JINR specialists in the CMS experiment, and the JINR computer infrastructure has been brought closer to the CERN one. JINR also provides informational support for the CMS experiment (web server http://sunct2.jinr.dubna.su). Plans for further CMS computing support at JINR are presented

  8. Using a Computer Microphone Port to Study Circular Motion: Proposal of a Secondary School Experiment

    Science.gov (United States)

    Soares, A. A.; Borcsik, F. S.

    2016-01-01

    In this work we present a proposal for an inexpensive experiment to study the kinematics of uniform circular motion in a secondary school. We used a PC sound card to connect a simple homemade sensor to a computer and used the free sound analysis software "Audacity" to record experimental data. We obtained quite good results even in comparison…
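    A minimal sketch of the kind of analysis such a recording allows, assuming the sensor produces one voltage pulse per revolution: detect the pulse times in the sampled signal and compute the rotation period and angular velocity. The file name and detection threshold are illustrative assumptions, not values from the paper.

```python
# Estimate the rotation period and angular velocity from a recording that
# contains one pulse per revolution (e.g. exported from Audacity as WAV).
# File name and detection threshold are illustrative assumptions.
import numpy as np
from scipy.io import wavfile


def rotation_from_wav(path: str, threshold: float = 0.5):
    rate, data = wavfile.read(path)             # sampling rate (Hz), raw samples
    signal = data.astype(float)
    if signal.ndim > 1:                         # keep one channel if stereo
        signal = signal[:, 0]
    signal /= np.max(np.abs(signal))            # normalize to [-1, 1]

    above = signal > threshold
    # Rising edges: sample above threshold while the previous one was not.
    edges = np.flatnonzero(above[1:] & ~above[:-1]) + 1

    periods = np.diff(edges) / rate             # seconds between pulses
    period = periods.mean()
    omega = 2.0 * np.pi / period                # angular velocity, rad/s
    return period, omega


if __name__ == "__main__":
    T, w = rotation_from_wav("rotation.wav")
    print(f"Period: {T:.4f} s, angular velocity: {w:.2f} rad/s")
```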

  9. Experiments Using Cell Phones in Physics Classroom Education: The Computer-Aided "g" Determination

    Science.gov (United States)

    Vogt, Patrik; Kuhn, Jochen; Muller, Sebastian

    2011-01-01

    This paper continues the collection of experiments that describe the use of cell phones as experimental tools in physics classroom education. We describe a computer-aided determination of the free-fall acceleration "g" using the acoustical Doppler effect. The Doppler shift is a function of the speed of the source. Since a free-falling objects…
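    A sketch of how g can be extracted from such a recording, under standard assumptions not spelled out in the truncated abstract: a phone falling away from a stationary microphone while emitting a constant tone f0 produces an observed frequency f(t) = f0 c / (c + v(t)); inverting this gives v(t), and a linear fit of v against t yields g as the slope. The tone frequency, speed of sound and demonstration data below are illustrative, not values from the paper.

```python
# Recover g from Doppler-shifted frequencies of a falling tone source.
# F0, C_SOUND and the sample data are illustrative assumptions.
import numpy as np

C_SOUND = 343.0   # speed of sound in air, m/s (room temperature)
F0 = 4000.0       # emitted tone frequency, Hz


def g_from_doppler(times, freqs, f0=F0, c=C_SOUND):
    """Invert the Doppler formula for a receding source and fit v(t) = g*t + v0."""
    v = c * (f0 / np.asarray(freqs) - 1.0)     # source speed away from the microphone
    g, v0 = np.polyfit(np.asarray(times), v, 1)
    return g


if __name__ == "__main__":
    # Synthetic demonstration data: a source in free fall from rest.
    t = np.linspace(0.05, 0.50, 10)
    f_obs = F0 * C_SOUND / (C_SOUND + 9.81 * t)
    print(f"Fitted g = {g_from_doppler(t, f_obs):.2f} m/s^2")
```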

  10. Evaluating a multi-player brain-computer interface game: challenge versus co-experience

    NARCIS (Netherlands)

    Gürkök, Hayrettin; Volpe, G; Reidsma, Dennis; Poel, Mannes; Camurri, A.; Obbink, Michel; Nijholt, Antinus

    2013-01-01

    Brain–computer interfaces (BCIs) have started to be considered as game controllers. The low level of control they offer rules out perfect control, but it allows the design of challenging games which can be enjoyed by players. Evaluation of enjoyment, or user experience (UX), is

  11. Computational Modeling of the Optical Rotation of Amino Acids: An "in Silico" Experiment for Physical Chemistry

    Science.gov (United States)

    Simpson, Scott; Autschbach, Jochen; Zurek, Eva

    2013-01-01

    A computational experiment that investigates the optical activity of the amino acid valine has been developed for an upper-level undergraduate physical chemistry laboratory course. Hybrid density functional theory calculations were carried out for valine to confirm the rule that adding a strong acid to a solution of an amino acid in the l…

  12. Evaluating the Relationship of Computer Literacy Training Competence and Nursing Experience to CPIS Resistance

    Science.gov (United States)

    Reese, Dorothy J.

    2012-01-01

    The purpose of this quantitative, descriptive/correlational project was to examine the relationship between the level of computer literacy, informatics training, nursing experience, and perceived competence in using computerized patient information systems (CPIS) and nursing resistance to using CPIS. The Nurse Computerized Patient Information…

  13. Development and application of a computer model for large-scale flame acceleration experiments

    International Nuclear Information System (INIS)

    Marx, K.D.

    1987-07-01

    A new computational model for large-scale premixed flames is developed and applied to the simulation of flame acceleration experiments. The primary objective is to circumvent the necessity for resolving turbulent flame fronts; this is imperative because of the relatively coarse computational grids which must be used in engineering calculations. The essence of the model is to artificially thicken the flame by increasing the appropriate diffusivities and decreasing the combustion rate, but to do this in such a way that the burn velocity varies with pressure, temperature, and turbulence intensity according to prespecified phenomenological characteristics. The model is particularly aimed at implementation in computer codes which simulate compressible flows. To this end, it is applied to the two-dimensional simulation of hydrogen-air flame acceleration experiments in which the flame speeds and gas flow velocities attain or exceed the speed of sound in the gas. It is shown that many of the features of the flame trajectories and pressure histories in the experiments are simulated quite well by the model. Using the comparison of experimental and computational results as a guide, some insight is developed into the processes which occur in such experiments. 34 refs., 25 figs., 4 tabs
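    The logic behind increasing the diffusivities and decreasing the combustion rate can be made explicit with the standard laminar-flame scaling argument sketched below; this is a textbook relation consistent with the description above, not necessarily the exact formulation used in the report.

```latex
S_L \propto \sqrt{D\,\dot{\omega}}, \qquad \delta \propto \frac{D}{S_L},
\qquad\text{so with a thickening factor } F>1:\quad
D \to F D,\ \ \dot{\omega} \to \frac{\dot{\omega}}{F}
\ \Longrightarrow\
S_L \to \sqrt{(F D)\,\tfrac{\dot{\omega}}{F}} = S_L,
\qquad \delta \to F\,\delta .
```

    The flame front thus becomes resolvable on a coarse grid while its burn velocity is preserved and can still be prescribed as a function of pressure, temperature and turbulence intensity.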

  14. ONTOLOGY OF COMPUTATIONAL EXPERIMENT ORGANIZATION IN PROBLEMS OF SEARCHING AND SORTING

    Directory of Open Access Journals (Sweden)

    A. Spivakovsky

    2011-05-01

    Full Text Available Ontologies are a key technology for the semantic processing of knowledge. We examine a methodology for using ontologies to organize computational experiments on searching and sorting problems in the course "Basics of algorithms and programming".

  15. Solution of the Schrodinger Equation for a Diatomic Oscillator Using Linear Algebra: An Undergraduate Computational Experiment

    Science.gov (United States)

    Gasyna, Zbigniew L.

    2008-01-01

    Computational experiment is proposed in which a linear algebra method is applied to the solution of the Schrodinger equation for a diatomic oscillator. Calculations of the vibration-rotation spectrum for the HCl molecule are presented and the results show excellent agreement with experimental data. (Contains 1 table and 1 figure.)
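    A minimal sketch of the linear-algebra approach, assuming a finite-difference grid and a harmonic approximation to the HCl potential; the parameter values are illustrative, not the ones used in the article.

```python
# Diagonalize a finite-difference Hamiltonian for a diatomic oscillator.
# Reduced mass and force constant roughly appropriate for HCl; values are
# illustrative and not taken from the article.
import numpy as np

HBAR = 1.054571817e-34        # J s
AMU = 1.66053906660e-27       # kg
mu = (1.0 * 35.0) / (1.0 + 35.0) * AMU   # reduced mass of HCl, kg
k = 516.0                     # harmonic force constant, N/m

n = 1000                                     # grid points
x = np.linspace(-0.6e-10, 0.6e-10, n)        # displacement from equilibrium, m
dx = x[1] - x[0]

# Kinetic energy via the three-point finite-difference Laplacian.
main = np.full(n, -2.0)
off = np.ones(n - 1)
laplacian = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
T = -(HBAR**2) / (2.0 * mu) * laplacian

V = np.diag(0.5 * k * x**2)                  # harmonic potential energy

energies = np.linalg.eigvalsh(T + V)         # eigenvalues in joules
omega = np.sqrt(k / mu)
print("Lowest levels / (hbar*omega):", energies[:4] / (HBAR * omega))
# Expect approximately 0.5, 1.5, 2.5, 3.5 for the harmonic oscillator.
```

    Replacing the harmonic potential with a Morse potential on the same grid would give the anharmonic vibration-rotation spacings discussed in the article.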

  16. Computational Experience with Globally Convergent Descent Methods for Large Sparse Systems of Nonlinear Equations

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    1998-01-01

    Roč. 8, č. 3-4 (1998), s. 201-223 ISSN 1055-6788 R&D Projects: GA ČR GA201/96/0918 Keywords : nonlinear equations * Armijo-type descent methods * Newton-like methods * truncated methods * global convergence * nonsymmetric linear systems * conjugate gradient -type methods * residual smoothing * computational experiments Subject RIV: BB - Applied Statistics, Operational Research

  17. Profile modification computations for LHCD experiments on PBX-M using the TSC/LSC model

    International Nuclear Information System (INIS)

    Kaita, R.; Ignat, D.W.; Jardin, S.C.; Okabayashi, M.; Sun, Y.C.

    1996-01-01

    The TSC-LSC computational model of the dynamics of lower hybrid current drive has been exercised extensively in comparison with data from a Princeton Beta Experiment-Modification (PBX-M) discharge where the measured q(0) attained values slightly above unity. Several significant, but plausible, assumptions had to be introduced to keep the computation from behaving pathologically over time, producing singular profiles of plasma current density and q. Addition of a heuristic current diffusion estimate, or more exactly, a smoothing of the rf-driven current with a diffusion-like equation, greatly improved the behavior of the computation, and brought theory and measurement into reasonable agreement. The model was then extended to longer pulse lengths and higher powers to investigate performance to be expected in future PBX-M current profile modification experiments. copyright 1996 American Institute of Physics
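    The "diffusion-like equation" used to smooth the RF-driven current is, generically, of the form below; this is written only to illustrate the idea, since the abstract does not give the exact operator or coefficient used in TSC/LSC.

```latex
\frac{\partial J_{\mathrm{rf}}}{\partial t}
  = \frac{1}{r}\,\frac{\partial}{\partial r}
    \left( r\, D_J \,\frac{\partial J_{\mathrm{rf}}}{\partial r} \right),
\qquad D_J \ \text{a heuristic smoothing coefficient.}
```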

  18. Methods of physical experiment and installation automation on the base of computers

    International Nuclear Information System (INIS)

    Stupin, Yu.V.

    1983-01-01

    The peculiarities of using computers for the automation of physical experiments and installations are considered. Systems for data acquisition and processing based on microprocessors, micro- and mini-computers, CAMAC equipment and real-time operating systems are described, as are systems intended for the automation of physical experiments on accelerators, laser thermonuclear fusion installations and plasma research installations. The problems of multi-machine complexes and multi-user systems, the organization and development of automated systems for collective use, the organization of inter-machine data exchange and the management of experimental databases are discussed. Data on software systems used for complex experimental data processing are presented. It is concluded that the application of new computers, combined with the new possibilities offered to users by universal operating systems, substantially increases the efficiency of a scientist's work

  19. IBERCIVIS: a stable citizen computing infrastructure, or science at home; IBERCIVIS: una infraestructura estable de computacion ciudadana o la ciencia en casa

    Energy Technology Data Exchange (ETDEWEB)

    Castejon, F.; Tarancon, A.

    2008-07-01

    Researchers deal with increasingly difficult, complex issues that require more resources and tools. In addition to strictly technical problems, they are also required to produce research that is understood, at least in part, by the public and to be able to convey what are almost always difficult ideas and concepts at the frontiers of knowledge. It rarely happens, but sometimes it is possible to solve several problems at the same time. As we will see throughout the article, Volunteer Computing, when properly handled, is able to supply computing power to the scientific community and also to serve as a window on science in the homes of citizens. (Author) 5 refs.

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  2. Applications of small computers for systems control on the Tandem Mirror Experiment-Upgrade

    International Nuclear Information System (INIS)

    Bork, R.G.; Kane, R.J.; Moore, T.L.

    1983-01-01

    Desktop computers operating through a CAMAC-based interface are used to control and monitor the operation of the various subsystems on the Tandem Mirror Experiment-Upgrade (TMX-U) at Lawrence Livermore National Laboratory (LLNL). These systems include: shot sequencer/master timing, neutral beam control (four consoles), magnet power system control, ion-cyclotron resonant heating (ICRH) control, thermocouple monitoring, getter system control, gas fueling system control, and electron-cyclotron resonant heating (ECRH) monitoring. Two additional computers are used to control the TMX-U neutral beam test stand and to provide computer-aided repair/test and development of CAMAC modules. These machines are usually programmed in BASIC, but some codes have been translated into assembly language to increase speed. Details of the computer interfaces and system complexity are described, as well as the evolution of the systems to their present states

  3. Overview of the assessment of the french in-field tritium experiment with computer codes

    International Nuclear Information System (INIS)

    Crabol, B.; Graziani, G.; Edlund, O.

    1989-01-01

    In the framework of the international cooperation established for the realization of the French tritium experiment, an expert group for the assessment of computer codes, including the Joint Research Center of Ispra (European Communities), Studsvik (Sweden) and the Atomic Energy Commission (France), was organized. The aims of the group were as follows: - to help in the design of the experiment by evaluating beforehand the consequences of the release, - to interpret the results of the experiment. This paper describes the latter task and gives the main conclusions drawn from the work

  4. Structures and infrastructures series

    National Research Council Canada - National Science Library

    2008-01-01

    "Research, developments, and applications...on the most advanced techonologies for analyzing, predicting, and optimizing the performance of structures and infrastructures such as buildings, bridges, dams...

  5. Development of the regional EPR and PACS sharing system on the infrastructure of cloud computing technology controlled by patient identifier cross reference manager.

    Science.gov (United States)

    Kondoh, Hiroshi; Teramoto, Kei; Kawai, Tatsurou; Mochida, Maki; Nishimura, Motohiro

    2013-01-01

    The newly developed Oshidori-Net2 provides medical professionals with remote access to the electronic patient record systems (EPR) and PACSs of four hospitals, from different vendors, using cloud computing technology and a patient identifier cross-reference manager. Operation started in April 2012, and the system was applied to patients who moved between the hospitals. The objective is to show the merits and drawbacks of the new system.

  6. Computing activities for the PANDA experiment at FAIR

    International Nuclear Information System (INIS)

    Messchendorp, Johan

    2010-01-01

    The PANDA experiment at the future facility FAIR will provide valuable data for our present understanding of the strong interaction. In preparation for the experiments, large-scale simulations for design and feasibility studies are performed exploiting a new software framework, PandaRoot, which is based on FairROOT and the Virtual Monte Carlo interface, and which runs on a large-scale computing GRID environment exploiting the AliEn 2 middleware. In this paper, an overview is given of the PANDA experiment with emphasis on the various developments which are pursued to provide a user- and developer-friendly computing environment for the PANDA collaboration.

  7. Instant Google Compute Engine

    CERN Document Server

    Papaspyrou, Alexander

    2013-01-01

    Get to grips with a new technology, understand what it is and what it can do for you, and then get to work with the most important features and tasks. This book is a step-by-step guide to installing and using Google Compute Engine. "Instant Google Compute Engine" is great for developers and operators who are new to Cloud computing, and who are looking to get a good grounding in using Infrastructure-as-a-Service as part of their daily work. It's assumed that you will have some experience with the Linux operating system as well as familiarity with the concept of virtualization technologies, suc

  8. Developing a grid infrastructure in Cuba

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Aldama, D.; Dominguez, M.; Ricardo, H.; Gonzalez, A.; Nolasco, E.; Fernandez, E.; Fernandez, M.; Sanchez, M.; Suarez, F.; Nodarse, F.; Moreno, N.; Aguilera, L.

    2007-07-01

    A grid infrastructure was deployed at the Centro de Gestion de la Informacion y Desarrollo de la Energia (CUBAENERGIA) in the framework of the EELA project and of a national initiative for developing a Cuban Network for Science. A stand-alone model was adopted to overcome connectivity limitations. The e-infrastructure is based on gLite-3.0 middleware and is fully compatible with the EELA infrastructure. Afterwards, the work focused on grid applications. The GATE application was deployed from the very beginning for biomedical users. Further, two applications were deployed on the local grid infrastructure: MOODLE for e-learning and AERMOD for the assessment of local dispersion of atmospheric pollutants. Additionally, our local grid infrastructure was made interoperable with a Java-based distributed system for bioinformatics calculations. This experience could be considered a suitable approach for national networks with weak Internet connections. (Author)

  9. Computer navigation experience in hip resurfacing improves femoral component alignment using a conventional jig.

    Science.gov (United States)

    Morison, Zachary; Mehra, Akshay; Olsen, Michael; Donnelly, Michael; Schemitsch, Emil

    2013-11-01

    The use of computer navigation has been shown to improve the accuracy of femoral component placement compared to conventional instrumentation in hip resurfacing. Whether exposure to computer navigation improves accuracy when the procedure is subsequently performed with conventional instrumentation without navigation has not been explored. We examined whether femoral component alignment utilizing a conventional jig improves following experience with the use of imageless computer navigation for hip resurfacing. Between December 2004 and December 2008, 213 consecutive hip resurfacings were performed by a single surgeon. The first 17 (Cohort 1) and the last 9 (Cohort 2) hip resurfacings were performed using a conventional guidewire alignment jig. In 187 cases, the femoral component was implanted using the imageless computer navigation. Cohorts 1 and 2 were compared for femoral component alignment accuracy. All components in Cohort 2 achieved the position determined by the preoperative plan. The mean deviation of the stem-shaft angle (SSA) from the preoperatively planned target position was 2.2° in Cohort 2 and 5.6° in Cohort 1 (P = 0.01). Four implants in Cohort 1 were positioned at least 10° varus compared to the target SSA position and another four were retroverted. Femoral component placement utilizing conventional instrumentation may be more accurate following experience using imageless computer navigation.

  10. Computer navigation experience in hip resurfacing improves femoral component alignment using a conventional jig

    Directory of Open Access Journals (Sweden)

    Zachary Morison

    2013-01-01

    Full Text Available Background: The use of computer navigation has been shown to improve the accuracy of femoral component placement compared to conventional instrumentation in hip resurfacing. Whether exposure to computer navigation improves accuracy when the procedure is subsequently performed with conventional instrumentation without navigation has not been explored. We examined whether femoral component alignment utilizing a conventional jig improves following experience with the use of imageless computer navigation for hip resurfacing. Materials and Methods: Between December 2004 and December 2008, 213 consecutive hip resurfacings were performed by a single surgeon. The first 17 (Cohort 1) and the last 9 (Cohort 2) hip resurfacings were performed using a conventional guidewire alignment jig. In 187 cases, the femoral component was implanted using imageless computer navigation. Cohorts 1 and 2 were compared for femoral component alignment accuracy. Results: All components in Cohort 2 achieved the position determined by the preoperative plan. The mean deviation of the stem-shaft angle (SSA) from the preoperatively planned target position was 2.2° in Cohort 2 and 5.6° in Cohort 1 (P = 0.01). Four implants in Cohort 1 were positioned at least 10° varus compared to the target SSA position and another four were retroverted. Conclusions: Femoral component placement utilizing conventional instrumentation may be more accurate following experience using imageless computer navigation.

  11. Experience of BESIII data production with local cluster and distributed computing model

    International Nuclear Information System (INIS)

    Deng, Z Y; Li, W D; Liu, H M; Sun, Y Z; Zhang, X M; Lin, L; Nicholson, C; Zhemchugov, A

    2012-01-01

    The BES III detector is a new spectrometer which works on the upgraded high-luminosity collider, BEPCII. The BES III experiment studies physics in the tau-charm energy region from 2 GeV to 4.6 GeV. From 2009 to 2011, BEPCII produced 106M ψ(2S) events, 225M J/ψ events, 2.8 fb⁻¹ of ψ(3770) data, and 500 pb⁻¹ of data at 4.01 GeV. All the data samples were processed successfully and many important physics results have been achieved based on these samples. Doing data production correctly and efficiently with limited CPU and storage resources is a big challenge. This paper will describe the implementation of the experiment-specific data production for BESIII in detail, including data calibration with an event-level parallel computing model, data reconstruction, inclusive Monte Carlo generation, random trigger background mixing and multi-stream data skimming. Now, with the data sample increasing rapidly, there is a growing demand to move from solely using a local cluster to a more distributed computing model. A distributed computing environment is being set up and is expected to go into production use in 2012. The experience of BESIII data production, both with a local cluster and with a distributed computing model, is presented here.
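
    The event-level parallel calibration mentioned above can be sketched, very schematically, with standard process-level parallelism. The event structure and the calibrate function below are toy stand-ins, not BESIII offline software.

    # Schematic sketch of event-level parallelism: independent events are farmed
    # out to worker processes.  Event content and the calibration are invented.
    from multiprocessing import Pool

    def calibrate(event):
        """Apply a toy calibration: scale each raw hit energy by a constant gain."""
        gain = 1.02
        return {"id": event["id"], "hits": [e * gain for e in event["hits"]]}

    def make_events(n):
        return [{"id": i, "hits": [0.1 * i, 0.2 * i, 0.3 * i]} for i in range(n)]

    if __name__ == "__main__":
        events = make_events(10_000)
        with Pool(processes=4) as pool:
            calibrated = pool.map(calibrate, events, chunksize=500)
        print(f"calibrated {len(calibrated)} events")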

  12. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed worldwide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.
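
    The automatic-exclusion step can be illustrated with a toy decision rule. The threshold, test counts and site names below are hypothetical, and the real system obtains its inputs from HammerCloud and the ATLAS Grid Information System through their own interfaces rather than from hard-coded data.

    # Toy auto-exclusion logic inspired by the workflow described above.
    from dataclasses import dataclass

    @dataclass
    class SiteStatus:
        name: str
        passed: int    # functional test jobs that succeeded in the last window
        failed: int    # functional test jobs that failed in the last window

    def efficiency(site: SiteStatus) -> float:
        total = site.passed + site.failed
        return site.passed / total if total else 0.0

    def decide(sites, threshold=0.80):
        """Return (online, excluded) site-name lists based on test efficiency."""
        online, excluded = [], []
        for site in sites:
            (online if efficiency(site) >= threshold else excluded).append(site.name)
        return online, excluded

    if __name__ == "__main__":
        sample = [SiteStatus("SITE_A", 95, 5), SiteStatus("SITE_B", 40, 60)]
        online, excluded = decide(sample)
        print("online:", online)
        print("excluded:", excluded)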

  13. Cross-cultural human-computer interaction and user experience design a semiotic perspective

    CERN Document Server

    Brejcha, Jan

    2015-01-01

    This book describes patterns of language and culture in human-computer interaction (HCI). Through numerous examples, it shows why these patterns matter and how to exploit them to design a better user experience (UX) with computer systems. It provides scientific information on the theoretical and practical aspects of interaction and communication design for research experts and industry practitioners, and covers the latest research in semiotics and cultural studies, bringing a set of tools and methods to benefit the process of designing with the cultural background in mind.

  14. Digital computer control on Canadian nuclear power plants -experience to date and the future outlook

    International Nuclear Information System (INIS)

    Pearson, A.

    1977-10-01

    This paper discusses the performance of the digital computer control system at Pickering through the years 1973 to 1976. This evaluation is based on a study of the Pickering Generating Station operating records. The paper goes on to explore future computer architectures and the advantages that could accrue from a distributed system approach. Also outlined are the steps being taken to develop these ideas further in the context of two Chalk River projects - REDNET, an advanced data acquisition system being installed to process information from engineering experiments in NRX and NRU reactors, and CRIP, a prototype communications network using cable television technology. (author)

  15. Application of a personal computer in a high energy physics experiment

    International Nuclear Information System (INIS)

    Petta, P.

    1987-04-01

    UA1 is a detector at the CERN Super Proton Synchrotron collider. MacVEE (Microcomputer applied to the Control of VME Electronic Equipment) is a software development system for the data readout system and for the implementation of the user interface of the experiment control; a commercial personal computer is used. Examples of applications are the Data Acquisition Console, the Scanner Desc equipment and the AMERICA Ram Disks codes. Further topics are the MacUA1 development system for M68K-VME codes and an outline of the future MacVEE System Supervisor. 23 refs., 10 figs., 3 tabs. (qui)

  16. Building an evaluation infrastructure

    DEFF Research Database (Denmark)

    Brandrup, Morten; Østergaard, Kija Lin

    Infrastructuring does not happen by itself; it must be supported. In this paper, we present a feedback mechanism implemented as a smartphone-based application, inspired by the concept of infrastructure probes, which supports the in situ elicitation of feedback. This is incorporated within an eval...

  17. Physical resources and infrastructure

    NARCIS (Netherlands)

    Foeken, D.W.J.; Hoorweg, J.; Foeken, D.W.J.; Obudho, R.A.

    2000-01-01

    This chapter describes the main physical characteristics as well as the main physical and social infrastructure features of Kenya's coastal region. Physical resources include relief, soils, rainfall, agro-ecological zones and natural resources. Aspects of the physical infrastructure discussed are

  18. Transport Infrastructure Slot Allocation

    NARCIS (Netherlands)

    Koolstra, K.

    2005-01-01

    In this thesis, transport infrastructure slot allocation has been studied, focusing on selection slot allocation, i.e. on longer-term slot allocation decisions determining the traffic patterns served by infrastructure bottlenecks, rather than timetable-related slot allocation problems. The

  19. Infrastructures for healthcare

    DEFF Research Database (Denmark)

    Langhoff, Tue Odd; Amstrup, Mikkel Hvid; Mørck, Peter

    2018-01-01

    The Danish General Practitioners Database has over more than a decade developed into a large-scale successful information infrastructure supporting medical research in Denmark. Danish general practitioners produce the data, by coding all patient consultations according to a certain set of classif...... synergy into account, if not to risk breaking down the fragile nature of otherwise successful information infrastructures supporting research on healthcare....

  20. POBE: A Computer Program for Optimal Design of Multi-Subject Blocked fMRI Experiments

    Directory of Open Access Journals (Sweden)

    Bärbel Maus

    2014-01-01

    Full Text Available For functional magnetic resonance imaging (fMRI studies, researchers can use multi-subject blocked designs to identify active brain regions for a certain stimulus type of interest. Before performing such an experiment, careful planning is necessary to obtain efficient stimulus effect estimators within the available financial resources. The optimal number of subjects and the optimal scanning time for a multi-subject blocked design with fixed experimental costs can be determined using optimal design methods. In this paper, the user-friendly computer program POBE 1.2 (program for optimal design of blocked experiments, version 1.2 is presented. POBE provides a graphical user interface for fMRI researchers to easily and efficiently design their experiments. The computer program POBE calculates the optimal number of subjects and the optimal scanning time for user specified experimental factors and model parameters so that the statistical efficiency is maximised for a given study budget. POBE can also be used to determine the minimum budget for a given power. Furthermore, a maximin design can be determined as efficient design for a possible range of values for the unknown model parameters. In this paper, the computer program is described and illustrated with typical experimental factors for a blocked fMRI experiment.