WorldWideScience

Sample records for atlas computers

  1. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed world-wide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing by HammerCloud, to automatic exclusion from production or analysis activities.
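    The final step of this chain, automatic exclusion driven by test results, can be pictured with a minimal sketch. The data layout, the threshold and the activity names below are hypothetical placeholders, not the actual ADC tooling.

```python
# Minimal sketch of the exclusion decision only; the data layout, the threshold
# and the activity names are hypothetical, not the actual ADC tooling.

LATEST_TEST_RESULTS = {
    "SITE_A": {"analysis": ["ok", "ok", "fail"], "production": ["ok", "ok", "ok"]},
    "SITE_B": {"analysis": ["fail", "fail", "fail"], "production": ["fail", "ok", "fail"]},
}

MAX_CONSECUTIVE_FAILURES = 3  # assumed policy value


def should_exclude(history):
    """Exclude an activity at a site once the last N test results are all failures."""
    recent = history[-MAX_CONSECUTIVE_FAILURES:]
    return len(recent) == MAX_CONSECUTIVE_FAILURES and all(r == "fail" for r in recent)


for site, activities in LATEST_TEST_RESULTS.items():
    for activity, history in activities.items():
        if should_exclude(history):
            print(f"exclude {site} from {activity}")
```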

  2. New ATLAS Software & Computing Organization

    CERN Multimedia

    Barberis, D

    Following the election by the ATLAS Collaboration Board of Dario Barberis (Genoa University/INFN) as Computing Coordinator and David Quarrie (LBNL) as Software Project Leader, it was considered necessary to modify the organization of the ATLAS Software & Computing ("S&C") project. The new organization is based upon the following principles: separation of the responsibilities for computing management from those of software development, with the appointment of a Computing Coordinator and a Software Project Leader who are both members of the Executive Board; hierarchical structure of responsibilities and reporting lines; coordination at all levels between TDAQ, S&C and Physics working groups; integration of the subdetector software development groups with the central S&C organization. A schematic diagram of the new organization can be seen in Fig.1. Figure 1: new ATLAS Software & Computing organization. Two Management Boards will help the Computing Coordinator and the Software Project...

  3. Volunteer Computing Experience with ATLAS@Home

    CERN Document Server

    Cameron, David; The ATLAS collaboration; Bourdarios, Claire; Lançon, Eric

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters; using multiple cores inside one job to reduce the memory requirements; and running different types of workload, such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  4. Volunteer Computing Experience with ATLAS@Home

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration; Cameron, David; Filipčič, Andrej

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters; using multiple cores inside one task to reduce the memory requirements; and running different types of workload, such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  5. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...

  6. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  7. Exploiting Virtualization and Cloud Computing in ATLAS

    Science.gov (United States)

    Harald Barreiro Megino, Fernando; Benjamin, Doug; De, Kaushik; Gable, Ian; Hendrix, Val; Panitkin, Sergey; Paterson, Michael; De Silva, Asoka; van der Ster, Daniel; Taylor, Ryan; Vitillo, Roberto A.; Walker, Rod

    2012-12-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R&D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.
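    The "cloud factory" mentioned above is, at its core, a reconciliation loop that keeps the number of running VM instances matched to the queued workload. The sketch below illustrates only that idea; the class and function names, the jobs-per-VM ratio and the queue query are assumptions, not the ATLAS implementation.

```python
# Toy "cloud factory" reconciliation loop; every name here (get_activated_jobs,
# CloudProvider, JOBS_PER_VM) is a placeholder, not the ATLAS implementation.

class CloudProvider:
    """Stand-in for a cloud API client (an EC2- or OpenStack-like endpoint)."""

    def __init__(self):
        self.vms = []

    def boot_vm(self):
        self.vms.append(f"vm-{len(self.vms)}")

    def terminate_vm(self):
        if self.vms:
            self.vms.pop()


def get_activated_jobs():
    """Placeholder for a query to the workload management system's job queue."""
    return 42


JOBS_PER_VM = 8  # assumed number of job slots provided by one VM


def reconcile(provider):
    """Boot or terminate VMs until their number matches the queued workload."""
    wanted = -(-get_activated_jobs() // JOBS_PER_VM)  # ceiling division
    while len(provider.vms) < wanted:
        provider.boot_vm()
    while len(provider.vms) > wanted:
        provider.terminate_vm()


cloud = CloudProvider()
reconcile(cloud)
print(f"running {len(cloud.vms)} VMs for {get_activated_jobs()} queued jobs")
```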

  8. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  9. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  10. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run2. An increased data rate and computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on defined lifetime for each dataset, has been defin...

  11. ATLAS distributed computing: experience and evolution

    Science.gov (United States)

    Nairz, A.; Atlas Collaboration

    2014-06-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future.

  12. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  13. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  14. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in respons...

  15. The Evolution of Cloud Computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Domingues Cordeiro, Cristovao Jose; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; LeBlanc, Matthew; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-12-01

    The ATLAS experiment at the LHC has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing Infrastructure as a Service resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, a system for dynamic location-based discovery of caching proxy servers, and the usage of a data federation to unify the worldwide grid of storage elements into a single namespace and access point. The usage of the experiment's high level trigger farm for Monte Carlo production, in a specialized cloud environment, is presented. Finally, we evaluate and compare the performance of commercial clouds using several benchmarks.
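    One of the components listed above, dynamic location-based discovery of caching proxy servers, can be reduced to a toy sketch: probe candidate proxies and pick the one with the lowest connection latency. The host names and port below are placeholders, and this is not the ATLAS discovery mechanism.

```python
# Toy location-based proxy discovery: pick the candidate with the lowest TCP
# connect latency. Host names and port are placeholders, not real ATLAS proxies.

import socket
import time

CANDIDATE_PROXIES = ["proxy1.example.org", "proxy2.example.org"]  # hypothetical


def connect_latency(host, port=3128, timeout=2.0):
    """Return the TCP connect time in seconds, or None if the host is unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None


def closest_proxy(candidates):
    """Choose the reachable proxy with the smallest measured latency."""
    timed = [(connect_latency(host), host) for host in candidates]
    reachable = [(latency, host) for latency, host in timed if latency is not None]
    return min(reachable)[1] if reachable else None


print(closest_proxy(CANDIDATE_PROXIES))
```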

  16. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; The ATLAS collaboration; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  17. Automating usability of ATLAS Distributed Computing resources

    CERN Document Server

    "Tupputi, S A; The ATLAS collaboration

    2013-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic exclusion/recovery of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources which feature non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the outcome of site-by-site SAM (Site Availability Test) SRM tests. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites.

  18. ATLAS and LHC computing on CRAY

    CERN Document Server

    Sciacca, Gianfranco; The ATLAS collaboration

    2017-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.

  19. ATLAS and LHC computing on CRAY

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large CRAY systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potential due to their size and multidisciplinary usage, and potential gains due to economies of scale. Technical solutions, performance, expected return and future plans are discussed.

  20. ATLAS Distributed Computing in LHC Run2

    Science.gov (United States)

    Campana, Simone

    2015-12-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. A flexible computing utilization exploring the use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with the remote access, and the network topology and performance is deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of an operational experience of the new system and its evolution is presented.
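    The lifetime-based data management strategy mentioned above can be illustrated with a short sketch: each dataset carries a creation time and a lifetime, and expired datasets become deletion candidates. The dataset names, dates and fields below are invented for the example and do not describe Rucio's actual policy engine.

```python
# Illustration of lifetime-based cleanup: datasets whose lifetime has elapsed
# become deletion candidates. Names, dates and fields are invented, and this is
# not Rucio's actual policy engine.

from datetime import datetime, timedelta

DATASETS = [
    {"name": "data15_13TeV.periodA", "created": datetime(2015, 6, 1), "lifetime_days": 365},
    {"name": "mc15_13TeV.sampleB", "created": datetime(2015, 1, 10), "lifetime_days": 180},
]


def is_expired(dataset, now=None):
    """A dataset expires once its creation time plus lifetime lies in the past."""
    now = now or datetime.utcnow()
    return now > dataset["created"] + timedelta(days=dataset["lifetime_days"])


deletion_candidates = [d["name"] for d in DATASETS if is_expired(d)]
print(deletion_candidates)
```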

  1. The ATLAS Distributed Computing: the challenges of the future

    CERN Document Server

    Sakamoto, H; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has collected more than 25 fb-1 of data since the LHC started its operation in 2010. Tens of petabytes of collision events and Monte-Carlo simulations are stored over more than 150 computing centers all over the world. The data processing is performed on grid sites providing more than 100,000 computing cores and orchestrated by the ATLAS in-house developed job and data management services. The discovery of the Higgs-like boson in 2012 would not have been possible without the excellent performance of the ATLAS Distributed Computing. The future ATLAS experiment operation with increased LHC beam energy and luminosity, foreseen for 2014, imposes a significant increase in the computing demands that ATLAS Distributed Computing needs to satisfy. Therefore, development of new data-processing, storage and data-distribution systems has been started to efficiently use the computing resources, exploiting current and future technologies of distributed computing.

  2. Automating usability of ATLAS Distributed Computing resources

    Science.gov (United States)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic handling of outages of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the tasks of providing global monitoring as well as automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of the storage areas monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows monitoring the storage resources status with fine time-granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB's working principles and features. We also present the decrease in human interactions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
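    As a rough illustration of the kind of inference such a tool can apply, the sketch below blacklists a storage area after a run of consecutive failed tests and whitelists it again after a run of passes. The thresholds and the state representation are assumptions made for the example, not SAAB's actual algorithm.

```python
# Hypothetical thresholds and state handling, not SAAB's real algorithm: a
# storage area is blacklisted after a run of failed tests and whitelisted again
# after a run of passes.

BLACKLIST_AFTER = 4  # consecutive failures before exclusion (assumed)
WHITELIST_AFTER = 3  # consecutive successes before recovery (assumed)


def next_state(current_state, history):
    """history is a list of booleans (True = test passed), oldest result first."""
    if current_state == "active" and history[-BLACKLIST_AFTER:] == [False] * BLACKLIST_AFTER:
        return "blacklisted"
    if current_state == "blacklisted" and history[-WHITELIST_AFTER:] == [True] * WHITELIST_AFTER:
        return "active"
    return current_state


print(next_state("active", [True, False, False, False, False]))  # -> blacklisted
print(next_state("blacklisted", [False, True, True, True]))      # -> active
```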

  3. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services, etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third party analytics services on a scalable compute platform. We describe the implemented system where: data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...
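    Once part of the data is indexed in ElasticSearch, operational questions can be answered with short aggregation queries. The sketch below is only an illustration: the endpoint, index and field names are hypothetical, while the query body follows the standard Elasticsearch aggregation DSL.

```python
# Hypothetical endpoint, index and field names; the body follows the standard
# Elasticsearch aggregation DSL (a terms aggregation with a nested sum).

import requests

ES_URL = "http://analytics.example.org:9200/transfers/_search"  # placeholder

query = {
    "size": 0,
    "query": {"range": {"timestamp": {"gte": "now-24h"}}},
    "aggs": {
        "by_site": {
            "terms": {"field": "site", "size": 10},
            "aggs": {"total_bytes": {"sum": {"field": "bytes"}}},
        }
    },
}

response = requests.post(ES_URL, json=query, timeout=30)
response.raise_for_status()
for bucket in response.json()["aggregations"]["by_site"]["buckets"]:
    print(bucket["key"], bucket["total_bytes"]["value"])
```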

  4. Automating ATLAS Computing Operations using the Site Status Board

    CERN Document Server

    Andreeva, J.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G.A.; Wright, M.

    2012-01-01

    The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The presentation will describe how SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in SSB. It will demonstrate the positive impact of the use of SS...

  5. ATLAS Computing on the Swiss Cloud SWITCHengines

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running ATLAS production on SWITCHengines. SWITCHengines is the new cloud infrastructure offered to Swiss academia by the National Research and Education Network SWITCH. While solutions and performance are general, financial considerations and policies, which we also report on, are country specific.

  6. ATLAS computing on Swiss Cloud SWITCHengines

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2017-01-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines. SWITCHengines is a new infrastructure as a service offered to Swiss academia by the National Research and Education Network SWITCH. While solutions and performance are general, financial considerations and policies, on which we also report, are country specific.

  7. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for PanDA Pilot site movers, and others.

  8. Computerized Atlases: The Potential of Computers in Social Studies.

    Science.gov (United States)

    de Leeuw, G.; Waters, N. M.

    1986-01-01

    Examines the use of computer atlases to see how they might contribute to the attainment of established social studies goals. Reviews advantages and disadvantages of existing software and hardware. Describes the potentials of computerized atlases and the hardware required to support such uses. (JDH)

  9. Integrating network awareness in ATLAS distributed computing

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Maeno, T; Mckee, S; Nilsson, P; Petrosyan, A; Vukotic, I; Wenaus, T

    2014-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networks hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networking and data flow performance further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management.
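    A simple way to picture application-level network awareness is a broker that ranks candidate sites by measured network performance before assigning work or scheduling transfers. The measurements, site names and ranking rule below are invented for illustration; they are not the ATLAS workload management logic.

```python
# Invented measurements and site names; this only illustrates ranking candidate
# sites by measured throughput, not the actual ATLAS brokering logic.

THROUGHPUT_MBPS = {  # e.g. derived from perfSONAR-style link measurements
    ("CERN", "SITE_A"): 850.0,
    ("CERN", "SITE_B"): 120.0,
    ("CERN", "SITE_C"): 430.0,
}


def rank_destinations(source, candidates, measurements):
    """Order candidate sites by descending measured throughput from `source`."""
    return sorted(
        candidates,
        key=lambda site: measurements.get((source, site), 0.0),
        reverse=True,
    )


print(rank_destinations("CERN", ["SITE_A", "SITE_B", "SITE_C"], THROUGHPUT_MBPS))
```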

  10. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  11. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in the development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...
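    Brokering jobs with non-standard resource requirements, one of the possibilities listed above, amounts to matching a job's requested resources against the capabilities that sites advertise. The sketch below illustrates that matching only; the site descriptions, requirement keys and values are hypothetical and do not use ARC's real API or bindings.

```python
# Hypothetical site descriptions and requirement keys; ARC's real brokering and
# its Python/Java bindings are not used here.

SITES = [
    {"name": "SITE_HPC", "cores_per_node": 24, "memory_gb": 64, "gpu": False},
    {"name": "SITE_GPU", "cores_per_node": 16, "memory_gb": 128, "gpu": True},
    {"name": "SITE_SMALL", "cores_per_node": 8, "memory_gb": 16, "gpu": False},
]


def matching_sites(requirements, sites):
    """Return the sites that satisfy every requested minimum (or boolean) requirement."""

    def satisfies(site):
        return (
            site["cores_per_node"] >= requirements.get("min_cores", 1)
            and site["memory_gb"] >= requirements.get("min_memory_gb", 0)
            and (not requirements.get("gpu") or site["gpu"])
        )

    return [site["name"] for site in sites if satisfies(site)]


print(matching_sites({"min_cores": 16, "min_memory_gb": 32, "gpu": True}, SITES))
```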

  12. ATLAS@Home: Harnessing Volunteer Computing for HEP

    Science.gov (United States)

    Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration

    2015-12-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  13. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Bourdarios, Claire; Filipcic, Andrej; Lancon, Eric; Wu, Wenjing

    2015-01-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte-Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  14. Common accounting system for monitoring the ATLAS Distributed Computing resources

    CERN Document Server

    Karavakis, E; The ATLAS collaboration; Campana, S; Gayazov, S; Jezequel, S; Saiz, P; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  15. The ATLAS Computing activities and developments of the Italian Cloud

    CERN Document Server

    Rinaldi, L; The ATLAS collaboration; Antonelli, M; Barberis, D; Barberis, S; Brunengo, A; Campana, S; Capone, V; Carlino, G; Carminati, L; Ciocca, C; Corosu, M; De Salvo, A; Di Girolamo, A; Doria, A; Esposito, R; Jha, M K; Luminari, L; Martini, A; Merola, L; Perini, L; Prelz, F; Rebatto, D; Russo, G; Vaccarossa, L; Vilucchi, E

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many Computing Centres spread around the world. The computing workload is managed by regional federations, called Clouds. The Italian Cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3) sites. In this contribution we describe the Italian Cloud site facilities and the activities of Data Processing, Analysis, Simulation and Software Development performed within the Cloud, and we discuss the tests of the new Computing Technologies contributing to the ATLAS Computing Model evolution.

  16. ATLAS computing activities and developments in the Italian Grid cloud

    CERN Document Server

    Rinaldi, L; The ATLAS collaboration; Antonelli, M; Barberis, D; Barberis, S; Brunengo, A; Campana, S; Capone, V; Carlino, G; Carminati, L; Ciocca, C; Corosu, M; De Salvo, A; Di Girolamo, A; Doria, A; Esposito, R; Jha, M K; Luminari, L; Martini, A; Merola, L; Perini, L; Prelz, F; Rebatto, D; Russo, G; Vaccarossa, L; Vilucchi, E

    2012-01-01

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many computing centres spread around the world. The computing workload is managed by regional federations, called Clouds. The Italian Cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3) sites. In this contribution we describe the Italian Cloud site facilities and the activities of data processing, analysis, simulation and software development performed within the Cloud, and we discuss the tests of the new computing technologies contributing to the ATLAS Computing Model evolution.

  17. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  18. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  19. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2015-01-01

    The ATLAS collaboration has set up a volunteer computing project called ATLAS@Home. Volunteers running Monte-Carlo simulation on their personal computers provide significant computing resources, but also belong to a community potentially interested in HEP. Four types of contributors have been identified, whose questions range from advanced technical details to the reason why simulation is needed, how Computing is organized and how it relates to society. The creation of relevant outreach material for simulation, event visualization and distributed production will be described, as well as lessons learned while interacting with the BOINC volunteer community.

  20. The December 2006 ATLAS Computing & Software Workshop

    CERN Document Server

    Fred Luehring

    The 29th ATLAS Computing & Software Workshop was held on December 11-15 at CERN. With the rapidly approaching onset of data taking, the workshop participants had an air of urgency about them. There was considerable discussion on hot topics such as physics validation of the software, data analysis, actual software production on the GRID, and the schedule of work for 2007 including the Final Dress Rehearsal (FDR). However don't be fooled, the workshop was not all work - there were also two social events which were greatly enjoyed by the attendees. The workshop welcomed Wouter Verkerke as the new Physics Validation Coordinator (replacing Davide Costanzo). Most recent validation work has centered on the 12.0.X release series that will be used for the Computing System Commissioning (CSC) exercise. The validation is now a big job because it needs to be done over a variety of conditions (magnetic field on/off, aligned/misaligned geometry) for every candidate release. Luckily there have been a large number of pe...

  1. ATLAS distributed computing operations in the GridKa cloud

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Serfon, Cedric; Walker, Rodney [Ludwig-Maximilians-Universitaet, Garching (Germany); Harenberg, Torsten; Kalinin, Sergey; Schultes, Joachim [Bergische Universitaet, Wuppertal (Germany); Kawamura, Gen [Johannes-Gutenberg-Universitaet, Mainz (Germany); Leffhalm, Kai [DESY, Zeuthen (Germany); Meyer, Joerg [Georg-August-Universitaet, Goettingen (Germany); Petzold, Andreas [Karlsruher Institut fuer Technologie (Germany); Sundermann, Jan Erik [Albert-Ludwigs-Universitaet, Freiburg (Germany)

    2011-07-01

    The ATLAS Grid Computing resources in Germany, Poland, the Czech Republic, Austria, and Switzerland consist of a cloud of 12 Tier-2 computing centers grouped around the Tier-1 center GridKa at the Steinbuch Centre for Computing at KIT. While the Tier-1 center serves as a hub for data management in the cloud and is the principal resource for reprocessing and custodial storage of raw ATLAS data, the Tier-2 centers provide the resources for user analysis and production of simulated events. During the first full year of data taking at the LHC, the GridKa cloud has successfully contributed to the overall ATLAS computing effort, enabling physicists to quickly analyze the large volume of new incoming data and the corresponding simulated events. This talk covers the computing operations in the GridKa cloud with focus on performance and experiences at both the Tier-1 and Tier-2 centers.

  2. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration; Crepe-Renaudin, Sabine Chrystel; De, Kaushik

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  3. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  4. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration

    2017-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities.

  5. Distributed computing operations in the German ATLAS cloud

    Energy Technology Data Exchange (ETDEWEB)

    Boehler, Michael; Gamel, Anton; Sundermann, Jan Erik [Universitaet Freiburg, Freiburg im Breisgau (Germany); Petzold, Andreas [KIT, Karlsruhe (Germany); Kawamura, Gen [Universitaet Mainz (Germany); Leffhalm, Kai [DESY (Germany); Sandhoff, Marisa; Harenberg, Torsten [Bergische Universitaet Wuppertal (Germany); Walker, Rod; Duckeck, Guenter [LMU Muenchen (Germany)

    2013-07-01

    Before the announcement of the discovery of a Higgs-like boson on the 4th of July 2012, a huge amount of data had to be distributed around the world and analysed. Moreover, to have well optimised analyses with solid background estimates, Monte Carlo simulated event samples needed to be generated. All of this, data distribution, Monte Carlo production, and also data reprocessing, is performed by the Worldwide LHC Computing Grid. The ATLAS grid computing resources in Austria, the Czech Republic, Germany, Poland, and Switzerland are organized in the GridKa cloud, which is one out of 10 ATLAS computing clouds. It consists of the Tier-1 centre at KIT in Karlsruhe, which serves as a hub for data management and stores raw ATLAS data, and the Tier-2 centres that provide the resources for user analysis and Monte Carlo sample production. This talk gives an overview of the ATLAS grid computing operations in 2012, focusing on the performance and experiences at both the Tier-1 and Tier-2 centres, and it summarises the prospects and requirements for grid computing during and after the long shut-down of the LHC in 2013/2014.

  6. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high levels of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  7. ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking

    CERN Document Server

    Schovancová, J; The ATLAS collaboration

    2012-01-01

    This paper details a variety of monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the Tier-0 facility at CERN after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centers distributed world-wide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities ranging from network performance and data transfer throughput, through data processing and readiness of the computing services at the ATLAS computing centers, to the reliability and usability of the ATLAS computing centers. The described tools provide monitoring for issues of different levels of criticality: from spotting issues with the instant online monitoring to the long-term accounting information.

  8. ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking

    Science.gov (United States)

    Schovancová, Jaroslava

    2012-12-01

    This paper details a variety of Monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the CERN Analysis Facility after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centres distributed worldwide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities ranging from network performance and data transfer throughput, through data processing and readiness of the computing services at the ATLAS computing centres, to the reliability and usability of the ATLAS computing centres. The described tools provide monitoring for issues of varying levels of criticality: from identifying issues with the instant online monitoring to long-term accounting information.

  9. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    Filipčič, Andrej; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...
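    The protocol-translation idea can be pictured as turning a simple job description into an HTTP request against a REST endpoint. The sketch below is only an illustration: the URL, payload fields and token handling are placeholders and do not describe the real SCEAPI or the ARC CE implementation.

```python
# The URL, payload fields and token handling are placeholders; this does not
# describe the real SCEAPI or the ARC CE implementation.

import requests

SCEAPI_URL = "https://sceapi.example.cn/api/jobs"  # hypothetical endpoint
AUTH_TOKEN = "REPLACE_ME"                          # hypothetical credential


def submit_job(executable, arguments, cores):
    """Translate a simple job description into a REST submission request."""
    payload = {"executable": executable, "arguments": arguments, "cores": cores}
    response = requests.post(
        SCEAPI_URL,
        json=payload,
        headers={"Authorization": f"Bearer {AUTH_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()
    return response.json().get("job_id")


if __name__ == "__main__":
    print(submit_job("run_simulation.sh", ["--events", "1000"], cores=24))
```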

  10. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration

    2017-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to a user through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  11. ATLAS Distributed Computing Shift Operation in the first 2 full years of LHC data taking

    CERN Document Server

    Schovancová, J; The ATLAS collaboration; Elmsheuser, J; Jézéquel, S; Negri, G; Ozturk, N; Sakamoto, H; Slater, M; Smirnov, Y; Ueda, I; Van Der Ster, D C

    2012-01-01

    ATLAS Distributed Computing organized 3 teams to support data processing at the Tier-0 facility at CERN, data reprocessing, data management operations, Monte Carlo simulation production, and physics analysis at the ATLAS computing centers located worldwide. In this paper we describe how these teams ensure that the ATLAS experiment data are delivered to the ATLAS physicists in a timely manner in this era of LHC data taking. We describe experience with ways to improve degraded service performance, and we detail the Distributed Analysis support over the period of computing model evolution.

  12. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  13. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Bauce, Matteo; Dankel, Maik; Howard, Jacob; Kama, Sami

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. These data are processed by in-house built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predate co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this talk we will present the studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction as well as their integration into a multiple process based Athena frame...

  14. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Dankel, Maik; The ATLAS collaboration; Howard, Jacob; Bauce, Matteo; Boing, Rene

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. This data is processed by in-house built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predate co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this proceedings we will present the studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction as well as their integration into a multiple process based...

  15. Validating a new computed tomography atlas for grading ankle osteoarthritis.

    Science.gov (United States)

    Cohen, Michael M; Vela, Nathan D; Levine, Jason E; Barnoy, Eran A

    2015-01-01

    As the most common joint disease, osteoarthritis (OA) poses a significant source of pain and disability. It can be defined by classic radiographic findings, particular symptoms, or a combination of the 2. Although specific grading scales have been developed to evaluate OA in various joints, such as the shoulder, hip, and knee, no definitive classification system is available for grading OA in the ankle. The purpose of the present study was to create and validate a standardized atlas for grading (or staging) ankle osteoarthritis using computed tomography (CT) and "hallmark" findings noted on coronal, sagittal, and axial views extrapolated from the Kellgren-Lawrence radiographic scale. The CT scans of 226 patients at the Miami Veterans Affairs Medical Center were reviewed. An atlas was derived from a retrospective review of 30 remaining CT scans taken from July 2008 to November 2011. After this review, 3 orthogonal static CT images, obtained from 11 remaining patients, were chosen to represent the various stages on the OA scale and were used to test the validity of the atlas developed by 2 of us (M.M.C. and N.D.V.). A multispecialty panel of 9 examiners, excluding ourselves, independently rated the 11 CT scan subjects. The differences among examiners and specialties were calculated, including an intra-examiner agreement for 2 separate readings spaced 9 months apart. Although the small number of subspecialty examiners made the intraspecialty comparisons difficult to validate, the findings nevertheless indicated excellent agreement among all specialty groups, with good intra-investigational (intraclass correlation coefficient 0.962 and 1) and inter-investigational (intraclass correlation coefficient 0.851) values. These results appeared to validate the CT ankle OA atlas, which we believe will be a valuable clinical and research tool, one that will likely be more beneficial than less relevant generalized OA grading scales in use today.
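
    For illustration, a small sketch of how a two-way intraclass correlation coefficient of the kind reported above can be computed from a subjects-by-raters matrix; the ratings are invented and the Shrout-Fleiss ICC(2,1) form used here may differ in detail from the model chosen in the study.

```python
import numpy as np

# Illustrative ICC(2,1) from two-way ANOVA mean squares (Shrout-Fleiss).
# The ratings matrix below is made up: rows = subjects (CT scans), cols = raters.
ratings = np.array([
    [1, 1, 2], [2, 2, 2], [3, 3, 4], [4, 4, 4], [0, 1, 1], [2, 3, 2],
], dtype=float)

n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)
col_means = ratings.mean(axis=0)

ss_rows = k * ((row_means - grand) ** 2).sum()   # between subjects
ss_cols = n * ((col_means - grand) ** 2).sum()   # between raters
ss_tot  = ((ratings - grand) ** 2).sum()
ss_err  = ss_tot - ss_rows - ss_cols             # residual

bms = ss_rows / (n - 1)
jms = ss_cols / (k - 1)
ems = ss_err / ((n - 1) * (k - 1))

icc_2_1 = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
print(f"ICC(2,1) = {icc_2_1:.3f}")
```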

  16. The future of PanDA in ATLAS distributed computing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
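
    As a toy illustration of folding network metrics into brokering decisions, the sketch below scores candidate sites by queue pressure and an estimated transfer time; the site records, the scoring function and its weights are invented for illustration and are not PanDA's actual brokerage logic.

```python
# Toy site-selection sketch: combine a queue-pressure penalty with an
# estimated input-transfer time derived from a network metric. All numbers
# and the weighting are illustrative assumptions.
sites = [
    {"name": "SITE_A", "free_slots": 2000, "queued": 5000, "mbps_to_input": 800},
    {"name": "SITE_B", "free_slots":  300, "queued":  200, "mbps_to_input": 9500},
    {"name": "SITE_C", "free_slots": 1200, "queued": 8000, "mbps_to_input": 300},
]

def score(site, input_gb):
    # Queued work per free slot (lower is better).
    wait_penalty = site["queued"] / max(site["free_slots"], 1)
    # Expected transfer time in seconds for the task input (1 GB = 8000 Mb).
    transfer_s = input_gb * 8000.0 / site["mbps_to_input"]
    return wait_penalty * 60.0 + transfer_s   # arbitrary relative weighting

best = min(sites, key=lambda s: score(s, input_gb=500))
print("broker to:", best["name"])
```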

  17. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This paper surveys the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  18. Analysis of Craniofacial Images using Computational Atlases and Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur

    2008-01-01

    ... purposes. The basis for most of the applications is non-rigid image registration. This approach brings one image into the coordinate system of another, resulting in a deformation field describing the anatomical correspondence between the two images. A computational atlas representing the average anatomy of a group may be constructed and brought into correspondence with a set of images of interest. Having established such a correspondence, various analyses may be carried out. This thesis discusses two types of such analyses, i.e. statistical deformation models and novel approaches for the quantification of the craniofacial morphology and asymmetry of Crouzon mice. Moreover, a method to plan and evaluate treatment of children with deformational plagiocephaly, based on asymmetry assessment, is established. Finally, asymmetry in children with unicoronal synostosis is automatically assessed, confirming previous results...
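
    A toy sketch of the two building blocks described above, assuming the displacement fields are already known: each subject image is resampled through its deformation field and the warped images are averaged into a computational atlas. Real pipelines obtain the fields from non-rigid registration; here they are random and purely illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, displacement):
    """Resample `image` at (identity + displacement); displacement shape (2, H, W)."""
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)          # identity coordinates
    coords = grid + displacement                      # deformed sampling positions
    return map_coordinates(image, coords, order=1, mode="nearest")

# Five fake "subjects" and five fake displacement fields (illustrative only).
rng = np.random.default_rng(0)
subjects = [rng.random((64, 64)) for _ in range(5)]
fields   = [rng.normal(scale=0.5, size=(2, 64, 64)) for _ in range(5)]

# Average the warped subjects to form a crude computational atlas.
atlas = np.mean([warp(img, d) for img, d in zip(subjects, fields)], axis=0)
print("atlas shape:", atlas.shape, "mean intensity:", atlas.mean().round(3))
```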

  19. The ATLAS computing challenge for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment successfully commissioned a software and computing infrastructure to support the physics program during LHC Run 2. The next phases of the accelerator upgrade will present new challenges in the offline area. In particular, at High Luminosity LHC (also known as Run 4) the data taking conditions will be very demanding in terms of computing resources: between 5 and 10 kHz of event rate from the HLT to be reconstructed (and possibly further reprocessed) with an average pile-up of up to 200 events per collision and an equivalent number of simulated samples to be produced. The same parameters for the current run are lower by up to an order of magnitude. While processing and storage resources would need to scale accordingly, the funding situation allows one at best to consider a flat budget over the next few years for offline computing needs. In this paper we present a study quantifying the challenge in terms of computing resources for HL-LHC and present ideas about the possible evolution of the ...
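
    A back-of-envelope sketch of the resource gap, using only the HL-LHC numbers quoted above (5-10 kHz HLT rate, pile-up up to 200); the Run-2 reference values, the linear CPU-per-pile-up model and the assumed yearly hardware gain are simplifying assumptions.

```python
# Naive scaling of reconstruction CPU needs from assumed Run-2 reference
# values to the HL-LHC parameters quoted in the abstract above.
run2_rate_khz   = 1.0    # assumed Run-2 HLT output rate
run2_pileup     = 35.0   # assumed Run-2 average pile-up
hllhc_rate_khz  = 7.5    # mid-point of the 5-10 kHz range
hllhc_pileup    = 200.0

rate_factor   = hllhc_rate_khz / run2_rate_khz
pileup_factor = hllhc_pileup / run2_pileup       # assume CPU/event ~ linear in pile-up

cpu_scale = rate_factor * pileup_factor
print(f"Naive CPU scale factor vs Run 2: ~{cpu_scale:.0f}x")

# A flat budget with an assumed ~20%/year hardware gain buys:
flat_budget_years  = 10
tech_gain_per_year = 1.2
affordable = tech_gain_per_year ** flat_budget_years
print(f"Flat budget over {flat_budget_years} years buys ~{affordable:.0f}x, "
      f"leaving a shortfall of ~{cpu_scale / affordable:.0f}x to close by software.")
```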

  20. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  1. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  2. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the material developed for specific projects.

  3. On the Potential Use of Remote Computing Farms in the ATLAS TDAQ System

    CERN Document Server

    Meirosu, C; Bold, T; Caron, B; Dobinson, Robert W; Fairey, G; Hansen, J B; Hansen, J R; Hughes-Jones, R E; Korcyl, K; Martin, B; Moore, R; Nielsen, J L; Pinfold, J L; Soluk, R A; Szymocha, T; Wäänänen, A; Wheeler, S; 14th IEEE - NPSS Real Time Conference 2005 Nuclear Plasma Sciences Society

    2005-01-01

    The ATLAS experiment at CERN will require a large amount of computing resources for the online analysis system. The software and communication protocols in the ATLAS online analysis system are optimized for a cluster environment. We set up a geographically distributed testbed to evaluate the implications of integrating remote computing resources in this environment. This paper reports on the integration scenarios and analyzes the achieved performance. We highlight limitations in the communication protocols and suggest solutions for solving them. A proposal for employing Grid-enabled resources to allow for on-demand expansion of the computing capabilities is presented at the end of the paper.

  4. The ATLAS Distributed Computing project for LHC Run-2 and beyond.

    CERN Document Server

    Di Girolamo, Alessandro; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. The increased data rate, the computing demands of Monte Carlo simulation, and new approaches to ATLAS analysis dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible use of opportunistic resources such as HPC, cloud and volunteer computing is embedded in the new computing model, data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...
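
    A minimal sketch of the lifetime idea: each dataset carries a creation time and a lifetime, and expired, unaccessed datasets become cleanup candidates. The field names, grace period and policy below are illustrative assumptions, not the actual Rucio implementation.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical dataset records; names and fields are invented for illustration.
datasets = [
    {"name": "data15_13TeV.deriv.DAOD_EXOT5.x",
     "created": datetime(2015, 8, 1, tzinfo=timezone.utc),
     "lifetime_days": 365,
     "last_accessed": datetime(2016, 2, 1, tzinfo=timezone.utc)},
    {"name": "mc15_13TeV.evgen.EVNT.y",
     "created": datetime(2015, 5, 1, tzinfo=timezone.utc),
     "lifetime_days": 180,
     "last_accessed": datetime(2015, 6, 1, tzinfo=timezone.utc)},
]

def expired(ds, now, grace_days=30):
    """Cleanup candidate once creation + lifetime (+ grace) has passed and the
    dataset has not been accessed since its expiry."""
    expiry = ds["created"] + timedelta(days=ds["lifetime_days"])
    return now > expiry + timedelta(days=grace_days) and ds["last_accessed"] < expiry

now = datetime.now(timezone.utc)
for ds in datasets:
    if expired(ds, now):
        print("eligible for deletion:", ds["name"])
```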

  5. Scalable Database Access Technologies for ATLAS Distributed Computing

    CERN Document Server

    Vaniachine, A

    2009-01-01

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event data reconstruction processing steps and often required for user analysis. A main focus of ATLAS database operations is on the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. Since Conditions DB access is critical for operations with real data, we have developed a system in which a different technology can be used as a redundant backup. The redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, which has been proven on a scale of one billion database queries during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for the best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We pre...
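
    A schematic sketch of the redundancy pattern described above: try the primary conditions source and fall back to a backup replica on failure. The source URLs and reader function are placeholders, not the real ATLAS Conditions DB client interfaces.

```python
import logging

# Ordered list of conditions-data sources; both URLs are placeholders.
PRIMARY_SOURCES = [
    "frontier://primary-squid.example.org/atlas_conditions",
    "oracle://backup-replica.example.org/atlas_conditions",
]

class ConditionsUnavailable(RuntimeError):
    pass

def read_payload(source_url, folder, iov):
    """Placeholder for a technology-specific read (Frontier, Oracle, SQLite, ...)."""
    raise NotImplementedError

def get_conditions(folder, iov):
    """Return the payload from the first source that answers; fail over otherwise."""
    last_error = None
    for source in PRIMARY_SOURCES:
        try:
            return read_payload(source, folder, iov)
        except Exception as exc:          # real code would catch narrower errors
            logging.warning("conditions read failed on %s: %s", source, exc)
            last_error = exc
    raise ConditionsUnavailable(f"all sources failed for {folder}") from last_error
```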

  6. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    During the last years the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud can be connected. Evolution of ATLAS data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs for all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more effic...

  7. ATLAS computing operations within the GridKa Cloud

    Energy Technology Data Exchange (ETDEWEB)

    Kennedy, J; Walker, R [LMU Munich (Germany); Olszewski, A [Institute of Nuclear Physics Krakow (Poland); Nderitu, S [University of Bonn (Germany); Serfon, C; Duckeck, G

    2010-04-01

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore we have defined good channels of communication between ATLAS, the T1 and the T2s and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  8. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  9. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  10. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the „Todi” HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  11. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...
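
    A small sketch of the kind of aggregation such a platform performs, assuming per-job records have already been collected from the various streams; the column names and example records are illustrative.

```python
import pandas as pd

# Invented per-job records as they might arrive from production and transfer
# monitoring streams; only the aggregation pattern matters here.
jobs = pd.DataFrame([
    {"site": "MWT2",    "cpu_s": 41000, "wall_s": 43000, "wait_s": 1200, "status": "finished"},
    {"site": "MWT2",    "cpu_s": 39000, "wall_s": 52000, "wait_s": 4800, "status": "finished"},
    {"site": "CERN-P1", "cpu_s": 36000, "wall_s": 40000, "wait_s":  300, "status": "failed"},
    {"site": "CERN-P1", "cpu_s": 42000, "wall_s": 44000, "wait_s":  900, "status": "finished"},
])

jobs["cpu_eff"] = jobs["cpu_s"] / jobs["wall_s"]

# Per-site summary: job counts, failure rate, CPU efficiency, queue wait.
summary = jobs.groupby("site").agg(
    jobs=("status", "size"),
    failure_rate=("status", lambda s: (s == "failed").mean()),
    mean_cpu_eff=("cpu_eff", "mean"),
    median_wait_s=("wait_s", "median"),
)
print(summary)
```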

  12. ATLAS Grid computing activities within the Gridka cloud

    Energy Technology Data Exchange (ETDEWEB)

    Nderitu, Simon-Kirichu [University of Bonn (Germany)

    2008-07-01

    The WLCG Tier1 at GridKa in Karlsruhe, Germany, has a number of Tier2 sites associated with it. Together the Tier2s, located in Germany, Austria, the Czech Republic, Poland and Switzerland, and the T1 at GridKa form the ATLAS GridKa cloud. Like other clouds in WLCG, the main activities within this cloud are running Monte Carlo production jobs, distributed data management (DDM) issues and operations, tape reading tests with data reprocessing in view, and monitoring of the transfer efficiencies, throughputs and networking status between sites. An overview talk will be presented showing the activity, progress and current status in each of the named areas, and also an evaluation of the cloud's readiness for ATLAS data taking in mid 2008.

  13. Tools and strategies to monitor the ATLAS online computing farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, DA; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm, consisting of nearly 3000 PCs with various characteristics. To ensure correct and optimal working conditions, the whole online system must be constantly monitored. The monitoring system should be able to check up to 100,000 health parameters and provide alerts on a selected subset. In this paper we present the assessment of a new monitoring and alerting system based on Icinga. This is an open source monitoring system derived from Nagios, granting backward compatibility with already known configurations, plugins and add-ons, while providing new features. We also report on the evaluation of different data gathering systems and visualization interfaces.
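
    A minimal sketch of a custom check plugin of the kind Icinga or Nagios can execute on farm nodes; only the exit-code convention (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN) and the performance-data suffix are the standard plugin contract, while the metric and thresholds are assumptions.

```python
#!/usr/bin/env python3
# Illustrative Icinga/Nagios-style check: load average per core on a farm node.
import os
import sys

WARN, CRIT = 1.5, 3.0   # load-per-core thresholds (assumed)

def main():
    try:
        load1, _, _ = os.getloadavg()
        per_core = load1 / os.cpu_count()
    except OSError as exc:
        print(f"LOAD UNKNOWN - {exc}")
        return 3
    perfdata = f"load_per_core={per_core:.2f};{WARN};{CRIT}"
    if per_core >= CRIT:
        print(f"LOAD CRITICAL - {per_core:.2f} per core | {perfdata}")
        return 2
    if per_core >= WARN:
        print(f"LOAD WARNING - {per_core:.2f} per core | {perfdata}")
        return 1
    print(f"LOAD OK - {per_core:.2f} per core | {perfdata}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```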

  14. Anatomy atlases.

    Science.gov (United States)

    Rosse, C

    1999-01-01

    Anatomy atlases are unlike other knowledge sources in the health sciences in that they communicate knowledge through annotated images without the support of narrative text. An analysis of the knowledge component represented by images and the history of anatomy atlases suggest some distinctions that should be made between atlas and textbook illustrations. Textbook and atlas should synergistically promote the generation of a mental model of anatomy. The objective of such a model is to support anatomical reasoning and thereby replace memorization of anatomical facts. Criteria are suggested for selecting anatomy texts and atlases that complement one another, and the advantages and disadvantages of hard copy and computer-based anatomy atlases are considered.

  15. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up environment of the LHC, advanced data-analysis techniques are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a hardware-level track-finding implementation designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed, and it is now being installed in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory, AM06). In a first stage, coarse-resolution hits are matched against the patterns and the accepted hits u...
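
    A simplified software analogue of the associative-memory matching described above: each pattern is a tuple of coarse hit addresses ("superstrips"), one per silicon layer, and a pattern fires when enough layers contain a matching hit. The bank contents, layer count and majority threshold are illustrative.

```python
# Toy pattern-bank matching with majority logic; all numbers are invented.
N_LAYERS = 8
MAJORITY = 7           # allow one missing layer

pattern_bank = {
    0: (12, 40, 77, 81, 102, 130, 155, 170),   # hypothetical patterns
    1: (12, 41, 78, 82, 103, 131, 156, 171),
    2: (90, 12, 33, 47, 66, 88, 140, 160),
}

def match(event_hits):
    """event_hits: per-layer sets of coarse hit addresses seen in this event."""
    fired = []
    for pid, pattern in pattern_bank.items():
        hits = sum(1 for layer, ss in enumerate(pattern) if ss in event_hits[layer])
        if hits >= MAJORITY:
            fired.append(pid)
    return fired

event_hits = [{12}, {40, 41}, {77}, {81}, {102}, {130}, {155}, {170, 9}]
print("fired patterns (roads):", match(event_hits))
```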

  16. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of the LHC pp collisions in 2010, the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud can be connected. Evolution of ATLAS data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs for all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  17. The Present and Future Challenges of Distributed Computing in the ATLAS experiment

    CERN Document Server

    Ueda, I; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment collected more than 5 fb-1 of data in 2011 at the energy of 7 TeV. Several billion events have been promptly reconstructed and stored in the ATLAS remote data centers, spanning tens of petabytes of disk and tape storage. In addition, a similar amount of data has been simulated on the Grid to study the detector performance and efficiencies. The data processing and distribution on the Grid sites, with more than 100,000 computing cores, is centrally controlled by the system developed by ATLAS, managing coherent data processing and analysis of almost one million jobs daily. An increased collision energy of 8 TeV in 2012 and a much larger expected data collection rate due to improved LHC operation impose new requirements on the system and suggest a further evolution of the computing model to be able to meet the new challenges in the future. The experience of large-scale data processing and analysis on the Grid is presented through the evolving model and organization of the ATLAS Distribu...

  18. ATLAS

    CERN Multimedia

    2002-01-01

    Barrel and END-CAP Toroids In order to produce a powerful magnetic field to bend the paths of the muons, the ATLAS detector uses an exceptionally large system of air-core toroids arranged outside the calorimeter volumes. The large-volume magnetic field has a wide angular coverage and strengths of up to 4.7 tesla. The toroid system contains over 100 km of superconducting wire and has a design current of 20 500 amperes. (ATLAS brochure: The Technical Challenges)

  19. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. Openstack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.

  20. ATLAS Distributed Computing experience and performance during the LHC Run-2

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration

    2016-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of the Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of...

  1. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration

    2017-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of the Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of...

  2. PanDA for ATLAS Distributed Computing in the Next Decade

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2016-01-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarde...

  3. PanDA for ATLAS distributed computing in the next decade

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarde...

  4. Evolution of the ATLAS PanDA workload management system for exascale computational science

    Science.gov (United States)

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration

    2014-06-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  5. Using Cloud Computing To Create A Multi-Wavelength Atlas Of The Galactic Plane

    Science.gov (United States)

    Berriman, G. B.; Good, J.; Rynge, M.; Juve, G.; Deelman, E.; Kinney, J.; Merrihew, A.

    2014-01-01

    We describe by example how to optimize cloud-computing resources offered by Amazon Web Services (AWS) to create and curate new datasets at scale. We are producing a co-registered atlas of the Galactic Plane at 16 wavelengths from 1 micron to 24 microns with a spatial sampling of 1 arcsec. The atlas is being created by using the Montage mosaic engine to generate co-registered mosaics of images released by the major surveys WISE, 2MASS, ADASS, GLIMPSE and MIPSGAL. The Atlas, when complete, will be 45 TB in size, composed of over 9,600 5 deg x 5 deg tiles with one degree overlap between them. The dataset will be housed on Amazon S3, designed for at-scale storage with access via web protocols. It will be publicly accessible through an API that will support access to the data and creation of cutouts according to the users’ specifications. The processing, which is estimated to require 340,000 compute hours for completion, has exploited virtual clusters created and managed on AWS platforms through the Pegasus workflow management system. We will describe the optimization methods, compute time and processing costs, as a guide for others wishing to exploit cloud platforms for processing and data creation.
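
    A small sketch of how such an overlapping tile grid can be laid out; only the 5 deg tiles with 1 deg overlap come from the text, while the latitude coverage and the resulting tile count are assumptions for illustration.

```python
import numpy as np

# Tile layout for a plane survey: 5 deg tiles stepped by 4 deg (1 deg overlap).
TILE_DEG  = 5.0
OVERLAP   = 1.0
STEP      = TILE_DEG - OVERLAP            # 4 deg between tile centres
LAT_RANGE = (-10.0, 10.0)                 # assumed |b| coverage, illustrative
N_BANDS   = 16                            # wavelengths quoted above

lon_centres = np.arange(0.0, 360.0, STEP)                     # columns in longitude
lat_centres = np.arange(LAT_RANGE[0] + TILE_DEG / 2,
                        LAT_RANGE[1], STEP)                   # rows in latitude

tiles = [(l, b) for b in lat_centres for l in lon_centres]
print(f"{len(tiles)} tile positions x {N_BANDS} bands = "
      f"{len(tiles) * N_BANDS} mosaics")
```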

  6. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  7. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of the Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  8. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  9. Monitoring of computing resource utilization of the ATLAS experiment

    Science.gov (United States)

    Rousseau, David; Dimitrov, Gancho; Vukotic, Ilija; Aidel, Osman; Schaffer, Rd; Albrand, Solveig

    2012-12-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.

  10. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; high-precision measurements of the third quark family such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$ mixing. The ATLAS detector, shown in the Figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  11. Computation of a high-resolution MRI 3D stereotaxic atlas of the sheep brain.

    Science.gov (United States)

    Ella, Arsène; Delgadillo, José A; Chemineau, Philippe; Keller, Matthieu

    2017-02-15

    The sheep model was first used in the fields of animal reproduction and veterinary sciences and then was utilized in fundamental and preclinical studies. For more than a decade, magnetic resonance (MR) studies performed on this model have been increasingly reported, especially in the field of neuroscience. To contribute to MR translational neuroscience research, a brain template and an atlas are necessary. We have recently generated the first complete T1-weighted (T1W) and T2W MR population average images (or templates) of in vivo sheep brains. In this study, we 1) defined a 3D stereotaxic coordinate system for the previously established in vivo population average templates; 2) used the deformation fields obtained during optimized nonlinear registrations to compute nonlinear tissue prior probability maps (nlTPMs) of cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM); 3) delineated 25 external and 28 internal sheep brain structures by segmenting both templates and nlTPMs; and 4) annotated and labeled these structures using an existing histological atlas. We built a high-quality, high-resolution 3D atlas of the average in vivo sheep brain linked to a reference stereotaxic space. The atlas and nlTPMs, associated with the previously computed T1W and T2W in vivo sheep brain templates, provide a complete set of imaging resources that can be imported into other imaging software programs and used as standardized tools for neuroimaging studies or other neuroscience methods, such as image registration, image segmentation, identification of brain structures, implementation of recording devices, or neuronavigation. J. Comp. Neurol. 525:676-692, 2017. © 2016 Wiley Periodicals, Inc.
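
    A minimal sketch of how tissue prior probability maps of the kind described above can be formed once individual segmentations have been warped into a common template space: the voxel-wise average of the aligned binary masks. The array names and shapes are illustrative assumptions, not part of the published pipeline.

        # Minimal sketch (not the authors' pipeline): averaging spatially normalized,
        # binary tissue masks from several subjects into a tissue probability map.
        # `warped_masks` is a hypothetical array of shape (n_subjects, x, y, z)
        # holding CSF/GM/WM masks already resampled into the template space.
        import numpy as np

        def tissue_probability_map(warped_masks: np.ndarray) -> np.ndarray:
            """Voxel-wise fraction of subjects in which the tissue label is present."""
            return warped_masks.astype(np.float32).mean(axis=0)

        # toy example: 4 subjects, 3x3x3 volume
        rng = np.random.default_rng(0)
        masks = rng.integers(0, 2, size=(4, 3, 3, 3))
        tpm = tissue_probability_map(masks)
        assert tpm.min() >= 0.0 and tpm.max() <= 1.0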

  12. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2016-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...
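
    As an illustration of the kind of host check such a system aggregates, the sketch below follows the Nagios/Icinga plugin convention of a one-line status message plus exit codes 0 (OK), 1 (WARNING), 2 (CRITICAL) and 3 (UNKNOWN). The checked quantity and thresholds are examples, not parameters of the ATLAS online farm.

        #!/usr/bin/env python3
        # Illustrative host health check in the style of the Nagios/Icinga plugin
        # convention (exit 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN).
        # Thresholds and the checked parameter (1-minute load average) are examples.
        import os
        import sys

        WARN, CRIT = 8.0, 16.0  # hypothetical thresholds

        def main() -> int:
            try:
                load1, _, _ = os.getloadavg()
            except OSError:
                print("UNKNOWN - could not read load average")
                return 3
            if load1 >= CRIT:
                print(f"CRITICAL - load average {load1:.2f} | load1={load1:.2f}")
                return 2
            if load1 >= WARN:
                print(f"WARNING - load average {load1:.2f} | load1={load1:.2f}")
                return 1
            print(f"OK - load average {load1:.2f} | load1={load1:.2f}")
            return 0

        if __name__ == "__main__":
            sys.exit(main())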

  13. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration

    2017-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...

  14. Magnetic resonance imaging and micro-computed tomography combined atlas of developing and adult mouse brains for stereotaxic surgery.

    Science.gov (United States)

    Aggarwal, M; Zhang, J; Miller, M I; Sidman, R L; Mori, S

    2009-09-15

    Stereotaxic atlases of the mouse brain are important in neuroscience research for targeting of specific internal brain structures during surgical operations. The effectiveness of stereotaxic surgery depends on accurate mapping of the brain structures relative to landmarks on the skull. During postnatal development in the mouse, rapid growth-related changes in the brain occur concurrently with growth of bony plates at the cranial sutures, therefore adult mouse brain atlases cannot be used to precisely guide stereotaxis in developing brains. In this study, three-dimensional stereotaxic atlases of C57BL/6J mouse brains at six postnatal developmental stages: postnatal day (P) 7, P14, P21, P28, P63 and in adults (P140-P160) were developed, using diffusion tensor imaging (DTI) and micro-computed tomography (CT). At present, most widely-used stereotaxic atlases of the mouse brain are based on histology, but the anatomical fidelity of ex vivo atlases to in vivo mouse brains has not been evaluated previously. To account for ex vivo tissue distortion due to fixation as well as individual variability in the brain, we developed a population-averaged in vivo magnetic resonance imaging adult mouse brain stereotaxic atlas, and a distortion-corrected DTI atlas was generated by nonlinearly warping ex vivo data to the population-averaged in vivo atlas. These atlas resources were developed and made available through a new software user-interface with the objective of improving the accuracy of targeting brain structures during stereotaxic surgery in developing and adult C57BL/6J mouse brains.
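
    A small sketch of the coordinate bookkeeping that stereotaxic use of such an atlas requires: converting a voxel index in the atlas volume into millimetre offsets from a skull landmark such as bregma. The voxel size and landmark position below are placeholders standing in for the atlas metadata.

        # Minimal sketch, not taken from the atlas software described above:
        # convert an atlas voxel index into stereotaxic coordinates (mm) relative
        # to a skull landmark. Both constants are hypothetical metadata values.
        import numpy as np

        VOXEL_SIZE_MM = np.array([0.1, 0.1, 0.1])   # hypothetical isotropic voxels
        BREGMA_VOXEL = np.array([200, 150, 80])     # hypothetical landmark position

        def voxel_to_stereotaxic(ijk) -> np.ndarray:
            """Offset of a voxel from bregma, expressed in millimetres."""
            return (np.asarray(ijk) - BREGMA_VOXEL) * VOXEL_SIZE_MM

        print(voxel_to_stereotaxic([210, 150, 60]))  # -> [ 1.  0. -2.]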

  15. Atlases: Complex models of geospace

    Directory of Open Access Journals (Sweden)

    Ikonović Vesna

    2005-01-01

    Full Text Available An atlas models the thematic content of a treated space as an optimal union of maps. Atlases are a higher form of cartography: they are compositions of maps that differ in projection, scale, format, method, content, and use. Atlases can be classified by multiple criteria. A modern classification of atlases by production technology distinguishes: 1. classical or traditional atlases (printed on paper) and 2. electronic atlases (made on electronic media, i.e. a computer or computer station). Electronic atlases are divided into three large groups: 1. view-only electronic atlases, 2. interactive electronic atlases, and 3. analytical electronic atlases.

  16. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects are commonly organized courses for students of all fields to support education on grid computing.

  17. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...
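
    The shared-memory accounting mentioned above can be approximated on Linux by summing the proportional set size (PSS) reported in /proc/<pid>/smaps for each worker process, as in this sketch, which is not the ATLAS MemoryMonitor tool itself.

        # Illustrative sketch: total PSS of a set of cooperating processes on Linux,
        # which accounts for memory pages shared between them. The PIDs are assumed
        # to be known, e.g. the children of a multi-process job.
        def pss_kb(pid: int) -> int:
            """Return the PSS of one process in kB, read from /proc/<pid>/smaps."""
            total = 0
            with open(f"/proc/{pid}/smaps") as smaps:
                for line in smaps:
                    if line.startswith("Pss:"):
                        total += int(line.split()[1])  # value is reported in kB
            return total

        def job_pss_kb(pids: list) -> int:
            """Total PSS of a job made of several cooperating processes."""
            return sum(pss_kb(pid) for pid in pids)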

  18. Monitoring of Computing Resource Use of Active Software Releases at ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2017-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  19. A Computer-Based Atlas of a Rat Dissection.

    Science.gov (United States)

    Quentin-Baxter, Megan; Dewhurst, David

    1990-01-01

    A hypermedia computer program that uses text, graphics, sound, and animation with associative information linking techniques to teach the functional anatomy of a rat is described. The program includes a nonintimidating tutor, to which the student may turn. (KR)

  20. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling the global view of the contribution from Tier3 sites to the ATLAS computing activities. Special attention in p...

  1. Computer Simulation of the Cool Down of the ATLAS Liquid Argon Barrel Calorimeter

    CERN Document Server

    Korperud, N; Fabre, C; Owren, G; Passardi, Giorgio

    2002-01-01

    The ATLAS electromagnetic barrel calorimeter consists of a liquid argon detector with a total mass of 120 tonnes. This highly complicated structure, fabricated from copper, lead, stainless steel and glass-fiber reinforced epoxy, will be placed in an aluminum cryostat. The cool down process of the detector will be limited by the maximum temperature differences accepted by the composite structure so as to avoid critical mechanical stresses. A computer program simulating the cool down of the detector by calculating the local heat transfer throughout a simplified model has been developed. The program evaluates the cool down time as a function of different contact gases filling the spaces within the detector.
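
    A drastically simplified sketch of the type of calculation described above: an explicit finite-difference cool-down of a one-dimensional slab whose boundary temperature is lowered only as fast as an allowed internal temperature difference permits, yielding an estimate of the cool-down time. All material and geometry values are placeholders, not ATLAS calorimeter parameters.

        # Much-simplified sketch: explicit finite-difference cool-down of a 1-D slab
        # while keeping the internal temperature difference below an allowed limit.
        # All numbers below are placeholders, not the ATLAS calorimeter values.
        import numpy as np

        ALPHA = 1.0e-5                    # thermal diffusivity [m^2/s] (placeholder)
        LENGTH, N = 0.5, 51               # slab thickness [m], number of nodes
        T_START, T_TARGET = 300.0, 87.0   # initial and liquid-argon temperatures [K]
        DT_MAX_ALLOWED = 30.0             # max allowed temperature difference [K]

        dx = LENGTH / (N - 1)
        dt = 0.4 * dx**2 / ALPHA          # stable explicit time step
        T = np.full(N, T_START)
        t = 0.0

        while T.max() - T_TARGET > 1.0:
            # lower the boundary only as fast as the gradient limit allows
            T[0] = max(T_TARGET, T.max() - DT_MAX_ALLOWED)
            T[1:-1] += ALPHA * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
            T[-1] = T[-2]                 # insulated far side
            t += dt

        print(f"estimated cool-down time: {t/3600.0:.1f} h")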

  2. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  3. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networking hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networks using perfSONAR combined with data flow performance metrics further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...
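
    A toy sketch of how measured network metrics might enter such a decision: ranking candidate source sites by recent throughput penalized by their current transfer backlog. Site names and numbers are invented for the example; this is not the ANSE or PanDA logic.

        # Illustrative only: choose a source site for a transfer from recently
        # measured throughput (e.g. fed by perfSONAR) and the current backlog.
        from dataclasses import dataclass

        @dataclass
        class SourceCandidate:
            site: str
            throughput_mbps: float   # recent measured throughput to the destination
            pending_transfers: int   # current backlog towards the destination

        def pick_source(candidates):
            """Prefer high measured throughput, penalized by the current backlog."""
            return max(candidates,
                       key=lambda c: c.throughput_mbps / (1 + c.pending_transfers))

        best = pick_source([
            SourceCandidate("SITE_A", 900.0, 40),
            SourceCandidate("SITE_B", 400.0, 2),
            SourceCandidate("SITE_C", 950.0, 120),
        ])
        print(best.site)   # SITE_B in this toy example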

  4. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    CERN Document Server

    Gonzalez de la Hoz, S

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulations jobs. Tier2s are going to be used mo...

  5. The Evolving role of Tier2s in ATLAS with the new Computing and Data Distribution Model

    CERN Document Server

    Gonzalez de la Hoz, S; The ATLAS collaboration

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2s policy for the data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulations jobs. Tier2s are going to be used mo...

  6. SynapSense Wireless Environmental Monitoring System of the RHIC & ATLAS Computing Facility at BNL

    Science.gov (United States)

    Casella, K.; Garcia, E.; Hogue, R.; Hollowell, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    RHIC & ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), the BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics research oriented IT installations. The facility originated in 1990 and grew steadily up to the present configuration with 4 physically isolated IT areas with the maximum rack capacity of about 1000 racks and the total peak power consumption of 1.5 MW. In June 2012 a project was initiated with the primary goal to replace several environmental monitoring systems deployed earlier within RACF with a single commercial hardware and software solution by SynapSense Corporation based on wireless sensor groups and proprietary SynapSense™ MapSense™ software that offers a unified solution for monitoring the temperature and humidity within the rack/CRAC units as well as pressure distribution underneath the raised floor across the entire facility. The deployment was completed successfully in 2013. The new system also supports a set of additional features such as capacity planning based on measurements of total heat load, power consumption monitoring and control, CRAC unit power consumption optimization based on feedback from the temperature measurements and overall power usage efficiency estimations that are not currently implemented within RACF but may be deployed in the future.
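
    The power usage efficiency estimate mentioned above reduces to a simple ratio of total facility power to IT load; a minimal sketch, with placeholder readings standing in for metered values, is given below.

        # Simple sketch of a power usage effectiveness (PUE) estimate: the ratio of
        # total facility power to the power consumed by the IT equipment alone.
        # The readings are placeholders for values a monitoring system would collect.
        def pue(total_facility_kw: float, it_load_kw: float) -> float:
            if it_load_kw <= 0:
                raise ValueError("IT load must be positive")
            return total_facility_kw / it_load_kw

        print(f"PUE = {pue(total_facility_kw=1500.0, it_load_kw=1100.0):.2f}")  # ~1.36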

  7. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  8. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  9. Tier-1 reprocessing and other key grid computing activities within the ATLAS-Gridka cloud

    Energy Technology Data Exchange (ETDEWEB)

    Nderitu, Simon K. [Physikalisches Institut, Univ. Bonn (Germany)

    2009-07-01

    Computing in ATLAS is organized in so-called Tier-1 clouds. The Tier-1 provides crucial services for DDM and production, which have been developed and extensively tested in the last years. A further key activity of a Tier-1 is data reprocessing, which requires bulk reading of RAW data from tape. This is an I/O intensive activity, so an efficient performance of the tape system I/O is very important. Tape reading tests have been done with the aim of optimizing the system. The talk presents the results of the progress made and the current status relative to the expected performance. An overview of the current status and progress in the other areas is also given.

  10. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    Science.gov (United States)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie; Atlas Collaboration

    2014-06-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community have become available. The new cloud technologies also come with new challenges, and one such challenge is the contextualization of computing resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform Google Compute Engine (GCE) the upload of users' virtual machine images is not possible. This precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.
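
    A minimal boot-time contextualization sketch under the assumption that a Puppet agent is pre-installed in the base image and a manifest has been staged locally; the manifest path is hypothetical, and this is not the configuration used in the study above.

        # Hypothetical contextualization wrapper: apply a locally staged Puppet
        # manifest once at first boot and report the result.
        import subprocess
        import sys

        MANIFEST = "/opt/contextualization/site.pp"   # hypothetical manifest location

        def contextualize() -> int:
            """Run `puppet apply` once and return an exit status."""
            result = subprocess.run(
                ["puppet", "apply", "--detailed-exitcodes", MANIFEST],
                capture_output=True, text=True,
            )
            sys.stdout.write(result.stdout)
            sys.stderr.write(result.stderr)
            # with --detailed-exitcodes, 0 and 2 both indicate a successful run
            return 0 if result.returncode in (0, 2) else result.returncode

        if __name__ == "__main__":
            sys.exit(contextualize())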

  11. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    CERN Document Server

    Öhman, H; The ATLAS collaboration; Hendrix, V

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community have become available. With the new cloud technologies also come new challenges, and one such challenge is the contextualization of cloud resources with regard to the requirements of the user and their experiment. In particular, on Google's new cloud platform Google Compute Engine (GCE) the upload of users' virtual machine images is not possible, which precludes the application of ready-to-use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate the contextualization of cloud resources on GCE, with particular regard to ease of configuration, dynamic resource scaling, and a high degree of scalability.

  12. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Augustinack, Jean C.; Nguyen, Khoa;

    2015-01-01

    ... level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise ... from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using ... datasets with different types of MRI contrast. The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer...
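
    The record above describes combining atlas priors with image intensities through Bayesian inference. As a generic illustration of that idea, and not the algorithm of the cited paper, the sketch below assigns each voxel the label maximizing an atlas prior multiplied by a Gaussian intensity likelihood; all arrays and parameters are toy values.

        # Generic atlas-prior MAP labeling: log prior + log Gaussian likelihood per
        # label at every voxel, then argmax over labels. Toy shapes and parameters.
        import numpy as np

        def map_segmentation(image, priors, means, stds):
            """image: (...,); priors: (n_labels, ...); means/stds: (n_labels,)."""
            log_post = np.log(priors + 1e-12)
            for k, (mu, sigma) in enumerate(zip(means, stds)):
                log_post[k] += -0.5 * ((image - mu) / sigma) ** 2 - np.log(sigma)
            return np.argmax(log_post, axis=0)

        img = np.array([[10.0, 55.0], [60.0, 12.0]])
        priors = np.full((2, 2, 2), 0.5)     # uninformative toy priors, 2 labels
        labels = map_segmentation(img, priors,
                                  means=np.array([10.0, 60.0]),
                                  stds=np.array([5.0, 5.0]))
        print(labels)   # [[0 1] [1 0]]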

  13. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at the Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid, and the ATLAS High Level Trigger farm between the data taking periods. Because of the specifics of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  14. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    Science.gov (United States)

    Benjamin, D.; Caballero, J.; Ernst, M.; Guan, W.; Hover, J.; Lesny, D.; Maeno, T.; Nilsson, P.; Tsulaia, V.; van Gemmeren, P.; Vaniachine, A.; Wang, F.; Wenaus, T.; ATLAS Collaboration

    2016-10-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  15. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  16. Automatic testing and assessment of neuroanatomy using a digital brain atlas: method and development of computer- and mobile-based applications.

    Science.gov (United States)

    Nowinski, Wieslaw L; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G; Marchenko, Yevgen; Volkau, Ihar

    2009-10-01

    Preparation of tests and assessment of students by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to Terminologia Anatomica. Because the cerebral models are fully segmented and labeled, our approach enables automatic and random atlas-derived generation of questions to test location and naming of cerebral structures. This is done in four steps: test individualization by the instructor, test taking by the students at their convenience, automatic student assessment by the application, and communication of the individual assessment to the instructor. A computer-based application with an interactive 3D atlas and a preliminary mobile-based application were developed to realize this approach. The application works in two test modes: instructor and student. In the instructor mode, the instructor customizes the test by setting the scope of testing and student performance criteria, which takes a few seconds. In the student mode, the student is tested and automatically assessed. Self-testing is also feasible at any time and pace. Our approach is automatic both with respect to test generation and student assessment. It is also objective, rapid, and customizable. We believe that this approach is novel from computer-based, mobile-based, and atlas-assisted standpoints.
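
    A sketch of the atlas-derived question generation idea: because every structure is segmented and labeled, multiple-choice naming questions can be drawn at random from the label list. The structure names below are examples rather than the application's actual vocabulary.

        # Illustrative quiz generator over a list of labeled atlas structures.
        import random

        STRUCTURES = ["hippocampus", "caudate nucleus", "putamen",
                      "thalamus", "middle cerebral artery"]

        def make_quiz(n_questions, n_choices=4, seed=None):
            """Return multiple-choice items: (highlighted structure, shuffled options)."""
            rng = random.Random(seed)
            quiz = []
            for _ in range(n_questions):
                answer = rng.choice(STRUCTURES)
                distractors = rng.sample([s for s in STRUCTURES if s != answer],
                                         n_choices - 1)
                options = distractors + [answer]
                rng.shuffle(options)
                quiz.append((answer, options))
            return quiz

        for answer, options in make_quiz(2, seed=1):
            print(f"Which structure is highlighted? {options} (answer: {answer})")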

  17. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  18. Computational neuroanatomy: mapping cell-type densities in the mouse brain, simulations from the Allen Brain Atlas

    Science.gov (United States)

    Grange, Pascal

    2015-09-01

    The Allen Brain Atlas of the adult mouse (ABA) consists of digitized expression profiles of thousands of genes in the mouse brain, co-registered to a common three-dimensional template (the Allen Reference Atlas). This brain-wide, genome-wide data set has triggered a renaissance in neuroanatomy. Its voxelized version (with cubic voxels of side 200 microns) is available for desktop computation in MATLAB. On the other hand, brain cells exhibit a great phenotypic diversity (in terms of size, shape and electrophysiological activity), which has inspired the names of some well-studied cell types, such as granule cells and medium spiny neurons. However, no exhaustive taxonomy of brain cells is available. A genetic classification of brain cells is being undertaken, and some cell types have been characterized by their transcriptome profiles. However, given a cell type characterized by its transcriptome, it is not clear where else in the brain similar cells can be found. The ABA can be used to solve this region-specificity problem in a data-driven way: rewriting the brain-wide expression profiles of all genes in the atlas as a sum of cell-type-specific transcriptome profiles is equivalent to solving a quadratic optimization problem at each voxel in the brain. However, the estimated brain-wide densities of 64 cell types published recently were based on one series of co-registered coronal in situ hybridization (ISH) images per gene, whereas the online ABA contains several image series per gene, including sagittal ones. In the presented work, we simulate the variability of cell-type densities in a Monte Carlo way by repeatedly drawing a random image series for each gene and solving the optimization problem. This yields error bars on the region-specificity of cell types.
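
    A sketch of the per-voxel decomposition described above: the expression vector at a voxel is written as a non-negative combination of cell-type transcriptome profiles by solving a small constrained least-squares problem, here with random toy matrices rather than ABA data.

        # Toy per-voxel non-negative least-squares decomposition of expression
        # profiles into cell-type contributions (random data, not ABA measurements).
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(42)
        n_genes, n_cell_types, n_voxels = 200, 5, 10

        profiles = rng.random((n_genes, n_cell_types))      # cell-type transcriptomes
        true_density = rng.random((n_cell_types, n_voxels))
        expression = profiles @ true_density                # simulated voxel expression

        densities = np.column_stack(
            [nnls(profiles, expression[:, v])[0] for v in range(n_voxels)]
        )
        print(np.allclose(densities, true_density, atol=1e-6))  # recovers toy densities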

  19. Pocket atlas of sectional anatomy: computed tomography and magnetic resonance imaging. Vol. 3. Spine, extremities, joints

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.B.; Reif, E. [Caritas Hospital, Dillingen (Germany). Dept. of Radiology]

    2007-07-01

    Magnetic resonance imaging (MRI) of the musculoskeletal system is an established and important component in the diagnosis of diseases of the joints, soft tissues, bones, and bone marrow. We are therefore pleased to bring together images of the joints and the spinal column in a separate volume on the musculoskeletal system. Reflecting the growing importance of new developments in MRI in recent years, with ever-increasing resolution, many images were acquired with 3-tesla units. We are deeply grateful to the manufacturers, Siemens and Philips, for making this possible. We believe that colored atlases are the ideal medium to represent the highly detailed images achieved nowadays with improved resolution techniques. Volume 3 of the Pocket Atlas of Sectional Anatomy provides a color illustration facing each magnetic resonance image, as in the preceding volumes on the skull, thorax, and abdomen. To ensure the greatest possible precision in detail, we still produce these illustrations ourselves. Each is accompanied by a sectional image and an orientation aid. Uniform color schemes ensure optimal clarity, as similar structures, such as arteries, veins, nerves, tendons, etc., are consistently represented in the same color. Individual muscle groups are represented uniformly, but differentiated from other muscle groups, so that classification is possible even when numerous groups of muscles are shown in the same image. Maximal lucidity prevails even in highly detailed representations. This is made possible by the high quality of the production and printing processes that are characteristic of Thieme International. (orig.)

  20. 26th February 2009 - US Google Vice President and Chief Internet Evangelist V. Cerf signing the guest book with Director for Research and Computing S. Bertolucci; visiting ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    HI-0902038 05: IT Department Head, F. Hemmer; US Google Vice President and Chief Internet Evangelist V. Cerf; Computing Security Officer and Colloquium Convenor D. R. Myers; Member of the Internet Society Advisory Council F. Flückiger; Director for Research and Scientific Computing, S. Bertolucci; Honorary Staff Member, B. Segal. HI-0902038 16: Computing Security Officer and Colloquium Convenor D. R. Myers; UC Irvine, ATLAS Deputy Spokesperson elect A. J. Lankford; US Google Vice President and Chief Internet Evangelist V. Cerf; ATLAS Collaboration Spokesperson P. Jenni; IT Department Head, F. Hemmer.

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction: During the past six months, Computing participated in the STEP09 exercise, was heavily involved in the October exercise, and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real-data reconstruction and re-reconstructions, and large-scale data transfers were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, in which Physics, Computing and Offline worked on a common plan to exercise all the steps needed to access and analyze data efficiently. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate various cloud resources transparently into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss what ATLAS has learned during its collaboration with leading commercial and academic cloud providers.
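
    As an illustration of the elasticity argument only (thresholds and field names are invented; this is not PanDA's interface), a toy provisioning rule might add cloud worker nodes when the backlog of activated jobs exceeds what the static grid slots can absorb:

    from dataclasses import dataclass

    @dataclass
    class QueueState:
        activated_jobs: int    # jobs waiting to run
        running_slots: int     # slots currently busy on static (grid) resources

    def cloud_nodes_needed(state, slots_per_node=8, target_wait_ratio=2.0, max_nodes=500):
        """Toy rule: keep the backlog below target_wait_ratio times the running slots
        by provisioning extra cloud worker nodes (all thresholds are hypothetical)."""
        tolerable_backlog = int(target_wait_ratio * state.running_slots)
        excess = max(0, state.activated_jobs - tolerable_backlog)
        return min(max_nodes, -(-excess // slots_per_node))   # ceiling division

    if __name__ == "__main__":
        print(cloud_nodes_needed(QueueState(activated_jobs=5000, running_slots=1000)))  # 375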

  3. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid; Reconstruction et identification des électrons dans l'expérience Atlas. Participation à la mise en place d'un Tier 2 de la grille de calcul

    Energy Technology Data Exchange (ETDEWEB)

    Derue, F

    2008-03-15

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider of CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment, using simulated data and data taken during the combined test beam of 2004. The analysis of Atlas data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  4. The ATLAS Distributed Data Management project: Past and Future

    Science.gov (United States)

    Garonne, Vincent; Stewart, Graeme A.; Lassnig, Mario; Molfetas, Angelos; Barisits, Martin; Beermann, Thomas; Nairz, Armin; Goossens, Luc; Barreiro Megino, Fernando; Serfon, Cedric; Oleynik, Danila; Petrosyan, Artem

    2012-12-01

    ATLAS has recorded more than 8 petabytes (PB) of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90 PB are currently stored in the Worldwide LHC Computing Grid by ATLAS. All these data are managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to these data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of PB-scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.
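
    The replica bookkeeping that such a system performs can be illustrated with a generic toy catalogue (this is neither the DQ2 nor the Rucio API; dataset and site names are placeholders): each dataset maps to a set of sites, and a declarative rule returns the transfers needed to reach a requested number of copies:

    from collections import defaultdict

    class ToyReplicaCatalogue:
        """Generic illustration of a dataset/replica catalogue (not the DQ2 or Rucio API)."""

        def __init__(self):
            self.replicas = defaultdict(set)          # dataset name -> set of site names

        def add_replica(self, dataset, site):
            self.replicas[dataset].add(site)

        def list_replicas(self, dataset):
            return sorted(self.replicas[dataset])

        def apply_rule(self, dataset, min_copies, candidate_sites):
            """Declarative rule: return the transfers needed to reach min_copies replicas."""
            have = self.replicas[dataset]
            missing = max(0, min_copies - len(have))
            targets = [s for s in candidate_sites if s not in have][:missing]
            return [(dataset, s) for s in targets]

    if __name__ == "__main__":
        cat = ToyReplicaCatalogue()
        cat.add_replica("data.RAW.example", "SITE_A_DATADISK")
        print(cat.apply_rule("data.RAW.example", 2, ["SITE_B_DATADISK", "SITE_C_DATADISK"]))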

  5. The ATLAS Distributed Data Management project: Past and Future

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Lassnig, M; Molfetas, A; Barisits, M; Beermann, T; Nairz, A; Goossens, L; Barreiro Megino, F; Serfon, C; Oleynik, D; Petrosyan, A

    2012-01-01

    ATLAS has recorded almost 8 PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90 PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All this data is managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to this data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of petabyte-scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.

  6. Methods and computing challenges of the realistic simulation of physics events in the presence of pile-up in the ATLAS experiment

    CERN Document Server

    Chapman, J D; The ATLAS collaboration

    2014-01-01

    We are now in a regime where we observe a substantial number of proton-proton collisions within each filled LHC bunch-crossing, and also multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase further with the higher luminosities expected in the near future. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a description of the standard approach used by the ATLAS experiment and details of how we manage the conflicting demands of keeping the background dataset size as small as possible while minimizing the effect of background event re-use. We also present details of the methods used to minimize the memory footprint of these digitization jobs, to keep them within the grid limit, despite combining the information from thousands of simulated events at once. We also describe an alternative approach, known as Overlay. Here, the actual detector conditions are sampled from raw data using a special zero-bias trigger, and the simulated physi...
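
    The core of the standard pile-up digitization approach can be sketched as follows, under the simplifying assumptions that the number of in-time pile-up interactions follows a Poisson distribution with mean mu and that background events are drawn from a bounded in-memory cache (function and parameter names are illustrative); re-using cached events is exactly the trade-off between memory footprint and background re-use discussed above:

    import numpy as np

    def overlay_pileup(hard_scatter, minbias_cache, mu, rng=None):
        """Toy in-time pile-up overlay: draw N ~ Poisson(mu) minimum-bias events from a
        bounded cache and merge their hits with the hard-scatter event.  A small cache
        keeps the memory footprint low, at the price of background event re-use."""
        rng = rng or np.random.default_rng(0)
        n_pileup = rng.poisson(mu)
        overlaid = list(hard_scatter)                           # hits of the signal event
        for idx in rng.integers(len(minbias_cache), size=n_pileup):
            overlaid.extend(minbias_cache[idx])                 # background hits, possibly re-used
        return overlaid

    if __name__ == "__main__":
        cache = [[("cell_id", i)] for i in range(100)]          # 100 cached minimum-bias events
        merged = overlay_pileup([("cell_id", -1)], cache, mu=25)
        print(len(merged))                                      # about mu + 1 hits on average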

  7. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely; the volume of processed data is beyond the exabyte scale, spread across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition, based on many criteria such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as the Grid, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring optimised for presenting a huge amount of information. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...
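
    The dynamic job definition mentioned above can be illustrated with a small sketch (the limits and field names are invented and are not ProdSys2's real schema): a task's input files are grouped greedily into jobs so that each job respects configured limits on input size and number of events:

    from dataclasses import dataclass

    @dataclass
    class InputFile:
        name: str
        size_gb: float
        events: int

    def define_jobs(files, max_input_gb=10.0, max_events=50_000):
        """Greedy splitting of a task's inputs into jobs under simple resource limits."""
        jobs, current, size, events = [], [], 0.0, 0
        for f in files:
            if current and (size + f.size_gb > max_input_gb or events + f.events > max_events):
                jobs.append(current)
                current, size, events = [], 0.0, 0
            current.append(f.name)
            size += f.size_gb
            events += f.events
        if current:
            jobs.append(current)
        return jobs

    if __name__ == "__main__":
        inputs = [InputFile(f"AOD.{i:05d}.pool.root", size_gb=3.2, events=20_000) for i in range(7)]
        for i, names in enumerate(define_jobs(inputs)):
            print(f"job {i}: {names}")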

  8. Distributed analysis in ATLAS

    CERN Document Server

    Legger, Federica; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data by the distributed physics community is a challenging task. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We r...
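
    A toy version of the continuous automatic site validation idea (inspired by, but not reproducing, the HammerCloud mechanics; the window and threshold are invented) records recent functional-test results per site and excludes sites whose success rate drops below a threshold:

    from collections import defaultdict, deque

    class SiteValidator:
        """Toy automatic validation: track recent test-job results per site and
        exclude sites whose success rate falls below a threshold."""

        def __init__(self, window=20, min_success_rate=0.8):
            self.window = window
            self.min_success_rate = min_success_rate
            self.results = defaultdict(lambda: deque(maxlen=window))

        def record(self, site, succeeded):
            self.results[site].append(bool(succeeded))

        def excluded_sites(self):
            return sorted(site for site, res in self.results.items()
                          if len(res) == self.window
                          and sum(res) / len(res) < self.min_success_rate)

    if __name__ == "__main__":
        v = SiteValidator(window=5, min_success_rate=0.8)
        for ok in (True, True, False, False, False):
            v.record("SITE_A", ok)
        for ok in (True,) * 5:
            v.record("SITE_B", ok)
        print(v.excluded_sites())   # ['SITE_A']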

  9. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  10. The ATLAS Simulation Infrastructure

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Adorisio, Cristina; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahmed, Hossain; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov , Andrei; Aktas, Adil; Alam, Mohammad; Alam, Muhammad Aftab; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amelung, Christoph; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Andeen, Timothy; Anders, Christoph Falk; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonelli, Stefano; Antos, Jaroslav; Antunovic, Bijana; Anulli, Fabio; Aoun, Sahar; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Argyropoulos, Theodoros; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Arutinov, David; Asai, Makoto; Asai, Shoji; Silva, José; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asner, David; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Atoian, Grigor; Auerbach, Benjamin; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Badescu, Elisabeta; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Mark; Baker, Oliver Keith; Baker, Sarah; Baltasar Dos Santos Pedrosa, Fernando; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Baranov, Sergey; Baranov, Sergei; Barashkou, Andrei; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Bartsch, Detlef; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Bazalova, Magdalena; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Becerici, Neslihan; Bechtle, Philip; Beck, Graham; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Ayda; Beddall, Andrew; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Bendel, Markus; Benedict, Brian Hugues; Benekos, Nektarios; Benhammou, Yan; Benincasa, Gianpaolo; Benjamin, Douglas; Benoit, Mathieu; Bensinger, 
James; Benslama, Kamal; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blocker, Craig; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bocci, Andrea; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bondarenko, Valery; Bondioli, Mario; Boonekamp, Maarten; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boulahouache, Chaouki; Bourdarios, Claire; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodet, Eyal; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bucci, Francesca; Buchanan, James; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, Françcois; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Byatt, Tom; Caballero, Jose; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Camarri, Paolo; Cameron, David; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D.; Carron Montero, Sebastian; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Cheatham, Susan; Chekanov, Sergei; 
Chekulaev, Sergey; Chelkov, Gueorgui; Chen, Hucheng; Chen, Shenjian; Chen, Xin; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Tcherniatine, Valeri; Chesneanu, Daniela; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chevallier, Florent; Chiarella, Vitaliano; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Citterio, Mauro; Clark, Allan G.; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H.; Coggeshall, James; Cogneras, Eric; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Consonni, Michele; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Cranshaw, Jack; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dallison, Steve; Daly, Colin; Dam, Mogens; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Merlin; Davison, Adam; Dawson, Ian; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De Mora, Lee; De Oliveira Branco, Miguel; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; De Zorzi, Guido; Dean, Simon; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Deng, Wensheng; Denisov, Sergey; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djilkibaev, Rashid; Djobava, Tamar; do Vale, Maria 
Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Doglioni, Caterina; Doherty, Tom; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dotti, Andrea; Dova, Maria-Teresa; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Dris, Manolis; Dubbert, Jörg; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen , Michael; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Dushkin, Andrei; Duxfield, Robert; Dwuznik, Michal; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Egorov, Kirill; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ermoline, Iouri; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Facius, Katrine; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Fayard, Louis; Fayette, Florent; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Feligioni, Lorenzo; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernandes, Bruno; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Fonseca Martin, Teresa; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; Freestone, Julian; French, Sky; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallas, Manuel; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, K K; Gao, Yongsheng; Gaponenko, Andrei; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gautard, Valerie; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; Gentile, 
Simonetta; Georgatos, Fotios; George, Simon; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Girtler, Peter; Giugni, Danilo; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goggi, Virginio; Goldfarb, Steven; Goldin, Daniel; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçcalo, Ricardo; Gonella, Laura; Gong, Chenwei; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Green, Barry; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregor, Ingrid-Maria; Grenier, Philippe; Griesmayer, Erich; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Grishkevich, Yaroslav; Groh, Manfred; Groll, Marius; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Gupta, Ambreesh; Gusakov, Yury; Gutierrez, Andrea; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Härtel, Roland; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, John Renner; Hansen, Peter Henrik; Hansl-Kozanecka, Traudl; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hashemi, Kevan; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayakawa, Takashi; Hayward, Helen; Haywood, Stephen; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Mathieu; Hellman, Sten; Helsens, Clement; Hemperek, Tomasz; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Henß, Tobias; Hernández Jiménez, Yesenia; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; 
Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Horazdovsky, Tomas; Hori, Takuya; Horn, Claus; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howe, Travis; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Hughes, Emlyn; Hughes, Gareth; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Irles Quiles, Adrian; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Isobe, Tadaaki; Issakov, Vladimir; Issever, Cigdem; Istin, Serhat; Itoh, Yuki; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jared, Richard; Jarlskog, Göran; Jeanty, Laura; Jen-La Plante, Imai; Jenni, Peter; Jež, Pavel; Jézéquel, Stéphane; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinnouchi, Osamu; Joffe, David; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jorge, Pedro; Joseph, John; Juranek, Vojtech; Jussel, Patrick; Kabachenko, Vasily; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kalinowski, Artur; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagounis, Michael; Karagoz, Muge; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kastoryano, Michael; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kayumov, Fred; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Keener, Paul; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kelly, Marc; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khomich, Andrei; Khoriauli, Gia; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kind, Oliver; Kind, Peter; King, Barry; Kirk, Julie; Kirsch, Guillaume; Kirsch, Lawrence; Kiryunin, Andrey; Kisielewska, Danuta; Kittelmann, Thomas; Kiyamura, Hironori; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Klute, Markus; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Koblitz, Birger; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; 
Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kolos, Serguei; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Kondo, Takahiko; Kono, Takanori; Konoplich, Rostislav; Konovalov, Serguei; Konstantinidis, Nikolaos; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostka, Peter; Kostyukhin, Vadim; Kotov, Serguei; Kotov, Vladislav; Kotov, Konstantin; Kourkoumelis, Christine; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Henri; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumshteyn, Zinovii; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurchaninov, Leonid; Kurochkin, Yurii; Kus, Vlastimil; Kwee, Regina; La Rotonda, Laura; Labbe, Julien; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lane, Jenna; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Lazzaro, Alfio; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; Le Vine, Micheal; Lebedev, Alexander; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lefebvre, Michel; Legendre, Marie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leitner, Rupert; Lellouch, Daniel; Lellouch, Jeremie; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leroy, Claude; Lessard, Jean-Raphael; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Leyton, Michael; Li, Haifeng; Li, Shumin; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lilley, Joseph; Lim, Heuijin; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Tiankuan; Liu, Yanwen; Livan, Michele; Lleres, Annick; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Lovas, Lubomir; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, 
Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Luehring, Frederick; Luisa, Luca; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahmood, A.; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makouski, Mikhail; Makovec, Nikola; Malecki, Piotr; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mambelli, Marco; Mameghani, Raphael; Mamuzic, Judita; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mapelli, Alessandro; Mapelli, Livio; March , Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Marti-Garcia, Salvador; Martin, Alex; Martin, Andrew; Martin, Brian; Martin, Brian; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Tim; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martini, Agnese; Martyniuk, Alex; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massol, Nicolas; Mastroberardino, Anna; Masubuchi, Tatsuya; Matricon, Pierre; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maxfield, Stephen; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mc Donald, Jeffrey; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCubbin, Norman; McFarlane, Kenneth; McGlone, Helen; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Menke, Sven; Meoni, Evelin; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W. 
Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Mills, Bill; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Misawa, Shigeki; Miscetti, Stefano; Misiejuk, Andrzej; Mitrevski, Jovan; Mitsou, Vasiliki A.; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Mladenov, Dimitar; Moa, Torbjoern; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murillo Garcia, Raul; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakamura, Koji; Nakano, Itsuo; Nakatsuka, Hiroki; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nderitu, Simon Kirichu; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newcomer, Mitchel; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicoletti, Giovanni; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Nikiforov, Andriy; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Notz, Dieter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver, John; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Ottersbach, John; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Oyarzun, Alejandro; Ozcan, Veysi Erkcan; Ozone, Kenji; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pajchel, Katarina; Palestini, 
Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadopoulou, Theodora; Park, Su-Jung; Park, Woochun; Parker, Andy; Parker, Sherwood; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor , Gabriella; Pataraia, Sophio; Pater, Joleen; Patricelli, Sergio; Patwa, Abid; Pauly, Thilo; Peak, Lawrence; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Persembe, Seda; Perus, Antoine; Peshekhonov, Vladimir; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Piacquadio, Giacinto; Piccinini, Maurizio; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pinto, Belmiro; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Pleier, Marc-Andre; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poffenberger, Paul; Poggioli, Luc; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Ponsot, Patrick; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Popule, Jiri; Portell Bueso, Xavier; Porter, Robert; Pospelov, Guennady; Pospisil, Stanislav; Potekhin, Maxim; Potrap, Igor; Potter, Christina; Potter, Christopher; Potter, Keith; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Pribyl, Lukas; Price, Darren; Price, Lawrence; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Puigdengoles, Carles; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qi, Ming; Qian, Jianming; Qian, Weiming; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radeka, Veljko; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renkel, Peter; Rescia, Sergio; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richards, Ronald; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Roa Romero, Diego Alejandro; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; 
Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosenbaum, Gabriel; Rosselet, Laurent; Rossetti, Valerio; Rossi, Leonardo Paolo; Rotaru, Marina; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rumyantsev, Leonid; Rurikova, Zuzana; Rusakovich, Nikolai; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryan, Patrick; Rybkin, Grigori; Rzaeva, Sevda; Saavedra, Aldo; Sadrozinski, Hartmut; Sadykov, Renat; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandhu, Pawan; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sanny, Bernd; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sasaki, Osamu; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Savard, Pierre; Savine, Alexandre; Savinov, Vladimir; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R.~Dean; Schamov, Andrey; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitz, Martin; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schreiner, Alexander; Schroeder, Christian; Schroer, Nicolai; Schroers, Marcel; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloper, John erik; Sluka, Tomas; Smakhtin, Vladimir; Smirnov, Sergei; Smirnov, Yuri; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; 
Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Soluk, Richard; Sondericker, John; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spencer, Edwin; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St. Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stancu, Stefan Nicolae; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stastny, Jan; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Su, Dong; Soh, Dart-yin; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Takuya; Suzuki, Yu; Sykora, Ivan; Sykora, Tomas; Szymocha, Tadeusz; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Ryan P.; Taylor, Wendy; Teixeira-Dias, Pedro; Ten Kate, Herman; Teng, Ping-Kun; Tennenbaum-Katan, Yaniv-David; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Stan; Thompson, Emily; Thompson, Peter; Thompson, Paul; Thompson, Ray; Thomson, Evelyn; Thun, Rudolf; Tic, Tomas; Tikhomirov, Vladimir; Tikhonov, Yury; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomasek, Lukas; Tomasek, Michal; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tuggle, Joseph; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Tuts, Michael; Twomey, Matthew Shaun; 
Tylmad, Maja; Tyndel, Mike; Uchida, Kirika; Ueda, Ikuo; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Berg, Richard; van der Graaf, Harry; van der Kraaij, Erik; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasilyeva, Lidia; Vassilakopoulos, Vassilios; Vazeille, Francois; Vellidis, Constantine; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Villa, Mauro; Villani, Giulio; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Viret, Sébastien; Virzi, Joseph; Vitale , Antonio; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Matteo; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vudragovic, Dusan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Walbersloh, Jorg; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Wang, Chiho; Wang, Haichen; Wang, Jin; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Wastie, Roy; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Marc; Weber, Manuel; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Werthenbach, Ulrich; Wessels, Martin; Whalen, Kathleen; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Williams, Eric; Williams, Hugh; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wright, Dennis; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wulf, Evan; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xu, Da; Xu, Neng; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Zhaoyu; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ye, Jingbo; Ye, 
Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yuan, Li; Yurkewicz, Adam; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zambrano, Valentina; Zanello, Lucia; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Qizhi; Zhang, Xueyao; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zutshi, Vishnu

    2010-01-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

  11. The ATLAS Simulation Infrastructure

    Science.gov (United States)

    Aad, G.; Abbott, B.; Abdallah, J.; Abdelalim, A. A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Abreu, H.; Acharya, B. S.; Adams, D. L.; Addy, T. N.; Adelman, J.; Adorisio, C.; Adragna, P.; Adye, T.; Aefsky, S.; Aguilar-Saavedra, J. A.; Aharrouche, M.; Ahlen, S. P.; Ahles, F.; Ahmad, A.; Ahmed, H.; Ahsan, M.; Aielli, G.; Akdogan, T.; Åkesson, T. P. A.; Akimoto, G.; Akimov, A. V.; Aktas, A.; Alam, M. S.; Alam, M. A.; Albrand, S.; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexandre, G.; Alexopoulos, T.; Alhroob, M.; Aliev, M.; Alimonti, G.; Alison, J.; Aliyev, M.; Allport, P. P.; Allwood-Spiers, S. E.; Almond, J.; Aloisio, A.; Alon, R.; Alonso, A.; Alviggi, M. G.; Amako, K.; Amelung, C.; Amorim, A.; Amorós, G.; Amram, N.; Anastopoulos, C.; Andeen, T.; Anders, C. F.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Anduaga, X. S.; Angerami, A.; Anghinolfi, F.; Anjos, N.; Annovi, A.; Antonaki, A.; Antonelli, M.; Antonelli, S.; Antos, J.; Antunovic, B.; Anulli, F.; Aoun, S.; Arabidze, G.; Aracena, I.; Arai, Y.; Arce, A. T. H.; Archambault, J. P.; Arfaoui, S.; Arguin, J.-F.; Argyropoulos, T.; Arik, M.; Armbruster, A. J.; Arnaez, O.; Arnault, C.; Artamonov, A.; Arutinov, D.; Asai, M.; Asai, S.; Asfandiyarov, R.; Ask, S.; Åsman, B.; Asner, D.; Asquith, L.; Assamagan, K.; Astbury, A.; Astvatsatourov, A.; Atoian, G.; Auerbach, B.; Augsten, K.; Aurousseau, M.; Austin, N.; Avolio, G.; Avramidou, R.; Axen, D.; Ay, C.; Azuelos, G.; Azuma, Y.; Baak, M. A.; Bach, A. M.; Bachacou, H.; Bachas, K.; Backes, M.; Badescu, E.; Bagnaia, P.; Bai, Y.; Bain, T.; Baines, J. T.; Baker, O. K.; Baker, M. D.; Baker, S.; Baltasar Dos Santos Pedrosa, F.; Banas, E.; Banerjee, P.; Banerjee, S.; Banfi, D.; Bangert, A.; Bansal, V.; Baranov, S. P.; Baranov, S.; Barashkou, A.; Barber, T.; Barberio, E. L.; Barberis, D.; Barbero, M.; Bardin, D. Y.; Barillari, T.; Barisonzi, M.; Barklow, T.; Barlow, N.; Barnett, B. M.; Barnett, R. M.; Baroncelli, A.; Barr, A. J.; Barreiro, F.; Barreiro Guimarães da Costa, J.; Barrillon, P.; Bartoldus, R.; Bartsch, D.; Bates, R. L.; Batkova, L.; Batley, J. R.; Battaglia, A.; Battistin, M.; Bauer, F.; Bawa, H. S.; Bazalova, M.; Beare, B.; Beau, T.; Beauchemin, P. H.; Beccherle, R.; Becerici, N.; Bechtle, P.; Beck, G. A.; Beck, H. P.; Beckingham, M.; Becks, K. H.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bee, C.; Begel, M.; Behar Harpaz, S.; Behera, P. K.; Beimforde, M.; Belanger-Champagne, C.; Bell, P. J.; Bell, W. H.; Bella, G.; Bellagamba, L.; Bellina, F.; Bellomo, M.; Belloni, A.; Belotskiy, K.; Beltramello, O.; Ben Ami, S.; Benary, O.; Benchekroun, D.; Bendel, M.; Benedict, B. H.; Benekos, N.; Benhammou, Y.; Benincasa, G. P.; Benjamin, D. P.; Benoit, M.; Bensinger, J. R.; Benslama, K.; Bentvelsen, S.; Beretta, M.; Berge, D.; Bergeaas Kuutmann, E.; Berger, N.; Berghaus, F.; Berglund, E.; Beringer, J.; Bernat, P.; Bernhard, R.; Bernius, C.; Berry, T.; Bertin, A.; Besana, M. I.; Besson, N.; Bethke, S.; Bianchi, R. M.; Bianco, M.; Biebel, O.; Biesiada, J.; Biglietti, M.; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biscarat, C.; Bitenc, U.; Black, K. M.; Blair, R. E.; Blanchard, J.-B.; Blanchot, G.; Blocker, C.; Blondel, A.; Blum, W.; Blumenschein, U.; Bobbink, G. J.; Bocci, A.; Boehler, M.; Boek, J.; Boelaert, N.; Böser, S.; Bogaerts, J. A.; Bogouch, A.; Bohm, C.; Bohm, J.; Boisvert, V.; Bold, T.; Boldea, V.; Bondarenko, V. 
G.; Bondioli, M.; Boonekamp, M.; Bordoni, S.; Borer, C.; Borisov, A.; Borissov, G.; Borjanovic, I.; Borroni, S.; Bos, K.; Boscherini, D.; Bosman, M.; Boterenbrood, H.; Bouchami, J.; Boudreau, J.; Bouhova-Thacker, E. V.; Boulahouache, C.; Bourdarios, C.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bozovic-Jelisavcic, I.; Bracinik, J.; Braem, A.; Branchini, P.; Brandenburg, G. W.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Brelier, B.; Bremer, J.; Brenner, R.; Bressler, S.; Britton, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brodet, E.; Bromberg, C.; Brooijmans, G.; Brooks, W. K.; Brown, G.; Bruckman de Renstrom, P. A.; Bruncko, D.; Bruneliere, R.; Brunet, S.; Bruni, A.; Bruni, G.; Bruschi, M.; Bucci, F.; Buchanan, J.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Budick, B.; Büscher, V.; Bugge, L.; Bulekov, O.; Bunse, M.; Buran, T.; Burckhart, H.; Burdin, S.; Burgess, T.; Burke, S.; Busato, E.; Bussey, P.; Buszello, C. P.; Butin, F.; Butler, B.; Butler, J. M.; Buttar, C. M.; Butterworth, J. M.; Byatt, T.; Caballero, J.; Cabrera Urbán, S.; Caforio, D.; Cakir, O.; Calafiura, P.; Calderini, G.; Calfayan, P.; Calkins, R.; Caloba, L. P.; Calvet, D.; Camarri, P.; Cameron, D.; Campana, S.; Campanelli, M.; Canale, V.; Canelli, F.; Canepa, A.; Cantero, J.; Capasso, L.; Capeans Garrido, M. D. M.; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Caramarcu, C.; Cardarelli, R.; Carli, T.; Carlino, G.; Carminati, L.; Caron, B.; Caron, S.; Carrillo Montoya, G. D.; Carron Montero, S.; Carter, A. A.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Cascella, M.; Castaneda Hernandez, A. M.; Castaneda-Miranda, E.; Castillo Gimenez, V.; Castro, N. F.; Cataldi, G.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Cattani, G.; Caughron, S.; Cauz, D.; Cavalleri, P.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, K.; Chapman, J. D.; Chapman, J. W.; Chareyre, E.; Charlton, D. G.; Chavda, V.; Cheatham, S.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chen, H.; Chen, S.; Chen, X.; Cheplakov, A.; Chepurnov, V. F.; Cherkaoui El Moursli, R.; Tcherniatine, V.; Chesneanu, D.; Cheu, E.; Cheung, S. L.; Chevalier, L.; Chevallier, F.; Chiarella, V.; Chiefari, G.; Chikovani, L.; Childers, J. T.; Chilingarov, A.; Chiodini, G.; Chizhov, V.; Choudalakis, G.; Chouridou, S.; Christidi, I. A.; Christov, A.; Chromek-Burckhart, D.; Chu, M. L.; Chudoba, J.; Ciapetti, G.; Ciftci, A. K.; Ciftci, R.; Cinca, D.; Cindro, V.; Ciobotaru, M. D.; Ciocca, C.; Ciocio, A.; Cirilli, M.; Citterio, M.; Clark, A.; Clark, P. J.; Cleland, W.; Clemens, J. C.; Clement, B.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coggeshall, J.; Cogneras, E.; Colijn, A. P.; Collard, C.; Collins, N. J.; Collins-Tooth, C.; Collot, J.; Colon, G.; Conde Muiño, P.; Coniavitis, E.; Consonni, M.; Constantinescu, S.; Conta, C.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cooper-Smith, N. J.; Copic, K.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Costin, T.; Côté, D.; Coura Torres, R.; Courneyea, L.; Cowan, G.; Cowden, C.; Cox, B. E.; Cranmer, K.; Cranshaw, J.; Cristinziani, M.; Crosetti, G.; Crupi, R.; Crépé-Renaudin, S.; Cuenca Almenar, C.; Cuhadar Donszelmann, T.; Curatolo, M.; Curtis, C. 
J.; Cwetanski, P.; Czyczula, Z.; D'Auria, S.; D'Onofrio, M.; D'Orazio, A.; da Via, C.; Dabrowski, W.; Dai, T.; Dallapiccola, C.; Dallison, S. J.; Daly, C. H.; Dam, M.; Danielsson, H. O.; Dannheim, D.; Dao, V.; Darbo, G.; Darlea, G. L.; Davey, W.; Davidek, T.; Davidson, N.; Davidson, R.; Davies, M.; Davison, A. R.; Dawson, I.; Daya, R. K.; de, K.; de Asmundis, R.; de Castro, S.; de Castro Faria Salgado, P. E.; de Cecco, S.; de Graat, J.; de Groot, N.; de Jong, P.; de Mora, L.; de Oliveira Branco, M.; de Pedis, D.; de Salvo, A.; de Sanctis, U.; de Santo, A.; de Vivie de Regie, J. B.; de Zorzi, G.; Dean, S.; Dedovich, D. V.; Degenhardt, J.; Dehchar, M.; Del Papa, C.; Del Peso, J.; Del Prete, T.; Dell'Acqua, A.; Dell'Asta, L.; Della Pietra, M.; Della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; Demers, S.; Demichev, M.; Demirkoz, B.; Deng, J.; Deng, W.; Denisov, S. P.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deviveiros, P. O.; Dewhurst, A.; Dewilde, B.; Dhaliwal, S.; Dhullipudi, R.; di Ciaccio, A.; di Ciaccio, L.; di Domenico, A.; di Girolamo, A.; di Girolamo, B.; di Luise, S.; di Mattia, A.; di Nardo, R.; di Simone, A.; di Sipio, R.; Diaz, M. A.; Diblen, F.; Diehl, E. B.; Dietrich, J.; Dietzsch, T. A.; Diglio, S.; Dindar Yagci, K.; Dingfelder, J.; Dionisi, C.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djilkibaev, R.; Djobava, T.; Do Vale, M. A. B.; Do Valle Wemans, A.; Doan, T. K. O.; Dobos, D.; Dobson, E.; Dobson, M.; Doglioni, C.; Doherty, T.; Dolejsi, J.; Dolenc, I.; Dolezal, Z.; Dolgoshein, B. A.; Dohmae, T.; Donega, M.; Donini, J.; Dopke, J.; Doria, A.; Dos Anjos, A.; Dotti, A.; Dova, M. T.; Doxiadis, A.; Doyle, A. T.; Drasal, Z.; Dris, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Dudarev, A.; Dudziak, F.; Dührssen, M.; Duflot, L.; Dufour, M.-A.; Dunford, M.; Duran Yildiz, H.; Dushkin, A.; Duxfield, R.; Dwuznik, M.; Düren, M.; Ebenstein, W. L.; Ebke, J.; Eckweiler, S.; Edmonds, K.; Edwards, C. A.; Egorov, K.; Ehrenfeld, W.; Ehrich, T.; Eifert, T.; Eigen, G.; Einsweiler, K.; Eisenhandler, E.; Ekelof, T.; El Kacimi, M.; Ellert, M.; Elles, S.; Ellinghaus, F.; Ellis, K.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Engelmann, R.; Engl, A.; Epp, B.; Eppig, A.; Erdmann, J.; Ereditato, A.; Eriksson, D.; Ermoline, I.; Ernst, J.; Ernst, M.; Ernwein, J.; Errede, D.; Errede, S.; Ertel, E.; Escalier, M.; Escobar, C.; Espinal Curull, X.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Fabbri, L.; Fabre, C.; Facius, K.; Fakhrutdinov, R. M.; Falciano, S.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farley, J.; Farooque, T.; Farrington, S. M.; Farthouat, P.; Fassnacht, P.; Fassouliotis, D.; Fatholahzadeh, B.; Fayard, L.; Fayette, F.; Febbraro, R.; Federic, P.; Fedin, O. L.; Fedorko, W.; Feligioni, L.; Felzmann, C. U.; Feng, C.; Feng, E. J.; Fenyuk, A. B.; Ferencei, J.; Ferland, J.; Fernandes, B.; Fernando, W.; Ferrag, S.; Ferrando, J.; Ferrara, V.; Ferrari, A.; Ferrari, P.; Ferrari, R.; Ferrer, A.; Ferrer, M. L.; Ferrere, D.; Ferretti, C.; Fiascaris, M.; Fiedler, F.; Filipčič, A.; Filippas, A.; Filthaut, F.; Fincke-Keeler, M.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, G.; Fisher, M. J.; Flechl, M.; Fleck, I.; Fleckner, J.; Fleischmann, P.; Fleischmann, S.; Flick, T.; Flores Castillo, L. R.; Flowerdew, M. J.; Fonseca Martin, T.; Formica, A.; Forti, A.; Fortin, D.; Fournier, D.; Fowler, A. J.; Fowler, K.; Fox, H.; Francavilla, P.; Franchino, S.; Francis, D.; Franklin, M.; Franz, S.; Fraternali, M.; Fratina, S.; Freestone, J.; French, S. 
T.; Froeschl, R.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Fullana Torregrosa, E.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gadfort, T.; Gadomski, S.; Gagliardi, G.; Gagnon, P.; Galea, C.; Gallas, E. J.; Gallas, M. V.; Gallo, V.; Gallop, B. J.; Gallus, P.; Galyaev, E.; Gan, K. K.; Gao, Y. S.; Gaponenko, A.; Garcia-Sciveres, M.; García, C.; García Navarro, J. E.; Gardner, R. W.; Garelli, N.; Garitaonandia, H.; Garonne, V.; Gatti, C.; Gaudio, G.; Gautard, V.; Gauzzi, P.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Ge, P.; Gee, C. N. P.; Geich-Gimbel, Ch.; Gellerstedt, K.; Gemme, C.; Genest, M. H.; Gentile, S.; Georgatos, F.; George, S.; Gershon, A.; Ghazlane, H.; Ghodbane, N.; Giacobbe, B.; Giagu, S.; Giakoumopoulou, V.; Giangiobbe, V.; Gianotti, F.; Gibbard, B.; Gibson, A.; Gibson, S. M.; Gilbert, L. M.; Gilchriese, M.; Gilewsky, V.; Gingrich, D. M.; Ginzburg, J.; Giokaris, N.; Giordani, M. P.; Giordano, R.; Giorgi, F. M.; Giovannini, P.; Giraud, P. F.; Girtler, P.; Giugni, D.; Giusti, P.; Gjelsten, B. K.; Gladilin, L. K.; Glasman, C.; Glazov, A.; Glitza, K. W.; Glonti, G. L.; Godfrey, J.; Godlewski, J.; Goebel, M.; Göpfert, T.; Goeringer, C.; Gössling, C.; Göttfert, T.; Goggi, V.; Goldfarb, S.; Goldin, D.; Golling, T.; Gomes, A.; Gomez Fajardo, L. S.; Gonçalo, R.; Gonella, L.; Gong, C.; González de La Hoz, S.; Gonzalez Silva, M. L.; Gonzalez-Sevilla, S.; Goodson, J. J.; Goossens, L.; Gordon, H. A.; Gorelov, I.; Gorfine, G.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Gosdzik, B.; Gosselink, M.; Gostkin, M. I.; Gough Eschrich, I.; Gouighri, M.; Goujdami, D.; Goulette, M. P.; Goussiou, A. G.; Goy, C.; Grabowska-Bold, I.; Grafström, P.; Grahn, K.-J.; Grancagnolo, S.; Grassi, V.; Gratchev, V.; Grau, N.; Gray, H. M.; Gray, J. A.; Graziani, E.; Green, B.; Greenshaw, T.; Greenwood, Z. D.; Gregor, I. M.; Grenier, P.; Griesmayer, E.; Griffiths, J.; Grigalashvili, N.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Grishkevich, Y. V.; Groh, M.; Groll, M.; Gross, E.; Grosse-Knetter, J.; Groth-Jensen, J.; Grybel, K.; Guicheney, C.; Guida, A.; Guillemin, T.; Guler, H.; Gunther, J.; Guo, B.; Gupta, A.; Gusakov, Y.; Gutierrez, A.; Gutierrez, P.; Guttman, N.; Gutzwiller, O.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haas, S.; Haber, C.; Hadavand, H. K.; Hadley, D. R.; Haefner, P.; Härtel, R.; Hajduk, Z.; Hakobyan, H.; Haller, J.; Hamacher, K.; Hamilton, A.; Hamilton, S.; Han, L.; Hanagaki, K.; Hance, M.; Handel, C.; Hanke, P.; Hansen, J. R.; Hansen, J. B.; Hansen, J. D.; Hansen, P. H.; Hansl-Kozanecka, T.; Hansson, P.; Hara, K.; Hare, G. A.; Harenberg, T.; Harrington, R. D.; Harris, O. M.; Harrison, K.; Hartert, J.; Hartjes, F.; Harvey, A.; Hasegawa, S.; Hasegawa, Y.; Hashemi, K.; Hassani, S.; Haug, S.; Hauschild, M.; Hauser, R.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hayakawa, T.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Hedberg, V.; Heelan, L.; Heim, S.; Heinemann, B.; Heisterkamp, S.; Helary, L.; Heller, M.; Hellman, S.; Helsens, C.; Hemperek, T.; Henderson, R. C. W.; Henke, M.; Henrichs, A.; Henriques Correia, A. M.; Henrot-Versille, S.; Hensel, C.; Henß, T.; Hernández Jiménez, Y.; Hershenhorn, A. D.; Herten, G.; Hertenberger, R.; Hervas, L.; Hessey, N. P.; Higón-Rodriguez, E.; Hill, J. C.; Hiller, K. H.; Hillert, S.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hirose, M.; Hirsch, F.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoffman, J.; Hoffmann, D.; Hohlfeld, M.; Holy, T.; Holzbauer, J. 
L.; Homma, Y.; Horazdovsky, T.; Hori, T.; Horn, C.; Horner, S.; Horvat, S.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howe, T.; Hrivnac, J.; Hryn'ova, T.; Hsu, P. J.; Hsu, S.-C.; Huang, G. S.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Hughes, E. W.; Hughes, G.; Hurwitz, M.; Husemann, U.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Idarraga, J.; Iengo, P.; Igonkina, O.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ince, T.; Ioannou, P.; Iodice, M.; Irles Quiles, A.; Ishikawa, A.; Ishino, M.; Ishmukhametov, R.; Isobe, T.; Issakov, V.; Issever, C.; Istin, S.; Itoh, Y.; Ivashin, A. V.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jackson, B.; Jackson, J. N.; Jackson, P.; Jaekel, M. R.; Jain, V.; Jakobs, K.; Jakobsen, S.; Jakubek, J.; Jana, D. K.; Jansen, E.; Jantsch, A.; Janus, M.; Jared, R. C.; Jarlskog, G.; Jeanty, L.; Jen-La Plante, I.; Jenni, P.; Jez, P.; Jézéquel, S.; Ji, W.; Jia, J.; Jiang, Y.; Jimenez Belenguer, M.; Jin, S.; Jinnouchi, O.; Joffe, D.; Johansen, M.; Johansson, K. E.; Johansson, P.; Johnert, S.; Johns, K. A.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, T. J.; Jorge, P. M.; Joseph, J.; Juranek, V.; Jussel, P.; Kabachenko, V. V.; Kaci, M.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kaiser, S.; Kajomovitz, E.; Kalinin, S.; Kalinovskaya, L. V.; Kalinowski, A.; Kama, S.; Kanaya, N.; Kaneda, M.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kapliy, A.; Kaplon, J.; Kar, D.; Karagounis, M.; Karagoz Unel, M.; Kartvelishvili, V.; Karyukhin, A. N.; Kashif, L.; Kasmi, A.; Kass, R. D.; Kastanas, A.; Kastoryano, M.; Kataoka, M.; Kataoka, Y.; Katsoufis, E.; Katzy, J.; Kaushik, V.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kayl, M. S.; Kayumov, F.; Kazanin, V. A.; Kazarinov, M. Y.; Keates, J. R.; Keeler, R.; Keener, P. T.; Kehoe, R.; Keil, M.; Kekelidze, G. D.; Kelly, M.; Kenyon, M.; Kepka, O.; Kerschen, N.; Kerševan, B. P.; Kersten, S.; Kessoku, K.; Khakzad, M.; Khalil-Zada, F.; Khandanyan, H.; Khanov, A.; Kharchenko, D.; Khodinov, A.; Khomich, A.; Khoriauli, G.; Khovanskiy, N.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kim, H.; Kim, M. S.; Kim, P. C.; Kim, S. H.; Kind, O.; Kind, P.; King, B. T.; Kirk, J.; Kirsch, G. P.; Kirsch, L. E.; Kiryunin, A. E.; Kisielewska, D.; Kittelmann, T.; Kiyamura, H.; Kladiva, E.; Klein, M.; Klein, U.; Kleinknecht, K.; Klemetti, M.; Klier, A.; Klimentov, A.; Klingenberg, R.; Klinkby, E. B.; Klioutchnikova, T.; Klok, P. F.; Klous, S.; Kluge, E.-E.; Kluge, T.; Kluit, P.; Klute, M.; Kluth, S.; Knecht, N. S.; Kneringer, E.; Ko, B. R.; Kobayashi, T.; Kobel, M.; Koblitz, B.; Kocian, M.; Kocnar, A.; Kodys, P.; Köneke, K.; König, A. C.; Koenig, S.; Köpke, L.; Koetsveld, F.; Koevesarki, P.; Koffas, T.; Koffeman, E.; Kohn, F.; Kohout, Z.; Kohriki, T.; Kolanoski, H.; Kolesnikov, V.; Koletsou, I.; Koll, J.; Kollar, D.; Kolos, S.; Kolya, S. D.; Komar, A. A.; Komaragiri, J. R.; Kondo, T.; Kono, T.; Konoplich, R.; Konovalov, S. P.; Konstantinidis, N.; Koperny, S.; Korcyl, K.; Kordas, K.; Korn, A.; Korolkov, I.; Korolkova, E. V.; Korotkov, V. A.; Kortner, O.; Kostka, P.; Kostyukhin, V. V.; Kotov, S.; Kotov, V. M.; Kotov, K. Y.; Kourkoumelis, C.; Koutsman, A.; Kowalewski, R.; Kowalski, H.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kral, V.; Kramarenko, V. A.; Kramberger, G.; Krasny, M. W.; Krasznahorkay, A.; Kreisel, A.; Krejci, F.; Kretzschmar, J.; Krieger, N.; Krieger, P.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumshteyn, Z. 
V.; Kubota, T.; Kuehn, S.; Kugel, A.; Kuhl, T.; Kuhn, D.; Kukhtin, V.; Kulchitsky, Y.; Kuleshov, S.; Kummer, C.; Kuna, M.; Kunkle, J.; Kupco, A.; Kurashige, H.; Kurata, M.; Kurchaninov, L. L.; Kurochkin, Y. A.; Kus, V.; Kwee, R.; La Rotonda, L.; Labbe, J.; Lacasta, C.; Lacava, F.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lamanna, M.; Lampen, C. L.; Lampl, W.; Lancon, E.; Landgraf, U.; Landon, M. P. J.; Lane, J. L.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Larner, A.; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Laycock, P.; Lazarev, A. B.; Lazzaro, A.; Le Dortz, O.; Le Guirriec, E.; Le Menedeu, E.; Le Vine, M.; Lebedev, A.; Lebel, C.; Lecompte, T.; Ledroit-Guillon, F.; Lee, H.; Lee, J. S. H.; Lee, S. C.; Lefebvre, M.; Legendre, M.; Legeyt, B. C.; Legger, F.; Leggett, C.; Lehmacher, M.; Lehmann Miotto, G.; Lei, X.; Leitner, R.; Lellouch, D.; Lellouch, J.; Lendermann, V.; Leney, K. J. C.; Lenz, T.; Lenzen, G.; Lenzi, B.; Leonhardt, K.; Leroy, C.; Lessard, J.-R.; Lester, C. G.; Leung Fook Cheong, A.; Levêque, J.; Levin, D.; Levinson, L. J.; Leyton, M.; Li, H.; Li, S.; Li, X.; Liang, Z.; Liang, Z.; Liberti, B.; Lichard, P.; Lichtnecker, M.; Lie, K.; Liebig, W.; Lilley, J. N.; Lim, H.; Limosani, A.; Limper, M.; Lin, S. C.; Linnemann, J. T.; Lipeles, E.; Lipinsky, L.; Lipniacka, A.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, C.; Liu, D.; Liu, H.; Liu, J. B.; Liu, M.; Liu, T.; Liu, Y.; Livan, M.; Lleres, A.; Lloyd, S. L.; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Lockwitz, S.; Loddenkoetter, T.; Loebinger, F. K.; Loginov, A.; Loh, C. W.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, R. E.; Lopes, L.; Lopez Mateos, D.; Losada, M.; Loscutoff, P.; Lou, X.; Lounis, A.; Loureiro, K. F.; Lovas, L.; Love, J.; Love, P. A.; Lowe, A. J.; Lu, F.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Ludwig, A.; Ludwig, D.; Ludwig, I.; Luehring, F.; Luisa, L.; Lumb, D.; Luminari, L.; Lund, E.; Lund-Jensen, B.; Lundberg, B.; Lundberg, J.; Lundquist, J.; Lynn, D.; Lys, J.; Lytken, E.; Ma, H.; Ma, L. L.; Macana Goia, J. A.; Maccarrone, G.; Macchiolo, A.; Maček, B.; Machado Miguens, J.; Mackeprang, R.; Madaras, R. J.; Mader, W. F.; Maenner, R.; Maeno, T.; Mättig, P.; Mättig, S.; Magalhaes Martins, P. J.; Magradze, E.; Mahalalel, Y.; Mahboubi, K.; Mahmood, A.; Maiani, C.; Maidantchik, C.; Maio, A.; Majewski, S.; Makida, Y.; Makouski, M.; Makovec, N.; Malecki, Pa.; Malecki, P.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Maltezos, S.; Malyshev, V.; Malyukov, S.; Mambelli, M.; Mameghani, R.; Mamuzic, J.; Mandelli, L.; Mandić, I.; Mandrysch, R.; Maneira, J.; Mangeard, P. S.; Manjavidze, I. D.; Manning, P. M.; Manousakis-Katsikakis, A.; Mansoulie, B.; Mapelli, A.; Mapelli, L.; March, L.; Marchand, J. F.; Marchese, F.; Marchiori, G.; Marcisovsky, M.; Marino, C. P.; Marroquim, F.; Marshall, Z.; Marti-Garcia, S.; Martin, A. J.; Martin, A. J.; Martin, B.; Martin, B.; Martin, F. F.; Martin, J. P.; Martin, T. A.; Martin Dit Latour, B.; Martinez, M.; Martinez Outschoorn, V.; Martini, A.; Martyniuk, A. C.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massol, N.; Mastroberardino, A.; Masubuchi, T.; Matricon, P.; Matsunaga, H.; Matsushita, T.; Mattravers, C.; Maxfield, S. J.; Mayne, A.; Mazini, R.; Mazur, M.; Mazzanti, M.; McDonald, J.; McKee, S. P.; McCarn, A.; McCarthy, R. L.; McCubbin, N. A.; McFarlane, K. 
W.; McGlone, H.; McHedlidze, G.; McMahon, S. J.; McPherson, R. A.; Meade, A.; Mechnich, J.; Mechtel, M.; Medinnis, M.; Meera-Lebbai, R.; Meguro, T. M.; Mehlhase, S.; Mehta, A.; Meier, K.; Meirose, B.; Melachrinos, C.; Mellado Garcia, B. R.; Mendoza Navas, L.; Meng, Z.; Menke, S.; Meoni, E.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A. M.; Metcalfe, J.; Mete, A. S.; Meyer, J.-P.; Meyer, J.; Meyer, J.; Meyer, T. C.; Meyer, W. T.; Miao, J.; Michal, S.; Micu, L.; Middleton, R. P.; Migas, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Miller, D. W.; Mills, W. J.; Mills, C. M.; Milov, A.; Milstead, D. A.; Milstein, D.; Minaenko, A. A.; Miñano, M.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mirabelli, G.; Misawa, S.; Miscetti, S.; Misiejuk, A.; Mitrevski, J.; Mitsou, V. A.; Miyagawa, P. S.; Mjörnmark, J. U.; Mladenov, D.; Moa, T.; Moed, S.; Moeller, V.; Mönig, K.; Möser, N.; Mohr, W.; Mohrdieck-Möck, S.; Moles-Valls, R.; Molina-Perez, J.; Monk, J.; Monnier, E.; Montesano, S.; Monticelli, F.; Moore, R. W.; Mora Herrera, C.; Moraes, A.; Morais, A.; Morel, J.; Morello, G.; Moreno, D.; Moreno Llácer, M.; Morettini, P.; Morii, M.; Morley, A. K.; Mornacchi, G.; Morozov, S. V.; Morris, J. D.; Moser, H. G.; Mosidze, M.; Moss, J.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Mudrinic, M.; Mueller, F.; Mueller, J.; Mueller, K.; Müller, T. A.; Muenstermann, D.; Muir, A.; Munwes, Y.; Murillo Garcia, R.; Murray, W. J.; Mussche, I.; Musto, E.; Myagkov, A. G.; Myska, M.; Nadal, J.; Nagai, K.; Nagano, K.; Nagasaka, Y.; Nairz, A. M.; Nakamura, K.; Nakano, I.; Nakatsuka, H.; Nanava, G.; Napier, A.; Nash, M.; Nation, N. R.; Nattermann, T.; Naumann, T.; Navarro, G.; Nderitu, S. K.; Neal, H. A.; Nebot, E.; Nechaeva, P.; Negri, A.; Negri, G.; Nelson, A.; Nelson, T. K.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neusiedl, A.; Neves, R. M.; Nevski, P.; Newcomer, F. M.; Nickerson, R. B.; Nicolaidou, R.; Nicolas, L.; Nicoletti, G.; Nicquevert, B.; Niedercorn, F.; Nielsen, J.; Nikiforov, A.; Nikolaev, K.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, H.; Nilsson, P.; Nisati, A.; Nishiyama, T.; Nisius, R.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nordberg, M.; Nordkvist, B.; Notz, D.; Novakova, J.; Nozaki, M.; Nožička, M.; Nugent, I. M.; Nuncio-Quiroz, A.-E.; Nunes Hanninger, G.; Nunnemann, T.; Nurse, E.; O'Neil, D. C.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Ochi, A.; Oda, S.; Odaka, S.; Odier, J.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohshima, T.; Ohshita, H.; Ohsugi, T.; Okada, S.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olchevski, A. G.; Oliveira, M.; Oliveira Damazio, D.; Oliver, J.; Oliveira Garcia, E.; Olivito, D.; Olszewski, A.; Olszowska, J.; Omachi, C.; Onofre, A.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlov, I.; Oropeza Barrera, C.; Orr, R. S.; Ortega, E. O.; Osculati, B.; Ospanov, R.; Osuna, C.; Ottersbach, J. P.; Ould-Saada, F.; Ouraou, A.; Ouyang, Q.; Owen, M.; Owen, S.; Oyarzun, A.; Ozcan, V. E.; Ozone, K.; Ozturk, N.; Pacheco Pages, A.; Padilla Aranda, C.; Paganis, E.; Pahl, C.; Paige, F.; Pajchel, K.; Palestini, S.; Pallin, D.; Palma, A.; Palmer, J. D.; Pan, Y. B.; Panagiotopoulou, E.; Panes, B.; Panikashvili, N.; Panitkin, S.; Pantea, D.; Panuskova, M.; Paolone, V.; Papadopoulou, Th. D.; Park, S. J.; Park, W.; Parker, M. A.; Parker, S. I.; Parodi, F.; Parsons, J. 
A.; Parzefall, U.; Pasqualucci, E.; Passeri, A.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Pater, J. R.; Patricelli, S.; Patwa, A.; Pauly, T.; Peak, L. S.; Pecsy, M.; Pedraza Morales, M. I.; Peleganchuk, S. V.; Peng, H.; Penson, A.; Penwell, J.; Perantoni, M.; Perez, K.; Perez Codina, E.; Pérez García-Estañ, M. T.; Perez Reale, V.; Perini, L.; Pernegger, H.; Perrino, R.; Persembe, S.; Perus, P.; Peshekhonov, V. D.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridou, C.; Petrolo, E.; Petrucci, F.; Petschull, D.; Petteni, M.; Pezoa, R.; Phan, A.; Phillips, A. W.; Piacquadio, G.; Piccinini, M.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pina, J.; Pinamonti, M.; Pinfold, J. L.; Pinto, B.; Pizio, C.; Placakyte, R.; Plamondon, M.; Pleier, M.-A.; Poblaguev, A.; Poddar, S.; Podlyski, F.; Poffenberger, P.; Poggioli, L.; Pohl, M.; Polci, F.; Polesello, G.; Policicchio, A.; Polini, A.; Poll, J.; Polychronakos, V.; Pomeroy, D.; Pommès, K.; Ponsot, P.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Popule, J.; Portell Bueso, X.; Porter, R.; Pospelov, G. E.; Pospisil, S.; Potekhin, M.; Potrap, I. N.; Potter, C. J.; Potter, C. T.; Potter, K. P.; Poulard, G.; Poveda, J.; Prabhu, R.; Pralavorio, P.; Prasad, S.; Pravahan, R.; Pribyl, L.; Price, D.; Price, L. E.; Prichard, P. M.; Prieur, D.; Primavera, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Prudent, X.; Przysiezniak, H.; Psoroulas, S.; Ptacek, E.; Puigdengoles, C.; Purdham, J.; Purohit, M.; Puzo, P.; Pylypchenko, Y.; Qi, M.; Qian, J.; Qian, W.; Qin, Z.; Quadt, A.; Quarrie, D. R.; Quayle, W. B.; Quinonez, F.; Raas, M.; Radeka, V.; Radescu, V.; Radics, B.; Rador, T.; Ragusa, F.; Rahal, G.; Rahimi, A. M.; Rajagopalan, S.; Rammensee, M.; Rammes, M.; Rauscher, F.; Rauter, E.; Raymond, M.; Read, A. L.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Reinherz-Aronis, E.; Reinsch, A.; Reisinger, I.; Reljic, D.; Rembser, C.; Ren, Z. L.; Renkel, P.; Rescia, S.; Rescigno, M.; Resconi, S.; Resende, B.; Reznicek, P.; Rezvani, R.; Richards, A.; Richards, R. A.; Richter, R.; Richter-Was, E.; Ridel, M.; Rijpstra, M.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Rios, R. R.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Roa Romero, D. A.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robinson, M.; Robson, A.; Rocha de Lima, J. G.; Roda, C.; Roda Dos Santos, D.; Rodriguez, D.; Rodriguez Garcia, Y.; Roe, S.; Røhne, O.; Rojo, V.; Rolli, S.; Romaniouk, A.; Romanov, V. M.; Romeo, G.; Romero Maltrana, D.; Roos, L.; Ros, E.; Rosati, S.; Rosenbaum, G. A.; Rosselet, L.; Rossetti, V.; Rossi, L. P.; Rotaru, M.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Ruckert, B.; Ruckstuhl, N.; Rud, V. I.; Rudolph, G.; Rühr, F.; Ruggieri, F.; Ruiz-Martinez, A.; Rumyantsev, L.; Rurikova, Z.; Rusakovich, N. A.; Rutherfoord, J. P.; Ruwiedel, C.; Ruzicka, P.; Ryabov, Y. F.; Ryan, P.; Rybkin, G.; Rzaeva, S.; Saavedra, A. F.; Sadrozinski, H. F.-W.; Sadykov, R.; Sakamoto, H.; Salamanna, G.; Salamon, A.; Saleem, M. S.; Salihagic, D.; Salnikov, A.; Salt, J.; Salvachua Ferrando, B. M.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sampsonidis, D.; Samset, B. H.; Sandaker, H.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandhu, P.; Sandstroem, R.; Sandvoss, S.; Sankey, D. P. C.; Sanny, B.; Sansoni, A.; Santamarina Rios, C.; Santoni, C.; Santonico, R.; Saraiva, J. 
G.; Sarangi, T.; Sarkisyan-Grinbaum, E.; Sarri, F.; Sasaki, O.; Sasao, N.; Satsounkevitch, I.; Sauvage, G.; Savard, P.; Savine, A. Y.; Savinov, V.; Sawyer, L.; Saxon, D. H.; Says, L. P.; Sbarra, C.; Sbrizzi, A.; Scannicchio, D. A.; Schaarschmidt, J.; Schacht, P.; Schäfer, U.; Schaetzel, S.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Schamov, A. G.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Scherzer, M. I.; Schiavi, C.; Schieck, J.; Schioppa, M.; Schlenker, S.; Schmidt, E.; Schmieden, K.; Schmitt, C.; Schmitz, M.; Schott, M.; Schouten, D.; Schovancova, J.; Schram, M.; Schreiner, A.; Schroeder, C.; Schroer, N.; Schroers, M.; Schultes, J.; Schultz-Coulon, H.-C.; Schumacher, J. W.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwemling, Ph.; Schwienhorst, R.; Schwierz, R.; Schwindling, J.; Scott, W. G.; Searcy, J.; Sedykh, E.; Segura, E.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Seliverstov, D. M.; Sellden, B.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Seuster, R.; Severini, H.; Sevior, M. E.; Sfyrla, A.; Shabalina, E.; Shamim, M.; Shan, L. Y.; Shank, J. T.; Shao, Q. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Sherman, D.; Sherwood, P.; Shibata, A.; Shimojima, M.; Shin, T.; Shmeleva, A.; Shochet, M. J.; Shupe, M. A.; Sicho, P.; Sidoti, A.; Siegert, F.; Siegrist, J.; Sijacki, Dj.; Silbert, O.; Silva, J.; Silver, Y.; Silverstein, D.; Silverstein, S. B.; Simak, V.; Simic, Lj.; Simion, S.; Simmons, B.; Simonyan, M.; Sinervo, P.; Sinev, N. B.; Sipica, V.; Siragusa, G.; Sisakyan, A. N.; Sivoklokov, S. Yu.; Sjoelin, J.; Sjursen, T. B.; Skovpen, K.; Skubic, P.; Slater, M.; Slavicek, T.; Sliwa, K.; Sloper, J.; Sluka, T.; Smakhtin, V.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, B. C.; Smith, D.; Smith, K. M.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snow, S. W.; Snow, J.; Snuverink, J.; Snyder, S.; Soares, M.; Sobie, R.; Sodomka, J.; Soffer, A.; Solans, C. A.; Solar, M.; Solc, J.; Solfaroli Camillocci, E.; Solodkov, A. A.; Solovyanov, O. V.; Soluk, R.; Sondericker, J.; Sopko, V.; Sopko, B.; Sosebee, M.; Soukharev, A.; Spagnolo, S.; Spanò, F.; Spencer, E.; Spighi, R.; Spigo, G.; Spila, F.; Spiwoks, R.; Spousta, M.; Spreitzer, T.; Spurlock, B.; Denis, R. D. St.; Stahl, T.; Stahlman, J.; Stamen, R.; Stancu, S. N.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stapnes, S.; Starchenko, E. A.; Stark, J.; Staroba, P.; Starovoitov, P.; Stastny, J.; Stavina, P.; Stavropoulos, G.; Steele, G.; Steinbach, P.; Steinberg, P.; Stekl, I.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stevenson, K.; Stewart, G. A.; Stockton, M. C.; Stoerig, K.; Stoicea, G.; Stonjek, S.; Strachota, P.; Stradling, A. R.; Straessner, A.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strube, J.; Stugu, B.; Soh, D. A.; Su, D.; Sugaya, Y.; Sugimoto, T.; Suhr, C.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, X. H.; Sundermann, J. E.; Suruliz, K.; Sushkov, S.; Susinno, G.; Sutton, M. R.; Suzuki, T.; Suzuki, Y.; Sykora, I.; Sykora, T.; Szymocha, T.; Sánchez, J.; Ta, D.; Tackmann, K.; Taffard, A.; Tafirout, R.; Taga, A.; Takahashi, Y.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Talby, M.; Talyshev, A.; Tamsett, M. C.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tanaka, S.; Tapprogge, S.; Tardif, D.; Tarem, S.; Tarrade, F.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tassi, E.; Tatarkhanov, M.; Taylor, C.; Taylor, F. E.; Taylor, G. 
N.; Taylor, R. P.; Taylor, W.; Teixeira-Dias, P.; Ten Kate, H.; Teng, P. K.; Tennenbaum-Katan, Y. D.; Terada, S.; Terashi, K.; Terron, J.; Terwort, M.; Testa, M.; Teuscher, R. J.; Thioye, M.; Thoma, S.; Thomas, J. P.; Thompson, E. N.; Thompson, P. D.; Thompson, P. D.; Thompson, R. J.; Thompson, A. S.; Thomson, E.; Thun, R. P.; Tic, T.; Tikhomirov, V. O.; Tikhonov, Y. A.; Tipton, P.; Tique Aires Viegas, F. J.; Tisserant, S.; Toczek, B.; Todorov, T.; Todorova-Nova, S.; Toggerson, B.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tollefson, K.; Tomasek, L.; Tomasek, M.; Tomoto, M.; Tompkins, L.; Toms, K.; Tonoyan, A.; Topfel, C.; Topilin, N. D.; Torrence, E.; Torró Pastor, E.; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tremblet, L.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Trinh, T. N.; Tripiana, M. F.; Triplett, N.; Trischuk, W.; Trivedi, A.; Trocmé, B.; Troncon, C.; Trzupek, A.; Tsarouchas, C.; Tseng, J. C.-L.; Tsiakiris, M.; Tsiareshka, P. V.; Tsionou, D.; Tsipolitis, G.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsukerman, I. I.; Tsulaia, V.; Tsung, J.-W.; Tsuno, S.; Tsybychev, D.; Tuggle, J. M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Tuts, P. M.; Twomey, M. S.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ugland, M.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J. A.; van Berg, R.; van der Graaf, H.; van der Kraaij, E.; van der Poel, E.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vari, R.; Varnes, E. W.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vasilyeva, L.; Vassilakopoulos, V. I.; Vazeille, F.; Vellidis, C.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vetterli, M. C.; Vichou, I.; Vickey, T.; Viehhauser, G. H. A.; Villa, M.; Villani, E. G.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M. G.; Vinek, E.; Vinogradov, V. B.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogel, A.; Vokac, P.; Volpi, M.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T. T.; Vossebeld, J. H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vudragovic, D.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Walbersloh, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Wang, C.; Wang, H.; Wang, J.; Wang, S. M.; Warburton, A.; Ward, C. P.; Warsinsky, M.; Wastie, R.; Watkins, P. M.; Watson, A. T.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, A. T.; Waugh, B. M.; Weber, M. D.; Weber, M.; Weber, M. S.; Weber, P.; Weidberg, A. R.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P. S.; Wen, M.; Wenaus, T.; Wendler, S.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Werthenbach, U.; Wessels, M.; Whalen, K.; White, A.; White, M. J.; White, S.; Whitehead, S. R.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L. A. M.; Wildauer, A.; Wildt, M. A.; Wilkens, H. G.; Williams, E.; Williams, H. H.; Willocq, S.; Wilson, J. A.; Wilson, M. 
G.; Wilson, A.; Wingerter-Seez, I.; Winklmeier, F.; Wittgen, M.; Wolter, M. W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wraight, K.; Wright, C.; Wright, D.; Wrona, B.; Wu, S. L.; Wu, X.; Wulf, E.; Wynne, B. M.; Xaplanteris, L.; Xella, S.; Xie, S.; Xu, D.; Xu, N.; Yamada, M.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U. K.; Yang, Z.; Yao, W.-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S. P.; Yu, D.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zaidan, R.; Zaitsev, A. M.; Zajacova, Z.; Zambrano, V.; Zanello, L.; Zaytsev, A.; Zeitnitz, C.; Zeller, M.; Zemla, A.; Zendler, C.; Zenin, O.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi Della Porta, G.; Zhan, Z.; Zhang, H.; Zhang, J.; Zhang, Q.; Zhang, X.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C. G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Zur Nedden, M.; Zutshi, V.

    2010-12-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.
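
    The chain described in this abstract—event generation feeding a GEANT4-based detector response step, followed by validation of the output—can be pictured with the small sketch below. This is only an illustration of how such independent stages could be composed; the class and function names are invented and do not correspond to the actual ATLAS (Athena) simulation infrastructure.

```python
# Hypothetical sketch of a generation -> simulation -> validation chain.
# None of these classes correspond to real ATLAS (Athena) components;
# they only illustrate how independent stages can be composed.
import random
from dataclasses import dataclass, field


@dataclass
class Event:
    particles: list = field(default_factory=list)   # generated particles
    hits: list = field(default_factory=list)        # simulated detector hits


class ToyGenerator:
    """Stands in for an event-generator interface (assumption)."""
    def generate(self) -> Event:
        n = random.randint(2, 10)
        return Event(particles=[{"pt": random.uniform(1, 100)} for _ in range(n)])


class ToyDetectorSimulation:
    """Stands in for a GEANT4-based detector response step (assumption)."""
    def simulate(self, event: Event) -> Event:
        event.hits = [{"energy": p["pt"] * random.uniform(0.8, 1.0)}
                      for p in event.particles]
        return event


def validate(events):
    """Toy validation: summarise the simulated output for comparison with a reference."""
    energies = [h["energy"] for e in events for h in e.hits]
    return {"n_events": len(events), "mean_hit_energy": sum(energies) / len(energies)}


if __name__ == "__main__":
    gen, sim = ToyGenerator(), ToyDetectorSimulation()
    events = [sim.simulate(gen.generate()) for _ in range(100)]
    print(validate(events))
```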

  12. Event visualization in ATLAS

    CERN Document Server

    Bianchi, Riccardo-Maria; The ATLAS collaboration

    2017-01-01

    In the early days, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. As the experiments evolved, they needed to find new ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. Then the case of the ATLAS experiment is considered in more detail and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. Towards the end, the other ATLAS visualization tools will be briefly presented as well. Future development plans and improvements in the ATLAS event display packages will also be discussed.

  13. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  14. A time for atlases and atlases for time

    Directory of Open Access Journals (Sweden)

    Yoav Livneh

    2010-02-01

    Advances in neuroanatomy and computational power are leading to the construction of new digital brain atlases. Atlases are rising as indispensable tools for comparing anatomical data as well as being stimulators of new hypotheses and experimental designs. Brain atlases describe nervous systems which are inherently plastic and variable. Thus, the levels of brain plasticity and stereotypy would be important to evaluate as limiting factors in the context of static brain atlases. In this review, we discuss the extent of structural changes which neurons undergo over time, and how these changes would impact the static nature of atlases. We describe the anatomical stereotypy between neurons of the same type, highlighting the differences between invertebrates and vertebrates. We review some recent experimental advances in our understanding of anatomical dynamics in adult neural circuits, and how these are modulated by the organism’s experience. In this respect, we discuss some analogies between brain atlases and the sequenced genome and the emerging epigenome. We argue that variability and plasticity of neurons are substantially high, and should thus be considered as integral features of high-resolution digital brain atlases.

  15. A fast atlas pre-selection procedure for multi-atlas based brain segmentation.

    Science.gov (United States)

    Ma, Jingbo; Ma, Heather T; Li, Hengtong; Ye, Chenfei; Wu, Dan; Tang, Xiaoying; Miller, Michael; Mori, Susumu

    2015-01-01

    Multi-atlas based MR image segmentation has been recognized as a quantitative analysis approach for the brain. For this purpose, atlas databases keep growing to cover the various anatomical characteristics of the human brain, and atlas pre-selection becomes a necessary step for efficient and accurate automated segmentation of human brain images. In this study, we proposed a method of atlas pre-selection for target image segmentation on the MRICloud platform, a state-of-the-art multi-atlas based segmentation tool. In the MRICloud pipeline, a segmentation of the lateral ventricle (LV) label is generated as an additional input to the segmentation pipeline. The similarity of the LV label between the target image and the atlases was therefore adopted as the atlas ranking scheme, with the Dice overlap coefficient calculated and taken as the quantitative measure for ranking. Segmentation results based on the proposed method were compared with those based on atlas pre-selection by mutual information (MI) between images. The final segmentation accuracy of the proposed method was comparable to that of MI-based atlas pre-selection, while the atlas pre-selection itself was sped up by about 20 times compared to MI-based pre-selection. The proposed method provides a promising assistance for quantitative analysis of brain images.
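
    The ranking scheme described above reduces to computing a Dice overlap coefficient between the LV label of the target image and that of each atlas, then keeping the best-ranked atlases. The NumPy sketch below illustrates that idea only; the array shapes, the number of atlases kept, and the helper names are assumptions, not the MRICloud implementation.

```python
# Sketch of Dice-based atlas pre-selection (not the MRICloud code).
import numpy as np


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 0.0


def preselect_atlases(target_lv: np.ndarray, atlas_lvs: dict, n_keep: int = 10):
    """Rank atlases by Dice overlap of their LV labels with the target's."""
    scores = {name: dice(target_lv, lv) for name, lv in atlas_lvs.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_keep], scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.random((64, 64, 64)) > 0.7          # toy binary LV mask
    atlases = {f"atlas_{i:02d}": rng.random((64, 64, 64)) > 0.7 for i in range(30)}
    best, _ = preselect_atlases(target, atlases, n_keep=5)
    print("selected:", best)
```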

  16. Evolution of the ATLAS Nightly Build System

    CERN Document Server

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for the ATLAS code, which currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python scripting written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, test...
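
    NICOS itself is not shown here, but the kind of fully automated nightly loop the abstract refers to—check out a branch, build it, run its tests, and publish a report—can be sketched as follows. The branch names, repository URL, commands, and directory layout are placeholders, not the real ATLAS release infrastructure.

```python
# Hypothetical nightly-build driver; branches, URL, and paths are placeholders.
import subprocess
import datetime
import json

BRANCHES = ["main", "patch-22.0", "patch-21.2"]   # assumed branch names


def run(cmd, cwd=None):
    """Run a shell command and return True on success."""
    return subprocess.run(cmd, shell=True, cwd=cwd).returncode == 0


def nightly(branch: str) -> dict:
    stamp = datetime.date.today().isoformat()
    workdir = f"/tmp/nightly/{branch}/{stamp}"
    steps = {
        "checkout": f"git clone --branch {branch} https://example.org/atlas.git {workdir}",
        "build": "cmake -S . -B build && cmake --build build -j8",
        "test": "ctest --test-dir build --output-on-failure",
    }
    result = {"branch": branch, "date": stamp}
    for name, cmd in steps.items():
        # the checkout runs from the current directory, later steps inside workdir
        result[name] = run(cmd, cwd=None if name == "checkout" else workdir)
        if not result[name]:
            break                      # stop the chain on the first failure
    return result


if __name__ == "__main__":
    report = [nightly(b) for b in BRANCHES]
    print(json.dumps(report, indent=2))   # stand-in for publishing the nightly summary
```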

  17. ATLAS Fact Sheet : To raise awareness of the ATLAS detector and collaboration on the LHC

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    Facts on the Detector, Calorimeters, Muon System, Inner Detector, Pixel Detector, Semiconductor Tracker, Transition Radiation Tracker, Surface hall, Cavern, Detector, Magnet system, Solenoid, Toroid, Event rates, Physics processes, Supersymmetric particles, Comparing LHC with Cosmic rays, Heavy ion collisions, Trigger and Data Acquisition (TDAQ), Computing, the LHC and the ATLAS collaboration. This fact sheet also contains images of ATLAS and the collaboration as well as a short list of videos on ATLAS available for viewing.

  18. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias;

    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Scandinavia and other countries. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed...... by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the LHC Computing Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed among heterogeneous...... environment. Also, the service used for cataloging the location of data files is different from other Grids but must still be useable by DQ2 and ATLAS users to locate data within NDGF. This paper presents in detail how we solve these issues to allow seamless access worldwide to data within NDGF....

  19. Automatic Testing and Assessment of Neuroanatomy Using a Digital Brain Atlas: Method and Development of Computer- and Mobile-Based Applications

    Science.gov (United States)

    Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G.; Marchenko, Yevgen; Volkau, Ihar

    2009-01-01

    Preparation of tests and assessment of students by the instructor are time-consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to "Terminologia…

  20. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 addresses the main challenges of Run-2. The considerable increase in energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that this is a distributed Tier-2 composed of three sites whose members are involved both in ATLAS computing tasks and in a hub of research, innovation and education.

  1. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  2. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  3. Networks in ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2016-01-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks....

  4. Networks in ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2017-01-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks....

  5. ATLAS@Home looks for CERN volunteers

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.   The ATLAS@home outreach website. ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year's run.” ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network ...
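
    The project described above follows the usual volunteer-computing pattern: a client fetches a simulation work unit, runs it in an isolated environment (a virtual machine in the ATLAS@Home case), and reports the result back. The toy loop below sketches that pattern only; the server URL, payload format, and sandbox call are invented and do not reflect the real BOINC client or the ATLAS@Home infrastructure.

```python
# Toy volunteer-computing client loop (invented protocol; not BOINC).
import random
import time


def fetch_work_unit(server_url: str) -> dict:
    """Placeholder for downloading a work unit from the project server."""
    return {"id": random.randint(1, 10**6), "n_events": 50}


def run_in_sandbox(work_unit: dict) -> dict:
    """Placeholder for running the simulation inside an isolated environment."""
    time.sleep(0.1)                       # stand-in for the actual simulation
    return {"id": work_unit["id"], "events_done": work_unit["n_events"]}


def upload_result(server_url: str, result: dict) -> None:
    """Placeholder for reporting the finished work unit back to the server."""
    print(f"uploaded result for work unit {result['id']}")


if __name__ == "__main__":
    SERVER = "https://example.org/atlas-at-home"   # hypothetical URL
    for _ in range(3):                             # a real client loops while the host is idle
        wu = fetch_work_unit(SERVER)
        upload_result(SERVER, run_in_sandbox(wu))
```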

  6. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation at the ATLAS experiment at the LHC grow continuously as more data and more use cases emerge. For data processing, the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). This development required the use of modern computing technologies and approaches. We report technical details of this development: the database implementation, server logic and Web user interface technologies.
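
    One compact way to picture the structure described above—workflows built from templates, each task being a collection of jobs—is the toy data model below. The class names and fields are illustrative assumptions only; they are not the actual DEFT or PanDA/JEDI schema.

```python
# Illustrative data model for template-driven many-task workflows
# (invented names; not the actual DEFT/PanDA schema).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Job:
    input_file: str
    output_file: str
    status: str = "defined"


@dataclass
class Task:
    name: str
    transformation: str            # e.g. "simulate", "reconstruct"
    jobs: List[Job] = field(default_factory=list)

    def split(self, inputs: List[str]) -> None:
        """Expand the task into one job per input file."""
        self.jobs = [Job(f, f.replace(".in", ".out")) for f in inputs]


@dataclass
class WorkflowTemplate:
    name: str
    steps: List[str]               # ordered transformation names

    def instantiate(self, request_id: int) -> List[Task]:
        """Create a concrete chain of tasks from the template."""
        return [Task(f"{self.name}.{request_id}.{s}", s) for s in self.steps]


if __name__ == "__main__":
    template = WorkflowTemplate("mc_production", ["generate", "simulate", "reconstruct"])
    tasks = template.instantiate(request_id=1234)
    tasks[0].split([f"events_{i}.in" for i in range(3)])
    for t in tasks:
        print(t.name, len(t.jobs), "jobs")
```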

  7. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division]; Hlava, K. [Environmental Science Division]; Greenwood, H. [Environmental Science Division]; Carr, A. [Environmental Science Division]

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: a description of each of the components of the Atlas; lists of the Geographic Information System (GIS) database content and sources; and a brief introduction to the major renewable energy technologies. The Atlas includes the following: a GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  8. Mongolian Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Climatic atlas dated 1985, in Mongolian, with introductory material also in Russian and English. One hundred eight pages in single page PDFs.

  9. Web Exhibition – ATLASES: Poetics, Politics, and Performance

    Directory of Open Access Journals (Sweden)

    Nedjeljko Frančula

    2013-12-01

    ATLASES: Poetics, Politics, and Performance is a web exhibition of atlases from the Special Collections and School of Geographical Sciences of the University of Bristol (http://uobatlases.net/). It includes atlases produced between 1570 and approximately 1970. The exhibition consists of four thematic parts. Renaissance Theatres contains famous and less famous atlases produced between the end of the 16th century and the middle of the 17th century, such as atlases by Ortelius (1574), Camden (1610), Speed (1611) and four atlas tomes by Blaeu (1645). Rhetoric of Truth contains geological and archaeological atlases from the 18th and the beginning of the 19th century. However, Rhetoric of Truth is not limited only to the Renaissance; it also encompasses the first computer-generated atlases, e.g. the Atlas of Breeding Birds in England and Ireland (1976) and others. The Colonial Gaze focuses on atlases applied in colonial projects and land exploitation in Africa and the Caribbean Islands, as well as in the circulation of race theories in Europe and North America at the end of the 19th century. The last part, National Identities and Conflict, explores the role of the atlas as a powerful instrument for visualizing conflicts and shaping territorial-political ideas in the 20th century.

  10. ATLAS Transition Radiation Tracker

    CERN Multimedia

    2006-01-01

    The ATLAS transition radiation tracker is made of 300'000 straw tubes, up to 144 cm long. Filled with a gas mixture and threaded with a wire, each straw is a complete mini-detector in its own right. An electric field is applied between the wire and the outside wall of the straw. As particles pass through, they collide with atoms in the gas, knocking out electrons. The avalanche of electrons is detected as an electrical signal on the wire in the centre. The tracker plays two important roles. Firstly, it makes more position measurements, giving more dots for the computers to join up to recreate the particle tracks. Also, together with the ATLAS calorimeters, it distinguishes between different types of particles depending on whether they emit radiation as they make the transition from the surrounding foil into the straws.

  11. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  12. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  13. A service-based SLA (Service Level Agreement) for the RACF (RHIC and ATLAS computing facility) at Brookhaven National Lab

    Science.gov (United States)

    Karasawa, Mizuka; Chan, Tony; Smith, Jason

    2010-04-01

    The RACF provides computing support to a broad spectrum of scientific programs at Brookhaven. The continuing growth of the facility, the diverse needs of the scientific programs and the increasingly prominent role of distributed computing require the RACF to change from a system-based to a service-based SLA with our user communities. A service-based SLA allows the RACF to coordinate more efficiently the operation, maintenance and development of the facility by mapping out a matrix of system and service dependencies and by creating a new, configurable alarm management layer that automates service alerts and notification of operations staff. This paper describes the adjustments made by the RACF to transition to a service-based SLA, including the integration of its monitoring software, alarm notification mechanism and service ticket system at the facility to make the new SLA a reality.

  14. The ATLAS Glasgow Overview Week

    CERN Multimedia

    Richard Hawkings

    2007-01-01

    The ATLAS Overview Weeks always provide a good opportunity to see the status and progress throughout the experiment, and the July week at Glasgow University was no exception. The setting, amidst the traditional buildings of one of the UK's oldest universities, provided a nice counterpoint to all the cutting-edge research and technology being discussed. And despite predictions to the contrary, the weather at these northern latitudes was actually a great improvement on the previous few weeks in Geneva. The meeting sessions comprehensively covered the whole ATLAS project, from the subdetector and TDAQ systems and their commissioning, through to offline computing, analysis and physics. As a long-time ATLAS member who remembers plenary meetings in 1991 with 30 people drawing detector layouts on a whiteboard, the hardware and installation sessions were particularly impressive - to see how these dreams have been translated into 7000 tons of reality (and with attendant cabling, supports and services, which certainly...

  15. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to `transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data-driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...
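
    The data-driven idea above can be illustrated with a short conceptual sketch: each substep declares which format it consumes and produces, and only the chain of substeps needed to turn the supplied input into the requested output is executed. The step and dataset names follow the example in the abstract; this is not the actual ATLAS Job Transform code.

    ```python
    # Conceptual sketch of a data-driven transform: each substep declares the data
    # format it consumes and produces, and the transform runs only the chain of
    # substeps needed to turn the supplied input into the requested output.

    SUBSTEPS = [
        ("RAWtoESD", "RAW", "ESD"),
        ("ESDtoAOD", "ESD", "AOD"),
        ("AODtoNTUP", "AOD", "NTUP"),
    ]

    def plan(input_format, output_format):
        """Return the ordered list of substeps needed to reach output_format."""
        chain, current = [], input_format
        while current != output_format:
            step = next((s for s in SUBSTEPS if s[1] == current), None)
            if step is None:
                raise ValueError(f"no substep consumes {current}")
            chain.append(step)
            current = step[2]
        return chain

    def run_transform(input_data, input_format, output_format):
        for name, src, dst in plan(input_format, output_format):
            print(f"executing {name}: {src} -> {dst}")
        return f"{input_data}.{output_format}"

    if __name__ == "__main__":
        # Asking for AOD from RAW runs only RAWtoESD and ESDtoAOD, not AODtoNTUP.
        run_transform("example_run_00001", "RAW", "AOD")
    ```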

  16. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to 'transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data-driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...

  17. 14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti, visiting the SM18 area with G. De Rijk, the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International relations F. Pauss

    CERN Multimedia

    Jean-claude Gadmer

    2011-01-01

    14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti, visiting the SM18 area with G. De Rijk, the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International relations F. Pauss

  18. 28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

    CERN Multimedia

    Gadmer, Jean-Claude

    2014-01-01

    28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

  19. 11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Jean-Claude Gadmer

    2011-01-01

    11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

  20. 30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

  1. 28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

  2. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2017-01-01

    Changes in the trigger menu, the online algorithmic event-selection of the ATLAS experiment at the LHC, are followed by adjustments to the ATLAS trigger monitoring systems. During Run 1, and so far in Run 2, ATLAS has deployed monitoring updates with the installation of new software releases at Tier-0, the first level of the ATLAS computing grid. Having to wait for a new software release to be installed at Tier-0, in order to update ATLAS offline trigger monitoring configurations, results in a lag with respect to the modification of the trigger menu. We present the design and implementation of a `trigger menu-aware' monitoring system that aims to simplify the ATLAS operational workflows by allowing monitoring configuration changes to be made at the Tier-0 site by utilising an Oracle SQL database.
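
    The core of the approach is a database lookup keyed by the trigger-menu identifier, so that a menu change only requires a new configuration entry rather than a new software release at Tier-0. The sketch below illustrates that lookup; the production system uses an Oracle database, whereas sqlite3, the table layout and the example values here are chosen purely to keep the example self-contained.

    ```python
    # Sketch of the menu-aware idea: monitoring configurations are keyed by the
    # trigger-menu identifier in a database, so a menu change only requires a new
    # row rather than a new software release. The real system uses Oracle; sqlite3
    # and the table layout here are illustrative only.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE monitoring_config (
                        menu_key   TEXT,
                        chain_name TEXT,
                        histograms TEXT)""")
    conn.execute("INSERT INTO monitoring_config VALUES (?, ?, ?)",
                 ("example_menu_v1", "example_trigger_chain", "et;eta;phi"))

    def config_for_menu(menu_key):
        """Fetch the monitoring configuration matching the deployed trigger menu."""
        rows = conn.execute(
            "SELECT chain_name, histograms FROM monitoring_config WHERE menu_key = ?",
            (menu_key,)).fetchall()
        return {chain: hists.split(";") for chain, hists in rows}

    print(config_for_menu("example_menu_v1"))
    ```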

  3. Atlas image labeling of subcortical structures and vascular territories in brain CT images.

    Science.gov (United States)

    Du, Kaifang; Zhang, Li; Nguyen, Tony; Ordy, Vincent; Fichte, Heinz; Ditt, Hendrik; Chefd'hotel, Christophe

    2013-01-01

    We propose a multi-atlas labeling method for subcortical structures and cerebral vascular territories in brain CT images. Each atlas image is registered to the query image by a non-rigid registration and the deformation is then applied to the labeling of the atlas image to obtain the labeling of the query image. Four label fusion strategies (single atlas, most similar atlas, majority voting, and STAPLE) were compared. Image similarity values in non-rigid registration were calculated and used to select and rank atlases. The majority voting fusion strategy gave the best accuracy, with DSC (Dice similarity coefficient) around 0.85 ± 0.03 for the caudate, putamen, and thalamus. The experimental results also show that fusing more atlases does not necessarily yield higher accuracy, and that by selecting a preferred set with the minimum number of atlases it should be possible to improve accuracy and decrease computation cost at the same time.
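
    A minimal sketch of the majority-voting fusion step described above, assuming the atlas labelings have already been warped into the query image space (the registration itself is omitted):

    ```python
    # Minimal majority-voting label fusion: given several atlas labelings already
    # warped into the query image space, assign each voxel the label chosen by the
    # most atlases. This illustrates only the fusion step, not the registration.
    import numpy as np

    def majority_vote(warped_labels):
        """warped_labels: list of integer label arrays with identical shape."""
        stack = np.stack(warped_labels)                 # (n_atlases, *image_shape)
        n_labels = int(stack.max()) + 1
        # Count, per voxel, how many atlases propose each label, then take the argmax.
        counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
        return counts.argmax(axis=0)

    if __name__ == "__main__":
        a = np.array([[0, 1], [2, 2]])
        b = np.array([[0, 1], [1, 2]])
        c = np.array([[0, 0], [2, 2]])
        print(majority_vote([a, b, c]))   # -> [[0 1] [2 2]]
    ```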

  4. Production Experience with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2016-01-01

    The ATLAS Event Service (ES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real-time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the ES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comput...

  5. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, general public, policy makers, students and teachers, and media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb-1 of ATLAS data.

  6. Using the Hadoop/MapReduce approach for monitoring the CERN storage system and improving the ATLAS computing model

    CERN Document Server

    Russo, Stefano Alberto; Lamanna, M

    The processing of huge amounts of data, an already fundamental task for research in the elementary particle physics field, is becoming more and more important also for companies operating in the Information Technology (IT) industry. In this context, if conventional approaches are adopted, several problems arise, starting from the congestion of the communication channels. In the IT sector, one of the approaches designed to minimize this congestion is to exploit data locality, or in other words, to bring the computation as close as possible to where the data resides. The most common implementation of this concept is the Hadoop/MapReduce framework. In this thesis work I evaluate the usage of Hadoop/MapReduce in two areas: a standard one similar to typical IT analyses, and an innovative one related to high energy physics analyses. The first consists in monitoring the history of the storage cluster which stores the data generated by the LHC experiments, the second in the physics analysis of the latter, ...
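
    The MapReduce pattern behind the storage-monitoring use case can be sketched briefly: a map step extracts (key, value) pairs from log lines, and a reduce step aggregates them per key on the nodes where the data resides. The log format assumed below (user in the second field, bytes in the third) is invented for illustration; a real deployment would match the actual log layout and run the two steps through Hadoop rather than locally.

    ```python
    # Sketch of the MapReduce pattern for aggregating transferred bytes per user
    # from storage-access log lines, in the style used with Hadoop Streaming.
    # The log format (user in field 1, bytes in field 2) is invented for illustration.
    from collections import defaultdict

    def mapper(lines):
        """Emit (user, bytes) pairs; on Hadoop these would be written to stdout."""
        for line in lines:
            parts = line.split()
            if len(parts) >= 3:
                yield parts[1], int(parts[2])

    def reducer(pairs):
        """Sum bytes per user; Hadoop delivers the pairs grouped and sorted by key."""
        totals = defaultdict(int)
        for user, nbytes in pairs:
            totals[user] += nbytes
        return dict(totals)

    if __name__ == "__main__":
        log = ["2013-01-01 alice 1024", "2013-01-01 bob 2048", "2013-01-02 alice 4096"]
        print(reducer(mapper(log)))       # {'alice': 5120, 'bob': 2048}
    ```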

  7. Combining atlas based segmentation and intensity classification with nearest neighbor transform and accuracy weighted vote.

    Science.gov (United States)

    Sdika, Michaël

    2010-04-01

    In this paper, different methods to improve atlas based segmentation are presented. The first technique is a new mapping of the labels of an atlas consistent with a given intensity classification segmentation. This new mapping combines the two segmentations using the nearest neighbor transform and is especially effective for complex and folded regions like the cortex where the registration is difficult. Then, in a multi atlas context, an original weighting is introduced to combine the segmentation of several atlases using a voting procedure. This weighting is derived from statistical classification theory and is computed offline using the atlases as a training dataset. Concretely, the accuracy map of each atlas is computed and the vote is weighted by the accuracy of the atlases. Numerical experiments have been performed on publicly available in vivo datasets and show that, when used together, the two techniques provide an important improvement of the segmentation accuracy.
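
    A minimal sketch of the accuracy-weighted vote: each warped atlas labeling votes with a weight given by that atlas's accuracy, estimated offline on the training dataset. Here the accuracies are simply passed in as precomputed numbers; the registration and the offline accuracy maps of the paper are omitted.

    ```python
    # Sketch of accuracy-weighted label fusion: each warped atlas labeling votes
    # with a weight given by that atlas's accuracy, estimated offline (supplied
    # here as precomputed numbers). Illustration of the voting rule only.
    import numpy as np

    def weighted_vote(warped_labels, accuracies):
        stack = np.stack(warped_labels)                       # (n_atlases, *shape)
        weights = np.asarray(accuracies, dtype=float)
        w = weights.reshape((-1,) + (1,) * (stack.ndim - 1))  # broadcast over voxels
        n_labels = int(stack.max()) + 1
        scores = np.stack([((stack == lab) * w).sum(axis=0) for lab in range(n_labels)])
        return scores.argmax(axis=0)

    if __name__ == "__main__":
        a = np.array([[0, 1], [2, 2]])
        b = np.array([[1, 1], [1, 2]])
        # Atlas a outweighs b where they disagree, because its accuracy is higher.
        print(weighted_vote([a, b], accuracies=[0.9, 0.6]))   # -> [[0 1] [2 2]]
    ```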

  8. ATLAS Story

    CERN Multimedia

    Nordberg, Markus

    2012-01-01

    This film, produced in July 2012, explains how fundamental research connects to society and what benefits a collaborative way of working can generate in the future, using the ATLAS Collaboration as a case study. The film is intellectually inspired by the book "Collisions and Collaboration" (OUP) by Max Boisot (ed.), see: collisionsandcollaboration.com. The film is directed by Andrew Millington (OMNI Communications)

  9. Probabilistic liver atlas construction

    OpenAIRE

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E.

    2017-01-01

    Background: Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. Results: A new method for probabilistic atlas con...

  10. Advances in Service and Operations for ATLAS Data Management

    CERN Document Server

    Stewart, GA; The ATLAS collaboration

    2011-01-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 55PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: the Popularity service, which measures usage of data across ATLAS; space monitoring and accounting at sites; an automated blacklisting service; cleaning agents, which trigger deletion of unused data at sites; and deletion agents, to reliably delete unwanted data from sites. We describe the experience of data management operation in ATLAS computing, showing how these serv...

  11. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase of energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that it is a distributed Tier-2 composed of three sites, whose members are involved in ATLAS computing tasks within a hub of research, innovation and education.

  12. The magnetically driven imploding liner parameter space of the ATLAS capacitor bank

    CERN Document Server

    Lindemuth, I R; Faehl, R J; Reinovsky, R E

    2001-01-01

    Summary form only given, as follows. The Atlas capacitor bank (23 MJ, 30 MA) is now operational at Los Alamos. Atlas was designed primarily to magnetically drive imploding liners for use as impactors in shock and hydrodynamic experiments. We have conducted a computational "mapping" of the high-performance imploding liner parameter space accessible to Atlas. The effect of charge voltage, transmission inductance, liner thickness, liner initial radius, and liner length has been investigated. One conclusion is that Atlas is ideally suited to be a liner driver for liner-on-plasma experiments in a magnetized target fusion (MTF) context. The parameter space of possible Atlas reconfigurations has also been investigated.

  13. Production Experience with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2017-01-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comp...

  14. ATLAS Distributed Analysis Tools

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Liko, Dietrich

    2008-01-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase data was registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS computing management board decided to integrate the collaboration's distributed analysis efforts into a single project, GANGA. The goal is to test the reconstruction and analysis software in large-scale data production using Grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting a...
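
    The job-splitting idea mentioned above can be sketched generically: a large list of input files is divided into subjobs of a fixed size, each of which can then be sent to a local batch system or to the Grid. This is a conceptual illustration only, not Ganga's actual API; the file names are invented.

    ```python
    # Generic sketch of job splitting: divide a large list of input files into
    # subjobs of a fixed size. Conceptual illustration, not Ganga's API.
    def split_into_subjobs(input_files, files_per_subjob):
        return [input_files[i:i + files_per_subjob]
                for i in range(0, len(input_files), files_per_subjob)]

    if __name__ == "__main__":
        files = [f"AOD.{n:06d}.root" for n in range(10)]   # hypothetical file names
        for i, subjob in enumerate(split_into_subjobs(files, files_per_subjob=4)):
            print(f"subjob {i}: {subjob}")
    ```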

  15. Simulation of the heat transfer around the ATLAS muon chambers

    CERN Multimedia

    2005-01-01

    This 2D simulation was recently carried out on the ATLAS muon chambers by a small team of CERN engineers specialising in the numerical computation of fluid dynamics, in other words the flow of fluids and heat.

  16. Triggering events with GPU at ATLAS

    CERN Document Server

    Kama, Sami; The ATLAS collaboration

    2015-01-01

    The growing complexity of events produced in LHC collisions demands more and more computing power, both for the online selection and for the offline reconstruction of events. In recent years, the explosive performance growth of massively parallel processors like Graphics Processing Units (GPUs), both in computing power and in low energy consumption, has made GPUs extremely attractive for use in a complex high-energy experiment like ATLAS. Together with the optimization of reconstruction algorithms exploiting this new massively parallel paradigm, a small-scale prototype of the full ATLAS High Level Trigger exploiting GPUs has been implemented. We discuss the integration procedure of this prototype, the achieved performance and the prospects for the future.

  17. ATLAS PhD Grants 2015

    CERN Multimedia

    Marcelloni De Oliveira, Claudia

    2015-01-01

    ATLAS PhD Grants - We are excited to announce the creation of a dedicated grant scheme (thanks to a donation from Fabiola Gianotti and Peter Jenni following their award from the Fundamental Physics Prize foundation) to encourage young and high-caliber doctoral students in particle physics research (including computing for physics) and permit them to obtain world-class exposure, supervision and training within the ATLAS collaboration. This special PhD Grant is aimed at graduate students preparing a doctoral thesis in particle physics (incl. computing for physics), who will spend one year at CERN followed by one year of support at their home institute.

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  19. A Lego version of ATLAS

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    There's nothing very unusual about a small child making simple objects out of Lego. But wouldn't you be surprised to learn that one six-year old has just made a life-like model of the ATLAS detector?   Bastian with his Lego ATLAS detector. © Photo provided by Kai Nicklas, Bastian's father. It all began a month ago when the boy's father was watching a video about the construction of the ATLAS detector on the Internet. He hadn't noticed that his son was watching it over his shoulder. The small boy was fascinated by what he was seeing on the computer screen and his first reaction was to exclaim: "Wow! That's a terrific machine! I think the people who built it must be really clever." The detector must have really fired his imagination because, after asking his father a few questions, he decided to make a Lego model of it. Look at the photo and you will see how closely the model he produced resembles the actual ATLAS detector. Is the little boy in question, Bastia...

  20. ATLAS Fast Tracker Simulation Challenges

    CERN Document Server

    Adelman, Jahred; The ATLAS collaboration; Borodin, Mikhail; Chakraborty, Dhiman; García Navarro, José Enrique; Golubkov, Dmitry; Kama, Sami; Panitkin, Sergey; Smirnov, Yuri; Stewart, Graeme; Tompkins, Lauren; Vaniachine, Alexandre; Volpi, Guido

    2015-01-01

    To deal with the Big Data flood from the ATLAS detector, most events have to be rejected in the trigger system. The trigger rejection is complicated by the presence of a large number of minimum-bias events – the pileup. To limit pileup effects in the high luminosity environment of the LHC Run-2, ATLAS relies on full tracking provided by the Fast TracKer (FTK) implemented with custom electronics. The FTK data processing pipeline has to be simulated in preparation for LHC upgrades to support electronics design and develop trigger strategies at high luminosity. The simulation of the FTK - a highly parallelized system - has inherent performance bottlenecks on general-purpose CPUs. To take advantage of the Grid Computing power, the FTK simulation is integrated with Monte Carlo simulations at the Production System level above the ATLAS workload management system PanDA. We report on ATLAS experience with FTK simulations on the Grid and next steps for accommodating the growing requirements for resources during the LHC R...

  1. ATLAS Recordings

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials from 2005 until this past month are available via the University of Michigan portal here. Most recent additions include the Trigger-Aware Analysis Tutorial by Monika Wielers on March 23 and the ROOT Workshop held at CERN on March 26-27. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Feedback Welcome: Our group is making arrangements now to record plenary sessions, tutorials, and other important ATLAS events for 2007. Your suggestions for potential recordings, as well as your feedback on existing archives, are always welcome. Please contact us at wlap@umich.edu. Thank you. Enjoy the Lectures!

  2. ATLAS Distributed Data Analysis: performance and challenges

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several 10's of PetaBytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  3. ATLAS Distributed Data Analysis: challenges and performance

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several 10's of PetaBytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  4. ATLAS UPGRADES

    CERN Document Server

    Lacasta, C; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at the center-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from the few hundred fb⁻¹ expected for LHC running to 3000 fb⁻¹ by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, and further upgrades. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for ...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  7. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  8. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit

  9. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    Science.gov (United States)

    Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James

    2016-01-01

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues, resulting in Dice Coefficients of 0.0-0.2. We developed a low-resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error, providing an atlas that can be used to guide further research using ovine brains as a model; it is hosted online for public access.
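
    The Dice Similarity Coefficient used above has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. A minimal sketch:

    ```python
    # Dice Similarity Coefficient between two binary segmentation masks:
    # DSC = 2 * |A intersect B| / (|A| + |B|), ranging from 0 (no overlap) to 1.
    import numpy as np

    def dice(mask_a, mask_b):
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0                  # both masks empty: define as perfect overlap
        return 2.0 * np.logical_and(a, b).sum() / denom

    if __name__ == "__main__":
        manual = np.array([[1, 1, 0], [0, 1, 0]])
        atlas_based = np.array([[1, 0, 0], [0, 1, 1]])
        print(round(dice(manual, atlas_based), 3))   # 0.667
    ```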

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team has successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  11. Energy Frontier Research With ATLAS: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Butler, John [Boston Univ., MA (United States); Black, Kevin [Boston Univ., MA (United States); Ahlen, Steve [Boston Univ., MA (United States)

    2016-06-14

    The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, ttbar differential cross sections, WWW* production), evidence for the Higgs boson decaying to τ⁺τ⁻, and searches for new phenomena (technicolor, Z' and W' bosons, vector-like quarks, dark matter).

  12. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  13. Big Data Tools as Applied to ATLAS Event Data

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2017-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and to...

  14. ATLAS TDAQ application gateway upgrade during LS1

    CERN Document Server

    KOROL, A; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, A C; DUBROV, S; HAFEEZ, M; LEE, C J; SCANNICCHIO, D A; TWOMEY, M; VORONKOV, A; ZAYTSEV, A

    2014-01-01

    The ATLAS Gateway service is implemented with a set of dedicated computer nodes to provide a fine-grained access control between CERN General Public Network (GPN) and ATLAS Technical Control Network (ATCN). ATCN connects the ATLAS online farm used for ATLAS Operations and data taking, including the ATLAS TDAQ (Trigger and Data Acquisition) and DCS (Detector Control System) nodes. In particular, it provides restricted access to the web services (proxy), general login sessions (via SSH and RDP protocols), NAT and mail relay from ATCN. At the Operating System level the implementation is based on virtualization technologies. Here we report on the Gateway upgrade during the Long Shutdown 1 (LS1) period: it includes the transition to the latest production release of the CERN Linux distribution (SLC6), the migration to the centralized configuration management system (based on Puppet) and the redesign of the internal system architecture.

  15. Big Data Analytics Tools as Applied to ATLAS Event Data

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of big data, statistical and machine learning tools...

  16. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: the Atlas Physics Workshop, 6-11 June 2005, and the June 2005 ATLAS Week Plenary Session. Click here to browse WLAP for all ATLAS lectures.

  17. Report to users of ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Ahmad, I.; Glagola, B. [eds.

    1995-05-01

    This report contains discussions in the following areas: status of the Atlas accelerator; highlights of recent research at Atlas; a concept for an advanced exotic beam facility based on Atlas; the program advisory committee; the Atlas executive committee; and Atlas and the ANL Physics Division on the World Wide Web.

  18. COMPUTING

    CERN Document Server

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride. Edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  20. The ATLAS Fast Tracker

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    The use of tracking information at the trigger level in the LHC Run II period is crucial for the trigger and data acquisition (TDAQ) system. The tracking precision is in fact important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many simultaneous collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, full reconstruction at the full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a specific processor: the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high-performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker information. Patte...

  1. ATLAS Data Access Policy

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    ATLAS has fully supported the principle of open access in its publication policy. This document outlines the policy of ATLAS as regards open access to data at different levels as described in the DPHEP model. The main objective is to make the data available in a usable way to people external to the ATLAS collaboration.

  2. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
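
    The two-stage selection can be sketched conceptually: a cheap relevance metric trims the full atlas collection to an augmented subset, and only that subset undergoes the expensive, full-fledged comparison before the final fusion set is chosen. The metric functions below are stand-ins, not the registration-based metrics of the paper.

    ```python
    # Conceptual sketch of two-stage atlas subset selection: a low-cost metric
    # trims the collection to an augmented subset, then a refined (expensive)
    # metric picks the final fusion set from that subset only.
    def two_stage_select(atlases, query, cheap_metric, refined_metric,
                         augmented_size, fusion_size):
        # Stage 1: rank the whole collection with the low-cost metric.
        augmented = sorted(atlases, key=lambda a: cheap_metric(a, query),
                           reverse=True)[:augmented_size]
        # Stage 2: rank only the augmented subset with the expensive metric.
        return sorted(augmented, key=lambda a: refined_metric(a, query),
                      reverse=True)[:fusion_size]

    if __name__ == "__main__":
        atlases = list(range(30))                          # stand-in atlas identifiers
        query = 7
        cheap = lambda a, q: -abs(a - q) + 0.5 * (a % 3)   # noisy, inexpensive score
        refined = lambda a, q: -abs(a - q)                 # accurate, expensive score
        print(two_stage_select(atlases, query, cheap, refined,
                               augmented_size=10, fusion_size=4))
    ```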

  3. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for ...

  4. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...
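
    The pilot-based approach can be pictured with a schematic polling loop. The sketch below is not the PanDA Pilot code; it only illustrates the general pattern of a pilot asking a workload management service for suitable work, running the payload, and reporting back. get_job and report_result are hypothetical placeholders.

```python
# Schematic pilot-style loop (illustration only, not the PanDA Pilot).
import subprocess
import time

def pilot_loop(get_job, report_result, idle_sleep=60):
    while True:
        job = get_job()                # ask the server for work sized to the free resources
        if job is None:
            time.sleep(idle_sleep)     # back off when no suitable work is available
            continue
        proc = subprocess.run(job["command"], shell=True)
        report_result(job["id"], proc.returncode)   # report success/failure to the server
```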

  5. EnviroAtlas - Portland, OR - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Portland, OR Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  6. EnviroAtlas - Green Bay, WI - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Green Bay, WI Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  7. EnviroAtlas - Paterson, NJ - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Paterson, NJ Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  8. EnviroAtlas - Austin, TX - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Austin, TX Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas...

  9. EnviroAtlas - Phoenix, AZ - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Phoenix, AZ Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  10. [Atlas fractures].

    Science.gov (United States)

    Schären, S; Jeanneret, B

    1999-05-01

    Fractures of the atlas account for 1-2% of all vertebral fractures. We divide atlas fractures into 5 groups: isolated fractures of the anterior arch of the atlas, isolated fractures of the posterior arch, combined fractures of the anterior and posterior arch (so-called Jefferson fractures), isolated fractures of the lateral mass and fractures of the transverse process. Isolated fractures of the anterior or posterior arch are benign and are treated conservatively with a soft collar until the neck pain has disappeared. Jefferson fractures are divided into stable and unstable fractures depending on the integrity of the transverse ligament. Stable Jefferson fractures are treated conservatively with good outcomes, while unstable Jefferson fractures are probably best treated operatively with a posterior atlanto-axial or occipito-axial stabilization and fusion. The authors' preferred treatment modality is the immediate open reduction of the dislocated lateral masses combined with stabilization in the reduced position using transarticular C1/C2 screw fixation according to Magerl. This has the advantage of saving the atlanto-occipital joints and offering immediate stability, which makes immobilization in a halo or Minerva cast superfluous. In late C1/2 instabilities with incongruency of the lateral masses occurring after primary conservative treatment, an occipito-cervical fusion is indicated. Isolated fractures of the lateral masses are very rare and may, if the lateral mass is totally destroyed, be a reason for an occipito-cervical fusion. Fractures of the transverse processes may cause thrombosis of the vertebral artery. No treatment is necessary for the fracture itself.

  11. Global Data Grid Efforts for ATLAS

    CERN Multimedia

    Gardner, R.

    2001-01-01

    Over the past two years computational data grids have emerged as a promising new technology for large scale, data-intensive computing required by the LHC experiments, as outlined by the recent "Hoffman" review panel that addressed the LHC computing challenge. The problem essentially is to seamlessly link physicists to petabyte-scale data and computing resources, distributed worldwide, and connected by high-bandwidth research networks. Several new collaborative initiatives in Europe, the United States, and Asia have formed to address the problem. These projects are of great interest to ATLAS physicists and software developers since their objective is to offer tools that can be integrated into the core ATLAS application framework for distributed event reconstruction, Monte Carlo simulation, and data analysis, making it possible for individuals and groups of physicists to share information, data, and computing resources in new ways and at scales not previously attempted. In addition, much of the distributed IT...

  12. ATLAS experimentet

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    The film contains a great deal of information about physics and about why the LHC, together with its large detectors, is needed, and in particular about the need for the ATLAS Experiment. A very good film for explaining the unknown that is being investigated at CERN, in order to answer questions that people have tried to explain for several thousand years.

  13. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  16. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  19. ATLAS Recordings

    CERN Document Server

    Jeremy Herr; Homer A. Neal; Mitch McLachlan

    The University of Michigan Web Archives for the 2006 ATLAS Week Plenary Sessions, as well as the first of 2007, are now online. In addition, there are a wide variety of Software and Physics Tutorial sessions, recorded over the past couple of years, to choose from. All ATLAS-specific archives are accessible here. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Shaping Collaboration 2006: The Michigan group is happy to announce a complete set of recordings from the Shaping Collaboration conference held last December at the CICG in Geneva. The event hosted a mix of Collaborative Tool experts and LHC Users, and featured presentations by the CERN Deputy Director General, Prof. Jos Engelen, the President of Internet2, and chief developers from VRVS/EVO, WLAP, and other tools...

  20. Advances in Service and Operations for ATLAS Data Management

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Lassnig, M; Molfetas, A; Baristis, M; Zhang, D; Calvet, I; Beermann, T; Barreiro Megino, F; Tykhonov, A; Campana, S; Serfon, C; Oleynik, O; Petrosyan, A

    2012-01-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 70PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: a popularity service, which measures usage of data across ATLAS; space monitoring and accounting at sites; an automated exclusion service; cleaning agents, which trigger deletion of unused data at sites; and deletion agents, to reliably delete unwanted data from sites. We...
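
    As an illustration of what a "cleaning agent" does, the sketch below selects non-custodial datasets at a site that have not been accessed for a long time and queues them for deletion until enough space is freed. The data structures and thresholds are assumptions for the example, not the DQ2 implementation.

```python
# Hedged sketch of a cleaning-agent selection step (not DQ2 code).
from datetime import datetime, timedelta

def select_for_cleaning(datasets, needed_bytes, unused_for=timedelta(days=90)):
    """datasets: iterable of dicts with 'name', 'size', 'last_access' (naive UTC
    datetime) and 'custodial'. Returns dataset names to hand to the deletion agent."""
    now = datetime.utcnow()
    candidates = [d for d in datasets
                  if not d["custodial"] and now - d["last_access"] > unused_for]
    candidates.sort(key=lambda d: d["last_access"])     # least recently used first
    freed, to_delete = 0, []
    for d in candidates:
        if freed >= needed_bytes:
            break
        to_delete.append(d["name"])
        freed += d["size"]
    return to_delete
```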

  1. The ATLAS ARC backend to HPC

    Science.gov (United States)

    Haug, S.; Hostettler, M.; Sciacca, F. G.; Weber, M.

    2015-12-01

    The current distributed computing resources used for simulating and processing collision data collected by ATLAS and the other LHC experiments are largely based on dedicated x86 Linux clusters. Access to resources, job control and software provisioning mechanisms are quite different from the common concept of self-contained HPC applications run by particular users on specific HPC systems. We report on the development and the usage in ATLAS of an SSH backend to the Advanced Resource Connector (ARC) middleware to enable HPC-compliant access and on the corresponding software provisioning mechanisms.
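
    The pattern of such an SSH backend can be sketched in a few lines: the job script is copied to the HPC login node and submitted through the local batch system over SSH. The host name, paths, and the use of Slurm's sbatch are assumptions for illustration; this is not the ARC code itself.

```python
# Minimal sketch of SSH-based job submission to an HPC front end (illustrative only).
import os
import subprocess

def submit_over_ssh(login_node, script_path, remote_dir="arc_jobs"):
    """Copy a batch script to the login node and submit it with sbatch via SSH."""
    subprocess.run(["scp", script_path, f"{login_node}:{remote_dir}/"], check=True)
    remote_script = f"{remote_dir}/{os.path.basename(script_path)}"
    out = subprocess.run(["ssh", login_node, "sbatch", remote_script],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()      # e.g. "Submitted batch job 123456"
```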

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  3. COMPUTING

    CERN Document Server

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  4. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  6. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  7. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and finally the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  8. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  9. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline.

    Science.gov (United States)

    Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin

    2014-01-01

    Automated segmentation and labeling of individual brain anatomical regions in MRI is challenging due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual "atlases" that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++-based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain-tissue-classification-based skull stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. AutoSeg achieved a mean Dice coefficient of 81.73% for the subcortical structures.
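
    The final fusion step, weighted majority voting over the selected atlases, can be written compactly; the toy function below assumes the atlas labels have already been propagated to the subject space and that one scalar weight per atlas is available.

```python
# Toy weighted majority voting over propagated atlas label maps (illustration only).
import numpy as np

def weighted_majority_vote(label_maps, weights):
    """label_maps: list of integer arrays of identical shape; weights: one weight
    per atlas. Returns the fused label image."""
    label_maps = np.stack(label_maps)              # (n_atlases, *image_shape)
    weights = np.asarray(weights, dtype=float)
    labels = np.unique(label_maps)
    votes = np.zeros((len(labels),) + label_maps.shape[1:])
    for k, lab in enumerate(labels):
        # weighted number of atlases voting for this label at each voxel
        votes[k] = np.tensordot(weights, (label_maps == lab).astype(float), axes=1)
    return labels[np.argmax(votes, axis=0)]
```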

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  11. Canadian ATLAS data center to support CERN's LHC

    CERN Multimedia

    2006-01-01

    "The biggest science experiment in history is currently underway at the world-famous CERN labs in Switzerland, and Canada is poised to play a critical role in its success. Thanks to a $10.5 million investment announced by the Canada Foundation for Innovation (CFI), an ultra-sophisticated computing facility -- the ATLAS Data Center -- will be created to support the ATLAS project at CERN's Large Hadron Collider (LHC)." (1 page)

  12. Renewable energy atlas of the United States.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J.A.; Hlava, K.; Greenwood, H.; Carr, A. (Environmental Science Division)

    2012-05-01

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. It is designed for the U.S. Department of Agriculture Forest Service (USFS) and other federal land management agencies to evaluate existing and proposed renewable energy projects. Much of the content of the Atlas was compiled at Argonne National Laboratory (Argonne) to support recent and current energy-related Environmental Impact Statements and studies, including the following projects: (1) West-wide Energy Corridor Programmatic Environmental Impact Statement (PEIS) (BLM 2008); (2) Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2010); (3) Supplement to the Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2011); (4) Upper Great Plains Wind Energy PEIS (WAPA/USFWS 2012, in progress); and (5) Energy Transport Corridors: The Potential Role of Federal Lands in States Identified by the Energy Policy Act of 2005, Section 368(b) (in progress). This report explains how to add the Atlas to your computer and install the associated software; describes each of the components of the Atlas; lists the Geographic Information System (GIS) database content and sources; and provides a brief introduction to the major renewable energy technologies.

  13. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas-based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas-based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray-level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch-based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single-atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single-atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
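
    The "image of classifiers" idea can be illustrated with a small sketch in which every voxel stores the weights of its own multiclass linear classifier applied to a local feature vector (for example a patch of intensities). The shapes and the feature extraction are assumptions for the example, not the paper's exact construction.

```python
# Sketch of applying per-voxel multiclass linear classifiers (illustration only).
import numpy as np

def classify_voxel(features, W, b):
    """features: (n_features,) local features at one voxel;
    W: (n_classes, n_features) per-voxel weights; b: (n_classes,) biases."""
    return int(np.argmax(W @ features + b))

def segment(feature_image, weight_image, bias_image):
    """feature_image: (n_voxels, n_features); weight_image: (n_voxels, n_classes,
    n_features); bias_image: (n_voxels, n_classes). Returns per-voxel labels."""
    return np.array([classify_voxel(feature_image[v], weight_image[v], bias_image[v])
                     for v in range(feature_image.shape[0])])
```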

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  15. All 2006 ATLAS Tutorials online

    CERN Multimedia

    Steven Goldfarb,; Mitch McLachlan,; Homer A. Neal

    The University of Michigan has completed its full agenda of Web Lecture recording for ATLAS for 2006. The archives include all three ATLAS Week Plenary Sessions, as well as a large variety of tutorials. They are accessible at this location. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. This is the first year our group has been asked to provide this complete service to the collaboration, so any and all feedback is welcome. We would especially like to know if you had any difficulties viewing the lectures, if you found the selection of material to be useful, and/or if you think there are any other specific events we ought to cover in 2007. Please send your comments to wlap@umich.edu. We look forward to bringing you a rich variety of new lectures in 2007, starting with the ATLAS Distributed Computing Tutorial on Feb 1, 2 in Edinburgh and concluding with the Higgs discovery talk (of course). Enjoy the Lec...

  16. Quantification of Tc-99m-ethyl cysteinate dimer brain single photon emission computed tomography images using statistical probabilistic brain atlas in depressive end-stage renal disease patients Correlation with disease severity and symptom factors

    Institute of Scientific and Technical Information of China (English)

    Heeyoung Kim; In Joo Kim; Seong-Jang Kim; Sang Heon Song; Kyoungjune Pak; Keunyoung Kim

    2012-01-01

    This study adapted a statistical probabilistic anatomical map of the brain for single photon emission computed tomography images of depressive end-stage renal disease patients. This research aimed to investigate the relationship between symptom clusters, disease severity, and cerebral blood flow. Twenty-seven patients (16 males, 11 females) with stages 4 and 5 end-stage renal disease were enrolled, along with 25 healthy controls. All patients underwent depressive mood assessment and brain single photon emission computed tomography. The statistical probabilistic anatomical map images were used to calculate the brain single photon emission computed tomography counts. An asymmetry index was acquired and Pearson correlation analysis was performed to analyze the correlation between symptom factors, severity, and regional cerebral blood flow. The depression factors of the Hamilton Depression Rating Scale showed a negative correlation with cerebral blood flow in the left amygdala. The insomnia factor showed negative correlations with cerebral blood flow in the left amygdala, right superior frontal gyrus, right middle frontal gyrus, and left middle frontal gyrus. The anxiety factor showed a positive correlation with cerebral glucose metabolism in the cerebellar vermis and a negative correlation with cerebral glucose metabolism in the left globus pallidus, right inferior frontal gyrus, both temporal poles, and left parahippocampus. The overall depression severity (total score of the Hamilton Depression Rating Scale) was negatively correlated with the statistical probabilistic anatomical map results in the left amygdala and right inferior frontal gyrus. In conclusion, our results demonstrated that cerebral blood flow quantified with a probabilistic brain atlas was related, in various brain areas, to both the overall severity and the symptom factors in end-stage renal disease patients.
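
    The statistical step reported above is a standard Pearson correlation between a clinical score and the atlas-derived regional counts; a minimal sketch, with illustrative variable names, is given below.

```python
# Minimal sketch of correlating a clinical score with regional SPECT counts.
from scipy.stats import pearsonr

def correlate_score_with_regions(scores, regional_counts):
    """scores: per-patient clinical scores (e.g. a Hamilton factor);
    regional_counts: dict region_name -> per-patient counts from the atlas VOIs.
    Returns region_name -> (Pearson r, p-value)."""
    results = {}
    for region, counts in regional_counts.items():
        r, p = pearsonr(scores, counts)
        results[region] = (r, p)
    return results
```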

  17. 11 March 2009 - Italian Minister of Education, University and Research M. Gelmini, visiting ATLAS and CMS underground experimental areas and LHC tunnel with Director for Research and Scientific Computing S. Bertolucci. Signature of the guest book with CERN Director-General R. Heuer and S. Bertolucci at CMS Point 5.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Members of the Ministerial delegation: Cons. Amb. Sebastiano FULCI, Diplomatic Adviser; Dott.ssa Elisa GREGORINI, Private Secretary to the Minister; Dott. Massimo ZENNARO, Head of Press Relations; Prof. Roberto PETRONZIO, President of INFN (Istituto Nazionale di Fisica Nucleare); Dott. Luciano CRISCUOLI, Director General for Research, MIUR; Dott. Andrea MARINONI, Scientific Adviser to the Minister. CERN delegation present throughout the programme: Prof. Sergio Bertolucci, Director for Research and Scientific Computing; Prof. Fabiola Gianotti, ATLAS Collaboration Spokesperson; Prof. Paolo Giubellino, ALICE Deputy Spokesperson, Università & INFN, Torino; Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson, INFN Pisa; Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader. Guests in the ATLAS exhibition area: Dr Marcello Givoletti, President of CAEN; Dr Davide Malacalza, President of ASG Ansaldo Superconductors; and users: Prof. Clara Matteuzzi, LHCb Collaboration, Università d...

  18. UNC-Emory Infant Atlases for Macaque Brain Image Analysis: Postnatal Brain Development through 12 Months

    Science.gov (United States)

    Shi, Yundi; Budin, Francois; Yapuncich, Eva; Rumple, Ashley; Young, Jeffrey T.; Payne, Christa; Zhang, Xiaodong; Hu, Xiaoping; Godfrey, Jodi; Howell, Brittany; Sanchez, Mar M.; Styner, Martin A.

    2017-01-01

    Computational anatomical atlases have been shown to be of immense value in neuroimaging as they provide age-appropriate reference spaces alongside ancillary anatomical information for automated analysis such as subcortical structural definitions, cortical parcellations or white fiber tract regions. Standard workflows in neuroimaging necessitate such atlases to be appropriately selected for the subject population of interest. This is especially of importance in early postnatal brain development, where rapid changes in brain shape and appearance render neuroimaging workflows sensitive to the appropriate atlas choice. We present here a set of novel computational atlases for structural MRI and Diffusion Tensor Imaging as a crucial resource for the analysis of MRI data from the non-human primate rhesus monkey (Macaca mulatta) in early postnatal brain development. Forty socially-housed infant macaques were scanned longitudinally at ages 2 weeks, 3, 6, and 12 months in order to create cross-sectional structural and DTI atlases via unbiased atlas building at each of these ages. Probabilistic spatial prior definitions for the major tissue classes were trained on each atlas with expert manual segmentations. In this article we present the development and use of these atlases with publicly available tools, as well as the atlases themselves, which are publicly disseminated to the scientific community. PMID:28119564

  19. The Irish Wind Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R. [Univ. College Dublin, Dept. of Electronic and Electrical Engineering, Dublin (Ireland); Landberg, L. [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)

    1999-03-01

    The development work on the Irish Wind Atlas is nearing completion. The Irish Wind Atlas is an updated, improved version of the Irish section of the European Wind Atlas. A map of the Irish wind resource based on a WAsP analysis of the measured data, together with station descriptions for 27 measuring stations, is presented. The results of previously presented WAsP/KAMM runs show good agreement with these results. (au)

  20. Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    CERN Document Server

    Lopez-Perez, Juan Antonio; Salt, Jose; Ros, Eduardo

    2008-01-01

    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and some questions remain unanswered so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about $1~\mathrm{TeV}^{-1}$, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of some of such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the cou...

  1. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...
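
    The contrast between the multi-process and multi-threaded schemes can be illustrated schematically. AthenaMT itself is a C++/Gaudi framework; the Python sketch below only shows the shape of the idea, namely that a thread pool lets concurrently processed events share one in-memory copy of large read-only data such as conditions, instead of one copy per worker process.

```python
# Schematic illustration only: shared read-only data across worker threads.
from concurrent.futures import ThreadPoolExecutor

SHARED_CONDITIONS = {"field_scale": 2.0}     # one copy shared by all threads

def process_event(event):
    # placeholder "trigger algorithm" using the shared conditions data
    return event["id"], event["energy"] * SHARED_CONDITIONS["field_scale"]

def run(events, n_threads=4):
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        return list(pool.map(process_event, events))
```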

  2. Computational Analysis of LDDMM for Brain Mapping

    Directory of Open Access Journals (Sweden)

    Can Ceritoglu

    2013-08-01

    One goal of computational anatomy is to develop tools to accurately segment brain structures in healthy and diseased subjects. In this paper, we examine the performance and complexity of such segmentation in the framework of the large deformation diffeomorphic metric mapping (LDDMM) registration method with reference to atlases and parameters. First, we report the application of a multi-atlas segmentation approach to define basal ganglia structures in the brains of healthy and diseased children. The segmentation accuracy of the multi-atlas approach is compared with the single-atlas LDDMM implementation and two state-of-the-art segmentation algorithms – FreeSurfer and FSL – by computing the overlap errors between automatic and manual segmentations of the six basal ganglia nuclei in healthy subjects as well as subjects with diseases including ADHD and Autism. The high accuracy of multi-atlas segmentation is obtained at the cost of increasing the computational complexity because of the calculations necessary between the atlases and a subject. Second, we examine the effect of parameters on total LDDMM computation time and segmentation accuracy for basal ganglia structures. The single-atlas LDDMM method is used to automatically segment the structures in a population of 16 subjects using different sets of parameters. The results show that a cascade approach and using fewer time steps can reduce computational complexity as much as five times while maintaining reliable segmentations.
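
    The overlap error quoted in such comparisons is usually the Dice coefficient, 2|A∩B|/(|A|+|B|), computed between the automatic and manual label masks; a small helper is sketched below.

```python
# Dice coefficient between two binary segmentation masks.
import numpy as np

def dice(auto_mask, manual_mask):
    a = np.asarray(auto_mask, dtype=bool)
    m = np.asarray(manual_mask, dtype=bool)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom else 1.0
```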

  3. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Software Week Plenary, 6-10 December 2004; North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks); Physics Analysis Tools Tutorial (Tucson), 19 December 2004; Full Chain Tutorial, 21 September 2004; ATLAS Plenary Sessions, 17-18 February 2005 (17 talks). Coming soon: ATLAS Tutorial on Electroweak Physics, 14 February 2005; Software Workshop, 21-22 February 2005. Click here to browse WLAP for all ATLAS lectures.

  4. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift tube based detectors. These kinds of detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a few microns precision. This permits attaining optimal measurements of the parameters of the charged particles' trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large scale computing simulation of the ATLAS detector has been performed, mimicking the ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment result for several challenges (real cosmic ray data taking and computing system commissioning) will be...
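
    At its core, the alignment determines corrections that minimise track-hit residuals; the toy sketch below shows this as a single linearised least-squares solve. The derivative matrix and residual vector are assumed to come from track fits, and the real ATLAS procedure is far more elaborate; this is only an illustration of the principle.

```python
# Toy linearised least-squares alignment step (illustration only).
import numpy as np

def alignment_corrections(derivatives, residuals):
    """derivatives: (n_hits, n_dof) matrix of d(residual)/d(alignment parameter);
    residuals: (n_hits,) measured track-hit residuals.
    Returns alignment corrections of shape (n_dof,)."""
    residuals = np.asarray(residuals, dtype=float)
    corrections, *_ = np.linalg.lstsq(derivatives, -residuals, rcond=None)
    return corrections
```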

  5. Evolution of the ATLAS Nightly Build System

    Science.gov (United States)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. For over 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  6. Distributed analysis in ATLAS using GANGA

    Science.gov (United States)

    Elmsheuser, Johannes; Brochu, Frederic; Cowan, Greig; Egede, Ulrik; Gaidioz, Benjamin; Lee, Hurng-Chun; Maier, Andrew; Móscicki, Jakub; Pajchel, Katarina; Reece, Will; Samset, Bjorn; Slater, Mark; Soroko, Alexander; Vanderster, Daniel; Williams, Michael

    2010-04-01

    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The demands on resource management are very high. In every experiment up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to assure that all users can use the Grid without expertise in Grid technology. These tools enlarge the number of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates these kinds of tools. GANGA provides a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We will be reporting on the plug-ins and our experiences of distributed data analysis using GANGA within the ATLAS experiment. Support for all Grids presently used by ATLAS, namely the LCG/EGEE, NDGF/NorduGrid, and OSG/PanDA is provided. The integration and interaction with the ATLAS data management system DQ2 into GANGA is a key functionality. An intelligent job brokering is set up by using the job splitting mechanism together with data-set and file location knowledge. The brokering is aided by an automated system that regularly processes test analysis jobs at all ATLAS DQ2 supported sites. Large numbers of analysis jobs can be sent to the locations of data following the ATLAS computing model. GANGA supports, amongst other things, tasks of user analysis with reconstructed data and small-scale production of Monte Carlo data.
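
    The splitting and data-driven brokering described above can be pictured with a small sketch: an analysis task is split into subjobs along its input dataset, and each subjob is sent to a site holding most of its files. The helper below and its replica catalogue are hypothetical and are not the GANGA API.

```python
# Illustrative job splitting with location-aware brokering (not the GANGA API).
def split_and_broker(input_files, replica_catalogue, files_per_job=10):
    """replica_catalogue: dict file -> list of sites holding a replica."""
    subjobs = []
    for i in range(0, len(input_files), files_per_job):
        chunk = input_files[i:i + files_per_job]
        site_counts = {}
        for f in chunk:
            for site in replica_catalogue.get(f, []):
                site_counts[site] = site_counts.get(site, 0) + 1
        # choose the site that hosts the largest fraction of this chunk
        site = max(site_counts, key=site_counts.get) if site_counts else None
        subjobs.append({"files": chunk, "site": site})
    return subjobs
```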

  7. Atlas-Based Prostate Segmentation Using a Hybrid Registration

    CERN Document Server

    Martin, Sébastien; Troccaz, Jocelyne

    2008-01-01

    Purpose: This paper presents the preliminary results of a semi-automatic method for prostate segmentation of Magnetic Resonance Images (MRI) which aims to be incorporated in a navigation system for prostate brachytherapy. Methods: The method is based on the registration of an anatomical atlas computed from a population of 18 MRI exams onto a patient image. A hybrid registration framework which couples an intensity-based registration with a robust point-matching algorithm is used for both atlas building and atlas registration. Results: The method has been validated on the same dataset as the one used to construct the atlas, using the "leave-one-out" method. Results give a mean error of 3.39 mm and a standard deviation of 1.95 mm with respect to expert segmentations. Conclusions: We think that this segmentation tool may be a very valuable help to the clinician for routine quantitative image exploitation.
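
    The leave-one-out validation mentioned above follows a simple pattern, sketched below with hypothetical placeholders (build_atlas, register_and_segment, surface_error) standing in for the paper's hybrid-registration pipeline and error measure.

```python
# Sketch of leave-one-out evaluation of atlas-based segmentation (illustrative).
import numpy as np

def leave_one_out(exams, build_atlas, register_and_segment, surface_error):
    """exams: list of dicts with 'image' and 'expert_segmentation'."""
    errors = []
    for i, test_exam in enumerate(exams):
        training = exams[:i] + exams[i + 1:]          # all exams except the test one
        atlas = build_atlas(training)
        auto_seg = register_and_segment(atlas, test_exam["image"])
        errors.append(surface_error(auto_seg, test_exam["expert_segmentation"]))
    return float(np.mean(errors)), float(np.std(errors))
```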

  8. Incomplete ossification of the atlas in dogs with cervical signs.

    Science.gov (United States)

    Warren-Smith, Christopher M R; Kneissl, Sibylle; Benigni, Livia; Kenny, Patrick J; Lamb, Christopher R

    2009-01-01

    Osseous defects affecting the atlas were identified in computed tomography and magnetic resonance images of five dogs with cervical signs including pain, ataxia, tetraparesis, or tetraplegia. Osseous defects corresponded to normal positions of sutures between the halves of the neural arch and the intercentrum, and were compatible with incomplete ossification. Alignment between the portions of the atlas appeared relatively normal in four dogs. In these dogs the bone edges were smooth and rounded with a superficial layer of relatively compact cortical bone. Displacement compatible with unstable fracture was evident in one dog. Concurrent atlantoaxial subluxation, with dorsal displacement of the axis relative to the atlas, was evident in four dogs. Three dogs received surgical treatment and two dogs were treated conservatively. All dogs improved clinically. Incomplete ossification of the atlas, which may be associated with atlantoaxial subluxation, should be considered in the differential diagnosis of dogs with clinical signs localized to the cranial cervical region.

  9. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...
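
    The resubmission strategy can be sketched as a simple retry loop that distinguishes transient from permanent failures; run_job and the failure codes below are assumptions for illustration, not the actual production system interface.

```python
# Minimal sketch of automatic resubmission of transiently failing jobs.
TRANSIENT = frozenset({"lost_heartbeat", "stage_in_failed", "worker_node_crash"})

def run_with_resubmission(job, run_job, max_attempts=3):
    """run_job(job) -> (status, code); retries transient failures up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        status, code = run_job(job)
        if status == "finished":
            return attempt                 # CPU time of failed attempts is the retry cost
        if code not in TRANSIENT:
            break                          # permanent failure: stop and flag for experts
    raise RuntimeError(f"job {job['id']} failed after {attempt} attempt(s): {code}")
```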

  10. The last ATLAS overview week now available on Web Lectures

    CERN Multimedia

    Jeremy Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the lectures and send us a note at wlap@umich.edu to tell us what you think. The newly available WLAP items relating to ATLAS are the following: ATLAS Week Plenary, CERN, 2-3 October 2006. All previous WLAP lectures are also available on the web.

  11. Migration of ATLAS PanDA to CERN

    Science.gov (United States)

    Stewart, Graeme Andrew; Klimentov, Alexei; Koblitz, Birger; Lamanna, Massimo; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; Emanuel De Castro Faria Salgado, Pedro; Wenaus, Torre

    2010-04-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  12. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. ATLAS experiment is using Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  13. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high-occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. The WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  14. ATLAS brochure (German version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  15. ATLAS Brochure (English version)

    CERN Multimedia

    Lefevre, Christiane

    2011-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  16. ATLAS brochure (Danish version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  17. ATLAS brochure (Italian version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  18. ATLAS brochure (French version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  19. ATLAS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  20. ATLAS Brochure (english version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  1. ATLAS Brochure (french version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  2. ATLAS Brochure (english version)

    CERN Multimedia

    2004-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  3. Searches in ATLAS

    CERN Document Server

    Kondrashova, Nataliia; The ATLAS collaboration

    2017-01-01

    Many theories beyond the Standard Model predict new phenomena accessible by the LHC. Searches for new physics models are performed using the ATLAS experiment at the LHC. The results reported here use the pp collision data sample collected in 2015 and 2016 by the ATLAS detector at the LHC with a centre-of-mass energy of 13 TeV.

  4. ATLAS Colouring Book

    CERN Multimedia

    Anthony, Katarina

    2016-01-01

    The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  5. ATLAS brochure (Norwegian version)

    CERN Multimedia

    Lefevre, C

    2009-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  6. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    La Givrine, near St Cergue: cross-country skiing and fondue at Basse Ruche with M. Nordberg, P. Jenni, M. Nessi, F. Gianotti and colleagues. The ATLAS Management fondue dinner reviews the state of play of the experiment. The film contains many fun scenes from cross-country skiing, and after 41 minutes it moves to the fondue dinner in a nice chalet with many people working on the ATLAS experiment.

  7. ATLAS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  8. ATLAS Thesis Awards 2015

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on Thursday 25 February. The winners also presented their work in front of members of the ATLAS Collaboration. Winners: Javier Montejo Berlingen, Barcelona (Spain), Ruth Pöttgen, Mainz (Germany), Nils Ruthmann, Freiburg (Germany), and Steven Schramm, Toronto (Canada).

  9. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  10. The ATLAS tile calorimeter

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    Louis Rose-Dulcina, a technician from the ATLAS collaboration, works on the ATLAS tile calorimeter. Special manufacturing techniques were developed to mass produce the thousands of elements in this detector. Tile detectors are made in a sandwich-like structure where these scintillator tiles are placed between metal sheets.

  11. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    Budker Nuclear Physics Institute, Novosibirsk. Sequence 1: shots of the aircraft factory where machining for ATLAS is done; shots of aircraft; work on components for the ATLAS big wheel; discussions between Tikhonov and Nordberg in the workshop. Sequence 2: shots of downtown Novosibirsk, including the little church that marks the mid-point of the Russian Federation. Sequence 3: interview of Yuri Tikhonov by Andrew Millington.

  12. Higgs searches at ATLAS

    CERN Document Server

    Lafaye, R

    2002-01-01

    This proceeding is an overview of ATLAS capabilities for Higgs studies. After a short introduction on the LEP and Tevatron searches on this subject, the ATLAS potential for the discovery of a Standard Model or a supersymmetric Higgs boson is summarized. Finally, a section presents the measurements of Higgs parameters that will be possible at the LHC. (6 refs).

  13. ATLAS brochure (Polish version)

    CERN Multimedia

    Lefevre, C

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  14. A Slice of ATLAS

    CERN Multimedia

    2004-01-01

    An entire section of the ATLAS detector is being assembled at Prévessin. Since May the components have been tested using a beam from the SPS, giving the ATLAS team valuable experience of operating the detector as well as an opportunity to debug the system.

  15. ATLAS rewards industry

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and an overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Picture 30: representatives of the three award-winning companies after the ceremony.

  16. ATLAS-Hadronic Calorimeter

    CERN Multimedia

    2003-01-01

    Hall 180 work on Hadronic Calorimeter The ATLAS hadronic tile calorimeter The Tile Calorimeter, which constitutes the central section of the ATLAS hadronic calorimeter, is a non-compensating sampling device made of iron and scintillating tiles. (IEEE Trans. Nucl. Sci. 53 (2006) 1275-81)

  17. ATLAS Visitors Centre

    CERN Multimedia

    claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  18. ATLAS brochure (Catalan version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  19. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  20. ATLAS Virtual Visits

    CERN Document Server

    Goldfarb, Steven; The ATLAS collaboration

    2015-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and particle physics, in general, through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, that typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people, from all of the world’s continents, have actively participated in ATLAS Virtual Visits, with many more enjoying the experience from the publicly available webcasts and recordings. We present an overview of our experience and discuss potential development for the future.

  1. ATLAS' major cooling project

    CERN Multimedia

    2005-01-01

    In 2005, a considerable effort has been put into commissioning the various units of ATLAS' complex cryogenic system. This is in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. The liquid helium and nitrogen ATLAS refrigerators in USA 15. Cryogenics plays a vital role in operating massive detectors such as ATLAS. In many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground The helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  2. Triggering events with GPUs at ATLAS

    CERN Document Server

    Kama, Sami; The ATLAS collaboration

    2015-01-01

    The growing complexity of events produced in LHC collisions demands more and more computing power, both for the online selection and for the offline reconstruction of events. In recent years, the explosive performance growth of massively parallel processors like Graphics Processing Units (GPUs), both in computing power and in low energy consumption, makes them extremely attractive for use in a complex high-energy experiment like ATLAS. Together with the optimization of reconstruction algorithms, this new massively parallel paradigm is being exploited. For this purpose a small-scale prototype of the full ATLAS High Level Trigger involving GPUs has been implemented. We discuss the integration procedure of this prototype, the achieved performance and the prospects for the future.
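
    The attraction of GPUs described above comes from applying the same small kernel independently to many inputs. The sketch below illustrates only that data-parallel pattern on the CPU with a process pool; it uses no GPU API, is not ATLAS HLT code, and all names and numbers are invented.

    ```python
    # CPU-only illustration of the data-parallel pattern behind GPU trigger
    # prototypes: one small, independent "kernel" applied to many inputs
    # (here, fake calorimeter cells). Not ATLAS HLT code; no GPU API is used.
    from concurrent.futures import ProcessPoolExecutor
    import random

    def cell_above_threshold(cell, threshold=2.0):
        """Per-cell 'kernel': trivially independent work, ideal for parallel hardware."""
        index, energy = cell
        return index if energy > threshold else None

    if __name__ == "__main__":
        random.seed(1)
        cells = [(i, random.expovariate(1.0)) for i in range(100_000)]
        with ProcessPoolExecutor() as pool:
            hits = [i for i in pool.map(cell_above_threshold, cells, chunksize=5000)
                    if i is not None]
        print(f"{len(hits)} cells above threshold")
    ```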

  3. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    Ancu, Lucian Stefan; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whole trackin...

  4. Class Generation for Numerical Wind Atlases

    DEFF Research Database (Denmark)

    Cutler, N.J.; Jørgensen, B.H.; Ersbøll, Bjarne Kjær;

    2006-01-01

    A new optimised clustering method is presented for generating wind classes for mesoscale modelling to produce numerical wind atlases. It is compared with the existing method of dividing the data into 12 to 16 sectors and 3 to 7 wind-speed bins, and dividing again according to the stability of the atmosphere. Wind atlases are typically produced using many years of on-site wind observations at many locations. Numerical wind atlases are the result of mesoscale model integrations based on synoptic-scale wind climates and can be produced in a number of hours of computation. 40 years of twice-daily NCEP/NCAR reanalysis geostrophic wind data (approximately 200 km resolution) are represented in typically around 150 classes, each with a frequency of occurrence. The mean wind speed and direction in each class are used as input data to force the mesoscale model, which downscales the wind to a 5 km resolution while ...
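
    The "existing method" the abstract compares against is a fixed binning by direction sector and wind-speed bin. A minimal sketch of such a classification, assuming 12 sectors and illustrative speed-bin edges, and ignoring the stability subdivision:

    ```python
    # Minimal sketch of the sector/speed-bin wind classification described in
    # the abstract (12-16 sectors, 3-7 wind-speed bins). The number of sectors
    # and the speed thresholds below are illustrative assumptions.
    import math

    SPEED_BIN_EDGES = [3.0, 6.0, 9.0, 12.0]   # m/s, hypothetical edges -> 5 bins
    N_SECTORS = 12                            # 30-degree direction sectors

    def wind_class(u, v):
        """Return (sector, speed_bin) for geostrophic wind components u, v in m/s."""
        speed = math.hypot(u, v)
        # meteorological convention: direction the wind blows FROM, clockwise from north
        direction = math.degrees(math.atan2(-u, -v)) % 360.0
        sector = int((direction + 360.0 / (2 * N_SECTORS)) // (360.0 / N_SECTORS)) % N_SECTORS
        speed_bin = sum(speed > edge for edge in SPEED_BIN_EDGES)
        return sector, speed_bin

    if __name__ == "__main__":
        print(wind_class(5.0, 5.0))   # a ~7 m/s south-westerly -> sector 8, speed bin 2
    ```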

  5. ATLAS Forward Detectors and Physics

    CERN Document Server

    Soni, N

    2010-01-01

    In this communication I describe the ATLAS forward physics program and the detectors, LUCID, ZDC and ALFA that have been designed to meet this experimental challenge. In addition to their primary role in the determination of ATLAS luminosity these detectors - in conjunction with the main ATLAS detector - will be used to study soft QCD and diffractive physics in the initial low luminosity phase of ATLAS running. Finally, I will briefly describe the ATLAS Forward Proton (AFP) project that currently represents the future of the ATLAS forward physics program.

  6. The ATLAS EventIndex: data flow and inclusion of other metadata

    Science.gov (United States)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is the catalogue of the event-related metadata for the information collected from the ATLAS detector. The basic unit of this information is the event record, containing the event identification parameters, pointers to the files containing this event as well as trigger decision information. The main use case for the EventIndex is event picking, as well as data consistency checks for large production campaigns. The EventIndex employs the Hadoop platform for data storage and handling, as well as a messaging system for the collection of information. The information for the EventIndex is collected both at Tier-0, when the data are first produced, and from the Grid, when various types of derived data are produced. The EventIndex uses various types of auxiliary information from other ATLAS sources for data collection and processing: trigger tables from the condition metadata database (COMA), dataset information from the data catalogue AMI and the Rucio data management system and information on production jobs from the ATLAS production system. The ATLAS production system is also used for the collection of event information from the Grid jobs. EventIndex developments started in 2012 and in the middle of 2015 the system was commissioned and started collecting event metadata, as a part of ATLAS Distributed Computing operations.
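
    The abstract describes the basic unit as an event record carrying identification parameters, pointers to the containing files and trigger information, keyed for event picking. A hypothetical sketch of such a record and a (run, event) lookup; the field names and the in-memory dictionary are invented for illustration (the real system stores these records in Hadoop).

    ```python
    # Hypothetical sketch of an EventIndex-style record and an event-picking
    # lookup keyed by (run number, event number). Field names are invented.
    from dataclasses import dataclass

    @dataclass
    class EventRecord:
        run_number: int
        event_number: int
        file_guids: list            # pointers to the files containing this event
        trigger_bits: int = 0       # packed trigger decision information

    index = {}

    def insert(record: EventRecord):
        index[(record.run_number, record.event_number)] = record

    def pick(run_number: int, event_number: int):
        """Event picking: find which files contain a given event."""
        rec = index.get((run_number, event_number))
        return rec.file_guids if rec else []

    if __name__ == "__main__":
        insert(EventRecord(358031, 1234567, ["AOD-guid-1", "DAOD-guid-7"], trigger_bits=0b101))
        print(pick(358031, 1234567))
    ```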

  7. ATLAS data sonification : a new interface for musical expression

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This poster tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed. It is a partnership between the ATLAS collaboration and the MIT multimedia lab.
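
    As one concrete way to picture a data-to-sound mapping, the sketch below scales an energy value onto a pentatonic scale of MIDI note numbers. It is a hypothetical illustration only, not the mapping actually used by the ATLAS sonification project; the scale, ranges and base note are assumptions.

    ```python
    # Hypothetical data-to-sound mapping, NOT the project's actual mapping:
    # scale an energy (GeV) onto a pentatonic scale of MIDI note numbers,
    # so higher-energy deposits give higher pitches.
    PENTATONIC = [0, 2, 4, 7, 9]          # scale degrees within an octave

    def energy_to_midi(energy_gev, e_min=0.0, e_max=200.0, base_note=48, octaves=3):
        """Map energy in [e_min, e_max] to a MIDI note on a pentatonic scale."""
        e = min(max(energy_gev, e_min), e_max)
        steps = len(PENTATONIC) * octaves
        i = int((e - e_min) / (e_max - e_min) * (steps - 1))
        octave, degree = divmod(i, len(PENTATONIC))
        return base_note + 12 * octave + PENTATONIC[degree]

    if __name__ == "__main__":
        for e in (5, 50, 150):
            print(e, "GeV ->", energy_to_midi(e))
    ```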

  8. 24 October 2014 - President of the Republic of Ecuador R. Correa Delgado signing the guest book with Vice President L. Moreno and Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Guillaume, Jeanneret

    2014-01-01

    Visiting the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton and ATLAS User F. Monticelli; throughout accompanied by Adviser for Ecuador J. Salicio Diez and Director for Research and Scientific Computing S. Bertolucci.

  9. ATLAS Data Challenge 1

    CERN Document Server

    Poulard, G

    2003-01-01

    In 2002 the ATLAS experiment started a series of Data Challenges (DC), whose goals are the validation of the Computing Model, the complete software suite and the data model, and ensuring the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the production of those samples as a world-wide distributed activity. The first phase of DC1 was run during summer 2002, and involved 39 institutes in 18 countries. More than 10 million physics events and 30 million single-particle events were fully simulated. Over a period of about 40 calendar days 71000 CPU-days were used, producing 30 Tbytes of data in about 35000 partitions. In the second phase the next processing step was performed with the participation of 56 institutes in 21 countries (~ 4000 processors used in parallel). The basic elements of ...
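
    A quick back-of-envelope check of the figures quoted above (71000 CPU-days over about 40 calendar days, 30 Tbytes in about 35000 partitions):

    ```python
    # Back-of-envelope check of the DC1 figures quoted above: average number
    # of CPUs running in parallel and average partition size.
    cpu_days = 71_000
    calendar_days = 40
    data_tb = 30
    partitions = 35_000

    avg_cpus = cpu_days / calendar_days             # ~1775 CPUs busy on average
    avg_partition_gb = data_tb * 1024 / partitions  # ~0.9 GB per partition

    print(f"average concurrent CPUs : {avg_cpus:.0f}")
    print(f"average partition size  : {avg_partition_gb:.2f} GB")
    ```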

  10. ATLAS Data Challenge 1

    CERN Document Server

    DC1 TaskForce

    2003-01-01

    The ATLAS Collaboration at CERN is preparing for data taking and analysis at the LHC, which will start in 2007. Therefore, in 2002 a series of Data Challenges (DCs) was started, whose goals are the validation of the Computing Model, the complete software suite and the data model, and ensuring the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples for the High Level Trigger and Physics communities, and the production of those large data samples as a worldwide distributed activity. It should be noted that it was not an option to "run everything at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. We were therefore faced with the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, the benefits o...

  11. The ATLAS tau trigger

    CERN Document Server

    Tsuno, S; The ATLAS collaboration

    2009-01-01

    The ATLAS tau trigger has three levels: the first one (L1) is hardware based and uses FPGAs, while the second (L2) and third (EF, Event Filter) levels are software based and use commodity computers (2 x Intel Harpertown quad-core 2.5 GHz) running Scientific Linux 4. In this contribution we discuss both the physics characteristics of tau leptons and the technical solutions for quick data access and fast algorithms. We show that L1 selects narrow jets in the calorimeter with an overall rejection against QCD jets of 300, whilst L2 and EF (referred to together as the High Level Trigger, HLT) use all the detectors with full granularity and apply a typical rejection of 15 within the stringent timing requirements of the LHC. In the HLT there are two complementary approaches: specialized, fast algorithms are used at L2, while more refined and sophisticated algorithms, imported from the offline software, are utilized in the EF.
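
    The rejection factors quoted above combine multiplicatively across trigger levels. The one-liner below checks the resulting output rate for an assumed input rate; the 100 kHz figure is an illustrative assumption, not taken from the abstract.

    ```python
    # Rejections of successive trigger levels multiply: the abstract quotes
    # ~300 at L1 and ~15 in the HLT against QCD jets. The 100 kHz input rate
    # is an assumed, illustrative number.
    def output_rate(input_rate_hz, rejections):
        rate = input_rate_hz
        for r in rejections:
            rate /= r
        return rate

    if __name__ == "__main__":
        print(f"{output_rate(100_000, [300, 15]):.1f} Hz after L1 and HLT")
    ```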

  12. EnviroAtlas - Cleveland, OH - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Cleveland, OH EnviroAtlas Community. It represents the outside edge of all the block groups included in the...

  13. EnviroAtlas - Des Moines, IA - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Des Moines, IA EnviroAtlas Community. It represents the outside edge of all the block groups included in the...

  14. 18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

    CERN Multimedia

    Samuel Morier-Genoud

    2012-01-01

    18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

  15. 10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

  16. ATLAS Event - First Splash of Particles in ATLAS

    CERN Multimedia

    ATLAS Outreach

    2008-01-01

    A simulated event. September 10, 2008 - The ATLAS detector lit up as a flood of particles traversed the detector when the beam was occasionally directed at a target near ATLAS. This allowed ATLAS physicists to study how well the various components of the detector were functioning in preparation for the forthcoming collisions. The first ATLAS data recorded on September 10, 2008 is seen here. Running time 24 seconds

  17. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2014-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows comprise e.g. tests of the ATLAS nightly build system, ATLAS MC production system, XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.
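
    A hypothetical sketch of the automated site-testing idea described above: run a functional test per site, keep a failure counter, and exclude a site after repeated failures. All names and the "test" itself are invented; this is not the HammerCloud API.

    ```python
    # Hypothetical automated site-testing loop in the spirit of HammerCloud:
    # per-site functional test, failure bookkeeping, exclusion after repeated
    # failures. Names and the placeholder test are invented for illustration.
    FAILURE_LIMIT = 3
    failures = {}
    excluded = set()

    def run_functional_test(site):
        """Placeholder for submitting and checking a short validation job."""
        return site != "SITE_B"          # pretend SITE_B is broken

    def test_cycle(sites):
        for site in sites:
            if site in excluded:
                continue
            if run_functional_test(site):
                failures[site] = 0
            else:
                failures[site] = failures.get(site, 0) + 1
                if failures[site] >= FAILURE_LIMIT:
                    excluded.add(site)
                    print(f"excluding {site} after {FAILURE_LIMIT} consecutive failures")

    if __name__ == "__main__":
        for _ in range(4):
            test_cycle(["SITE_A", "SITE_B", "SITE_C"])
        print("excluded:", excluded)
    ```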

  18. Grid Site Testing for ATLAS with HammerCloud

    CERN Document Server

    Elmsheuser, J; The ATLAS collaboration; Legger, F; Medrano LLamas, R; Sciacca, G; van der Ster, D

    2013-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling VO- and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test work-flows. These new work-flows comprise e.g. tests of the ATLAS nightly build system, ATLAS MC production system, XRootD federation FAX and new site stress test work-flows. We report on the development, optimization and results of the various components in the HammerCloud framework.

  19. Evolution of User Analysis on the Grid in ATLAS

    CERN Document Server

    Legger, Federica; The ATLAS collaboration

    2016-01-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Based on the experience from the first run of the LHC, substantial improvements to the ATLAS computing system have been made to optimize both production and analysis workflows. These include the re-design of the production and data management systems, a new analysis data format and event model, and the development of common reduction and analysis frameworks. The impact of such changes on the distributed analysis system is evaluated. More than 100 mill...

  20. Benefits and performance of ATLAS approaches to utilizing opportunistic resources

    CERN Document Server

    Filip\\v{c}i\\v{c}, Andrej; The ATLAS collaboration

    2016-01-01

    ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The difficulties of using such opportunistic resources come from architectural differences such as unavailability of grid services, the absence of network connectivity on worker nodes or inability to use standard authorization protocols. Nevertheless, ATLAS has been extremely successful in running production payloads on a variety of sites, thanks largely to the job execution workflow design in which the job assignment, input data provisioning and execution steps are clearly separated and can be offloaded to custom services. To transparently include the opportunistic sites in the ATLAS central production system, several models with supporting services have been developed to mimic the functionality of a full WLCG site. Some are e...

  1. ATLAS Inner Detector Alignment

    CERN Document Server

    Bocci, A

    2008-01-01

    The ATLAS experiment is a multi-purpose particle detector that will study high-energy particle collisions produced by the Large Hadron Collider at CERN. In order to achieve its physics goals, ATLAS tracking requires that the positions of the silicon detector elements be known to a precision better than 10 μm. Several track-based alignment algorithms have been developed for the Inner Detector. An extensive validation has been performed with simulated events and real data coming from the ATLAS detector. Results from this validation are reported in this paper.

  2. Ceremony for ATLAS cavern

    CERN Multimedia

    2003-01-01

    Wednesday 4 June will be a special day for CERN. The President of the Swiss Confederation, Pascal Couchepin, will officially inaugurate the huge ATLAS cavern now that the civil engineering works have ended. The inauguration ceremony will be held in the ATLAS surface building, with speeches by Pascal Couchepin and CERN, ATLAS and civil engineering personalities. This ceremony will be Webcast live. To access the Webcast on 4 June at 18h00 go to CERN Intranet home page or the following address : http://webcast.cern.ch/live.php

  3. Highlights from ATLAS

    CERN Document Server

    Charlton, D; The ATLAS collaboration

    2013-01-01

    Highlights of recent results from ATLAS were presented. The data collected to date, the detector and physics performance, and measurements of previously established Standard Model processes were reviewed briefly before summarising the latest ATLAS results in the Brout-Englert-Higgs sector, where big progress has been made in the year since the discovery. Finally, selected prospects for measurements including the data from the HL-LHC luminosity upgrade were presented, for both ATLAS and CMS. Many of the results mentioned are preliminary. These proceedings reflect only a brief summary of the material presented, and the status at the time of the conference is reported.

  4. 29 March 2011 - Ninth President of Israel S.Peres welcomed by CERN Director-General R. Heuer who introduces Council President M. Spiro, Director for Accelerators and Technology S. Myers, Head of International Relations F. Pauss, Physics Department Head P. Bloch, Technology Department Head F. Bordry, Human Resources Department Head A.-S. Catherin, Beams Department Head P. Collier, Information Technology Department Head F. Hemmer, Adviser for Israel J. Ellis, Legal Counsel E. Gröniger-Voss, ATLAS Collaboration Spokesperson F. Gianotti, Former ATLAS Collaboration Spokesperson P. Jenni, Weizmann Institute G. Mikenberg, CERN VIP and Protocol Officer W. Korda.

    CERN Document Server

    Maximilien Brice

    2011-01-01

    During his visit he toured the ATLAS underground experimental area with Giora Mikenberg of the ATLAS collaboration, Weizmann Institute of Sciences and Israeli industrial liaison office, Rolf Heuer, CERN’s director-general, and Fabiola Gianotti, ATLAS spokesperson. The president also visited the CERN computing centre and met Israeli scientists working at CERN.

  5. Atlas Skills for Learning Rather than Learning Atlas Skills.

    Science.gov (United States)

    Carswell, R. J. B.

    1986-01-01

    Presents a model for visual learning and describes an approach to skills instruction which aids students in using atlases. Maintains that teachers must help students see atlases as tools capable of providing useful information rather than experiencing atlas learning as an empty exercise with little relevance to their lives. (JDH)

  6. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the front-end detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high-performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms and various hardware and critical-service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users subdivided into approximately 300 categories in correspondence with their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  7. ATLAS data sonification: a new interface for musical expression and public interaction

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This talk tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed.

  8. A digital rat atlas of sectional anatomy

    Science.gov (United States)

    Yu, Li; Liu, Qian; Bai, Xueling; Liao, Yinping; Luo, Qingming; Gong, Hui

    2006-09-01

    This paper describes a digital rat atlas of sectional anatomy made by milling. Two healthy Sprague-Dawley (SD) rats weighing 160-180 g were used for the generation of this atlas. The rats were depilated completely, then euthanized by CO2. One was prepared via vascular perfusion; the other was directly frozen at -85 °C for over 24 hours. After that, the frozen specimens were transferred into iron molds for embedding. A 3% gelatin solution colored blue was used to fill the molds, which were then frozen at -85 °C for one or two days. The frozen specimen blocks were subsequently sectioned on the cryosection-milling machine in a plane oriented approximately transverse to the long axis of the body. The surfaces of the specimen blocks were imaged by a scanner and digitized into 4,600 x 2,580 x 24 bit arrays by a computer. Finally 9,475 sectional images (arterial vessels not perfused) and 1,646 sectional images (arterial vessels perfused) were captured, bringing the volume of the digital atlas to 369.35 Gbyte. This digital rat atlas covers the whole rat, and the rat arterial vessels are also presented. We have reconstructed this atlas. The information from the two-dimensional (2-D) images of serial sections and the three-dimensional (3-D) surface model shows that the digital rat atlas we constructed is of high quality. This work lays the foundation for deeper study of the digital rat.
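
    The quoted data volume follows roughly from the image dimensions and section counts given in the abstract:

    ```python
    # Rough reproduction of the quoted atlas volume from the numbers in the
    # abstract: 4,600 x 2,580 pixel, 24-bit images and 9,475 + 1,646 sections.
    width, height, bytes_per_pixel = 4600, 2580, 3
    n_sections = 9475 + 1646

    total_bytes = width * height * bytes_per_pixel * n_sections
    print(f"{total_bytes / 1024**3:.1f} GiB")   # ~369 GiB, close to the quoted 369.35 Gbyte
    ```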

  9. An ATLAS Virtual Visit connects physicists at the Town Square of Cracow and physicists of the LHC Experiment in the ATLAS control room; special participation of CERN's General Director, Rolf Heuer and the Director for Research and Scientific Computing, Sergio Bertolucci.

    CERN Multimedia

    2012-01-01

    The 12th Festival of Science "Theory-knowledge-experience...". The Fest will be located on the traditional Main Square, which is visited by thousands of citizens and tourists. The Institute of Nuclear Physics as usual participates in this annual event. Our visitors will learn the secrets of the CERN experiments on the Large Hadron Collider - ATLAS, LHCb, ALICE, CMS - and find out more about the Higgs particles, antimatter and quark-gluon plasma, guided by our scientists and PhD students. One of the attractions will be an ATLAS Control Room Virtual Visit. Visitors will have an opportunity to see how ATLAS is controlled and operated to collect its exciting data and to ask questions to scientists and engineers involved in the LHC programme at CERN. The Institute of Nuclear Physics has also prepared several interactive demonstrations of Atomic Force Microscopy, Magnetic Resonance, Hadron Therapy and Crystal Physics.

  10. Recent results from ATLAS experiment

    CERN Document Server

    Smirnov, Sergei; The ATLAS collaboration

    2016-01-01

    The 2nd LHC run started in 2015 with a pp centre-of-mass collision energy of 13 TeV, and ATLAS had taken more than 20 fb-1 of data at the new energy by summer 2016. In this talk, an overview is given of the ATLAS data taking and the improvements made to the ATLAS experiment during the two-year shutdown in 2013/2014. Selected new results from the recent ATLAS data analysis are also presented.

  11. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: June ATLAS Plenary Meeting Tutorial on Physics EDM and Tools (June) Freiburg Overview Week Ketevi Assamagan's Tutorial on Analysis Tools Click here to browse WLAP for all ATLAS lectures.

  12. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast Tracker processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge to the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full-scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...

  13. ATLAS Overview Week at Brookhaven

    CERN Multimedia

    Pilcher, J

    Over 200 ATLAS participants gathered at Brookhaven National Laboratory during the first week of June for our annual overview week. Some system communities arrived early and held meetings on Saturday and Sunday, and the detector interface group (DIG) and Technical Coordination also took advantage of the time to discuss issues of interest for all detector systems. Sunday was also marked by a workshop on the possibilities for heavy ion physics with ATLAS. Beginning on Monday, and for the rest of the week, sessions were held in common in the well equipped Berkner Hall auditorium complex. Laptop computers became the norm for presentations and a wireless network kept laptop owners well connected. Most lunches and dinners were held on the lawn outside Berkner Hall. The weather was very cooperative and it was an extremely pleasant setting. This picture shows most of the participants from a view on the roof of Berkner Hall. Technical Coordination and Integration issues started the reports on Monday and became a...

  14. Overview of ATLAS PanDA Workload Management

    Science.gov (United States)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G. A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication, together with intelligent brokerage, in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.
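
    The brokerage idea mentioned above can be pictured as choosing, among sites that are online and hold the input data, the one best able to run the job. The sketch below is a hypothetical illustration only; the data structures, site names and selection criterion are invented and are not PanDA's actual brokerage logic.

    ```python
    # Hypothetical sketch of a brokerage decision: among online sites that
    # hold the input dataset, prefer the one with the most free job slots.
    # All names and structures are invented for illustration.
    def broker(job, sites):
        candidates = [
            s for s in sites
            if s["online"] and job["dataset"] in s["datasets"] and s["free_slots"] > 0
        ]
        if not candidates:
            return None                      # no suitable site; job stays queued
        return max(candidates, key=lambda s: s["free_slots"])["name"]

    if __name__ == "__main__":
        sites = [
            {"name": "SITE_A", "online": True,  "free_slots": 120, "datasets": {"mc16.evgen"}},
            {"name": "SITE_B", "online": True,  "free_slots": 800, "datasets": {"data18.raw"}},
            {"name": "SITE_C", "online": False, "free_slots": 999, "datasets": {"mc16.evgen"}},
        ]
        print(broker({"dataset": "mc16.evgen"}, sites))   # -> SITE_A
    ```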

  15. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    1999-01-01

    Different phases of the work at Point 1, the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the different infrastructures for ATLAS. Real underground video. The film has original working sound.

  16. Vermont Natural Resources Atlas

    Data.gov (United States)

    Vermont Center for Geographic Information — The purpose of the Natural Resources Atlas is to provide geographic information about environmental features and sites that the Vermont Agency of Natural Resources...

  17. Higgs measurements with ATLAS

    CERN Document Server

    Queitsch-Maitland, Michaela; The ATLAS collaboration

    2017-01-01

    The final Run 1 and first Run 2 results with the ATLAS detector on the measurement of the cross sections, couplings and properties of the Higgs boson in individual final states and their combination are presented.

  18. Lunar Sample Atlas

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lunar Sample Atlas provides pictures of the Apollo samples taken in the Lunar Sample Laboratory, full-color views of the samples in microscopic thin-sections,...

  19. ATLAS TV PROJECT

    CERN Multimedia

    2006-01-01

    CERN, Building 40. Interview with theorist Mr. Philip Hinchliffe (Berkeley), as well as an interview with his wife Mrs. Hinchliffe, who is also Physics Department head at Berkeley. They are both working on the ATLAS Experiment.

  20. California Ocean Uses Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset is a result of the California Ocean Uses Atlas Project: a collaboration between NOAA's National Marine Protected Areas Center and Marine Conservation...

  1. PeptideAtlas

    Data.gov (United States)

    U.S. Department of Health & Human Services — PeptideAtlas is a multi-organism, publicly accessible compendium of peptides identified in a large set of tandem mass spectrometry proteomics experiments. Mass...

  2. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  3. ATLAS Cavern baseplate

    CERN Multimedia

    2002-01-01

    This video shows the incredible amount of iron used for the ATLAS cavern. Please look at the related links and videos concerning the civil engineering, where you can see the cavern excavation work in even more detail.

  4. ATLAS DAQ Configuration Databases

    Institute of Scientific and Technical Information of China (English)

    I. Alexandrov; A. Amorim; et al.

    2001-01-01

    The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of architecture, implementation, test results and plans for future work.

  5. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    ATLAS Physics Workshop at the University of Roma Tre held from Monday 06 June 2005 to Saturday 11 June 2005. Experts establishing workshop, poster, people milling Shots of Peter Jenni introduction Many audience shots Sequences from various talks

  6. Budker INP in ATLAS

    CERN Multimedia

    2001-01-01

    The Novosibirsk group has proposed a new design for the ATLAS liquid argon electromagnetic end-cap calorimeter with a constant thickness of absorber plates. This design has significant advantages compared to the one in the Technical Proposal and it has been accepted by the ATLAS Collaboration. The Novosibirsk group is responsible for the fabrication of the precision aluminium structure for the e.m. end-cap calorimeter.

  7. The ATLAS electromagnetic calorimeter

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    Michel Mathieu, a technician for the ATLAS collaboration, is cabling the ATLAS electromagnetic calorimeter's first end-cap, before insertion into its cryostat. Millions of wires are connected to the electromagnetic calorimeter on this end-cap that must be carefully fed out from the detector so that data can be read out. Every element on the detector will be attached to one of these wires so that a full digital map of the end-cap can be recreated.

  8. ATLAS physics results

    CERN Document Server

    AUTHOR|(CDS)2074312

    2015-01-01

    The ATLAS experiment at the Large Hadron Collider at CERN has been successfully taking data since the end of 2009 in proton-proton collisions at centre-of-mass energies of 7 and 8 TeV, and in heavy ion collisions. In these lectures, some of the most recent ATLAS results will be given on Standard Model measurements, the discovery of the Higgs boson, searches for supersymmetry and exotics and on heavy-ion results.

  9. ATLAS Transition Radiation Tracker

    CERN Multimedia

    ATLAS Outreach

    2006-01-01

    This colorful 3D animation is an excerpt from the film "ATLAS - Episode II, The Particles Strike Back," shot with a bug's-eye view of the inside of the detector. The viewer is taken on a tour of the inner workings of the Transition Radiation Tracker within the ATLAS detector. Subjects covered include what the tracker is used to measure, its structure, what happens when particles pass through the tracker, and how it distinguishes between the different types of particles within it.

  10. ATLAS construction status

    CERN Document Server

    Jenni, P

    2006-01-01

    The ATLAS detector is being constructed at the LHC, in view of a data-taking start-up in 2007. This report concentrates on the progress and the technical challenges of the detector construction, and summarizes the status of the work as of August 2004. The project is on track to allow the highly motivated ATLAS collaboration to enter into a new exploratory domain of high-energy physics in 2007.

  11. Quench modeling of the ATLAS superconducting toroids

    CERN Document Server

    Gavrilin, A V; ten Kate, H H J

    2001-01-01

    Details of the normal-zone propagation and the temperature distribution in the coils of the ATLAS toroids under quench are presented. A tailor-made mathematical model and corresponding computer code enable computational results to be obtained for the propagation process over the coils in the transverse (turn-to-turn) and longitudinal directions. The slow electromagnetic diffusion into the pure aluminum stabilizer of the toroid's conductor, as well as the essentially transient heat transfer through the inter-turn insulation, is appropriately included in the model. The effect of the nonuniform distribution of the magnetic field and of the thermal links to the coil casing on the temperature gradients within the coils is analyzed in full. (5 refs).
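
    To picture the kind of normal-zone calculation described above, the sketch below solves a minimal 1D explicit finite-difference heat equation with Joule heating in cells that have gone resistive. All material numbers and thresholds are invented for illustration; this is not the tailor-made ATLAS quench model.

    ```python
    # Minimal 1D explicit finite-difference sketch of heat diffusion with Joule
    # heating in "normal" (resistive) cells, illustrating normal-zone growth.
    # All numbers are invented; this is not the ATLAS quench model.
    alpha = 1.0e-4            # thermal diffusivity [m^2/s], illustrative
    dx, nx = 0.01, 200        # 2 m of conductor modelled in 1 cm cells
    dt = 0.4 * dx * dx / (2 * alpha)   # respect the explicit stability limit
    heat = 1.0                # Joule heating rate in normal cells [K/s], illustrative
    T_quench = 6.5            # temperature above which a cell is treated as resistive [K]

    T = [4.5] * nx            # operating temperature [K]
    for i in range(95, 105):  # seed a small normal zone in the middle
        T[i] = 10.0

    for step in range(500):
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
            if T[i] > T_quench:
                Tn[i] += heat * dt    # resistive cells keep heating and warm their neighbours
        T = Tn

    normal_length = sum(t > T_quench for t in T) * dx
    print(f"after {500 * dt:.0f} s: normal zone ~ {normal_length:.2f} m, peak T {max(T):.1f} K")
    ```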

  12. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...
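
    The key-value access pattern mentioned above can be pictured with a plain Python dictionary standing in for an HBase-style store. The key layout (dataset name mapping to per-site replica information) is invented for illustration and is not the DQ2 or Rucio schema.

    ```python
    # Sketch of a key-value access pattern, with a plain dict standing in for
    # an HBase-style store. The key layout is invented for illustration.
    store = {}

    def add_replica(dataset, site, nfiles):
        store.setdefault(dataset, {})[site] = {"nfiles": nfiles}

    def replicas(dataset):
        """Single-key read: the kind of lookup key-value stores make cheap."""
        return store.get(dataset, {})

    if __name__ == "__main__":
        add_replica("data12_8TeV.physics_Muons.AOD", "CERN-PROD", 1200)
        add_replica("data12_8TeV.physics_Muons.AOD", "BNL-OSG2", 1200)
        print(replicas("data12_8TeV.physics_Muons.AOD"))
    ```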

  13. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...
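
    A tiny illustration of the test-driven style described above, using Python's standard-library unittest module. The validate_scope() helper is an invented example component, not Rucio code.

    ```python
    # Tiny illustration of test-driven development with the standard library.
    # validate_scope() is an invented example component, not part of Rucio.
    import re
    import unittest

    def validate_scope(scope: str) -> bool:
        """Accept short lowercase scope names such as 'mc15_13tev' or 'user.jdoe'."""
        return bool(re.fullmatch(r"[a-z0-9_]+(\.[a-z0-9_]+)?", scope)) and len(scope) <= 30

    class TestValidateScope(unittest.TestCase):
        def test_valid_scopes(self):
            self.assertTrue(validate_scope("mc15_13tev"))
            self.assertTrue(validate_scope("user.jdoe"))

        def test_invalid_scopes(self):
            self.assertFalse(validate_scope("Bad Scope"))
            self.assertFalse(validate_scope("x" * 40))

    if __name__ == "__main__":
        unittest.main()
    ```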

  14. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  15. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    J. Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the l...

  16. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Goldfarb, S.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project. A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please e...

  17. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)

    2015-07-15

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion, according to the authors’ proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
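
    A minimal sketch of the ranking idea described above: for each atlas, estimate the conditional entropy of the target intensity given the propagated atlas labels from a joint histogram, and keep the atlases with the lowest values. Function names and bin counts are illustrative; this is not the authors' implementation.

      import numpy as np

      def conditional_entropy(target, prop_labels, n_bins=64):
          """H(target intensity | propagated labels), estimated from a joint histogram."""
          t = np.digitize(target.ravel(), np.linspace(target.min(), target.max(), n_bins))
          lab = prop_labels.ravel().astype(int)
          joint = np.zeros((lab.max() + 1, n_bins + 2))
          np.add.at(joint, (lab, t), 1.0)
          p_joint = joint / joint.sum()
          p_label = p_joint.sum(axis=1, keepdims=True)
          with np.errstate(divide='ignore', invalid='ignore'):
              h = -np.nansum(p_joint * np.log(p_joint / p_label))   # 0*log0 terms drop out
          return h

      def rank_atlases(target, propagated, n_select=10):
          """Return indices of the n_select atlases with the lowest conditional entropy."""
          scores = [conditional_entropy(target, labels) for labels in propagated]
          return np.argsort(scores)[:n_select]

      # Toy usage: rank 5 random atlases against a random target volume.
      rng = np.random.default_rng(0)
      target = rng.random((16, 16, 16))
      propagated = [rng.integers(0, 4, (16, 16, 16)) for _ in range(5)]
      print(rank_atlases(target, propagated, n_select=3))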

  18. EnviroAtlas - Metrics for Cleveland, OH

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web...

  19. EnviroAtlas - Metrics for Austin, TX

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web...

  20. EnviroAtlas Community Boundaries Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundaries of all EnviroAtlas Communities. It represents the outside edge of all the block groups included in each EnviroAtlas...

  1. Status and Evolution of ATLAS Workload Management System PanDA

    CERN Document Server

    De, K; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the LHC uses a sophisticated workload management system, PanDA, to provide access for thousands of physicists to distributed computing resources of unprecedented scale. This system has proved to be robust and scalable during three years of LHC operations. We describe the design and performance of PanDA in ATLAS. The features which make PanDA successful in ATLAS could be applicable to other exabyte-scale scientific projects. We describe plans to evolve PanDA towards a general workload management system for the new BigData initiative announced by the US government. Other planned future improvements to PanDA will also be described.

  2. ATLAS EventIndex monitoring system using Kibana analytics and visualization platform

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Prokoshin, Fedor; Gallas, Elizabeth; Favareto, Andrea; Hrivnac, Julius; Sanchez, Javier; Fernandez Casani, Alvaro; Gonzalez de la Hoz, Santiago; Garcia Montoro, Carlos; Salt, Jose; Malon, David; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, at all processing stages. As it consists of different components that depend on other applications (such as distributed storage and different sources of information), we need to monitor the conditions of many heterogeneous subsystems to make sure everything is working correctly. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytics and visualization package, provided by the CERN IT Department. EventIndex monitoring is used both by the EventIndex team and by the ATLAS Distributed Computing shift crew.
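
    The monitoring flow amounts to shipping small JSON status documents to a central store that Kibana then visualizes. The sketch below shows that pattern with the requests library; the endpoint URL and document fields are hypothetical, not the actual CERN IT monitoring interface.

      import time
      import requests

      # Hypothetical monitoring endpoint and document layout, for illustration only.
      MONIT_URL = 'https://monit-example.cern.ch/api/eventindex'

      def report_status(producer, subsystem, status, extra=None):
          """Ship one status document; dashboards would then aggregate these."""
          doc = {
              'producer': producer,
              'subsystem': subsystem,          # e.g. 'data-collection', 'catalogue'
              'status': status,                # e.g. 'OK', 'DEGRADED', 'DOWN'
              'timestamp': int(time.time()),
          }
          if extra:
              doc.update(extra)
          resp = requests.post(MONIT_URL, json=doc, timeout=10)
          resp.raise_for_status()

      # Example: report that the data-collection component processed 1.2M events.
      # report_status('eventindex', 'data-collection', 'OK', {'events_processed': 1200000})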

  3. Multi-atlas-based segmentation with local decision fusion--application to cardiac and aortic segmentation in CT scans.

    Science.gov (United States)

    Isgum, Ivana; Staring, Marius; Rutten, Annemarieke; Prokop, Mathias; Viergever, Max A; van Ginneken, Bram

    2009-07-01

    A novel atlas-based segmentation approach based on the combination of multiple registrations is presented. Multiple atlases are registered to a target image. To obtain a segmentation of the target, labels of the atlas images are propagated to it. The propagated labels are combined by spatially varying decision fusion weights. These weights are derived from local assessment of the registration success. Furthermore, an atlas selection procedure is proposed that is equivalent to sequential forward selection from statistical pattern recognition theory. The proposed method is compared to three existing atlas-based segmentation approaches, namely 1) single atlas-based segmentation, 2) average-shape atlas-based segmentation, and 3) multi-atlas-based segmentation with averaging as decision fusion. These methods were tested on the segmentation of the heart and the aorta in computed tomography scans of the thorax. The results show that the proposed method outperforms other methods and yields results very close to those of an independent human observer. Moreover, the additional atlas selection step led to a faster segmentation at a comparable performance.
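
    A minimal sketch of the decision-fusion step described above: each propagated atlas label receives a spatially varying weight derived from a local measure of registration success (here an assumed per-voxel similarity map), and the weighted votes are combined voxel by voxel. This illustrates the idea only; it is not the authors' code.

      import numpy as np

      def fuse_labels(propagated_labels, local_similarity, n_classes):
          """
          propagated_labels: list of integer label volumes, one per registered atlas
          local_similarity:  list of same-shaped maps scoring local registration success
          Returns the per-voxel label with the highest weighted vote.
          """
          shape = propagated_labels[0].shape
          votes = np.zeros((n_classes,) + shape)
          for labels, weight in zip(propagated_labels, local_similarity):
              for c in range(n_classes):
                  votes[c] += weight * (labels == c)
          return np.argmax(votes, axis=0)

      # Toy example: 3 atlases, 2 classes, 8x8x8 volume.
      rng = np.random.default_rng(0)
      labels = [rng.integers(0, 2, (8, 8, 8)) for _ in range(3)]
      weights = [rng.random((8, 8, 8)) for _ in range(3)]
      segmentation = fuse_labels(labels, weights, n_classes=2)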

  4. New format for ATLAS e-news

    CERN Document Server

    Pauline Gagnon

    ATLAS e-news got a new look! As of November 30, 2007, we have a new format for ATLAS e-news. Please go to: http://atlas-service-enews.web.cern.ch/atlas-service-enews/index.html . ATLAS e-news will now be published on a weekly basis. If you are not an ATLAS collaboration member but still want to know how the ATLAS experiment is doing, we will soon have a version of ATLAS e-news intended for the general public. Information will be sent out in due time.

  5. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  6. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2013-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2 and, in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...
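
    From the user side, federated access amounts to opening a file against a redirector, which then locates a replica locally, regionally or globally. The sketch below uses PyROOT's TFile.Open on a root:// URL; the redirector host and file path are placeholders, not real federation endpoints.

      import ROOT

      # Placeholder redirector and logical path; a real analysis would use the
      # federation endpoint and file name obtained from the data management system.
      url = "root://federation-redirector.example.org//atlas/rucio/data12_8TeV/AOD.pool.root"

      f = ROOT.TFile.Open(url)          # the redirector resolves a suitable replica
      if f and not f.IsZombie():
          print("opened", f.GetName(), "size", f.GetSize(), "bytes")
          f.Close()
      else:
          print("could not open", url)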

  7. Spinal canal stenosis at the level of Atlas

    Directory of Open Access Journals (Sweden)

    Suchanda Bhattacharjee

    2011-01-01

    We report here a rare case of high cervical stenosis at the level of the atlas, presenting with progressively deteriorating quadriparesis and respiratory distress. A 10-year-old boy presented with the above symptoms of one-year duration, with a preceding history of trivial trauma prior to the onset of the symptoms. Cervical spine MRI revealed a significant stenosis at the level of the atlas from the posterior side, with a syrinx extending above and below. High-resolution computed tomography of the same level showed an ill-defined osseous bar compressing the canal at the level of the C1 posterior arch, which appeared bifid in the midline. The patient was immediately taken up for surgery in view of his respiratory complaints. The child showed an excellent recovery after excision of the posterior arch of the atlas and removal of the compressing osseous structure.

  8. Big Data processing experience in the ATLAS experiment

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2014-01-01

    To improve the data quality for physics analysis, the ATLAS collaboration completed three major data reprocessing campaigns on the Grid during 2010-2012, with up to 2 PB of data being reprocessed every year. The Worldwide LHC Computing Grid provided petabytes of disk storage and tens of thousands of job slots for a faster throughput. High throughput is critical for the timely completion of the reprocessing campaigns conducted in preparation for major physics conferences. In the 2011 reprocessing the throughput doubled in comparison to the 2010 campaign. To deliver new physics results for the 2013 Moriond Conference, ATLAS reprocessed twice as much data in November 2012 within the same time period as in the 2011 reprocessing, while, due to increased LHC pileup, 2012 pp events required twice as much time to reconstruct as 2011 events. For a faster throughput, the number of jobs running concurrently exceeded 33k during the ATLAS reprocessing campaign in November 2012. For comparison the daily average number of runni...

  9. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2 and, in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  10. A generative probability model of joint label fusion for multi-atlas based brain segmentation.

    Science.gov (United States)

    Wu, Guorong; Wang, Qian; Zhang, Daoqiang; Nie, Feiping; Huang, Heng; Shen, Dinggang

    2014-08-01

    Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on simple patch similarity, thus not necessarily providing an optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, with the goal of labeling each point in the target image by the best representative atlas patches that also have the largest unanimity in labeling the underlying point correctly. Specifically, a sparsity constraint is imposed upon the label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risk of including misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies are further recursively updated based on the latest labeling results to correct possible labeling errors, which falls within the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling
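
    The central quantity in this family of methods is a set of fusion weights that accounts for correlated errors between atlas patches instead of treating each atlas independently. Below is a much-simplified sketch of that step, using weights derived from a pairwise patch-error dependency matrix and omitting the sparsity constraint and the EM refinement described above.

      import numpy as np

      def joint_fusion_weights(target_patch, atlas_patches, eps=1e-6):
          """
          Convex fusion weights from a pairwise dependency matrix of atlas-patch
          errors, so that atlases making correlated mistakes share their weight
          rather than each receiving it independently.
          """
          # Error of each atlas patch with respect to the target patch.
          errors = np.stack([p - target_patch for p in atlas_patches])   # (n, d)
          m = errors @ errors.T                                          # pairwise error products
          m += eps * np.eye(len(atlas_patches))                          # regularize
          w = np.linalg.solve(m, np.ones(len(atlas_patches)))
          return w / w.sum()

      # Toy example: 4 atlas patches of 27 voxels each.
      rng = np.random.default_rng(1)
      target = rng.random(27)
      atlases = [target + 0.1 * rng.standard_normal(27) for _ in range(4)]
      print(joint_fusion_weights(target, atlases))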

  11. ATLAS copies its first PetaByte out of CERN

    CERN Multimedia

    M. Branco; P. Salgado; L. Goossens; A. Nairz

    2006-01-01

    On 6th August ATLAS reached a major milestone for its Distributed Data Management project - copying its first PetaByte (10^15 bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, where simulated fake data is used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers as would happen during the real data taking. The expected rate of data output from CERN when the detector is running at full trigger rate is 780 MB/s shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The idea of the exercise was to try to reach this data rate and sustain it for as long as possible. The exercise was run as part of the LCG's Service Challenges and allowed ATLAS to test successfully the integration of ATLAS software with the LCG middleware services that are used for low level cataloging and the actual data movement. When ATLAS is produ...

  12. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be they the mandatory Production Readiness Reviews, the newly installed Production Advancement Reviews, or the increasingly requested Design Reviews, have become a part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, a one-off event and should be held for each system as soon as possible to be meaningful. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. Another recognized byproduct is the increasing cross-talk between the systems, a very important ingredient for letting all the systems profit from the large collective knowledge we dispose of in ATLAS. In the last two months, the first two PARs were organized, for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore several different design...

  13. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  14. OCCIPITALIZATION OF ATLAS

    Directory of Open Access Journals (Sweden)

    Sween Walia

    2014-12-01

    Occipitalization of the atlas is an osseous anomaly of the craniovertebral junction which occurs at the base of the skull in the region of the foramen magnum. Knowledge of such a fusion is important because skeletal abnormalities at the craniocervical junction may result in sudden death. During bone cleaning procedures and routine undergraduate osteology teaching, three skulls with occipitalization of the atlas were encountered in the Department of Anatomy at MMIMSR, Mullana, India. In one skull, both the anterior and posterior arches were completely fused with the occipital bone; the transverse process on the right side was not fused, whereas the left one was. In a second skull, both arches were completely fused but neither transverse process was. In the third skull, partial and asymmetrical occipitalization of the atlas was found, with a bifid posterior arch of the atlas at the level of the posterior tubercle; the anterior arch was completely fused with the basilar part of the occipital bone, but neither transverse process was fused. A reduced diameter of the foramen magnum due to atlanto-occipital fusion may cause neurological complications through compression of the spinal cord or medulla oblongata, the vertebral vessels and the first cervical nerve; knowledge of occipitalization of the atlas may therefore be of substantial importance to orthopaedicians, neurosurgeons, physicians and radiologists dealing with abnormalities of the cervical spine.

  15. Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G.; Abat, E.; Abbott, B.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Acharya, Bobby Samir; Adams, D.L.; Addy, T.N.; Adorisio, C.; Adragna, P.; Adye, T.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; /SUNY, Albany /Alberta U. /Ankara U. /Annecy, LAPP /Argonne /Arizona U. /Texas U., Arlington /Athens U. /Natl. Tech. U., Athens /Baku, Inst. Phys. /Barcelona, IFAE /Belgrade U. /VINCA Inst. Nucl. Sci., Belgrade /Bergen U. /LBL, Berkeley /Humboldt U., Berlin /Bern U., LHEP /Birmingham U. /Bogazici U. /INFN, Bologna /Bologna U.

    2011-11-28

    The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale where groundbreaking discoveries are expected. In the focus are the investigation of the electroweak symmetry breaking and linked to this the search for the Higgs boson as well as the search for Physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of

  16. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  17. Analysis of empty ATLAS pilot jobs

    CERN Document Server

    Love, Peter; The ATLAS collaboration

    2016-01-01

    The pilot model used by the ATLAS production system has been in use for many years. The model has proven to be a success, with many advantages over push models. However, one of the negative side-effects of using a pilot model is the presence of 'empty pilots' running on sites, which consume a small amount of walltime without running a useful payload job. The impact on a site can be significant, with previous studies showing a total of 0.5% of walltime used with no benefit to either the site or to ATLAS. Another impact is the number of empty pilots being processed by a site's Compute Element and batch system, which can be 5% of the total number of pilots being handled. In this paper we review the latest statistics using both ATLAS and site data and highlight edge cases where the number of empty pilots dominates. We also study the effect of tuning the pilot factories to reduce the number of empty pilots.
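
    A minimal sketch of the bookkeeping such a study involves: given per-pilot records with walltime and a flag indicating whether a payload ran, compute the fraction of empty pilots and the walltime they consume. The record format is hypothetical.

      # Hypothetical per-pilot accounting records: (site, walltime_seconds, ran_payload).
      pilots = [
          ("SITE_A", 45,   False),
          ("SITE_A", 7200, True),
          ("SITE_B", 30,   False),
          ("SITE_B", 5400, True),
          ("SITE_B", 6100, True),
      ]

      empty = [p for p in pilots if not p[2]]
      empty_fraction = len(empty) / len(pilots)
      walltime_fraction = sum(p[1] for p in empty) / sum(p[1] for p in pilots)

      print(f"empty pilots: {empty_fraction:.1%} of pilots, "
            f"{walltime_fraction:.2%} of total walltime")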

  18. Organization and management of ATLAS nightly builds

    Science.gov (United States)

    Luehring, F.; Obreshkov, E.; Quarrie, D.; Rybkine, G.; Undrus, A.

    2010-04-01

    The automated multi-platform software nightly build system is a major component in the ATLAS collaborative software organization, validation and code approval schemes. Code developers from ATLAS participating Institutes spread all around the world use about 30 branches of nightly releases for testing new packages, verification of patches to existing software, and migration to new platforms and compilers. The nightly releases lead up to, and are the basis of, stable software releases used for data processing worldwide. The ATLAS nightly builds are managed by the fully automated NICOS framework on the computing farm with 44 powerful multiprocessor nodes. The ATN test tool is embedded within the nightly system and provides results shortly after full compilations complete. Other test frameworks are synchronized with NICOS jobs and run larger scale validation jobs using the nightly releases. NICOS web pages dynamically provide information about the progress and results of the builds. For faster feedback, e-mail notifications about nightly release problems are automatically distributed to the developers responsible.

  19. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of cpu power and tens of PetaBytes of data storage. PCs today are about 20-30 SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...
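
    To put the quoted numbers together: a requirement of a few million SI95, divided by the roughly 100 SI95 expected per PC in 2005, translates into tens of thousands of PCs per experiment, which is why the hardware must be spread over several Regional Centres. A quick back-of-the-envelope check (the 2 MSI95 figure is an illustrative round number, not an official requirement):

      # Back-of-the-envelope estimate; 2 MSI95 stands in for "millions of SI95".
      required_si95 = 2_000_000
      si95_per_pc_today = 25        # "about 20-30 SI95" per PC at the time of writing
      si95_per_pc_2005 = 100        # expected per PC by 2005

      print(required_si95 / si95_per_pc_today)   # ~80,000 of today's PCs
      print(required_si95 / si95_per_pc_2005)    # ~20,000 of the 2005-era PCs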

  20. Calorimetry triggering in ATLAS

    CERN Document Server

    Igonkina, O; Adragna, P; Aharrouche, M; Alexandre, G; Andrei, V; Anduaga, X; Aracena, I; Backlund, S; Baines, J; Barnett, B M; Bauss, B; Bee, C; Behera, P; Bell, P; Bendel, M; Benslama, K; Berry, T; Bogaerts, A; Bohm, C; Bold, T; Booth, J R A; Bosman, M; Boyd, J; Bracinik, J; Brawn, I, P; Brelier, B; Brooks, W; Brunet, S; Bucci, F; Casadei, D; Casado, P; Cerri, A; Charlton, D G; Childers, J T; Collins, N J; Conde Muino, P; Coura Torres, R; Cranmer, K; Curtis, C J; Czyczula, Z; Dam, M; Damazio, D; Davis, A O; De Santo, A; Degenhardt, J; Delsart, P A; Demers, S; Demirkoz, B; Di Mattia, A; Diaz, M; Djilkibaev, R; Dobson, E; Dova, M, T; Dufour, M A; Eckweiler, S; Ehrenfeld, W; Eifert, T; Eisenhandler, E; Ellis, N; Emeliyanov, D; Enoque Ferreira de Lima, D; Faulkner, P J W; Ferland, J; Flacher, H; Fleckner, J E; Flowerdew, M; Fonseca-Martin, T; Fratina, S; Fhlisch, F; Gadomski, S; Gallacher, M P; Garitaonandia Elejabarrieta, H; Gee, C N P; George, S; Gillman, A R; Goncalo, R; Grabowska-Bold, I; Groll, M; Gringer, C; Hadley, D R; Haller, J; Hamilton, A; Hanke, P; Hauser, R; Hellman, S; Hidvgi, A; Hillier, S J; Hryn'ova, T; Idarraga, J; Johansen, M; Johns, K; Kalinowski, A; Khoriauli, G; Kirk, J; Klous, S; Kluge, E-E; Koeneke, K; Konoplich, R; Konstantinidis, N; Kwee, R; Landon, M; LeCompte, T; Ledroit, F; Lei, X; Lendermann, V; Lilley, J N; Losada, M; Maettig, S; Mahboubi, K; Mahout, G; Maltrana, D; Marino, C; Masik, J; Meier, K; Middleton, R P; Mincer, A; Moa, T; Monticelli, F; Moreno, D; Morris, J D; Mller, F; Navarro, G A; Negri, A; Nemethy, P; Neusiedl, A; Oltmann, B; Olvito, D; Osuna, C; Padilla, C; Panes, B; Parodi, F; Perera, V J O; Perez, E; Perez Reale, V; Petersen, B; Pinzon, G; Potter, C; Prieur, D P F; Prokishin, F; Qian, W; Quinonez, F; Rajagopalan, S; Reinsch, A; Rieke, S; Riu, I; Robertson, S; Rodriguez, D; Rogriquez, Y; Rhr, F; Saavedra, A; Sankey, D P C; Santamarina, C; Santamarina Rios, C; Scannicchio, D; Schiavi, C; Schmitt, K; Schultz-Coulon, H C; Schfer, U; Segura, E; Silverstein, D; Silverstein, S; Sivoklokov, S; Sjlin, J; Staley, R J; Stamen, R; Stelzer, J; Stockton, M C; Straessner, A; Strom, D; Sushkov, S; Sutton, M; Tamsett, M; Tan, C L A; Tapprogge, S; Thomas, J P; Thompson, P D; Torrence, E; Tripiana, M; Urquijo, P; Urrejola, P; Vachon, B; Vercesi, V; Vorwerk, V; Wang, M; Watkins, P M; Watson, A; Weber, P; Weidberg, T; Werner, P; Wessels, M; Wheeler-Ellis, S; Whiteson, D; Wiedenmann, W; Wielers, M; Wildt, M; Winklmeier, F; Wu, X; Xella, S; Zhao, L; Zobernig, H; de Seixas, J M; dos Anjos, A; Asman, B; Özcan, E

    2009-01-01

    The ATLAS experiment is preparing for data taking at 14 TeV collision energy. A rich discovery physics program is being prepared in addition to the detailed study of Standard Model processes which will be produced in abundance. The ATLAS multi-level trigger system is designed to accept one event in 2×10^5 to enable the selection of rare and unusual physics events. The ATLAS calorimeter system is a precise instrument, which includes liquid Argon electro-magnetic and hadronic components as well as a scintillator-tile hadronic calorimeter. All these components are used in the various levels of the trigger system. A wide physics coverage is ensured by inclusively selecting events with candidate electrons, photons, taus, jets or those with large missing transverse energy. The commissioning of the trigger system is being performed with cosmic ray events and by replaying simulated Monte Carlo events through the trigger and data acquisition system.
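
    A quick worked number for the quoted rejection: with proton bunches crossing at the nominal LHC rate of 40 MHz (an assumption not stated in the abstract), accepting one event in 2×10^5 corresponds to a recorded rate of a few hundred Hz.

      bunch_crossing_rate_hz = 40e6     # nominal LHC crossing rate (assumption)
      rejection = 2e5                   # "one event in 2x10^5" from the abstract

      output_rate_hz = bunch_crossing_rate_hz / rejection
      print(f"recorded event rate ~ {output_rate_hz:.0f} Hz")   # ~200 Hz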

  1. ATLAS rewards industry

    CERN Multimedia

    2006-01-01

    Showing excellence in mechanics, electronics and cryogenics, three industries are honoured for their contributions to the ATLAS experiment. Representatives of the three award-winning companies after the ceremony. For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Close interaction with CERN was a key factor in the selection of each rewarded company, in addition to the high-quality products they delivered to the experiment. Alu Menziken Industrie AG, of Switzerland, was honoured for the production of 380,000 aluminium tubes for the Monitored Drift Tube Chambers (MDT). As Giora Mikenberg, the Muon System Project Leader stressed, the aluminium tubes were delivered on time with an extraordinary quality and precision. Between October 2000 and Jan...

  2. Two ATLAS suppliers honoured

    CERN Multimedia

    2007-01-01

    The ATLAS experiment has recognised the outstanding contribution of two firms to the pixel detector. Recipients of the supplier award with Peter Jenni, ATLAS spokesperson, and Maximilian Metzger, CERN Secretary-General. At a ceremony held at CERN on 28 November, the ATLAS collaboration presented awards to two of its suppliers that had produced sensor wafers for the pixel detector. The CiS Institut für Mikrosensorik of Erfurt in Germany has supplied 655 sensor wafers containing a total of 1652 sensor tiles and the firm ON Semiconductor has supplied 515 sensor wafers (1177 sensor tiles) from its foundry at Roznov in the Czech Republic. Both firms have successfully met the very demanding requirements. ATLAS’s huge pixel detector is very complicated, requiring expertise in highly specialised integrated microelectronics and precision mechanics. Pixel detector project leader Kevin Einsweiler admits that when the project was first propo...

  3. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data readout from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. The LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  4. Searches for beyond the Standard Model physics with boosted topologies in the ATLAS experiment using the Grid-based Tier-3 facility at IFIC-Valencia

    CERN Document Server

    Villaplana Pérez, Miguel; Vos, Marcel

    Both the LHC and ATLAS have been performing well beyond expectations since the start of data taking at the end of 2009. Since then, several billion collision events have been recorded by the ATLAS experiment. With a data-taking efficiency higher than 95% and more than 99% of its channels working, ATLAS supplies data of unmatched quality. In order to analyse the data, the ATLAS Collaboration has designed a distributed computing model based on GRID technologies. The ATLAS computing model and its evolution since the start of the LHC are discussed in section 3.1. The ATLAS computing model groups the different types of computing centres of the ATLAS Collaboration in a tiered hierarchy that ranges from the Tier-0 at CERN, down to the 11 Tier-1 centres and the nearly 80 Tier-2 centres distributed worldwide. The Spanish Tier-2 activities during the first years of data taking are described in section 3.2. Tier-3 are institution-level non-ATLAS funded or controlled centres that participate presuma...

  5. Local atlas selection for discrete multi-atlas segmentation

    OpenAIRE

    Alchatzidis, Stavros; Sotiras, Aristeidis; Paragios, Nikos

    2015-01-01

    Multi-atlas segmentation is commonly performed in two separate steps: i) multiple pairwise registrations, and ii) fusion of the deformed segmentation masks towards labeling the objects of interest. In this paper we propose an approach for integrated volume segmentation through multi-atlas registration. To tackle this problem, we opt for a graphical model where registration and segmentation nodes are coupled. The aim is to recover simultaneously all atlas deformations along...

  6. ATLAS forward physics program

    CERN Document Server

    HELLER, M; The ATLAS collaboration

    2010-01-01

    The variety of forward detectors installed in the vicinity of the ATLAS experiment allows a wide range of forward physics topics to be studied. They provide good information about rapidity gaps, and the installation of very forward detectors (ALFA and AFP) will allow tagging of the leading proton(s) remaining from the different processes studied. Most of the studies have to be done at low luminosity to avoid pile-up, but the AFP project offers a really exciting future for the ATLAS forward physics program. We also present how these forward detectors can be used to measure the relative and absolute luminosity.

  7. Jet Physics in ATLAS

    CERN Document Server

    Sandoval, C; The ATLAS collaboration

    2012-01-01

    Measurements of hadronic jets provide tests of strong interactions which are interesting both in their own right and as backgrounds to many New Physics searches. It is also through tests of Quantum Chromodynamics that new physics may be discovered. The extensive dataset recorded with the ATLAS detector throughout the 7 TeV centre-of-mass LHC operation period allows QCD to be probed at distances never reached before. We present a review of selected ATLAS jet physics measurements. These measurements constitute precision tests of QCD in a new energy regime, and show sensitivity to the parton densities in the proton and to the value of the strong coupling, alpha_s.

  8. ATLAS fast physics monitoring

    Indian Academy of Sciences (India)

    Karsten Köneke; on behalf of the ATLAS Collaboration

    2012-11-01

    The ATLAS experiment at the Large Hadron Collider has been recording data from proton–proton collisions at a centre-of-mass energy of 7 TeV since the spring of 2010. The integrated luminosity has grown nearly exponentially since then and continues to rise fast. The ATLAS Collaboration has set up a framework to automatically process the rapidly growing dataset and produce performance and physics plots for the most interesting analyses. The system is designed to give fast feedback. The histograms are produced within hours of data reconstruction (2–3 days after data taking). Hints of potentially interesting physics signals obtained this way are followed up by the physics groups.

  9. ATLAS SCT Commissioning

    CERN Document Server

    Limper, Maaike

    2007-01-01

    The Barrel and End-Cap SCT detectors are installed in the ATLAS cavern. This paper will focus on the assembly, installation and first tests of the SCT in situ. The thermal, electrical and optical services were tested and the results will be reviewed. Problems with the cooling have led to a modification of the heaters on the cooling return lines. The first in-situ tests of the SCT will be described using the calibration scans. The performance of the SCT, in particular the fraction of working channels and the noise performance, is well within the ATLAS specification.

  10. The Herschel ATLAS

    Science.gov (United States)

    Eales, S.; Dunne, L.; Clements, D.; Cooray, A.; De Zotti, G.; Dye, S.; Ivison, R.; Jarvis, M.; Lagache, G.; Maddox, S.; Negrello, M.; Serjeant, S.; Thompson, M. A.; Van Kampen, E.; Amblard, A.; Andreani, P.; Baes, M.; Beelen, A.; Bendo, G. J.; Bertoldi, F.; Benford, D.; Bock, J.

    2010-01-01

    The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 570 sq deg of the extragalactic sky, an area 4 times larger than that of all the other Herschel extragalactic surveys combined, in five far-infrared and submillimeter bands. We describe the survey, the complementary multiwavelength data sets that will be combined with the Herschel data, and the six major science programs we are undertaking. Using new models based on a previous submillimeter survey of galaxies, we present predictions of the properties of the ATLAS sources in other wave bands.

  11. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    CAMERA ON TOROID The ATLAS barrel toroid system consists of eight coils, each of axial length 25.3 m, assembled radially and symmetrically around the beam axis. The coils are of a flat racetrack type with two double-pancake windings made of 20.5 kA aluminium-stabilized niobium-titanium superconductor. The video shows the slow lowering of the toroid down into the ATLAS cavern, a very demanding task. The camera is placed on top of the toroid.

  12. ATLAS/CMS Upgrades

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00370685; The ATLAS collaboration

    2016-01-01

    Precision studies of the Standard Model (SM) and searches for physics beyond the SM are ongoing at the ATLAS and CMS experiments at the Large Hadron Collider (LHC). A luminosity upgrade of the LHC is planned, which presents a significant challenge for the experiments. In this report, the plans for the ATLAS and CMS upgrades are introduced. Physics prospects for selected topics, including Higgs coupling measurements, Bs,d -> mumu decays, and top quark decays through flavor-changing neutral currents, are also shown.

  13. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  14. PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC

    Directory of Open Access Journals (Sweden)

    Megino Fernando Barreiro

    2016-01-01

    The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of a heterogeneous grid infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily on up to 200 thousand simultaneous cores (limited by the resources available to ATLAS), with up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering its widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.

  15. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool

    Science.gov (United States)

    Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative

    2015-11-01

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice = 0.929 ± 0.003 on ADNI and Dice = 0.869 ± 0.002 on OASIS). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
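
    The Dice score quoted above is the standard overlap measure between an automatic segmentation and a reference labelling, Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch:

      import numpy as np

      def dice(seg, ref):
          """Dice overlap between two binary masks: 2|A∩B| / (|A|+|B|)."""
          seg = seg.astype(bool)
          ref = ref.astype(bool)
          denom = seg.sum() + ref.sum()
          return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

      # Toy example: two overlapping boxes in an 8x8x8 volume.
      a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
      b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 3:7, 3:7] = True
      print(dice(a, b))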

  16. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...

  17. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.

    Directory of Open Access Journals (Sweden)

    Xiaoying Tang

    This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.

  18. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.

    Science.gov (United States)

    Tang, Xiaoying; Oishi, Kenichi; Faria, Andreia V; Hillis, Argye E; Albert, Marilyn S; Mori, Susumu; Miller, Michael I

    2013-01-01

    This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional-mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
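
    A much-simplified sketch of the fusion step described above: model the observed intensity at each voxel as Gaussian around each deformed atlas's mean field, take the resulting per-voxel posterior weights (the role played by the conditional mean in the EM step), and fuse the propagated atlas labels with those convex weights. The diffeomorphic registration and chart machinery are assumed to have produced the mean fields and labels already and are outside this sketch.

      import numpy as np

      def fuse_with_posterior_weights(image, mean_fields, atlas_labels, sigma=10.0, n_classes=2):
          """
          image:        observed intensity volume
          mean_fields:  list of deformed-atlas mean intensity volumes (same shape)
          atlas_labels: list of propagated label volumes (integer classes)
          Returns per-voxel labels fused with convex, voxel-wise posterior weights.
          """
          # Gaussian log-likelihood of the image under each atlas mean field.
          loglik = np.stack([-0.5 * ((image - m) / sigma) ** 2 for m in mean_fields])
          loglik -= loglik.max(axis=0, keepdims=True)          # numerical stability
          weights = np.exp(loglik)
          weights /= weights.sum(axis=0, keepdims=True)        # convex weights per voxel

          votes = np.zeros((n_classes,) + image.shape)
          for w, labels in zip(weights, atlas_labels):
              for c in range(n_classes):
                  votes[c] += w * (labels == c)
          return np.argmax(votes, axis=0)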

  19. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2000-01-01

    Different phases of the realisation of Point 1, the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the different infrastructures for ATLAS. Real underground video: when passing through the walls, the work in progress can be heard and seen. The film has original working sound.

  20. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica; Sciacca, Francesco Giovanni; Mancinelli, Valentina

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are automatically excluded from the ATLAS computing grid; it is therefore essential to provide a detailed and well-organized web interface for the local site administrators so that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting, as well as to possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimiz...

  1. Taking ATLAS to new heights

    CERN Multimedia

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  2. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on an online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS TDAQ project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...

  3. The Hatfield Lunar Atlas Digitally Re-Mastered Edition

    CERN Document Server

    Cook, Anthony Charles

    2012-01-01

    The Hatfield Lunar Atlas has become an amateur lunar observer's bible since it was first published in 1968. A major update of the atlas was made in 1998, using the same wonderful photographs that Commander Henry Hatfield made with his purpose-built 12-inch (300 mm) telescope, but bringing the lunar nomenclature up to date and changing the units from Imperial to S.I. metric. However, with modern telescope optics, digital imaging equipment and computer enhancement new pictures can easily surpass what was achieved with Henry Hatfield's 12-inch telescope and a film camera. This limits the usefulness of the original atlas to visual observing or imaging with rather small amateur telescopes. The new, digitally re-mastered edition vastly improves the clarity and definition of the original photographs - significantly beyond the resolution limits of the photographic grains present in earlier atlas versions - while preserving the layout and style of the original publications. This has been achieved by merging computer-v...

  4. Multi-threaded ATLAS Simulation on Intel Knights Landing Processors

    CERN Document Server

    Farrell, Steven; The ATLAS collaboration

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  5. Visits to Tier-1 Computing Centres

    CERN Multimedia

    Dario Barberis

    At the beginning of 2007 it became clear that an enhanced level of communication was needed between the ATLAS computing organisation and the Tier-1 centres. Most usual meetings are ATLAS-centric and cannot address the issues of each Tier-1; therefore we decided to organise a series of visits to the Tier-1 centres and focus on site issues. For us, ATLAS computing management, it is most useful to see how each Tier-1 centre is organised and how it relates to its associated Tier-2s; indeed their presence at these visits is also very useful. We hope it is also useful for sites... at least, we are told so! The usual participation includes, from the ATLAS side: computing management, operations, data placement, resources, accounting and database deployment coordinators; and from the Tier-1 side: computer centre management, system managers, Grid infrastructure people, network, storage and database experts, local ATLAS liaison people and representatives of the associated Tier-2s. Visiting Tier-1 centres (1-4). ...

  6. ATLAS Data Challenges - A Collaborative Worldwide Activity

    CERN Multimedia

    Poulard, G

    The goals of the ATLAS Data Challenges (DC) are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. It is understood that these Data Challenges should be of increasing complexity and that their results will be used as input for a Computing TDR and for preparing an MoU in due time. A major feature of the current computing activities (DC1) in ATLAS is the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the actual production of those samples. It should be noted that it is not an option to "run everything at CERN" even if we wanted to; the resources are not available at CERN to carry out the production on a reasonable time-scale. We have therefore had to face the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, th...

  7. South Baltic Wind Atlas

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Hahmann, Andrea N.; Hasager, Charlotte Bay

    A first version of a wind atlas for the South Baltic Sea has been developed using the WRF mesoscale model and verified by data from tall Danish and German masts. Six different boundary-layer parametrization schemes were evaluated by comparing the WRF results to the observed wind profiles at the m...

  8. HWW in ATLAS

    CERN Document Server

    Rados, Pere; The ATLAS collaboration

    2016-01-01

    The H-->WW channel plays an important role in Higgs boson property measurements, searches for rare decay modes, and searches for possible extended Higgs sectors. In this talk the latest H-->WW results from ATLAS will be briefly summarised.

  9. ATLAS Supersymmetry Searches

    CERN Document Server

    Ughetto, Michael; The ATLAS collaboration

    2016-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles, with focus on those obtained using proton-proton collisions at a centre of mass energy of 13 TeV.

  10. Exotic searches at ATLAS

    CERN Document Server

    Turra, Ruggero; The ATLAS collaboration

    2016-01-01

    The ATLAS detector collected 3.2 fb^-1 of proton-proton collisions at 13 TeV centre-of-mass energy during the 2015 LHC run. A selected review of recent results is presented in the context of direct searches for physics beyond the Standard Model, excluding SUSY and BSM Higgs searches.

  11. Prototype ATLAS straw tracker

    CERN Multimedia

    Laurent Guiraud

    1998-01-01

    This is an early prototype of the straw tracking device for the ATLAS detector at CERN. This detector will be part of the LHC project, scheduled to start operation in 2008. The straw tracker will consist of thousands of gas-filled straws, each containing a wire, allowing the tracks of particles to be followed.

  12. ATLAS solenoid operates underground

    CERN Multimedia

    2006-01-01

    A new phase for the ATLAS collaboration started with the first operation of a completed sub-system: the Central Solenoid. Teams monitoring the cooling and powering of the ATLAS solenoid in the control room. The solenoid was cooled down to 4.5 K from 17 to 23 May. The first current was established the same evening that the solenoid became cold and superconductive. 'This makes the ATLAS Central Solenoid the very first cold and superconducting magnet to be operated in the LHC underground areas!', said Takahiko Kondo, professor at KEK. Though the current was limited to 1 kA, the cool-down and powering of the solenoid was a major milestone for all of the control, cryogenic, power and vacuum systems-a milestone reached by the hard work and many long evenings invested by various teams from ATLAS, all of CERN's departments and several large and small companies. Since the Central Solenoid and the barrel liquid argon (LAr) calorimeter share the same cryostat vacuum vessel, this achievement was only possible in perfe...

  13. Higgs searches with ATLAS

    CERN Document Server

    Price, J D; The ATLAS collaboration

    2013-01-01

    Summary of the ATLAS analyses for the rarer SM Higgs decay channels, and of the limits on the SM Higgs invisible decay width. Analyses included are VH->Vbb, H->tautau, VH->VWW, H->Zy, H->mumu, ttH->ttyy and ZH->ll+inv.

  14. ATLAS Experiment Brochure

    CERN Multimedia

    AUTHOR|(INSPIRE)INSPIRE-00085461

    2016-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  15. A thermosiphon for ATLAS

    CERN Multimedia

    Rosaria Marraffino

    2013-01-01

    A new thermosiphon cooling system, designed for the ATLAS silicon detectors by CERN’s EN-CV team in collaboration with the experiment, will replace the current system in the next LHC run in 2015. Using the basic properties of density difference and making gravity do the hard work, the thermosiphon promises to be a very reliable solution that will ensure the long-term stability of the whole system.   Former compressor-based cooling system of the ATLAS inner detectors. The system is currently being replaced by the innovative thermosiphon. (Photo courtesy of Olivier Crespo-Lopez). Reliability is the major issue for the present cooling system of the ATLAS silicon detectors. The system was designed 13 years ago using a compressor-based cooling cycle. “The current cooling system uses oil-free compressors to avoid fluid pollution in the delicate parts of the silicon detectors,” says Michele Battistin, EN-CV-PJ section leader and project leader of the ATLAS thermosiphon....

  16. An Icelandic wind atlas

    Science.gov (United States)

    Nawri, Nikolai; Nína Petersen, Gudrun; Bjornsson, Halldór; Arason, Þórður; Jónasson, Kristján

    2013-04-01

    While Iceland has ample wind, its use for energy production has been limited. Electricity in Iceland is generated from renewable hydro- and geothermal sources, and adding wind energy has not been considered practical or even necessary. However, adding wind to the energy mix is becoming a more viable option as opportunities for new hydro or geothermal power installations become limited. In order to obtain an estimate of the wind energy potential of Iceland, a wind atlas has been developed as part of the Nordic project "Improved Forecast of Wind, Waves and Icing" (IceWind). The atlas is based on mesoscale model runs produced with the Weather Research and Forecasting (WRF) Model and high-resolution regional analyses obtained through the Wind Atlas Analysis and Application Program (WAsP). The wind atlas shows that the wind energy potential is considerable. The regions with the strongest average wind are nevertheless impractical for wind farms, due to distance from road infrastructure and the power grid as well as the harsh winter climate. However, even in easily accessible regions the wind energy potential of Iceland, as measured by annual average power density, is among the highest in Western Europe. There is a strong seasonal cycle, with wintertime power densities throughout the island being at least a factor of two higher than during summer. Calculations show that a modest wind farm of ten medium-size turbines would produce more energy throughout the year than a small hydro power plant, making wind energy a viable additional option.
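
    As a rough illustration of the quantity being mapped, the available wind power per unit rotor-swept area scales with the cube of the wind speed, so an annual-average power density can be estimated directly from a speed time series. The air density and the sample speeds below are invented for the example, not values from the IceWind atlas:

      import numpy as np

      RHO = 1.27  # kg/m^3, a plausible cold-climate air density (assumption)

      def power_density(speeds_m_s):
          """Mean available wind power per unit rotor area, in W/m^2."""
          v = np.asarray(speeds_m_s, dtype=float)
          return 0.5 * RHO * np.mean(v ** 3)

      # Hypothetical seasonal series: a ~25% higher typical speed in winter
      # already yields roughly a factor of two in power density.
      winter = power_density(np.random.weibull(2.0, 10_000) * 12.0)
      summer = power_density(np.random.weibull(2.0, 10_000) * 9.5)
      print(winter / summer)   # ~2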

  17. Prime wires for ATLAS

    CERN Multimedia

    2003-01-01

    In an award ceremony on 3 September, ATLAS honoured the French company Axon Cable for its special coaxial cables, which were purpose-built for the Liquid Argon calorimeter modules. Working for CERN since the 1970s, Axon' Cable received the ATLAS supplier award last week for its contribution to the liquid argon calorimeter cables of ATLAS (LAL/Orsay, France and University of Victoria, Canada), started in 1996. Its two sets of minicoaxial cables, called harnesses "A" and "B", are designed to function in the harsh conditions in the liquid argon (at 90 Kelvin or -183°C) and under extreme radiation (up to several Mrads). The cables are mainly used for the readout of the calorimeters, and are connected to the outside world by 114 signal feedthroughs with 1920 channels each. The signal from the detectors is transmitted directly without any amplification, which imposes tight restrictions on the impedance and on the signal propagation time of the cables. Peter Jenni, ATLAS spokesperson, gives the award for best s...

  18. ATLAS Detector Upgrade Prospects

    Science.gov (United States)

    Dobre, M.; ATLAS Collaboration

    2017-01-01

    After the successful operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a centre-of-mass energy of 13 TeV in 2015 and 2016. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, which will deliver of the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running by the end of 2018 to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extensions to larger pseudorapidity, particularly in tracking and muon systems. This report summarizes various improvements to the ATLAS detector required to cope with the anticipated evolution of the LHC luminosity during this decade and the next. A brief overview is also given of physics prospects with a pp centre-of-mass energy of 14 TeV.

  19. ATLAS starts moving in

    CERN Multimedia

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1 March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day.

  20. Atlas of NATO.

    Science.gov (United States)

    Young, Harry F.

    This atlas provides basic information about the North Atlantic Treaty Organization (NATO). Formed in response to growing concern for the security of Western Europe after World War II, NATO is a vehicle for Western efforts to reduce East-West tensions and the level of armaments. NATO promotes political and economic collaboration as well as military…

  1. Parcellation of the healthy neonatal brain into 107 regions using atlas propagation through intermediate time points in childhood

    Directory of Open Access Journals (Sweden)

    Manuel Blesa Cabez

    2016-05-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomical context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2 to 41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization method. After this step, the computed final transformations were applied to T2-weighted data, fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modelling brain growth during development.
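
    The propagation step amounts to composing a chain of pairwise registrations and then pulling the adult parcellation back to the neonatal grid with nearest-neighbour interpolation. Below is a schematic sketch, assuming the registrations are available as dense displacement fields in the pull-back (target-to-source) convention; the helper names and the field representation are assumptions, not the ENA33 pipeline:

      import numpy as np
      from scipy.ndimage import map_coordinates

      def compose(disp_outer, disp_inner):
          """Compose displacement fields of shape (3, X, Y, Z):
          result(x) = inner(x) + outer(x + inner(x))."""
          grid = np.indices(disp_inner.shape[1:], dtype=float)
          warped = grid + disp_inner
          outer_at = np.stack([map_coordinates(c, warped, order=1, mode='nearest')
                               for c in disp_outer])
          return disp_inner + outer_at

      def warp_labels(labels, disp):
          """Pull a label image back through a displacement field (nearest neighbour)."""
          grid = np.indices(labels.shape, dtype=float)
          return map_coordinates(labels, grid + disp, order=0, mode='nearest')

      def propagate(adult_labels, fields):
          """fields: [neonate->template_1, template_1->template_2, ..., template_N->adult]."""
          total = fields[0]
          for d in fields[1:]:
              total = compose(d, total)
          return warp_labels(adult_labels, total)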

  2. Fast and robust multi-atlas segmentation of brain magnetic resonance images

    DEFF Research Database (Denmark)

    Lötjönen, Jyrki Mp; Wolz, Robin; Koikkalainen, Juha R

    2010-01-01

    We introduce an optimised pipeline for multi-atlas brain MRI segmentation. Both accuracy and speed of segmentation are considered. We study different similarity measures used in non-rigid registration. We show that intensity differences for intensity-normalised images can be used instead of standard normalised mutual information in registration without compromising the accuracy, while leading to a threefold decrease in the computation time. We also study and validate different methods for atlas selection. Finally, we propose two new approaches for combining multi-atlas segmentation and intensity...
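
    The two similarity measures being compared can be written down in a few lines. The sketch below uses simple assumptions (global z-score normalisation, a 64-bin joint histogram); in a registration pipeline either score can also be used to rank atlases for selection before fusion:

      import numpy as np

      def normalise(img):
          return (img - img.mean()) / (img.std() + 1e-8)

      def ssd(a, b):
          """Cheap measure: mean squared intensity difference after normalisation."""
          return np.mean((normalise(a) - normalise(b)) ** 2)

      def nmi(a, b, bins=64):
          """Standard normalised mutual information, (H(A) + H(B)) / H(A, B)."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pab = joint / joint.sum()
          pa, pb = pab.sum(axis=1), pab.sum(axis=0)
          ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))
          hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))
          hab = -np.sum(pab[pab > 0] * np.log(pab[pab > 0]))
          return (ha + hb) / hab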

  3. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    CERN Document Server

    Campana, S

    2010-01-01

    The ATLAS experiment at CERN developed an automated system for the distribution of simulated and detector data. Such a system, which partially consists of various ATLAS-specific services, strongly relies on the WLCG infrastructure, at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution describes the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience of distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the co...

  4. A unified framework for cross-modality multi-atlas segmentation of brain MRI

    DEFF Research Database (Denmark)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-01-01

    Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when ... to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion.

  5. A digital framework to build, visualize and analyze a gene expression atlas with cellular resolution in zebrafish early embryogenesis.

    Directory of Open Access Journals (Sweden)

    Carlos Castro-González

    2014-06-01

    A gene expression atlas is an essential resource to quantify and understand the multiscale processes of embryogenesis in time and space. The automated reconstruction of a prototypic 4D atlas for vertebrate early embryos, using multicolor fluorescence in situ hybridization with nuclear counterstain, requires dedicated computational strategies. To this goal, we designed an original methodological framework implemented in a software tool called Match-IT. With only minimal human supervision, our system is able to gather gene expression patterns observed in different analyzed embryos with phenotypic variability and map them onto a series of common 3D templates over time, creating a 4D atlas. This framework was used to construct an atlas composed of 6 gene expression templates from a cohort of zebrafish early embryos spanning 6 developmental stages from 4 to 6.3 hpf (hours post fertilization). The cohort included 53 specimens, 181,415 detected cell nuclei and the segmentation of 98 gene expression patterns observed in 3D for 9 different genes. In addition, an interactive visualization software, Atlas-IT, was developed to inspect, supervise and analyze the atlas. Match-IT and Atlas-IT, including user manuals, representative datasets and video tutorials, are publicly and freely available online. We also propose computational methods and tools for the quantitative assessment of the gene expression templates at the cellular scale, with the identification, visualization and analysis of co-expression patterns, synexpression groups and their dynamics through developmental stages.

  6. ATLAS BigPanDA Monitoring and Its Evolution

    CERN Document Server

    Wenaus, Torre; The ATLAS collaboration; Korchuganova, Tatiana

    2016-01-01

    BigPanDA is the latest generation of the monitoring system for the Production and Distributed Analysis (PanDA) system. The BigPanDA monitor is a core component of PanDA and also serves the monitoring needs of the new ATLAS Production System Prodsys-2. BigPanDA has been developed to serve the growing computation needs of the ATLAS Experiment and the wider applications of PanDA beyond ATLAS. Through a system-wide job database, the BigPanDA monitor provides a comprehensive and coherent view of the tasks and jobs executed by the system, from high level summaries to detailed drill-down job diagnostics. The system has been in production and has remained in continuous development since mid 2014, today effectively managing more than 2 million jobs per day distributed over 150 computing centers worldwide. BigPanDA also delivers web-based analytics and system state views to groups of users including distributed computing systems operators, shifters, physicist end-users, computing managers and accounting services. Provi...

  7. Improving atlas methodology

    Science.gov (United States)

    Robbins, C.S.; Dowell, B.A.; O'Brien, J.

    1987-01-01

    We are studying a sample of Maryland (2 %) and New Hampshire (4 %) Atlas blocks and a small sample in Maine. These three States used different sampling methods and block sizes. We compare sampling techniques, roadside with off-road coverage, our coverage with that of the volunteers, and different methods of quantifying Atlas results. The 7 1/2' (12-km) blocks used in the Maine Atlas are satisfactory for coarse mapping, but are too large to enable changes to be detected in the future. Most states are subdividing the standard 7 1/2' maps into six 5-km blocks. The random 1/6 sample of 5-km blocks used in New Hampshire, Vermont (published 1985), and many other states has the advantage of permitting detection of some changes in the future, but the disadvantage of leaving important habitats unsampled. The Maryland system of atlasing all 1,200 5-km blocks and covering one out of each six by quarterblocks (2 1/2-km) is far superior if enough observers can be found. A good compromise, not yet attempted, would be to Atlas a 1/6 random sample of 5-km blocks and also one other carefully selected (non-random) block on the same 7 1/2' map--the block that would include the best sample of habitats or elevations not in the random block. In our sample the second block raised the percentage of birds found from 86% of the birds recorded in the 7 1/2' quadrangle to 93%. It was helpful to list the expected species in each block and to revise this list annually. We estimate that 90-100 species could be found with intensive effort in most Maryland blocks; perhaps 95-105 in New Hampshire. It was also helpful to know which species were under-sampled so we could make a special effort to search for these. A total of 75 species per block (or 75% of the expected species in blocks with very restricted habitat diversity) is considered a practical and adequate goal in these States. When fewer than 60 species are found per block, a high proportion of the rarer species are missed, as well as some of

  8. ATLAS: civil engineering Point 1

    CERN Multimedia

    2000-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin, where people are busy finishing the different infrastructures for ATLAS. Real underground video, with a nice view from the surface down to the cavern from the pit side - all the big machines look very small. The film has original working sound.

  9. ATLAS recognises its best suppliers

    CERN Document Server

    2002-01-01

    The ATLAS Collaboration has recently rewarded two of its suppliers for the construction of major detector components fabricated in Japan. The ATLAS Supplier Award, in recognition of excellent supplier performance, has just been attributed to Kawasaki Heavy Industries, while Toshiba Corporation received the award two months ago at their headquarters in Japan.

  10. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    Directory of Open Access Journals (Sweden)

    Scott Mark

    2005-03-01

    Background: Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. Results: We report the development of a freely available Java-based viewer for 3D image data, describe the structure and functionality of the viewer, and explain how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency the image data are manipulated using the Woolz image-processing library, provided as a dynamically linked module for each machine architecture. Conclusion: We conclude that Java provides an appropriate environment for efficient development of these tools, and that techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  11. ATLAS Award for Difficult Task

    CERN Multimedia

    2004-01-01

    Two Russian companies were honoured with an ATLAS Award, for supply of the ATLAS Inner Detector barrel support structure elements, last week. On 23 March the Russian company ORPE Technologiya and its subcontractor, RSP Khrunitchev, were jointly presented with an ATLAS Supplier Award. Since 1998, ORPE Technologiya has been actively involved in the development of the carbon-fibre reinforced plastic elements of the ATLAS Inner Detector barrel support structure. After three years of joint research and development, CERN and ORPE Technologiya launched the manufacturing contract. It had a tight delivery schedule and very demanding specifications in terms of mechanical tolerance and stability. The contract was successfully completed with the arrival of the last element of the structure at CERN on 8 January 2004. The delivery of this key component of the Inner Detector deserves an ATLAS Award given the difficulty of manufacturing the end-frames, which very few companies in the world would have been able to do at an ...

  12. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    Ancu, Lucian Stefan; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whol...

  13. Cortical sulcal atlas construction using a diffeomorphic mapping approach.

    Science.gov (United States)

    Joshi, Shantanu H; Cabeen, Ryan P; Sun, Bo; Joshi, Anand A; Gutman, Boris; Zamanyan, Alen; Chakrapani, Shruthi; Dinov, Ivo; Woods, Roger P; Toga, Arthur W

    2010-01-01

    We present a geometric approach for constructing shape atlases of sulcal curves on the human cortex. Sulci and gyri are represented as continuous open curves in R3, and their shapes are studied as elements of an infinite-dimensional sphere. This shape manifold has some nice properties: it is equipped with a Riemannian L2 metric on the tangent space and facilitates computational analyses and correspondences between sulcal shapes. Sulcal mapping is achieved by computing geodesics in the quotient space of shapes modulo rigid rotations and reparameterizations. The resulting sulcal shape atlas is shown to preserve important local geometry inherently present in the sample population. This is demonstrated in our experimental results for deep brain sulci, where we integrate the elastic shape model into a surface registration framework for a population of 69 healthy young adult subjects.
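
    The geometry described above can be made concrete with a small sketch: sampled open curves are mapped to unit-norm square-root velocity functions, so that (ignoring the rotation and reparameterization quotient, which is omitted here) shapes live on a sphere and the geodesic distance is simply an arc length. This is an illustrative simplification, not the authors' full elastic framework; both curves are assumed to be resampled to the same number of points:

      import numpy as np

      def srvf(curve):
          """(n, 3) sampled open curve -> unit-norm square-root velocity function."""
          v = np.gradient(curve, axis=0)                     # velocity along the curve
          q = v / np.sqrt(np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
          return q / np.sqrt(np.sum(q * q) / len(q))         # scale to unit L2 norm

      def geodesic_distance(c1, c2):
          q1, q2 = srvf(c1), srvf(c2)
          inner = np.sum(q1 * q2) / len(q1)                  # L2 inner product
          return np.arccos(np.clip(inner, -1.0, 1.0))        # arc length on the sphere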

  14. Experience with CORBA communication middleware in the ATLAS DAQ.

    CERN Document Server

    Kolos, S; Amorim, A; Badescu, E; Burckhart-Chromek, Doris; Caprini, M; Dobson, M; Fiuza de Barros, N; Flammer, J; Jones, R; Kazarov, A; Klose, D; Korobov, S; Kotov, V; Liko, D; Mapelli, L; Mineev, M; Pedro, L; Ryabov, Yu; Soloviev, I; Computing In High Energy Physics

    2005-01-01

    As modern High Energy Physics (HEP) experiments require more distributed computing power to fulfill their demands, the need for efficient distributed online services for control, configuration and monitoring in such experiments becomes increasingly important. This paper describes the experience of using standard Common Object Request Broker Architecture (CORBA) middleware to provide high-performance and scalable software, which will be used for online control, configuration and monitoring in the ATLAS Data Acquisition (DAQ) system. It also recounts the experience gained from using several CORBA implementations together and from replacing one CORBA broker with another. Finally the paper presents the results of large-scale tests demonstrating the performance and scalability of the ATLAS DAQ online services. These results show that standard CORBA is well suited to highly efficient online distributed computing in the HEP experiments area.

  15. ATLAS, an integrated structural analysis and design system. Volume 3: User's manual, input and execution data

    Science.gov (United States)

    Dreisbach, R. L. (Editor)

    1979-01-01

    The input data and execution control statements for the ATLAS integrated structural analysis and design system are described. It is operational on the Control Data Corporation (CDC) 6600/CYBER computers in a batch mode or in a time-shared mode via interactive graphic or text terminals. ATLAS is a modular system of computer codes with common executive and data base management components. The system provides an extensive set of general-purpose technical programs with analytical capabilities including stiffness, stress, loads, mass, substructuring, strength design, unsteady aerodynamics, vibration, and flutter analyses. The sequence and mode of execution of selected program modules are controlled via a common user-oriented language.

  16. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2016-01-01

    Changes in the trigger menu, the online algorithmic event selection of the ATLAS experiment at the LHC, made in response to luminosity and detector changes, are followed by adjustments to the monitoring system. This is done to ensure that the collected data are useful and can be properly reconstructed at Tier-0, the first level of the computing grid. During Run 1, ATLAS deployed monitoring updates with the installation of new software releases at Tier-0. This created unnecessary overhead for developers and operators, and unavoidably led to different releases for the data-taking and the monitoring setup. We present a "trigger menu-aware" monitoring system designed for ATLAS Run 2 data-taking. The new monitoring system aims to simplify the ATLAS operational workflows, and allows for easy and flexible monitoring configuration changes at the Tier-0 site via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the ne...

  17. Data federation strategies for ATLAS using XRootD

    Science.gov (United States)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2 and, in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and then to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
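
    A toy version of such a cost matrix, combining a measured throughput matrix with a per-site penalty factor, illustrates the brokering idea; the site names, numbers and weighting below are hypothetical, not the production ATLAS algorithm:

      import numpy as np

      sites = ["SITE_A", "SITE_B", "SITE_C"]                # hypothetical endpoints
      throughput_MBps = np.array([[120.,  40., 15.],        # rows: client site
                                  [ 35., 110., 20.],        # columns: storage site
                                  [ 10.,  25., 95.]])
      site_factor = np.array([1.0, 0.8, 1.3])               # e.g. load / recent failure rate

      # Lower cost = preferred source: slower expected transfer and a worse
      # site factor both increase the cost of reading from that storage site.
      cost = site_factor / throughput_MBps
      best_source = {sites[i]: sites[int(np.argmin(cost[i]))] for i in range(len(sites))}
      print(best_source)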

  18. Persistent ATLAS Data Structures and Reclustering of Event Data

    CERN Document Server

    Schaller, Martin

    1999-01-01

    The ATLAS experiment will start to take data in the year 2005. The amount of experimental data forms a serious challenge for data processing and data storage. About 1 PB (10^15 bytes) per year has to be processed and stored. Currently, a paradigm shift in High-Energy Physics (HEP) computing is taking place. It is planned that software is written in object-oriented languages (mainly C++). For data storage the usage of object-oriented database management systems (ODBMSs) is foreseen. This thesis investigates the usage of an ODBMS in the ATLAS experiment. Work was done in several connected areas. First, we present exhaustive benchmarks of the commercial ODBMS Objectivity/DB that is today the most promising candidate for the storage system. We describe the ATLAS 1 TB milestone that was performed to investigate the reliability and performance of an ODBMS storage solution coupled to a mass storage system. Second, we report about the design and implementation of the persistent ATLAS data structures, both in the detec...

  19. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2013-01-01

    The tau lepton plays a crucial role in understanding particle physics at the Tera scale. One of the most promising probes of the Higgs boson coupling to fermions is with detector signatures involving taus. In addition, many theories beyond the Standard Model, such as supersymmetry and exotic particles (W′ and Z′), predict new physics with large couplings to taus. The ability to trigger on hadronic tau decays is therefore critical to achieving the physics goals of the ATLAS experiment. The higher instantaneous luminosities of proton-proton collisions achieved by the Large Hadron Collider (LHC) in 2012 resulted in a larger probability of overlap (pile-up) between bunch crossings, and so it was critical for ATLAS to have an effective tau trigger strategy. The details of this strategy are summarized in this poster, and the latest performance measurements are presented.

  20. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2013-01-01

    The tau lepton plays a crucial role in understanding particle physics at the Tera scale. One of the most promising probes of the Higgs boson coupling to fermions is with detector signatures involving taus. In addition, many theories beyond the Standard Model, such as supersymmetry and exotic particles (Wʹ and Zʹ), predict new physics with large couplings to taus. The ability to trigger on hadronic tau decays is therefore critical to achieving the physics goals of the ATLAS experiment. The higher instantaneous luminosities of proton-proton collisions achieved by the Large Hadron Collider (LHC) in 2012 resulted in a larger probability of overlap (pile-up) between bunch crossings, and so it was critical for ATLAS to have an effective tau trigger strategy. The details of this strategy are summarized in this paper, and the results of the latest performance measurements are presented.

  1. ATLAS IBL operational experience

    CERN Document Server

    Takubo, Yosuke; The ATLAS collaboration

    2016-01-01

    The Insertable B-Layer (IBL) is the innermost pixel layer of the ATLAS experiment; it was installed at a radius of 3.3 cm from the beam axis in 2014 to improve the tracking performance. To cope with the high radiation and hit occupancy due to the proximity to the interaction point, a new read-out chip and two different silicon sensor technologies (planar and 3D) have been developed for the IBL. After the long shutdown period over 2013 and 2014, the ATLAS experiment started data-taking in May 2015 for Run 2 of the Large Hadron Collider (LHC). The IBL has been operated successfully since the beginning of Run 2 and shows excellent performance, with a low dead-module fraction, high data-taking efficiency and improved tracking capability. The experience and challenges in the operation of the IBL are described, as well as its performance.

  2. Jet Physics in ATLAS

    CERN Document Server

    Sandoval, C; The ATLAS collaboration

    2012-01-01

    Measurements of hadronic jets provide tests of strong interactions which are interesting both in their own right and as backgrounds to many New Physics searches. It is also through tests of Quantum Chromodynamics that new physics may be discovered. The extensive dataset recorded with the ATLAS detector throughout the 7 TeV and 8 TeV centre-of-mass LHC operation periods allows QCD to be probed at distances never reached before. We present a review of selected ATLAS jet physics measurements. These measurements constitute precision tests of QCD in a new energy regime, and show sensitivity to the parton densities in the proton and to the value of the strong coupling, alpha_s.

  3. Jet substructure in ATLAS

    CERN Document Server

    Miller, David W

    2011-01-01

    Measurements are presented of the jet invariant mass and substructure in proton-proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector using an integrated luminosity of 37 pb$^{-1}$. These results exercise the tools for distinguishing the signatures of new boosted massive particles in the hadronic final state. Two "fat" jet algorithms are used, along with the filtering jet grooming technique that was pioneered in ATLAS. New jet substructure observables are compared for the first time to data at the LHC. Finally, a sample of candidate boosted top quark events collected in the 2010 data is analyzed in detail for the jet substructure properties of hadronic "top-jets" in the final state. These measurements demonstrate not only our excellent understanding of QCD in a new energy regime but open the path to using complex jet substructure observables in the search for new physics.

  4. HiggsHunters - a citizen science project for ATLAS

    CERN Document Server

    Haas, Andrew; The ATLAS collaboration

    2016-01-01

    Since the launch of HiggsHunters.org in November 2014, citizen science volunteers have classified more than a million points of interest in images from the ATLAS experiment at the LHC. Volunteers have been looking for displaced vertices and unusual features in images recorded during LHC Run-1. We discuss the design of the project, its impact on the public, and the surprising results of how the human volunteers performed relative to the computer algorithms in identifying displaced secondary vertices.

  5. HiggsHunters - a citizen science project for ATLAS

    CERN Document Server

    Haas, Andrew; The ATLAS collaboration

    2017-01-01

    Since the launch of HiggsHunters.org in November 2014, citizen science volunteers have classified more than a million points of interest in images from the ATLAS experiment at the LHC. Volunteers have been looking for displaced vertices and unusual features in images recorded during LHC Run-1. We discuss the design of the project, its impact on the public, and the surprising results of how the human volunteers performed relative to the computer algorithms in identifying displaced secondary vertices.

  6. Higgs results from ATLAS

    Directory of Open Access Journals (Sweden)

    Chen Xin

    2016-01-01

    The updated Higgs measurements in various search channels with ATLAS Run 1 data are reviewed. Both the Standard Model (SM) Higgs results, such as H → γγ, ZZ, WW, ττ, μμ, bb̄, and Beyond the Standard Model (BSM) results, such as the charged Higgs, Higgs invisible decays and tensor couplings, are summarized. Prospects for future Higgs searches are briefly discussed.

  7. Hybrid Atlas Models

    CERN Document Server

    Ichiba, Tomoyuki; Banner, Adrian; Karatzas, Ioannis; Fernholz, Robert

    2009-01-01

    We study Atlas-type models of equity markets with local characteristics that depend on both name and rank, and in ways that induce a stability of the capital distribution. Ergodic properties and rankings of processes are examined with reference to the theory of reflected Brownian motions in polyhedral domains. In the context of such models, we discuss properties of various investment strategies, including the so-called growth-optimal and universal portfolios.

  8. Supersymmetry searches in ATLAS

    CERN Document Server

    Torro Pastor, Emma; The ATLAS collaboration

    2016-01-01

    Weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles. Weak and strong production in both R-Parity conserving and R-Parity violating SUSY scenarios are considered. The searches involved final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures.

  9. ATLAS support rails

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    These supports will hold the 7000 tonne ATLAS detector in its cavern at the LHC. The huge toroid will be assembled from eight coils that will house some of the muon chambers. Supported within the toroid will be the inner detector, containing tracking devices, as well as devices to measure the energies of the particles produced in the 14 TeV proton-proton collisions at the LHC.

  10. SUSY Searches in ATLAS

    CERN Document Server

    Zhuang, Xuai; The ATLAS collaboration

    2016-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results from searches for supersymmetric (SUSY) particles, with a focus on those obtained using proton-proton collisions at a centre-of-mass energy of 13 TeV with 2015+2016 data. Searches with final states including jets, missing transverse momentum and light leptons will be presented.

  11. The ATLAS Experiment Movie

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  12. Overview of ATLAS results

    CERN Document Server

    Grabowska-Bold, Iwona; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at the Large Hadron Collider has undertaken a broad physics program to probe and characterize the hot nuclear matter created in relativistic lead-lead collisions. This talk presents recent results based on Run 2 data on the production of jets, electroweak bosons and quarkonium, on electromagnetic processes in ultra-peripheral collisions, and on bulk particle collectivity in PbPb, pPb and pp collisions.

  13. ATLAS reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bartsch, R.R.

    1995-09-01

    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
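
    The combination step can be illustrated with a hedged example: each component is given a Weibull survival function in the number of shots, the conditional reliability for shot n is the ratio of successive survival probabilities, and the system value is the product over components. The shape and scale parameters below are invented for the illustration, not ATLAS values:

      import math

      def shot_reliability(n, eta, beta):
          """P(component survives shot n | it survived shots 1..n-1), Weibull(eta, beta)."""
          surv = lambda k: math.exp(-((k / eta) ** beta))
          return surv(n) / surv(n - 1)

      components = [  # (name, scale eta in shots, shape beta > 1 => wear-out with shots)
          ("capacitor units", 2000.0, 1.5),
          ("railgap switches", 800.0, 2.0),
          ("transmission-line insulation", 5000.0, 1.2),
      ]

      n = 100  # shot number
      system = math.prod(shot_reliability(n, eta, beta) for _, eta, beta in components)
      print(f"estimated system reliability on shot {n}: {system:.3%}")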

  14. El experimento ATLAS

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  15. L'esperimento ATLAS

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  16. ATLAS overview week highlights

    CERN Multimedia

    D. Froidevaux

    2005-01-01

    A warm and early October afternoon saw the beginning of the 2005 ATLAS overview week, which took place Rue de La Montagne Sainte-Geneviève in the heart of the Quartier Latin in Paris. All visitors had been warned many times by the ATLAS management and the organisers that the premises would be the subject of strict security clearance because of the "plan Vigipirate", which remains at some level of alert in all public buildings across France. The public building in question is now part of the Ministère de La Recherche, but used to host one of the so-called French "Grandes Ecoles", called l'Ecole Polytechnique (in France there is only one Ecole Polytechnique, whereas there are two in Switzerland) until the end of the seventies, a little while after it opened its doors also to women. In fact, the setting chosen for this ATLAS overview week by our hosts from LPNHE Paris has turned out to be ideal and the security was never an ordeal. For those seeing Paris for the first time, there we...

  17. Atlas du Liban

    Directory of Open Access Journals (Sweden)

    Ramez Philippe Maalouf

    2008-11-01

    Review of Atlas du Liban: territoires et société, edited by Éric Verdeil, Ghaleb Faour and Sébastien Velut, a French-Lebanese edition by the IFPO (Institut Français du Proche-Orient) and the CNRS Liban (Conseil National de la Recherche Scientifique – Liban), Beirut, 2007.

  18. ATLAS Detector Upgrade Prospects

    CERN Document Server

    Dobre, Monica; The ATLAS collaboration

    2016-01-01

    After the successful operation at the centre-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a centre-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity levelling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of ext...

  19. ATLAS Detector Upgrade Prospects

    CERN Document Server

    Dobre, Monica; The ATLAS collaboration

    2016-01-01

    After the successful operation at the center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a center-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extens...

  20. ATLAS Upgrade Plans

    CERN Document Server

    Hopkins, W; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at the center-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, and further upgrades. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new...

  1. Clean tracks for ATLAS

    CERN Multimedia

    2006-01-01

    First cosmic ray tracks in the integrated ATLAS barrel SCT and TRT tracking detectors. A snap-shot of a cosmic ray event seen in the different layers of both the SCT and TRT detectors. The ATLAS Inner Detector Integration Team celebrated a major success recently, when clean tracks of cosmic rays were detected in the completed semiconductor tracker (SCT) and transition radiation tracker (TRT) barrels. These tracking tests come just months after the successful insertion of the SCT into the TRT (See Bulletin 09/2006). The cosmic ray test is important for the experiment because, after 15 years of hard work, it is the last test performed on the fully assembled barrel before lowering it into the ATLAS cavern. The two trackers work together to provide millions of channels so that particles' tracks can be identified and measured with great accuracy. According to the team, the preliminary results were very encouraging. After first checks of noise levels in the final detectors, a critical goal was to study their re...

  2. Multi-atlas segmentation with joint label fusion and corrective learning-an open source implementation.

    Science.gov (United States)

    Wang, Hongzhi; Yushkevich, Paul A

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far.
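
    As an illustration of the weighted-voting idea described above, the short Python sketch below derives per-atlas weights from atlas-target intensity similarity in a local patch and uses them to vote on the label of the centre voxel. It is a simplified stand-in rather than the authors' Insight Toolkit implementation, and the patch size, Gaussian weighting and array names are illustrative assumptions.

      import numpy as np

      def weighted_vote(target_patch, atlas_patches, atlas_labels, sigma=10.0):
          """Locally weighted voting: a simplified stand-in for label fusion.

          target_patch  : (p,) flattened intensity patch around the target voxel
          atlas_patches : (n, p) corresponding patches from n registered atlases
          atlas_labels  : (n,) label each atlas proposes for the centre voxel
          """
          # Similarity-based weights from atlas-target intensity differences
          sq_diff = np.sum((atlas_patches - target_patch) ** 2, axis=1)
          weights = np.exp(-sq_diff / (2.0 * sigma ** 2))
          weights /= weights.sum()

          # Accumulate the weighted votes per candidate label
          votes = {}
          for w, lab in zip(weights, atlas_labels):
              votes[lab] = votes.get(lab, 0.0) + w
          return max(votes, key=votes.get)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          target = rng.normal(100.0, 5.0, size=27)             # 3x3x3 patch, flattened
          noise = rng.normal(0.0, 1.0, size=(3, 27)) * np.array([[3.0], [3.0], [30.0]])
          atlases = target + noise                             # third atlas is a poor match
          labels = [1, 1, 2]                                   # and it proposes a different label
          print(weighted_vote(target, atlases, labels))        # expected output: 1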

  3. ATLAS: Forecasting Falling Rocks

    Science.gov (United States)

    Heinze, Aren; Tonry, John L.; Denneau, Larry; Stalder, Brian; Sherstyuk, Andrei

    2016-10-01

    The Asteroid Terrestrial-impact Last Alert System (ATLAS) is a new asteroid survey aimed at detecting small (10-100 meter) asteroids inbound for impact with Earth. Relative to the larger objects targeted by most surveys, these small asteroids pose very different threats to our planet. Large asteroids can be seen at great distances and measured over many years, resulting in precise orbits that enable long-term impact predictions. If an impact were predicted, a costly deflection mission would be warranted to avert global catastrophe -- but a large asteroid impact is very unlikely in the next century. By contrast, impacts from small asteroids are inevitable. Such objects can be detected only during close encounters with Earth -- encounters too brief to yield long-term predictions. Only a few days' warning could be expected for an impactor in the 10-100 meter range, but fortunately the impact of such an asteroid would cause only regional damage. As in the case of a hurricane, a quixotic attempt to deflect or destroy it would be more expensive than the damage from its impact. A better response is to save human lives by evacuating the impact zone, and then rebuild. Only a few days warning are needed for this purpose, and ATLAS is unique among asteroid surveys in being optimized to provide it. While the optimization has many facets, the most important is rapidly surveying the entire accessible sky. A small asteroid could come from any direction and go from invisibility to impact in less than a week: ATLAS must look everywhere, all the time. Sky coverage is more important than exquisite sensitivity to faint objects, because asteroids inbound for impact will eventually become quite bright. This makes ATLAS complementary to other surveys, which scan the sky at a more leisurely pace but are able to detect asteroids at greater distances. We report on ATLAS' first year of survey operations, including the maturing of robotic observation and detection strategies, and asteroid and

  4. An image of an event in which a microscopic-black-hole was produced in the collision of two protons in a computer generated image of the ATLAS detector.

    CERN Multimedia

    Joao Pequenao

    2008-01-01

    In some theories, microscopic black holes may be produced in particle collisions that occur when very-high-energy cosmic rays hit particles in our atmosphere. These microscopic-black-holes would decay into ordinary particles in a tiny fraction of a second and would be very difficult to observe in our atmosphere. The ATLAS Experiment offers the exciting possibility to study them in the lab (if they exist). The simulated collision event shown is viewed along the beampipe. The event is one in which a microscopic-black-hole was produced in the collision of two protons (not shown). The microscopic-black-hole decayed immediately into many particles. The colors of the tracks show different types of particles emerging from the collision (at the center).

  5. Resource Utilization by the ATLAS High Level Trigger during 2010 and 2011 LHC running

    CERN Document Server

    Schaefer, D; The ATLAS collaboration; Ospanov, R

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and online storage. The LHC instantaneous luminosity conditions, desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used to...
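
    To make such cost accounting concrete, the sketch below aggregates per-event CPU time and data-request counts into per-chain rates. The record schema, the chain names and the rate normalisation are illustrative assumptions, not the actual ATLAS cost-monitoring framework.

      from collections import defaultdict

      def aggregate_trigger_costs(event_records, live_time_s):
          """Aggregate CPU time and data-request counts per trigger chain.

          event_records : iterable of dicts such as
              {"chain": "EF_e20_medium", "cpu_ms": 12.4, "rob_requests": 7}
              (field names are illustrative, not a real ATLAS schema)
          live_time_s   : data-taking live time used to normalise the rates
          """
          totals = defaultdict(lambda: {"calls": 0, "cpu_ms": 0.0, "rob_requests": 0})
          for rec in event_records:
              t = totals[rec["chain"]]
              t["calls"] += 1
              t["cpu_ms"] += rec["cpu_ms"]
              t["rob_requests"] += rec["rob_requests"]

          report = {}
          for chain, t in totals.items():
              report[chain] = {
                  "call_rate_hz": t["calls"] / live_time_s,
                  "mean_cpu_ms": t["cpu_ms"] / t["calls"],
                  "rob_request_rate_hz": t["rob_requests"] / live_time_s,
              }
          return report

      if __name__ == "__main__":
          sample = [
              {"chain": "EF_e20_medium", "cpu_ms": 12.4, "rob_requests": 7},
              {"chain": "EF_e20_medium", "cpu_ms": 10.1, "rob_requests": 5},
              {"chain": "EF_mu18", "cpu_ms": 35.0, "rob_requests": 12},
          ]
          for chain, stats in aggregate_trigger_costs(sample, live_time_s=60.0).items():
              print(chain, stats)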

  6. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...
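
    The logic of such periodic functional testing can be illustrated with a minimal sketch: light-weight test jobs are repeatedly submitted to each site and the success fraction is compared against a threshold to decide whether the site stays online. The job stub, the threshold and the site names below are hypothetical placeholders, not HammerCloud code.

      import random
      import time

      # Hypothetical stand-in for submitting a short test job to a grid site and
      # collecting its outcome; the real framework submits actual analysis jobs.
      def submit_functional_test(site):
          time.sleep(0.01)                      # pretend to run a light-weight job
          return random.random() > 0.1          # roughly 90% of test jobs succeed

      def evaluate_sites(sites, n_tests=20, threshold=0.8):
          """Mark a site online if its functional-test success fraction is high enough."""
          status = {}
          for site in sites:
              successes = sum(submit_functional_test(site) for _ in range(n_tests))
              efficiency = successes / n_tests
              state = "online" if efficiency >= threshold else "excluded"
              status[site] = (state, efficiency)
          return status

      if __name__ == "__main__":
          for site, (state, eff) in evaluate_sites(["SITE_A", "SITE_B", "SITE_C"]).items():
              print(f"{site}: {state} (success fraction {eff:.2f})")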

  7. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  8. Jet energy calibration in ATLAS

    CERN Document Server

    Schouten, Doug

    A correct energy calibration for jets is essential to the success of the ATLAS experiment. In this thesis I study a method for deriving an in situ jet energy calibration for the ATLAS detector. In particular, I show the applicability of the missing transverse energy projection fraction method. This method is shown to set the correct mean energy for jets. Pileup effects due to the high luminosities at ATLAS are also studied. I study the correlations in lateral distributions of pileup energy, as well as the luminosity dependence of the in situ calibration method.

  9. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, which should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  10. Electrons and Photons at ATLAS

    CERN Document Server

    Heim, Sarah; The ATLAS collaboration

    2016-01-01

    The performance of the reconstruction, calibration and identification of electrons and photons with the ATLAS detector at the LHC is a key component to realize the ATLAS full physics potential, both in the searches for new physics and in precision measurements. The algorithms used for the reconstruction and identification of electrons and photons with the ATLAS detector during LHC run 2 are presented. Measurements of the identification efficiencies are derived from data. The results from the 2015 pp collision data set at sqrt(s)=13 TeV are reported. The electron and photon energy calibration procedure and its performance are also discussed.

  11. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  12. The new European wind atlas

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik; Troen, Ib; Ejsing Jørgensen, Hans;

    2014-01-01

    The project “New European Wind Atlas” aims at reducing overall uncertainties in determining wind conditions, standing on three legs: a data bank from a series of intensive measuring campaigns; a thorough examination and redesign of the model chain from global, mesoscale to microscale models; and creation of the wind atlas database. Although the project participants will come from the 27 member states, it is envisioned that the project will be opened for global participation through test benches for model development and sharing of data, climatological as well as experimental. Experiences from national wind atlases will be utilized, such as the Indian, the South African, the Finnish, the German, the Canadian atlases and others.

  13. The Pig PeptideAtlas

    DEFF Research Database (Denmark)

    Hesselager, Marianne Overgaard; Codrea, Marius; Sun, Zhi;

    2016-01-01

    …underrepresented in existing repositories. We here present a significantly improved build of the Pig PeptideAtlas, which includes pig proteome data from 25 tissues and three body fluid types mapped to 7139 canonical proteins. The content of the Pig PeptideAtlas reflects actively ongoing research within the veterinary proteomics domain, and this article demonstrates how the expression of isoform-unique peptides can be observed across distinct tissues and body fluids. The Pig PeptideAtlas is a unique resource for use in animal proteome research, particularly biomarker discovery and for preliminary design of SRM…

  14. EnviroAtlas - Metrics for Pittsburgh, PA

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  15. EnviroAtlas - Metrics for Tampa, FL

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  16. EnviroAtlas - Metrics for Memphis, TN

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  17. Forward Physics at the ATLAS experiment

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    This poster summarizes forward physics at the ATLAS experiment. It focuses on the AFP project, which proposes to install forward detectors at 220 m (AFP220) and 420 m (AFP420) around ATLAS for measurements at high luminosity.

  18. EnviroAtlas - Metrics for Portland, OR

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http://www.epa.gov/enviroatlas). The layers in these web...

  19. ATLAS : civil engineering at Point 1

    CERN Multimedia

    2002-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are hard at work finishing the various infrastructures for ATLAS. Real underground video.

  20. Women of ATLAS - International Women's Day 2016

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Women play key roles in the ATLAS Experiment: from young physicists at the start of their careers to analysis group leaders and spokespersons of the collaboration. Celebrate International Women's Day by meeting a few of these inspiring ATLAS researchers.

  1. EnviroAtlas - Metrics for Phoenix, AZ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  2. EnviroAtlas - Milwaukee, WI - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Milwaukee, WI EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  3. The ATLAS Trigger Muon "Vertical Slice"

    CERN Document Server

    Sidoti, A; Biglietti, M; Carlino, G; Cataldi, G; Conventi, F; Del Prete, T; Di Mattia, A; Falciano, S; Gorini, S; Kanaya, N; Kohno, T; Krasznahorkay, A; Lagouri, T; Luci, C; Luminari, L; Marzano, F; Nagano, K; Nisati, A; Panikashvili, N; Pasqualucci, E; Primavera, M; Scannicchio, D A; Spagnolo, S; Tarem, S; Tarem, Z; Tokushuku, K; Usai, G; Ventura, A; Vercesi, V; Yamazaki, Y; 10th Pisa Meeting on Advanced Detectors : Frontier Detectors For Frontier Physics

    2007-01-01

    The muon trigger system is a fundamental component of the ATLAS detector at the LHC collider. In this paper we describe the ATLAS multi-level trigger selecting events with muons: the Muon Trigger Slice.

  4. EnviroAtlas - Metrics for Woodbine, IA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  5. EnviroAtlas - Metrics for Portland, ME

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  6. EnviroAtlas - Metrics for Fresno, CA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  7. EnviroAtlas - Metrics for Paterson, NJ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  8. EnviroAtlas - Fresno, CA - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Fresno, CA EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  9. Argonne Tandem Linac Accelerator System (ATLAS)

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a national user facility at Argonne National Laboratory in Argonne, Illinois. The ATLAS facility is a leading facility for nuclear structure research in the...

  10. EnviroAtlas - Metrics for Durham, NC

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  11. EnviroAtlas - Durham, NC - Demo (Parent)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Durham, NC EnviroAtlas Area. The block groups are from the US Census Bureau and are included/excluded based on...

  12. EnviroAtlas - Portland, OR - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Portland, OR EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  13. EnviroAtlas - Cleveland, OH - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Cleveland, OH EnviroAtlas community. The block groups are from the US Census Bureau and are included/excluded...

  14. EnviroAtlas - Metrics for Milwaukee, WI

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http://www.epa.gov/enviroatlas). The layers in these web...

  15. EnviroAtlas - Paterson, NJ - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Paterson, NJ EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  16. EnviroAtlas - Memphis, TN - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Memphis, TN EnviroAtlas community. The block groups are from the US Census Bureau and are included/excluded based...

  17. EnviroAtlas - Phoenix, AZ - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Phoenix, AZ EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  18. SU-E-J-132: Automated Segmentation with Post-Registration Atlas Selection Based On Mutual Information

    Energy Technology Data Exchange (ETDEWEB)

    Ren, X; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China); Sharp, G [Massachusetts General Hospital, Boston, MA (United States)

    2015-06-15

    Purpose: The delineation of targets and organs-at-risk is a critical step during image-guided radiation therapy, for which manual contouring is the gold standard. However, it is often time-consuming and may suffer from intra- and inter-rater variability. The purpose of this work is to investigate automated segmentation. Methods: The automatic segmentation here is based on mutual information (MI), with the atlas from the Public Domain Database for Computational Anatomy (PDDCA) with manually drawn contours. Using the Dice coefficient (DC) as the quantitative measure of segmentation accuracy, we perform leave-one-out cross-validations for all PDDCA images sequentially, during which the other images are registered to each chosen image and the DC is computed between the registered contour and the ground truth. Meanwhile, six strategies, including MI, are compared for measuring image similarity, with MI found to be the best. Then, given a target image to be segmented and an atlas, automatic segmentation consists of: (a) the affine registration step for image positioning; (b) the active demons registration method to register the atlas to the target image; (c) the computation of MI values between the deformed atlas and the target image; (d) the weighted image fusion of the three deformed atlas images with the highest MI values to form the segmented contour. Results: MI was found to be the best among the six studied strategies in the sense that it had the highest positive correlation between the similarity measure (e.g., MI values) and the DC. For automated segmentation, the weighted image fusion of the three deformed atlas images with the highest MI values provided the highest DC among four proposed strategies. Conclusion: MI has the highest correlation with DC, and is therefore an appropriate choice for post-registration atlas selection in atlas-based segmentation. Xuhua Ren and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
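
    Steps (c) and (d) above can be sketched compactly under simplifying assumptions: MI is estimated from a joint intensity histogram, the label maps are binary, and the three highest-MI atlases are fused with MI-proportional weights. The registration steps (a) and (b) are assumed to have been performed elsewhere, and the function and variable names are illustrative.

      import numpy as np

      def mutual_information(img_a, img_b, bins=32):
          """Mutual information estimated from a joint intensity histogram."""
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      def fuse_top_atlases(target, deformed_atlases, deformed_labels, k=3):
          """Select the k deformed atlases with the highest MI to the target and
          fuse their binary label maps with MI-proportional weights."""
          mi = np.array([mutual_information(target, a) for a in deformed_atlases])
          top = np.argsort(mi)[-k:]
          w = mi[top] / mi[top].sum()
          fused = sum(wi * deformed_labels[i] for wi, i in zip(w, top))
          return (fused >= 0.5).astype(np.uint8), mi

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          target = rng.random((32, 32))
          atlases = [target + rng.normal(0.0, s, target.shape) for s in (0.05, 0.1, 0.2, 0.8)]
          labels = [np.ones_like(target, dtype=np.uint8) for _ in atlases]
          seg, mi = fuse_top_atlases(target, atlases, labels, k=3)
          print("MI values:", np.round(mi, 3), "| fused foreground voxels:", int(seg.sum()))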

  19. Analysis Facility infrastructure (TIER3) for ATLAS High Energy physics experiment

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez de la Hoz, S.; March, L.; Ros, E.; Sanchez, J.; Amoros, G.; Fassi, F.; Fernandez, A.; Kaci, M.; Lamas, A.; Salt, J.

    2007-07-01

    The ATLAS project has been asked to define the scope and role of Tier-3 resources (facilities or centres) within the existing ATLAS computing model, activities and facilities. This document attempts to address these questions by describing Tier-3 resources generally, and their relationship to the ATLAS Software and Computing Project. Originally the tiered computing model came out of the MONARC work (see http://monarc.web.cern.ch/MONARC/) and was predicated upon the network being a scarce resource. In this model the tiered hierarchy ranged from the Tier-0 (CERN) down to the desktop or workstation (Tier-3). The focus on defining the roles of each tiered component has evolved, with the initial emphasis on the definition and roles of the Tier-0 (CERN) and Tier-1s (national centres). The various LHC projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2s (regional centres) as part of their projects. Tier-3s, on the other hand, have (implicitly and sometimes explicitly) been defined as whatever an institution could construct to support its physics goals using institutional and otherwise leveraged resources, and therefore have not been considered to be part of the official ATLAS Research Program computing resources nor under its control, meaning there is no formal MOU process to designate sites as Tier-3s and no formal control of the program over the Tier-3 resources. Tier-3s are the responsibility of individual institutions to define, fund, deploy and support. However, having noted this, we must also recognize that Tier-3s must exist and will have implications for how our computing model should support ATLAS physicists. Tier-3 users will want to access data and simulations and will want to enable their Tier-3 resources to support their analysis and simulation work. Tier-3s are an important resource for physicists to analyze LHC (Large Hadron Collider) data. This document will define how Tier-3s should best interact with the ATLAS computing model, detail the

  20. Common atlas format and 3D brain atlas reconstructor: infrastructure for constructing 3D brain atlases.

    Science.gov (United States)

    Majka, Piotr; Kublik, Ewa; Furga, Grzegorz; Wójcik, Daniel Krzysztof

    2012-04-01

    One of the challenges of modern neuroscience is integrating voluminous data of different modalities derived from a variety of specimens. This task requires a common spatial framework that can be provided by brain atlases. The first atlases were limited to two-dimensional presentation of structural data. Recently, attempts at creating 3D atlases have been made to offer navigation within non-standard anatomical planes and improve the capability of localizing different types of data within the brain volume. The 3D atlases available so far have been created using frameworks which make it difficult for other researchers to replicate the results. To facilitate reproducible research and data sharing in the field we propose an SVG-based Common Atlas Format (CAF) to store 2D atlas delineations or other compatible data, and 3D Brain Atlas Reconstructor (3dBAR), software dedicated to automated reconstruction of three-dimensional brain structures from 2D atlas data. The basic functionality is provided by (1) a set of parsers which translate various atlases from a number of formats into the CAF, and (2) a module generating 3D models from CAF datasets. The whole reconstruction process is reproducible and can easily be configured, tracked and reviewed, which facilitates fixing errors. Manual corrections can be made when automatic reconstruction is not sufficient. The software was designed to simplify interoperability with other neuroinformatics tools by using open file formats. The content can easily be exchanged at any stage of data processing. The framework allows for the addition of new public or proprietary content.
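
    The reconstruction step, going from 2D delineations to a 3D structure, can be illustrated with a toy sketch that rasterises per-slice polygon contours (a stand-in for CAF delineations) onto a grid and stacks them into a voxel volume. This is not 3dBAR code; matplotlib's Path class is used only for the point-in-polygon test.

      import numpy as np
      from matplotlib.path import Path

      def stack_delineations(slices, grid=(64, 64)):
          """Toy reconstruction of a 3D structure from per-slice 2D delineations.

          slices : list of (N_i, 2) vertex arrays, one closed contour per atlas
                   plane, in a common 2D coordinate system.
          Returns a boolean volume of shape (len(slices), *grid).
          """
          ny, nx = grid
          ys, xs = np.meshgrid(np.linspace(0.0, 1.0, ny),
                               np.linspace(0.0, 1.0, nx), indexing="ij")
          points = np.column_stack([xs.ravel(), ys.ravel()])
          volume = np.zeros((len(slices), ny, nx), dtype=bool)
          for z, verts in enumerate(slices):
              volume[z] = Path(verts).contains_points(points).reshape(ny, nx)
          return volume

      if __name__ == "__main__":
          # Three circular delineations of shrinking radius, mimicking serial sections
          theta = np.linspace(0.0, 2.0 * np.pi, 100)
          slices = [np.column_stack([0.5 + r * np.cos(theta), 0.5 + r * np.sin(theta)])
                    for r in (0.4, 0.3, 0.2)]
          vol = stack_delineations(slices)
          print("voxels per slice:", vol.sum(axis=(1, 2)))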

  1. Evaluation of atlas-based auto-segmentation software in prostate cancer patients

    Energy Technology Data Exchange (ETDEWEB)

    Greenham, Stuart, E-mail: stuart.greenham@ncahs.health.nsw.gov.au [Department of Radiation Oncology, North Coast Cancer Institute, Coffs Harbour Health Campus, Coffs Harbour, New South Wales (Australia); Dean, Jenna [North Coast Cancer Institute, Port Macquarie Health Campus, Port Macquarie, New South Wales (Australia); Fu, Cheuk Kuen Kenneth [North Coast Cancer Institute, Lismore Health Campus, Lismore, New South Wales (Australia); Goman, Joanne [Department of Radiation Oncology, Calvary Mater Newcastle, Newcastle, New South Wales (Australia); Mulligan, Jeremy [North Coast Cancer Institute, Port Macquarie Health Campus, Port Macquarie, New South Wales (Australia); Tune, Deanna [Department of Radiation Oncology, North Coast Cancer Institute, Coffs Harbour Health Campus, Coffs Harbour, New South Wales (Australia); Sampson, David [North Coast Cancer Institute, Lismore Health Campus, Lismore, New South Wales (Australia); Westhuyzen, Justin [Department of Radiation Oncology, North Coast Cancer Institute, Coffs Harbour Health Campus, Coffs Harbour, New South Wales (Australia); McKay, Michael [North Coast Cancer Institute, Lismore Health Campus, Lismore, New South Wales (Australia); Department of Radiation Oncology, North Coast Cancer Institute, Coffs Harbour Health Campus, Coffs Harbour, New South Wales (Australia)

    2014-09-15

    The performance and limitations of an atlas-based auto-segmentation software package (ABAS; Elekta Inc.) were evaluated using male pelvic anatomy as the area of interest. Contours from 10 prostate patients were selected to create atlases in ABAS. The contoured regions of interest were created manually to align with published guidelines and included the prostate, bladder, rectum, femoral heads and external patient contour. Twenty-four clinically treated prostate patients were auto-contoured using a randomised selection of two, four, six, eight or ten atlases. The concordance between the manually drawn and computer-generated contours was evaluated statistically using Pearson's product–moment correlation coefficient (r) and clinically in a validated qualitative evaluation. In the latter evaluation, six radiation therapists classified the degree of agreement for each structure using seven clinically appropriate categories. The ABAS software generated clinically acceptable contours for the bladder, rectum, femoral heads and external patient contour. For these structures, ABAS-generated volumes were highly correlated with the manually drawn ‘as treated’ volumes; for four atlases, for example, bladder r = 0.988 (P < 0.001), rectum r = 0.739 (P < 0.001) and left femoral head r = 0.560 (P < 0.001). The poorest results were seen for the prostate (r = 0.401, P < 0.05, four atlases); however, this was attributed to the comparison prostate volume being contoured on magnetic resonance imaging (MRI) rather than computed tomography (CT) data. For all structures, increasing the number of atlases did not consistently improve accuracy. ABAS-generated contours are clinically useful for a range of structures in the male pelvis. Clinically appropriate volumes were created, but editing of some contours was inevitably required. The ideal number of atlases needed to improve the automatically generated contours is yet to be determined.
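
    A minimal sketch of the statistical comparison used above (Pearson's correlation between manually drawn and ABAS-generated structure volumes) is shown below; the volume values are invented for illustration and are not study data.

      import numpy as np

      def volume_concordance(manual_cc, auto_cc):
          """Pearson product-moment correlation between manual and auto-segmented volumes."""
          return np.corrcoef(manual_cc, auto_cc)[0, 1]

      if __name__ == "__main__":
          # Illustrative bladder volumes (cc) for five patients
          manual = np.array([310.0, 295.5, 402.1, 288.7, 350.3])
          auto = np.array([305.2, 301.0, 396.8, 280.4, 355.9])
          print(f"r = {volume_concordance(manual, auto):.3f}")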

  2. ATLAS experiment : mapping the secrets of the universe

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    This 4 page color brochure describes ATLAS and the LHC, the ATLAS inner detector, calorimeters, muon spectrometer, magnet system, a short definition of the terms "particles," "dark matter," "mass," "antimatter." It also explains the ATLAS collaboration and provides the ATLAS website address with some images of the detector and the ATLAS collaboration at work.

  3. Event Reconstruction Algorithms for the ATLAS Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca-Martin, T.; /CERN; Abolins, M.; /Michigan State U.; Adragna, P.; /Queen Mary, U. of London; Aleksandrov, E.; /Dubna, JINR; Aleksandrov, I.; /Dubna, JINR; Amorim, A.; /Lisbon, LIFEP; Anderson, K.; /Chicago U., EFI; Anduaga, X.; /La Plata U.; Aracena, I.; /SLAC; Asquith, L.; /University Coll. London; Avolio, G.; /CERN; Backlund, S.; /CERN; Badescu, E.; /Bucharest, IFIN-HH; Baines, J.; /Rutherford; Barria, P.; /Rome U. /INFN, Rome; Bartoldus, R.; /SLAC; Batreanu, S.; /Bucharest, IFIN-HH /CERN; Beck, H.P.; /Bern U.; Bee, C.; /Marseille, CPPM; Bell, P.; /Manchester U.; Bell, W.H.; /Glasgow U. /Pavia U. /INFN, Pavia /Regina U. /CERN /Annecy, LAPP /Paris, IN2P3 /Royal Holloway, U. of London /Napoli Seconda U. /INFN, Naples /Argonne /CERN /UC, Irvine /Barcelona, IFAE /Barcelona, Autonoma U. /CERN /Montreal U. /CERN /Glasgow U. /Michigan State U. /Bucharest, IFIN-HH /Napoli Seconda U. /INFN, Naples /New York U. /Barcelona, IFAE /Barcelona, Autonoma U. /Salento U. /INFN, Lecce /Pisa U. /INFN, Pisa /Bucharest, IFIN-HH /UC, Irvine /CERN /Glasgow U. /INFN, Genoa /Genoa U. /Lisbon, LIFEP /Napoli Seconda U. /INFN, Naples /UC, Irvine /Valencia U. /Rio de Janeiro Federal U. /University Coll. London /New York U.; /more authors..

    2011-11-09

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real-time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10^9 interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.

  4. The ATLAS data management software engineering process

    Science.gov (United States)

    Lassnig, M.; Garonne, V.; Stewart, G. A.; Barisits, M.; Beermann, T.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also highlight the social aspects of an environment where every action is subject to detailed scrutiny.
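
    The test-driven style described above can be illustrated with a deliberately small example: a unit test is written against a tiny replica-catalogue class and kept passing as the code evolves. The class, the dataset names and the site names below are hypothetical placeholders for the workflow, not Rucio code or its API.

      import unittest

      # A minimal, hypothetical replica catalogue used only to illustrate the
      # test-first workflow; it is not part of Rucio.
      class ReplicaCatalogue:
          def __init__(self):
              self._replicas = {}

          def add_replica(self, dataset, site):
              self._replicas.setdefault(dataset, set()).add(site)

          def list_replicas(self, dataset):
              return sorted(self._replicas.get(dataset, set()))

      class TestReplicaCatalogue(unittest.TestCase):
          # In a test-driven workflow this test is written, and fails, before the
          # implementation above exists.
          def test_add_and_list(self):
              cat = ReplicaCatalogue()
              cat.add_replica("data12_8TeV.A", "SITE_ALPHA")
              cat.add_replica("data12_8TeV.A", "SITE_BETA")
              self.assertEqual(cat.list_replicas("data12_8TeV.A"),
                               ["SITE_ALPHA", "SITE_BETA"])

          def test_unknown_dataset_is_empty(self):
              self.assertEqual(ReplicaCatalogue().list_replicas("unknown"), [])

      if __name__ == "__main__":
          unittest.main()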

  5. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...

  6. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Herr, J.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. The WLAP model is spreading. This summer, the CERN's High School Teachers program has used WLAP's system to record several physics lectures directed toward a broad audience. And a new project called MScribe, which is essentially the WLAP system coupled with an infrared tracking camera, is being used by the University of Michigan to record several University courses this academic year. All lectures can be viewed on any major platform with any common internet browser...

  7. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year’s discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. (On the surface, no restricted access.) The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80 m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber; and discussi...

  8. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    Energy Technology Data Exchange (ETDEWEB)

    Campana, S, E-mail: Simone.Campana@cern.c [CERN IT/GS, Geneva (Switzerland)

    2010-04-01

    The ATLAS experiment at CERN developed an automated system for the distribution of simulated and detector data. This system, which partially consists of various ATLAS-specific services, strongly relies on the WLCG infrastructure at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution will describe the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience of distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the computing model, for several days in a row, concurrently with other LHC experiment activities. This dedicated effort led to consequent improvements of the ATLAS and WLCG services and to daily operation activities throughout the last year. The system has been delivering many hundreds of terabytes of simulated data to WLCG tiers and, since the summer of 2008, more than two petabytes of cosmic and beam data.

  9. Neonatal atlas construction using sparse representation.

    Science.gov (United States)

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2014-09-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, anatomical feature constraints on the group structure of representations, as well as the overlapping of neighboring patches, are imposed to ensure anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast to construct a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas also demonstrates superior performance when applied to spatially normalizing three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases.
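
    The patch-level idea can be sketched as follows: corresponding patches from the aligned images form a small dictionary, a sparse coefficient vector is estimated by iterative soft-thresholding, and the atlas patch is built as the resulting weighted combination. This is a simplified illustration under stated assumptions (a mean-patch reference and non-negative re-weighting), not the authors' method or code.

      import numpy as np

      def ista(D, y, lam=0.1, n_iter=200):
          """Sparse coding of y over dictionary D (columns = aligned image patches)
          via iterative soft-thresholding: min_x 0.5*||D x - y||^2 + lam*||x||_1."""
          L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
          x = np.zeros(D.shape[1])
          for _ in range(n_iter):
              grad = D.T @ (D @ x - y)
              x = x - grad / L
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
          return x

      def build_atlas_patch(aligned_patches, reference_patch, lam=0.1):
          """Fuse corresponding patches from aligned images into one atlas patch,
          weighting each image by its (non-negative) sparse coefficient."""
          D = np.stack(aligned_patches, axis=1)          # shape (patch_len, n_images)
          coeffs = ista(D, reference_patch, lam=lam)
          w = np.maximum(coeffs, 0.0)
          if w.sum() == 0.0:
              w = np.ones(len(aligned_patches))          # fall back to a plain average
          w = w / w.sum()
          return D @ w, coeffs

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          truth = rng.random(25)                                   # 5x5 patch, flattened
          patches = [truth + rng.normal(0.0, s, 25) for s in (0.02, 0.05, 0.5, 0.6)]
          atlas_patch, coeffs = build_atlas_patch(patches, np.mean(patches, axis=0))
          print("sparse coefficients:", np.round(coeffs, 3))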

  10. 27 CFR 9.140 - Atlas Peak.

    Science.gov (United States)

    2010-04-01

    27 CFR (Alcohol, Tobacco Products and Firearms), Department of the Treasury, Liquors, American Viticultural Areas, Approved American Viticultural Areas, § 9.140 Atlas Peak. (a) Name. The name of the viticultural area described in this section is “Atlas Peak.”...

  11. World Ocean Atlas 2005, Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  12. ATLAS recognises its best suppliers

    CERN Document Server

    Jenni, P

    The ATLAS Collaboration has recently rewarded two of its suppliers for the construction of major detector components fabricated in Japan. The ATLAS Supplier Award, in recognition of excellent supplier performance, was presented to Kawasaki Heavy Industries on 2 September 2002 during a ceremony in Hall 180, while Toshiba Corporation had received the award two months earlier at its headquarters in Japan. The ATLAS experiment will become a reality thanks to a large international collaboration partnership. The industrial suppliers of components from all over the world play a major role in the construction of this gigantic jigsaw for the LHC. And sometimes they perform so well that their work deserves special recognition. This is the case for Kawasaki Heavy Industries and Toshiba Corporation, producers of the Liquid Argon Barrel Cryostat and of the Superconducting Central Solenoid, respectively. With these awards, the ATLAS Collaboration wants to congratulate Kawasaki and Toshiba for fulfilling the hi...

  13. Nuclear Receptor Signaling Atlas (NURSA)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Nuclear Receptor Signaling Atlas (NURSA) is designed to foster the development of a comprehensive understanding of the structure, function, and role in disease...

  14. ATLAS Civil Engineering Point 1

    CERN Multimedia

    2001-01-01

    Different phases of realisation at Point 1, the zone of the ATLAS experiment. 14-02-2001: installing anchorages, insulation and scaffolding at UX 15. 18-04-2001: concreting the arch and placing the metal reinforcements at UX 15.

  15. Wheels lining up for ATLAS

    CERN Multimedia

    2003-01-01

    On 30 October, the mechanical test assembly of the central barrel of the ATLAS tile hadronic calorimeter was completed in Building 185. It is the second wheel of the Tilecal to be completely assembled this year.

  16. World Ocean Atlas 2005, Salinity

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  18. Two new wheels for ATLAS

    CERN Multimedia

    2002-01-01

    Juergen Zimmer (Max Planck Institute), Roy Langstaff (TRIUMF/Victoria) and Sergej Kakurin (JINR), in front of one of the completed wheels of the ATLAS Hadronic End Cap Calorimeter. A decade of careful preparation and construction by groups in three continents is nearing completion with the assembly of two of the four 4 m diameter wheels required for the ATLAS Hadronic End Cap Calorimeter. The first two wheels have successfully passed all their mechanical and electrical tests, and have been rotated on schedule into the vertical position required in the experiment. 'This is an important milestone in the completion of the ATLAS End Cap Calorimetry' explains Chris Oram, who heads the Hadronic End Cap Calorimeter group. Like most experiments at particle colliders, ATLAS consists of several layers of detectors in the form of a 'barrel' and two 'end caps'. The Hadronic Calorimeter layer, which measures the energies of particles such as protons and pions, uses two techniques. The barrel part (Tile Calorimeter) cons...

  19. ATLAS online data quality monitoring

    CERN Document Server

    Cuenca Almenar, C; The ATLAS collaboration; Hadavand, H; Ilchenko, Y; Kolos, S; Slagle, K; Taffard, A

    2010-01-01

    Every minute the ATLAS detector is taking data, the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles millions of histogram updates coming from thousands of applications, executes over forty thousand advanced data quality checks for a subset of those histograms, and displays histograms and the results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. The online data quality monitoring system has been of great help in providing quick feedback to the subsystems about the functioning and performance of the different parts of ATLAS, by providing a configurable, easy and fast visualization of all this information. The Data Quality Monitoring Display (DQMD) is a visualization tool for the automatic data quality assessment of the ATLAS experiment. It is the interface through which the shift crew and experts can validate the quality of the data being recorded or processed, be warned of problems related to data quality, an...
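
    An automated check of this kind can be sketched as a comparison of a monitored histogram against a reference, with a chi-square per bin mapped onto a traffic-light flag. The thresholds and the Poisson variance model below are illustrative assumptions, not the actual ATLAS data quality algorithms.

      import numpy as np

      def histogram_check(observed, reference, yellow=1.5, red=3.0):
          """Flag a monitored histogram against a reference using chi-square per bin."""
          obs = np.asarray(observed, dtype=float)
          ref = np.asarray(reference, dtype=float)
          ref_scaled = ref * obs.sum() / ref.sum()       # normalise to the same statistics
          var = np.maximum(ref_scaled, 1.0)              # Poisson variance, floored at 1
          chi2_ndf = np.sum((obs - ref_scaled) ** 2 / var) / len(obs)
          if chi2_ndf < yellow:
              return "GREEN", chi2_ndf
          if chi2_ndf < red:
              return "YELLOW", chi2_ndf
          return "RED", chi2_ndf

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          reference = rng.poisson(200, size=50)
          good = rng.poisson(200, size=50)
          bad = rng.poisson(200, size=50)
          bad[10:15] = 0                                  # simulate a dead detector region
          print(histogram_check(good, reference))
          print(histogram_check(bad, reference))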

  20. Dartmouth Atlas of Health Care

    Data.gov (United States)

    U.S. Department of Health & Human Services — For more than 20 years, the Dartmouth Atlas Project has documented glaring variations in how medical resources are distributed and used in the United States. The...