WorldWideScience

Sample records for atlas computers

  1. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk or tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed worldwide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing with HammerCloud, to automatic exclusion from production or analysis activities.
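
    The following is a minimal, hypothetical sketch of the kind of exclusion decision described above: given recent functional-test results per site, a site is flagged for exclusion once a number of consecutive tests fail. The data structures, threshold and site names are illustrative assumptions, not the actual ADC tooling.

      # Hypothetical sketch of automatic site exclusion from functional-test results.
      # Data layout and threshold are illustrative, not the real ADC implementation.
      from collections import defaultdict

      CONSECUTIVE_FAILURES_TO_EXCLUDE = 3  # assumed threshold

      def sites_to_exclude(test_results):
          """test_results: iterable of (site, timestamp, passed) tuples."""
          history = defaultdict(list)
          for site, _ts, passed in sorted(test_results, key=lambda r: r[1]):
              history[site].append(passed)
          excluded = set()
          for site, outcomes in history.items():
              recent = outcomes[-CONSECUTIVE_FAILURES_TO_EXCLUDE:]
              if len(recent) == CONSECUTIVE_FAILURES_TO_EXCLUDE and not any(recent):
                  excluded.add(site)
          return excluded

      print(sites_to_exclude([("SITE_A", 1, False), ("SITE_A", 2, False),
                              ("SITE_A", 3, False), ("SITE_B", 1, True)]))  # {'SITE_A'}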

  2. The ATLAS Computing Model

    CERN Document Server

    Adams, D; Bee, C P; Hawkings, R; Jarp, S; Jones, R; Malon, D; Poggioli, L; Poulard, G; Quarrie, D; Wenaus, T

    2005-01-01

    The ATLAS Offline Computing Model is described. The main emphasis is on the steady state, when normal running is established. The data flow from the output of the ATLAS trigger system through processing and analysis stages is analysed, in order to estimate the computing resources, in terms of CPU power, disk and tape storage and network bandwidth, which will be necessary to guarantee speedy access to ATLAS data to all members of the Collaboration. Data Challenges and the commissioning runs are used to prototype the Computing Model and test the infrastructure before the start of LHC operation. The initial planning for the early stages of data-taking is also presented. In this phase, a greater degree of access to the unprocessed or partially processed raw data is envisaged.

  3. Data challenges in ATLAS computing

    CERN Document Server

    Vaniachine, A

    2003-01-01

    ATLAS computing is steadily progressing towards a highly functional software suite and a worldwide computing model which gives all of ATLAS equal, and equal-quality, access to ATLAS data. A key component in the period before the LHC is a series of Data Challenges of increasing scope and complexity. The goals of the ATLAS Data Challenges are to validate the computing model, the complete software suite and the data model, and to ensure the correctness of the technical choices to be made. We are committed to 'common solutions' and look forward to the LHC Computing Grid being the vehicle for providing these in an effective way. In close collaboration between the Grid and Data Challenge communities, ATLAS is testing large-scale testbed prototypes around the world, deploying prototype components to integrate and test Grid software in a production environment, and running DC1 production at 39 'tier' centers in 18 countries on four continents.

  4. New ATLAS Software & Computing Organization

    CERN Multimedia

    Barberis, D

    Following the election by the ATLAS Collaboration Board of Dario Barberis (Genoa University/INFN) as Computing Coordinator and David Quarrie (LBNL) as Software Project Leader, it was considered necessary to modify the organization of the ATLAS Software & Computing ("S&C") project. The new organization is based upon the following principles: separation of the responsibilities for computing management from those of software development, with the appointment of a Computing Coordinator and a Software Project Leader who are both members of the Executive Board; hierarchical structure of responsibilities and reporting lines; coordination at all levels between TDAQ, S&C and Physics working groups; integration of the subdetector software development groups with the central S&C organization. A schematic diagram of the new organization can be seen in Fig. 1 (new ATLAS Software & Computing organization). Two Management Boards will help the Computing Coordinator and the Software Project...

  5. ATLAS computing: Technical Design Report

    OpenAIRE

    ATLAS Collaboration; Åkesson, Torsten; Eerola, Paula; Hedberg, Vincent; Jarlskog, Göran; Lundberg, Björn; Mjörnmark, Ulf; Smirnova, Oxana; Almehed, Sverker; et al.

    2005-01-01

    The ATLAS Computing Model embraces the Grid paradigm and a high degree of decentralization and sharing of computing resources. The required level of computing resources means that off-site facilities will be vital to the operation of ATLAS in a way that was not the case for previous CERN-based experiments. The primary event processing occurs at CERN in a Tier-0 facility. The RAW data is archived at CERN and copied (along with the primary processed data) to the Tier-1 facilities around ...

  6. Volunteer Computing Experience with ATLAS@Home

    CERN Document Server

    Cameron, David; The ATLAS collaboration; Bourdarios, Claire; Lançon, Eric

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease the deployment on, for example, university clusters, using multiple cores inside one job to reduce the memory requirements, and running different types of workload such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  7. Volunteer Computing Experience with ATLAS@Home

    Science.gov (United States)

    Adam-Bourdarios, C.; Bianchi, R.; Cameron, D.; Filipčič, A.; Isacchini, G.; Lançon, E.; Wu, W.; ATLAS Collaboration

    2017-10-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers’ resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease the deployment on, for example, university clusters, using multiple cores inside one task to reduce the memory requirements, and running different types of workload such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  8. Volunteer computing experience with ATLAS@Home

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068610; The ATLAS collaboration; Bianchi, Riccardo-Maria; Cameron, David; Filipčič, Andrej; Lançon, Eric; Wu, Wenjing

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers’ resources make up a sizeable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease the deployment on, for example, university clusters, using multiple cores inside one task to reduce the memory requirements, and running different types of workload such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  9. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...
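
    As an illustration of the kind of query-driven metric production mentioned above, the sketch below aggregates wall-time per site over the last day using the standard elasticsearch-py client. The cluster endpoint, index name and field names are assumptions for the example, not the actual ADC analytics schema.

      # Hypothetical aggregation query against an analytics cluster (elasticsearch-py).
      # Endpoint, index and field names are placeholders.
      from elasticsearch import Elasticsearch

      es = Elasticsearch(["https://analytics.example.org:9200"])
      resp = es.search(index="jobs", body={
          "query": {"range": {"timestamp": {"gte": "now-24h"}}},
          "aggs": {"per_site": {"terms": {"field": "site"},
                                "aggs": {"walltime": {"sum": {"field": "wall_seconds"}}}}},
          "size": 0,
      })
      for bucket in resp["aggregations"]["per_site"]["buckets"]:
          print(bucket["key"], bucket["walltime"]["value"])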

  10. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  11. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  12. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  13. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources are discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  14. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IAAS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  15. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible utilization of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...
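
    To make the lifetime-based data management idea concrete, here is a small illustrative sketch of a cleanup decision driven by a dataset lifetime. The record layout, grace period and pinning flag are assumptions for the example and do not reflect Rucio's actual policy or API.

      # Illustrative lifetime-based cleanup decision; fields and policy are assumed.
      from datetime import datetime, timedelta

      GRACE_PERIOD = timedelta(days=14)  # assumed grace period after expiry

      def is_eligible_for_deletion(dataset, now=None):
          """dataset: dict with 'created', 'lifetime_days' and 'replicas_pinned'."""
          now = now or datetime.utcnow()
          expiry = dataset["created"] + timedelta(days=dataset["lifetime_days"])
          return now > expiry + GRACE_PERIOD and not dataset["replicas_pinned"]

      example = {"created": datetime(2015, 1, 1), "lifetime_days": 90,
                 "replicas_pinned": False}
      print(is_eligible_for_deletion(example, now=datetime(2015, 6, 1)))  # True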

  16. Exploiting Virtualization and Cloud Computing in ATLAS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    This work will present the current status of the Virtualization and Cloud Computing R&D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a "cloud factory" for managing cloud VM instances. Ne...

  17. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  18. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing centre, was in 2014 the highest ranked European system on TOP500, also featuring GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular a custom made integration to the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  19. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between the experiment-specific resources used and the physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  20. Consolidation of cloud computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration

    2017-10-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.

  1. Consolidation of cloud computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall; Giordano, Domenico

    2017-01-01

    Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in resp...

  2. System administration of ATLAS TDAQ computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Adeel-Ur-Rehman, A [National Centre for Physics, Islamabad (Pakistan); Bujor, F; Dumitrescu, A; Dumitru, I; Leahu, M; Valsan, L [Politehnica University of Bucharest (Romania); Benes, J [Zapadoceska Univerzita v Plzni (Czech Republic); Caramarcu, C [National Institute of Physics and Nuclear Engineering (Romania); Dobson, M; Unel, G [University of California at Irvine (United States); Oreshkin, A [St. Petersburg Nuclear Physics Institute (Russian Federation); Popov, D [Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany); Zaytsev, A, E-mail: Alexandr.Zaytsev@cern.c [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation)

    2010-04-01

    This contribution gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the High Level Trigger, Event Filter and other subsystems of the ATLAS detector operating at the LHC collider at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, about 40 multi-screen user interface machines installed in the control rooms, and various hardware and service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The software distribution requirements are met by a two-level NFS-based solution. Hardware and network monitoring of ATLAS TDAQ is based on NAGIOS, with a MySQL cluster behind it for accounting and storing the collected monitoring data, IPMI tools, CERN LANDB and dedicated tools developed by the group, e.g. ConfdbUI. The user management schema deployed in the TDAQ environment is founded on an LDAP-based authentication and role management system. External access to the ATLAS online computing facilities is provided by means of gateways equipped with an accounting system. Current activities of the group include deployment of a centralized storage system; testing and validating hardware solutions for future use within the ATLAS TDAQ environment, including new multi-core blade servers; developing GUI tools for user authentication and role management; testing and validating 64-bit OS; and upgrading the existing TDAQ hardware components, authentication servers and gateways.
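
    To illustrate an LDAP-based role lookup of the kind described above, the sketch below uses the python-ldap client. The server URI, base DN, filter attribute and role attribute are placeholders for the example and are not the actual TDAQ configuration.

      # Hypothetical LDAP role lookup; server, DN and attributes are placeholders.
      import ldap

      conn = ldap.initialize("ldap://ldap.example.org")
      conn.simple_bind_s()  # anonymous bind, just for the example

      def roles_for_user(username):
          results = conn.search_s(
              "ou=roles,dc=example,dc=org",      # assumed base DN
              ldap.SCOPE_SUBTREE,
              "(memberUid=%s)" % username,       # assumed membership attribute
              ["cn"],
          )
          return [attrs["cn"][0].decode() for _dn, attrs in results]

      print(roles_for_user("someuser"))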

  3. Automating usability of ATLAS Distributed Computing resources

    CERN Document Server

    "Tupputi, S A; The ATLAS collaboration

    2013-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective a crucial case is the automatic exclusion/recovery of ATLAS computing sites' storage resources, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources which feature non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, employing an inference algorithm which processes the site-by-site outcomes of SAM (Service Availability Monitoring) SRM tests. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites.

  4. ATLAS and LHC computing on CRAY

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large CRAY systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  5. ATLAS and LHC computing on CRAY

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00297774; The ATLAS collaboration; Haug, Sigve

    2017-01-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  6. ATLAS and LHC computing on CRAY

    Science.gov (United States)

    Sciacca, F. G.; Haug, S.; ATLAS Collaboration

    2017-10-01

    Access and exploitation of large scale computing resources, such as those offered by general purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG T2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems do not only offer very efficient hardware, cooling and highly competent operators, but also have large backfill potentials due to size and multidisciplinary usage and potential gains due to economy at scale. Technical solutions, performance, expected return and future plans are discussed.

  7. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00224309; The ATLAS collaboration; Cordeiro, Cristovao; Di Girolamo, Alessandro; Hover, John; Kouba, Tomas; Love, Peter; Mcnab, Andrew; Schovancova, Jaroslava; Sobie, Randall

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  8. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services, etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system in which data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R is used for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...
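
    As a hedged illustration of the indexing step mentioned above, the sketch below pushes a few aggregated rows into ElasticSearch with the bulk helper so they can feed dashboards. The index name and document fields are assumptions for the example, not the real ADC analytics schema.

      # Illustrative bulk indexing of aggregated records into ElasticSearch.
      # Endpoint, index and fields are placeholders.
      from elasticsearch import Elasticsearch, helpers

      es = Elasticsearch(["https://analytics.example.org:9200"])

      aggregated_rows = [
          {"site": "SITE_A", "day": "2015-06-01", "completed_jobs": 1200, "failed_jobs": 37},
          {"site": "SITE_B", "day": "2015-06-01", "completed_jobs": 800, "failed_jobs": 5},
      ]
      actions = ({"_index": "job_summary", "_source": row} for row in aggregated_rows)
      helpers.bulk(es, actions)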

  9. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between the experiment-specific resources used and the physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and unified storage protocol declarations required for PanDA Pilot site movers, among others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.
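
    The sketch below gives a hedged idea of how a client might read site topology from an AGIS-like REST service as described above. The URL pattern and the JSON fields ("name", "state") are assumptions for the example and are not guaranteed to match the real AGIS API.

      # Hypothetical client reading queue topology from an AGIS-like REST service.
      # URL and JSON fields are assumed, not the real API.
      import requests

      def fetch_active_queues(base_url="https://agis.example.org"):
          resp = requests.get(base_url + "/request/pandaqueue/query/list/?json",
                              timeout=30)
          resp.raise_for_status()
          queues = resp.json()
          # Keep only queues flagged as active in this illustrative schema.
          return [q["name"] for q in queues if q.get("state") == "ACTIVE"]

      if __name__ == "__main__":
          print(fetch_active_queues())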

  10. ATLAS computing on Swiss Cloud SWITCHengines

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00215485; The ATLAS collaboration; Sciacca, Gianfranco

    2017-01-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines. SWITCHengines is a new infrastructure-as-a-service offering provided to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance are general, financial considerations and policies, on which we also report, are country specific.

  11. ATLAS computing on Swiss Cloud SWITCHengines

    Science.gov (United States)

    Haug, S.; Sciacca, F. G.; ATLAS Collaboration

    2017-10-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running simulation tasks for the ATLAS experiment on SWITCHengines. SWITCHengines is a new infrastructure-as-a-service offering provided to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance are general, financial considerations and policies, on which we also report, are country specific.

  12. ATLAS Computing on the Swiss Cloud SWITCHengines

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00215485; The ATLAS collaboration; Sciacca, Gianfranco

    2016-01-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved running ATLAS production on SWITCHengines. SWITCHengines is the new cloud infrastructure offered to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance are general, financial considerations and policies, which we also report on, are country specific.

  13. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocol declarations required for PanDA Pilot site movers, and others.

  14. Integrating network awareness in ATLAS distributed computing

    CERN Document Server

    De, K; The ATLAS collaboration; Klimentov, A; Maeno, T; Mckee, S; Nilsson, P; Petrosyan, A; Vukotic, I; Wenaus, T

    2014-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networks hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networking and data flow performance further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management.

  15. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, with respect to a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  16. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated Nordugrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware with its several new technologies provides new possibilities in development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between the sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  17. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Bourdarios, Claire; Filipcic, Andrej; Lancon, Eric; Wu, Wenjing

    2015-01-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum statistics possible for Monte-Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  18. Common accounting system for monitoring the ATLAS Distributed Computing resources

    CERN Document Server

    Karavakis, E; The ATLAS collaboration; Campana, S; Gayazov, S; Jezequel, S; Saiz, P; Sargsyan, L; Schovancova, J; Ueda, I

    2014-01-01

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  19. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    Science.gov (United States)

    Adam, C.; Barberis, D.; Crépé-Renaudin, S.; De, K.; Fassi, F.; Stradling, A.; Svatos, M.; Vartapetian, A.; Wolters, H.

    2017-10-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates communication between the ADC experts team and the other ADC shifters. These include the Distributed Analysis Support Team (DAST), which is the first point of contact for addressing all distributed analysis questions, and the ATLAS Distributed Computing Shifters (ADCoS), which check and report problems in central services, sites, Tier-0 export, data transfers and production tasks. Finally, the CRC looks at the level of ADC activities on a weekly or monthly timescale to ensure that ADC resources are used efficiently.

  20. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2014-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  1. ATLAS Distributed Computing Monitoring tools during the LHC Run I

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Di Girolamo, A; Jezequel, S; Ueda, I; Wenaus, T

    2013-01-01

    This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort has been invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and re-usability of the visua...

  2. The December 2006 ATLAS Computing & Software Workshop

    CERN Multimedia

    Fred Luehring

    The 29th ATLAS Computing & Software Workshop was held on December 11-15 at CERN. With the rapidly approaching onset of data taking, the workshop participants had an air of urgency about them. There was considerable discussion on hot topics such as physics validation of the software, data analysis, actual software production on the GRID, and the schedule of work for 2007 including the Final Dress Rehearsal (FDR). However, don't be fooled: the workshop was not all work - there were also two social events which were greatly enjoyed by the attendees. The workshop welcomed Wouter Verkerke as the new Physics Validation Coordinator (replacing Davide Costanzo). Most recent validation work has centered on the 12.0.X release series that will be used for the Computing System Commissioning (CSC) exercise. The validation is now a big job because it needs to be done over a variety of conditions (magnetic field on/off, aligned/misaligned geometry) for every candidate release. Luckily there have been a large number of pe...

  3. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2015-01-01

    The ATLAS collaboration has set up a volunteer computing project called ATLAS@home. Volunteers running Monte-Carlo simulation on their personal computers provide significant computing resources, but also belong to a community potentially interested in HEP. Four types of contributors have been identified, whose questions range from advanced technical details to the reason why simulation is needed, how Computing is organized and how it relates to society. The creation of relevant outreach material for simulation, event visualization and distributed production will be described, as well as lessons learned while interacting with the BOINC volunteer community.

  4. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  5. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00068610; The ATLAS collaboration; Barberis, Dario; Crepe-Renaudin, Sabine Chrystel; De, Kaushik; Fassi, Farida; Stradling, Alden; Svatos, Michal; Vartapetian, Armen; Wolters, Helmut

    2017-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run 2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts’ workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run 1, this task was accomplished by a person of the expert team called the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run 2. The CRC position was proposed to cover some of the AMOD's former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help with the training of future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing...

  6. Establishing a normative atlas of the human lung: computing the average transformation and atlas construction.

    Science.gov (United States)

    Li, Baojun; Christensen, Gary E; Hoffman, Eric A; McLennan, Geoffrey; Reinhardt, Joseph M

    2012-11-01

    To establish the range of normal for quantitative computed tomography (CT)-based measures of lung structure and function, we seek to develop methods for matching pulmonary structures across individuals and establishing a normative human lung atlas. In our previous work, we have presented a three-dimensional (3D) image registration method suitable for pulmonary atlas construction based on CT datasets. The method has been applied to a population of normative lungs in multiple experiments and, in each instance, has resulted in significant reductions in registration errors. This study is a continuation of our previous work, presenting a method for synthesizing a computerized human lung atlas from previously registered and matched 3D pulmonary CT datasets from a population of normative subjects. Our method consists of defining the origin of the atlas coordinate system; defining the nomenclature and labels for anatomical structures within the atlas system; computing the average transformation based on the displacement fields that register each individual subject to the common template subject; constructing the atlas by deforming the template with the average transformation; and calculating shape variations within the population. The feasibility of pulmonary atlas construction was evaluated using CT datasets from 20 normal volunteers. Substantial reductions in shape variability were demonstrated. In addition, the constructed atlas depends only slightly on the specific subject selected as the template. These results indicate the framework is a robust and valid method for pulmonary atlas construction based on CT scans. The atlas consists of a grayscale CT dataset of the template, a labeled mask dataset of the template (i.e., lungs, lobes, and lobar fissures are labeled with different gray levels), a dataset representing the population's average shape, datasets representing the population's shape variations (i.e., the magnitude of standard deviation), a data structure to contain
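
    The average-transformation step described above can be illustrated with a short numerical sketch: displacement fields mapping the template to each subject are averaged, and the template volume is deformed with the mean field. Array shapes, the synthetic data and the interpolation order are illustrative assumptions, not the authors' actual code.

      # Sketch of averaging displacement fields and deforming a template volume.
      # Shapes and synthetic data are illustrative only.
      import numpy as np
      from scipy.ndimage import map_coordinates

      def build_average_atlas(template, displacement_fields):
          """template: 3D array; displacement_fields: list of (3, *template.shape) arrays."""
          mean_disp = np.mean(np.stack(displacement_fields), axis=0)
          grid = np.indices(template.shape).astype(float)   # identity transformation
          coords = grid + mean_disp                         # apply the average transformation
          return map_coordinates(template, coords, order=1, mode="nearest")

      # Tiny synthetic example: two random displacement fields on a random volume.
      rng = np.random.default_rng(0)
      template = rng.random((16, 16, 16))
      fields = [rng.normal(scale=0.5, size=(3, 16, 16, 16)) for _ in range(2)]
      print(build_average_atlas(template, fields).shape)  # (16, 16, 16)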

  7. Next generation database relational solutions for ATLAS distributed computing

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Garonne, V

    2013-01-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions t...

  8. Next generation database relational solutions for ATLAS distributed computing

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Garonne, V

    2014-01-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions t...

  9. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...
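
    The quoted 5-6 fold figure can be reproduced by multiplying a few rough scaling factors; apart from the near doubling of the data rate stated in the record, the factors below are illustrative assumptions chosen only to show the arithmetic:

        # Illustrative back-of-the-envelope extrapolation (all factors are assumptions).
        data_rate_factor = 2.0      # "near doubling of ... the data rate"
        pileup_cpu_factor = 1.7     # assumed extra reconstruction cost per event from pile-up
        complexity_factor = 1.6     # assumed extra cost from higher event complexity / upgrades

        naive_increase = data_rate_factor * pileup_cpu_factor * complexity_factor
        print(f"Naive resource increase: ~{naive_increase:.1f}x")   # ~5.4x, in the quoted 5-6x range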

  10. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    Filipčič, A.; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  11. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.
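
    A minimal sketch of the kind of protocol translation such a bridge performs, assuming a hypothetical SCEAPI-style REST endpoint; the base URL, path and JSON keys below are invented for illustration and are not the actual SCEAPI specification:

        import json
        import urllib.request

        SCEAPI_BASE = "https://sceapi.example.cn/api"      # hypothetical endpoint

        def submit_job(token, executable, input_files):
            # Translate a Grid-style job description into a hypothetical SCEAPI REST call.
            payload = {"executable": executable, "inputs": input_files}
            req = urllib.request.Request(
                SCEAPI_BASE + "/jobs",
                data=json.dumps(payload).encode(),
                headers={"Authorization": "Bearer " + token,   # assumed token-based authorization
                         "Content-Type": "application/json"},
                method="POST",
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["jobId"]                # assumed response field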

  12. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    Filipčič, A.

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  13. Next generation database relational solutions for ATLAS distributed computing

    Science.gov (United States)

    Dimitrov, G.; Maeno, T.; Garonne, V.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability in order to be ready for deployment and operation in 2014.
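
    A toy sketch of the relational job bookkeeping such systems rely on; the table and column names are invented for illustration and are not the actual PanDA or Rucio schema, and SQLite stands in here for the production Oracle RDBMS:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE jobs (
            job_id   INTEGER PRIMARY KEY,
            task_id  INTEGER NOT NULL,
            site     TEXT    NOT NULL,
            status   TEXT    NOT NULL,   -- e.g. defined / running / finished / failed
            created  TEXT    NOT NULL
        );
        -- Indexes on the columns most job-state queries filter on.
        CREATE INDEX idx_jobs_status ON jobs (status);
        CREATE INDEX idx_jobs_task   ON jobs (task_id);
        """)
        conn.execute("INSERT INTO jobs VALUES (1, 42, 'CERN-PROD', 'running', '2014-01-01')")
        running = conn.execute("SELECT COUNT(*) FROM jobs WHERE status = 'running'").fetchone()[0]
        print(running)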

  14. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Bauce, Matteo; Dankel, Maik; Howard, Jacob; Kama, Sami

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. These data are processed by in-house-built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predates co-processors, parallelisation and integration of co-processors are not easy tasks. The ATLAS experiment is an example of such an experiment, with a large software framework called Athena. In this talk we will present studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction, as well as their integration into a multiple-process-based Athena frame...

  15. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  16. Preparing ATLAS Distributed Computing for LHC Run 2

    CERN Document Server

    The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This presentation will survey the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  17. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2016-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This paper surveys the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  18. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This presentation will survey the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  19. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than is in common use in HEP, but they also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status of and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  20. ATLAS Great Lakes Tier-2 Computing and Muon Calibration Center Commissioning

    CERN Document Server

    INSPIRE-00106342

    2009-01-01

    Large-scale computing in ATLAS is based on a grid-linked system of tiered computing centers. The ATLAS Great Lakes Tier-2 came online in September 2006 and is now being commissioned at full capacity to provide significant computing power and services to the USATLAS community. Our Tier-2 center also hosts the Michigan Muon Calibration Center, which is responsible for daily calibration of the ATLAS Monitored Drift Tubes in the ATLAS endcap muon system. During the first LHC beam period in 2008 and the following ATLAS global cosmic-ray data-taking period, the Calibration Center received a large data stream from the muon detector to derive the drift-tube timing offsets and time-to-space functions with a turn-around time of 24 hours. We will present the Calibration Center commissioning status and our plan for the first LHC beam collisions in 2009.

  1. An ATLAS distributed computing architecture for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2017-01-01

    The ATLAS collaboration started a process to understand the computing needs for the High Luminosity LHC era. Based on our best understanding of the computing model input parameters for the HL-LHC data-taking conditions, results indicate the need for substantially more computational and storage resources than the projection of a constant yearly computing budget would provide in 2026. Filling the gap between the projection and the needs will be one of the challenges in preparation for LHC Run-4. While the gains from improvements in offline software will play a crucial role in this process, a different model for data processing, management, access and bookkeeping should also be envisaged to optimise resource usage. In this contribution we will describe a straw man of this model, founded on basic principles such as single-event-level granularity for data processing and virtual data. We will explain how the current architecture will evolve adiabatically into the future distributed computing system, through the prot...

  2. Computer modeling the ATLAS Trigger/DAQ system performance

    CERN Document Server

    Cranfield, R; Kaczmarska, A; Korcyl, K; Vermeulen, J C; Wheeler, S

    2004-01-01

    In this paper, simulation ("computer modeling") of the Trigger/DAQ system of the ATLAS experiment at the LHC accelerator is discussed. The system will consist of a few thousand end-nodes, which are interconnected by a large Local Area Network. The nodes will run various applications under the Linux OS. The purpose of computer modeling is to verify the rate-handling capability of the system designed and to find potential problem areas. The models of the system components are kept as simple as possible but are sufficiently detailed to reproduce behavioral aspects relevant to the issues studied. Values of the model parameters have been determined using small dedicated setups. This calibration phase has been followed by a validation process: more complex setups have been wired up and relevant measurement results were obtained. These setups were also modeled and the results were compared to the measurement results. Discrepancies led to modification and extension of the set of parameters. After gaining conf...
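
    A stripped-down sketch of a discrete-event model in the spirit described, assuming a single end-node with exponential inter-arrival and service times; the rates are arbitrary placeholders, not calibrated parameters from the paper:

        import random

        def simulate_queue(arrival_rate, service_rate, n_events, seed=1):
            # Single-server queue: returns the mean time an event spends in the system.
            rng = random.Random(seed)
            t_arrival, server_free_at, total_sojourn = 0.0, 0.0, 0.0
            for _ in range(n_events):
                t_arrival += rng.expovariate(arrival_rate)       # next request arrives
                start = max(t_arrival, server_free_at)           # wait if the node is busy
                server_free_at = start + rng.expovariate(service_rate)
                total_sojourn += server_free_at - t_arrival
            return total_sojourn / n_events

        # Placeholder rates: 80 kHz arrivals against 100 kHz service capacity.
        print(simulate_queue(arrival_rate=80.0, service_rate=100.0, n_events=100000))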

  3. Sex-related Differences in the Developmental Morphology of the Atlas: A Computed Tomography Study.

    Science.gov (United States)

    Asukai, Mitsuru; Fujita, Tomotada; Suzuki, Daisuke; Nishida, Tatsuya; Ohishi, Tsuyoshi; Matsuyama, Yukihiro

    2017-08-23

    A retrospective study. To elucidate sex-related differences in the age at synchondroses closure, the normative size of the atlas, and the ossification patterns of the atlas in Japanese children. The atlas develops from three ossification centers during childhood. The anterior and posterior synchondroses, which are separate ossification centers, mimic fracture lines on computed tomography (CT). Sex-related differences of age-dependent morphological changes of the atlas in a large sample have not been reported. This study analyzed data of 688 subjects (449 boys) between 0 and 18 years of age who underwent CT examination of the head and/or neck between January 2010 and July 2016. The age at synchondroses closure, the anteroposterior outer, inner, and spinal canal widths of the atlas, and variations of the ossification centers were examined. Anterior synchondroses closed by 10 years in boys and by 7 years in girls. Significantly earlier closure of anterior synchondroses was observed in girls than in boys (p atlas. Distinct sex-related differences in the age at anterior synchondroses closure and the size of the atlas were observed in Japanese children. Knowledge of the morphological features of the atlas could help distinguish fractures from synchondroses. Level of Evidence: 3.

  4. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  5. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  6. ATLAS Distributed Computing Operations: Experience and improvements after 2 full years of data-taking

    CERN Document Server

    Jézéquel, S; The ATLAS collaboration

    2012-01-01

    This paper summarizes operational experience and improvements in ATLAS computing infrastructure in 2010 and 2011. ATLAS has had 2 periods of data taking, with many more events recorded in 2011 than in 2010. It ran 3 major reprocessing campaigns. The activity in 2011 was similar to 2010, but scalability issues had to be addressed due to the increase in luminosity and trigger rate. Based on improved monitoring of ATLAS Grid computing, the evolution of computing activities (data/group production, their distribution and grid analysis) over time is presented. The main changes in the implementation of the computing model that will be shown are: the optimization of data distribution over the Grid, according to effective transfer rate and site readiness for analysis; the progressive dismantling of the cloud model, for data distribution and data processing; software installation migration to cvmfs; changing database access to a Frontier/squid infrastructure.

  7. The ATLAS Distributed Computing project for LHC Run-2 and beyond.

    CERN Document Server

    Di Girolamo, Alessandro; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of Monte Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration towards the flexible computing model. Flexible use of opportunistic resources such as HPC, cloud and volunteer computing is embedded in the new computing model, the data access mechanisms have been enhanced with remote access, and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...

  8. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    In recent years the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud could be connected. Evolution of ATLAS data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs by all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more effic...

  9. ATLAS Distributed Computing Operations in the First Two Years of Data Taking

    CERN Document Server

    Ueda, I; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment has had two years of steady data taking in 2010 and 2011. Data are calibrated, reconstructed, distributed and analysed at over 100 different sites using the Worldwide LHC Computing Grid. Following the experience in 2010, the data distribution policies were revised to address scalability issues due to the increase in luminosity and trigger rate in 2011. The structure of the ATLAS computing model has also been revised to optimise the usage of the resources, according to effective transfer rates between sites and site availability. Some new infrastructures were introduced for software installation at the sites and for database access, to reduce the bottlenecks in data processing. Issues in end-user analysis were studied, and an automated control system for the analysis queues, based on functional tests, has been introduced. The monitoring tools have been implemented and improved to review ATLAS activities by category. In this talk, we will report on the operational experience and ...

  10. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  11. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  12. Enabling the ATLAS Experiment at the LHC for High Performance Computing

    CERN Document Server

    AUTHOR|(CDS)2091107; Ereditato, Antonio

    In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the "Todi" HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500000 CPU-hours of processing time have been provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relatively small dedicated com...

  13. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    OpenAIRE

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D; Panitkin, S.; Petrosyan, A; Schovancova, J.; Vaniachine, A; Wenaus, T.; Yu, D.

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of othe...

  14. Getting the Most from Distributed Resources With an Analytics Platform for ATLAS Computing Services

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2016-01-01

    To meet a sharply increasing demand for computing resources for LHC Run 2, ATLAS distributed computing systems reach far and wide to gather CPU resources and storage capacity to execute an evolving ecosystem of production and analysis workflow tools. Indeed more than a hundred computing sites from the Worldwide LHC Computing Grid, plus many “opportunistic” facilities at HPC centers, universities, national laboratories, and public clouds, combine to meet these requirements. These resources have characteristics (such as local queuing availability, proximity to data sources and target destinations, network latency and bandwidth capacity, etc.) affecting the overall processing efficiency and throughput. To quantitatively understand and in some instances predict behavior, we have developed a platform to aggregate, index (for user queries), and analyze the more important information streams affecting performance. These data streams come from the ATLAS production system (PanDA), the distributed data management s...

  15. Tools and strategies to monitor the ATLAS online computing farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, D A; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm consisting of nearly 3000 PCs with various characteristics. To assure the correct and optimal working conditions the whole online system must be constantly monitored. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the assessment of a new monitoring and alerting system based on Icinga. This is an open source monitoring system derived from Nagios, granting backward compatibility with already known configurations, plugins and add-ons, while providing new features. We also report on the evaluation of different data gathering systems and visualization interfaces.
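
    Since Icinga retains the Nagios plugin interface, a health check is simply an executable that prints a status line and returns a conventional exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). A minimal sketch of such a check, with a placeholder metric source and thresholds:

        #!/usr/bin/env python3
        import sys

        WARN, CRIT = 80.0, 95.0          # placeholder thresholds (percent)

        def read_disk_usage_percent():
            # Placeholder metric source; a real plugin would query the node being monitored.
            return 83.5

        usage = read_disk_usage_percent()
        if usage >= CRIT:
            print(f"CRITICAL - disk usage {usage:.1f}%")
            sys.exit(2)
        elif usage >= WARN:
            print(f"WARNING - disk usage {usage:.1f}%")
            sys.exit(1)
        print(f"OK - disk usage {usage:.1f}%")
        sys.exit(0)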

  16. Monitoring of computing resource utilization of the ATLAS experiment

    OpenAIRE

    Rousseau, D; Dimitrov, G; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as wel...

  17. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00008600; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up environment of the LHC, advanced techniques for analyzing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at the hardware level that is designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed, and it is now being installed in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory - AM06). In a first stage coarse-resolution hits are matched against the patterns and the accepted hits u...

  18. ATLAS off-Grid sites (Tier-3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    Science.gov (United States)

    Petrosyan, Artem; Oleynik, Danila; Belov, Sergey; Andreeva, Julia; Kadochnikov, Ivan

    2012-12-01

    ATLAS is an LHC (Large Hadron Collider) experiment at the CERN particle physics laboratory in Geneva, Switzerland. The ATLAS computing model embraces the Grid paradigm and originally included three levels of computing centers, in order to handle data volumes of multiple petabytes per year. With the formation of small computing centers, usually based at universities, the model was expanded to include them as Tier-3 sites. Tier-3 centers comprise a range of architectures and many do not possess Grid middleware; thus, monitoring of storage usage and analysis software is not possible for the typical Tier-3 site system administrator, and Tier-3 site activity is similarly not available to the virtual organization of the experiment. In this paper an ATLAS off-Grid site-monitoring software suite is presented. The software suite enables monitoring of sites not covered by the ATLAS Distributed Computing software.

  19. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of LHC pp collisions in 2010, the ATLAS computing model has moved from a stricter design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud could be connected. Evolution of ATLAS data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as they allow more data to be readily accessible for analysis jobs by all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  20. ATLAS Distributed Computing Experience and Performance During the LHC Run-2

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the new model was demonstrated through the delivery of analysis datasets to users just one week after data taking, by completing the calibration loop, Tier-0 processing and train production steps promptly. The great flexibility of the new system also makes it possible to execute part of the Tier-0 processing on the grid when Tier-0 resources experience a backlog during high data-taking periods. The introduction of the data lifetime model, where each dataset is assigned a finite lifetime (with extensions possible for frequently accessed data), was made possible by Rucio. Thanks to this the storage crises experienced in Run-1 have not reappeared during Run-2. In addition, the distinction between Tier-1 and Tier-2 disk storage, now largely artificial given the quality of Tier-2 resources and their networking, has been removed through the introduction of dynamic ATLAS clouds that group the storage endpoint nucleus and its close-by execution satellite sites. All stable
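
    The data lifetime model mentioned in this record can be pictured as a simple eligibility rule: a dataset becomes a deletion candidate once its (possibly extended) lifetime has expired. A minimal sketch under assumed field names, which are illustrative and not the actual Rucio data model:

        from datetime import datetime, timedelta

        def is_deletion_candidate(created, lifetime_days, last_accessed, extension_days=90, now=None):
            # Assumed rule: expire after 'lifetime_days', but extend the lifetime
            # whenever the dataset was accessed recently (all names are illustrative).
            now = now or datetime.utcnow()
            expiry = created + timedelta(days=lifetime_days)
            if last_accessed and now - last_accessed < timedelta(days=extension_days):
                expiry = last_accessed + timedelta(days=extension_days)   # extension for frequently accessed data
            return now > expiry

        print(is_deletion_candidate(datetime(2016, 1, 1), 180, last_accessed=datetime(2016, 9, 1),
                                    now=datetime(2016, 10, 1)))   # False: recent access extended the lifetime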

  1. ATLAS

    CERN Multimedia

    2002-01-01

    Barrel and End-Cap Toroids In order to produce a powerful magnetic field to bend the paths of the muons, the ATLAS detector uses an exceptionally large system of air-core toroids arranged outside the calorimeter volumes. The large-volume magnetic field has a wide angular coverage and strengths of up to 4.7 tesla. The toroid system contains over 100 km of superconducting wire and has a design current of 20 500 amperes. (ATLAS brochure: The Technical Challenges)

  2. ATLAS FTK a - very complex - custom super computer

    Science.gov (United States)

    Kimura, N.; ATLAS Collaboration

    2016-10-01

    In the high interaction pile-up environment of the LHC, advanced techniques for analysing the data in real time are required in order to maximize the rate of physics processes of interest with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at the hardware level that is designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for events passing the Level-1 accept (at a maximum rate of 100 kHz). In order to achieve this performance, a highly parallel system was designed and is currently being commissioned within ATLAS. Starting in 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and will be extended to full detector coverage. The system relies on matching hits coming from the silicon tracking detectors against one billion patterns stored in custom ASIC chips (Associative Memory chip - AM06). In a first stage, coarse-resolution hits are matched against the patterns and the accepted hits undergo track fitting implemented in FPGAs. Tracks with pT > 1 GeV are delivered to the High Level Trigger within about 100 μs. The resolution of the tracks coming from FTK is close to that of offline tracking and will allow for reliable detection of primary and secondary vertices at trigger level and improved trigger performance for b-jets and tau leptons. This contribution will give an overview of the FTK system and present the status of commissioning of the system. Additionally, the expected FTK performance will be briefly described.
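
    A small software analogue of the associative-memory matching step: coarse-resolution hits ("super-strips"), one per silicon layer, are looked up against a pre-computed pattern bank, and only combinations matching a stored pattern are kept as roads for the track fit. The bank contents and hit encoding below are invented for illustration:

        from itertools import product

        # Toy pattern bank: each pattern is one coarse hit id ("super-strip") per layer.
        pattern_bank = {
            (3, 7, 12, 20),
            (3, 8, 12, 21),
            (4, 7, 13, 20),
        }

        def match_roads(hits_per_layer):
            # hits_per_layer: list of lists of coarse hit ids, one list per silicon layer.
            # Returns the hit combinations ("roads") that match a stored pattern.
            return [combo for combo in product(*hits_per_layer) if combo in pattern_bank]

        event_hits = [[3, 4], [7, 9], [12, 13], [20, 22]]
        print(match_roads(event_hits))      # [(3, 7, 12, 20), (4, 7, 13, 20)]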

  3. ATLAS Distributed Computing experience and performance during the LHC Run-2

    CERN Document Server

    Filipčič, A.; The ATLAS collaboration

    2017-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of the...

  4. ATLAS Distributed Computing experience and performance during the LHC Run-2

    CERN Document Server

    Filipčič, A.; The ATLAS collaboration

    2016-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of the Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of...

  5. PanDA for ATLAS Distributed Computing in the Next Decade

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2016-01-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarde...

  6. PanDA for ATLAS distributed computing in the next decade

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarde...

  7. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1), there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU- and not I/O-bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to using it to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; OpenStack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  8. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1), there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU- and not I/O-bound. This contribution gives a thorough review of all the stages of the Sim@P1 project, dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and to using it to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; OpenStack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  9. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.
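
    The volumes quoted above translate into a small per-job footprint, which is why indexing and aggregation in the back-end database matter more than raw size. The arithmetic, using only the figures given in the abstract (the yearly total assumes the daily rate is sustained year-round):

        daily_volume_gb = 40.0
        jobs_per_day = 200_000

        per_job_kb = daily_volume_gb * 1024 * 1024 / jobs_per_day
        yearly_tb = daily_volume_gb * 365 / 1024
        print(f"~{per_job_kb:.0f} kB of performance data per job, ~{yearly_tb:.1f} TB per year")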

  10. Monitoring of computing resource use of active software releases at ATLAS

    Science.gov (United States)

    Limosani, Antonio; ATLAS Collaboration

    2017-10-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed in plots generated using Python visualization libraries and collected into pre-formatted auto-generated Web pages, which allow the ATLAS developer community to track the performance of their algorithms. This information is however preferentially filtered to domain leaders and developers through the use of JIRA and via reports given at ATLAS software meetings. Finally, we take a glimpse of the future by reporting on the expected CPU and RAM usage in benchmark workflows associated with the High Luminosity LHC and anticipate the ways performance monitoring will evolve to understand and benchmark future workflows.
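
    A minimal sketch of the kind of per-domain resource plot described above; matplotlib stands in here for whichever Python visualization libraries are actually used, and the domain names and numbers are placeholders:

        import matplotlib
        matplotlib.use("Agg")                      # render without a display, e.g. in a batch job
        import matplotlib.pyplot as plt

        domains = ["InnerDetector", "Calorimeter", "MuonSpectrometer", "Trigger", "Core"]
        cpu_seconds = [120, 95, 60, 40, 25]        # placeholder per-event CPU by software domain
        rss_mb = [850, 700, 500, 300, 200]         # placeholder memory by software domain

        fig, (ax_cpu, ax_mem) = plt.subplots(1, 2, figsize=(10, 4))
        ax_cpu.barh(domains, cpu_seconds)
        ax_cpu.set_xlabel("CPU time per event [a.u.]")
        ax_mem.barh(domains, rss_mb)
        ax_mem.set_xlabel("Memory [MB]")
        fig.tight_layout()
        fig.savefig("resource_usage_by_domain.png")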

  11. VARIATION IN THE MORPHOLOGY OF ATLAS VERTEBRAE IN DIFFERENT SKELETAL PATTERNS: A THREE-DIMENSIONAL COMPUTED TOMOGRAPHY EVALUATION

    OpenAIRE

    Prajakta; Sunita,; Ranjit H; Narendra

    2015-01-01

    OBJECTIVE: The morphology of the atlas vertebrae seems to be affected by head posture, age, congenital anomalies and the skeletal growth pattern. The present study was carried out to assess the variation in the morphology of the atlas vertebrae in different vertical skeletal patterns. MATERIAL AND METHOD: Cone-beam computed tomography images of 45 adult subjects aged 18 to 35 years were evaluated. Subjects constituted three groups: group 1; avera...

  12. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; high-precision measurements of the third quark family such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  13. [Roles of computed tomography in the diagnosis and treatment of complex atlanto-occipital (craniocervical junction) deformity].

    Science.gov (United States)

    Chen, Hang; Li, Hai-yang; Shi, Xi-wen; Gao, Yan-zheng; Gao, Kun

    2012-07-17

    To explore the roles of computed tomography (CT) in the diagnosis and treatment of complex atlanto-occipital (craniocervical junction) deformity. From January 2010 to February 2012, the preoperative and postoperative CT imaging findings were collected from 32 cases of complex atlanto-occipital deformity undergoing surgical treatment at Henan Provincial People's Hospital. There were 18 males and 14 females with a mean age of 36.8 years (range: 23 - 65). The average duration of disease was 4.5 years (range: 0.25 - 10). In all 32 cases, a definite diagnosis was established preoperatively by coronal and sagittal CT scans and 3-dimensional reconstruction, and CT re-examinations were performed to assess the postoperative outcomes. CT imaging is of vital importance in the diagnosis, personalized surgical planning and prognostic evaluation of complex craniocervical junction deformity.

  14. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00389536; The ATLAS collaboration; Brasolin, Franco; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2017-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4100 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...

  15. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration; Fazio, Daniel; Gament, Costin-Eugen; Lee, Christopher; Scannicchio, Diana; Twomey, Matthew Shaun

    2016-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...
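
    As an illustration of the check-plugin convention that Icinga 2 and similar Nagios-style systems rely on, the sketch below shows a minimal Python health check that reports status through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL) and emits performance data. The monitored metric (root filesystem usage) and the thresholds are hypothetical, not the actual ATLAS online-farm checks.

      #!/usr/bin/env python3
      # Minimal Icinga/Nagios-style check plugin (illustrative sketch).
      # Icinga 2 runs small check programs and reads their exit code:
      # 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
      # The metric and thresholds below are assumed for illustration.
      import shutil
      import sys

      WARN_PCT = 80.0      # assumed warning threshold, percent used
      CRIT_PCT = 90.0      # assumed critical threshold, percent used

      def main() -> int:
          usage = shutil.disk_usage("/")
          used_pct = 100.0 * usage.used / usage.total
          perfdata = f"root_used={used_pct:.1f}%;{WARN_PCT};{CRIT_PCT}"
          if used_pct >= CRIT_PCT:
              print(f"CRITICAL - root filesystem {used_pct:.1f}% full | {perfdata}")
              return 2
          if used_pct >= WARN_PCT:
              print(f"WARNING - root filesystem {used_pct:.1f}% full | {perfdata}")
              return 1
          print(f"OK - root filesystem {used_pct:.1f}% full | {perfdata}")
          return 0

      if __name__ == "__main__":
          sys.exit(main())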

  16. Computation of a high-resolution MRI 3D stereotaxic atlas of the sheep brain.

    Science.gov (United States)

    Ella, Arsène; Delgadillo, José A; Chemineau, Philippe; Keller, Matthieu

    2017-02-15

    The sheep model was first used in the fields of animal reproduction and veterinary sciences and then was utilized in fundamental and preclinical studies. For more than a decade, magnetic resonance (MR) studies performed on this model have been increasingly reported, especially in the field of neuroscience. To contribute to MR translational neuroscience research, a brain template and an atlas are necessary. We have recently generated the first complete T1-weighted (T1W) and T2W MR population average images (or templates) of in vivo sheep brains. In this study, we 1) defined a 3D stereotaxic coordinate system for previously established in vivo population average templates; 2) used deformation fields obtained during optimized nonlinear registrations to compute nonlinear tissue prior probability maps (nlTPMs) of cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) tissues; 3) delineated 25 external and 28 internal sheep brain structures by segmenting both templates and nlTPMs; and 4) annotated and labeled these structures using an existing histological atlas. We built a high-quality, high-resolution 3D atlas of average in vivo sheep brains linked to a reference stereotaxic space. The atlas and nlTPMs, associated with previously computed T1W and T2W in vivo sheep brain templates and nlTPMs, provide a complete set of imaging resources that can be imported into other imaging software programs and used as standardized tools for neuroimaging studies or other neuroscience methods, such as image registration, image segmentation, identification of brain structures, implementation of recording devices, or neuronavigation. J. Comp. Neurol. 525:676-692, 2017. © 2016 Wiley Periodicals, Inc.
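
    For readers unfamiliar with tissue prior probability maps, the following minimal numpy sketch illustrates one common way such maps are obtained once individual segmentations have been warped into template space: per-class voxelwise averaging followed by normalisation so the classes sum to one at each voxel. The array names, shapes and random toy data are assumptions for illustration only; they are not taken from the sheep atlas pipeline.

      import numpy as np

      def tissue_priors(segmentations):
          """segmentations: dict mapping class name -> array of shape
          (n_subjects, X, Y, Z) with per-voxel membership values in [0, 1],
          assumed already resampled into the common template space.
          Returns per-class probability maps that sum to 1 at each voxel."""
          means = {c: seg.mean(axis=0) for c, seg in segmentations.items()}
          total = sum(means.values())
          total = np.where(total > 0, total, 1.0)      # avoid division by zero
          return {c: m / total for c, m in means.items()}

      # Toy usage: random arrays stand in for warped CSF/GM/WM segmentations.
      rng = np.random.default_rng(0)
      segs = {c: rng.random((5, 16, 16, 16)) for c in ("CSF", "GM", "WM")}
      priors = tissue_priors(segs)
      print({c: round(float(p.mean()), 3) for c, p in priors.items()})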

  17. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include commonly organized courses for students of all fields to support education in grid computing.

  18. A step towards a computing grid for the LHC experiments: ATLAS Data Challenge 1

    Energy Technology Data Exchange (ETDEWEB)

    Sturrock, R.; Bischof, R.; Epp, B.; Ghete, V.M.; Kuhn, D.; Mello, A.G.; Caron, B.; Vetterli, M.C.; Karapetian, G.; Martens, K.; Agarwal, A.; Poffenberger, P.; McPherson, R.A.; Sobie, R.J.; Armstrong, S.; Benekos, N.; Boisvert, V.; Boonekamp, M.; Brandt, S.; Casado, P.; Elsing, M.; Gianotti, F.; Goossens, L.; Grote, M.; Jansen, J.B.; Mair, K.; Nairz, A.; Padilla, C.; Poppleton, A.; Poulard, G.; Richter-Was, E.; Rosati, S.; Schoerner-Sadenius, T.; Wengler, T.; Xu, G.F.; Ping, J.L.; Chudoba, J.; Kosina, J.; Lokajicek, M.; Svec, J.; Tas, P.; Hansen, J.R.; Lytken, E.; Nielsen, J.L.; Waananen, A.; Tapprogge, S.; Calvet, D.; Albrand, S.; Collot, J.; Fulachier, J.; Ledroit-Guillon, F.; Ohlsson-Malek, S.; Viret, S.; Wielers, M.; Bernardet, K.; Correard, S.; Rozanov, A.; de Vivie de Regie, J-B.; Arnault, C.; Bourdarios, C.; Hrivnac, J.; Lechowski, M.; Parrour, G.; Perus, A.; Rousseau, D.; Schaffer, A.; Unal, G.; Derue, F.; Chevalier, L.; Hassani, S.; Laporte, J-F.; Nicolaidou, R.; Pomarede, D.; Virchaux, M.; Nesvadba, N.; Baranov, Sergei; Putzer, A.; Khonich, A.; Duckeck, G.; Schieferdecker, P.; Kiryunin, A.; Schieck, J.; Lagouri, Th.; Duchovni, E.; Levinson, L.; Schrager, D.; Negri, G.; Bilokon, H.; Spogli, L.; Barberis, D.; Parodi, F.; Cataldi, G.; Gorini, E.; Primavera, M.; Spagnolo, S.; Cavalli, D.; Heldmann, M.; Lari, T.; Perini, L.; Rebatto, D.; Resconi, S.; Tartarelli, F.; Vaccarossa, L.; Biglietti, M.; Carlino, G.; Conventi, F.; Doria, A.; Merola, L.; Polesello, G.; Vercesi, V.; De Salvo, A.; Di Mattia, A.; Luminari, L.; Nisati, A.; Reale, M.; Testa, M.; Farilla, A.; Verducci, M.; Cobal, M.; Santi, L.; Hasegawa, Y.; Ishino, M.; Mashimo, T.; Matsumoto, H.; Sakamoto, H.; Tanaka, J.; Ueda, I.; Bentvelsen, S.; Fornaini, A.; Gorfine, G.; Groep, D.; Templon, J.; Koster, J.; Konstantinov, A.; Myklebust, T.; Ould-Saada, F.; Bold, T.; Kaczmarska, A.; Malecki, P.; Szymocha, T.; Turala, M.; Kulchitsky, Y.; Khoreauli, G.; Gromova, N.; Tsulaia, V.; et al.

    2004-04-23

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge was the preparation and the deployment of the software required for the production of large event samples as a worldwide-distributed activity. It should be noted that it was not an option to "run everything at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organizing and then carrying out this large-scale production at a significant number of sites around the world therefore had to be faced. However, the benefits of this are manifold: apart from realizing the required computing resources, this exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.

  19. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre of mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...

  20. Monitoring of computing resource use of active software releases at ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00219183; The ATLAS collaboration

    2017-01-01

    The LHC is the world’s most powerful particle accelerator, colliding protons at a centre of mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has the demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end-user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as “MemoryMonitor”, to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and dis...
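
    As a rough illustration of what measuring memory shared across processes involves on Linux, the sketch below sums the proportional set size (PSS) reported in /proc/<pid>/smaps, where each shared page is divided among the processes mapping it. This is a simplified, assumed stand-in for the idea, not the ATLAS MemoryMonitor implementation.

      import os

      def pss_kib(pid: int) -> int:
          """Sum the Pss lines of /proc/<pid>/smaps (values are in kiB).
          PSS splits each shared page among the processes mapping it, so it
          is a fair per-process figure when memory is shared. Linux only."""
          total = 0
          with open(f"/proc/{pid}/smaps") as f:
              for line in f:
                  if line.startswith("Pss:"):
                      total += int(line.split()[1])
          return total

      if __name__ == "__main__":
          print(f"PSS of this process: {pss_kib(os.getpid())} kiB")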

  1. Computer Simulation of the Cool Down of the ATLAS Liquid Argon Barrel Calorimeter

    CERN Document Server

    Korperud, N; Fabre, C; Owren, G; Passardi, Giorgio

    2002-01-01

    The ATLAS electromagnetic barrel calorimeter consists of a liquid argon detector with a total mass of 120 tonnes. This highly complicated structure, fabricated from copper, lead, stainless steel and glass-fiber reinforced epoxy, will be placed in an aluminum cryostat. The cool-down process of the detector will be limited by the maximum temperature differences accepted by the composite structure, so as to avoid critical mechanical stresses. A computer program simulating the cool-down of the detector by calculating the local heat transfer throughout a simplified model has been developed. The program evaluates the cool-down time as a function of different contact gases filling the spaces within the detector.
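
    To make the idea of a rate-limited cool-down concrete, here is a deliberately simplified sketch: a single homogeneous slab cooled from one face, modelled with an explicit 1D finite-difference scheme, where the coolant-side temperature is lowered only while the internal temperature spread stays below an allowed limit. All material properties, dimensions and limits below are invented for illustration and bear no relation to the real calorimeter parameters.

      import numpy as np

      # --- all values below are illustrative assumptions, not detector data ---
      L = 0.5                     # slab thickness [m]
      alpha = 1.0e-5              # thermal diffusivity [m^2/s]
      n = 51                      # grid points
      dx = L / (n - 1)
      dt = 0.4 * dx**2 / alpha    # satisfies the explicit stability limit
      T = np.full(n, 300.0)       # initial temperature [K]
      T_cold = 90.0               # final coolant temperature [K]
      max_spread = 50.0           # allowed internal temperature difference [K]
      ramp = 0.05                 # coolant ramp per time step [K]

      T_face = T[0]
      t = 0.0
      while T_face > T_cold or T.max() - T_cold > 1.0:
          # Lower the cooled face only while the internal spread is acceptable;
          # this mimics a cool-down rate limited by thermal-stress constraints.
          if T_face > T_cold and T.max() - T.min() < max_spread:
              T_face = max(T_cold, T_face - ramp)
          T[0] = T_face
          T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
          T[-1] = T[-2]           # crude insulated boundary on the far face
          t += dt

      print(f"estimated cool-down time: {t / 3600.0:.1f} h")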

  2. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  3. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    The ATLAS Distributed Computing activities have so far concentrated on the "central" part of the experiment computing system, namely the first 3 tiers (the CERN Tier0, 10 Tier1 centers and over 60 Tier2 sites). Many ATLAS Institutes and National Communities have deployed (or intend to deploy) Tier-3 facilities. Tier-3 centers consist of non-pledged resources, which are usually dedicated to data analysis tasks by the geographically close or local scientific groups, and which usually comprise a range of architectures without Grid middleware. Therefore a substantial part of the ATLAS monitoring tools, which make use of Grid middleware, cannot be used for a large fraction of Tier3 sites. The presentation will describe the T3mon project, which aims to develop a software suite for monitoring the Tier3 sites, both from the perspective of the local site administrator and that of the ATLAS VO, thereby enabling the global view of the contribution from Tier3 sites to the ATLAS computing activities. Special attention in p...

  4. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available, such as software-defined networking, hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high-level applications such as experiment workload management and data management systems. End-to-end monitoring of networks using perfSONAR, combined with data flow performance metrics, further allows applications to adapt based on real-time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...
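
    A toy example of how network metrics can be folded into a brokerage decision is sketched below: candidate sites are ranked by an estimated completion time that combines queue wait, an input transfer time derived from a measured throughput (of the kind perfSONAR would provide), and run time. The data structures, numbers and cost model are hypothetical and far simpler than the real workload-management brokerage.

      from dataclasses import dataclass

      @dataclass
      class Site:
          name: str
          queued_jobs: int
          slots: int
          throughput_mbps: float      # e.g. derived from perfSONAR measurements

      def expected_completion_h(site: Site, input_gb: float, run_h: float) -> float:
          """Very rough cost model: queue wait + input transfer + run time."""
          wait_h = site.queued_jobs / max(site.slots, 1) * run_h
          transfer_h = input_gb * 8000.0 / max(site.throughput_mbps, 1e-3) / 3600.0
          return wait_h + transfer_h + run_h

      def broker(sites, input_gb=500.0, run_h=2.0):
          return min(sites, key=lambda s: expected_completion_h(s, input_gb, run_h))

      sites = [
          Site("SITE_A", queued_jobs=400, slots=2000, throughput_mbps=8000.0),
          Site("SITE_B", queued_jobs=50,  slots=500,  throughput_mbps=800.0),
      ]
      print("chosen site:", broker(sites).name)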

  5. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  6. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  7. The Evolving role of Tier2s in ATLAS with the new Computing and Data Distribution Model

    CERN Document Server

    Gonzalez de la Hoz, S; The ATLAS collaboration

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used mo...

  8. The evolving role of Tier2s in ATLAS with the new Computing and Data Distribution model

    CERN Document Server

    Gonzalez de la Hoz, S

    2012-01-01

    Originally the ATLAS computing model assumed that the Tier2s of each of the 10 clouds should keep on disk collectively at least one copy of all "active" AOD and DPD datasets. Evolution of ATLAS computing and data models requires changes in ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. Tier2 operations take place completely asynchronously with respect to data taking. Tier2s do simulation and user analysis. Large-scale reprocessing jobs on real data are at first taking place mostly at Tier1s but will progressively move to Tier2s as well. The availability of disk space at Tier2s is extremely important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2s disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used mo...

  9. PanDA for ATLAS distributed computing in the next decade

    Science.gov (United States)

    Barreiro Megino, F. H.; De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarded in favor of a more automated and scalable model. Workloads are dynamically tailored for optimal usage of resources, with the brokerage taking network traffic and forecasts into account. Computing resources are partitioned based on dynamic knowledge of their status and characteristics. The pilot has been re-factored around a plugin structure for easier development and deployment. Bookkeeping is handled with both coarse and fine granularities for efficient utilization of pledged or opportunistic resources. An in-house security mechanism authenticates the pilot and data management services in off-grid environments such as volunteer computing and private local clusters. The PanDA monitor has been extensively optimized for performance and extended with analytics to provide aggregated summaries of the system as well as drill-down to operational details. There are as well many other challenges planned or recently implemented, and adoption by non-LHC experiments such
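
    The "plugin structure" mentioned for the re-factored pilot can be pictured with a small registry pattern like the one below, where resource-specific behaviour (here a toy stage-in copy tool) is selected at run time. The names, decorators and interfaces are invented for illustration and are not the real PanDA pilot API.

      from typing import Callable, Dict

      COPY_TOOLS: Dict[str, Callable[[str, str], None]] = {}

      def register(name: str):
          """Decorator that adds a copy-tool implementation to the registry."""
          def wrap(fn: Callable[[str, str], None]):
              COPY_TOOLS[name] = fn
              return fn
          return wrap

      @register("xrdcp")
      def copy_xrootd(src: str, dst: str) -> None:
          print(f"[xrdcp] {src} -> {dst}")    # a real plugin would shell out

      @register("local")
      def copy_local(src: str, dst: str) -> None:
          print(f"[cp] {src} -> {dst}")

      def stage_in(tool: str, src: str, dst: str) -> None:
          COPY_TOOLS[tool](src, dst)          # behaviour chosen per site/resource

      stage_in("xrdcp", "root://example.org//data/file.root", "/scratch/file.root")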

  10. A Step Towards A Computing Grid For The LHC Experiments: ATLAS Data Challenge 1

    CERN Document Server

    Sturrock, R; Epp, B; Ghete, V M; Kuhn, D; Mello, A G; Caron, B; Vetterli, M C; Karapetian, G V; Martens, K; Agarwal, A; Poffenberger, P R; McPherson, R A; Sobie, R J; Amstrong, S; Benekos, N C; Boisvert, V; Boonekamp, M; Brandt, S; Casado, M P; Elsing, M; Gianotti, F; Goossens, L; Grote, M; Hansen, J B; Mair, K; Nairz, A; Padilla, C; Poppleton, A; Poulard, G; Richter-Was, Elzbieta; Rosati, S; Schörner-Sadenius, T; Wengler, T; Xu, G F; Ping, J L; Chudoba, J; Kosina, J; Lokajícek, M; Svec, J; Tas, P; Hansen, J R; Lytken, E; Nielsen, J L; Wäänänen, A; Tapprogge, Stefan; Calvet, D; Albrand, S; Collot, J; Fulachier, J; Ledroit-Guillon, F; Ohlsson-Malek, F; Viret, S; Wielers, M; Bernardet, K; Corréard, S; Rozanov, A; De Vivie de Régie, J B; Arnault, C; Bourdarios, C; Hrivnác, J; Lechowski, M; Parrour, G; Perus, A; Rousseau, D; Schaffer, A; Unal, G; Derue, F; Chevalier, L; Hassani, S; Laporte, J F; Nicolaidou, R; Pomarède, D; Virchaux, M; Nesvadba, N; Baranov, S; Putzer, A; Khonich, A; Duckeck, G; Schieferdecker, P; Kiryunin, A E; Schieck, J; Lagouri, T; Duchovni, E; Levinson, L; Schrager, D; Negri, G; Bilokon, H; Spogli, L; Barberis, D; Parodi, F; Cataldi, G; Gorini, E; Primavera, M; Spagnolo, S; Cavalli, D; Heldmann, M; Lari, T; Perini, L; Rebatto, D; Resconi, S; Tatarelli, F; Vaccarossa, L; Biglietti, M; Carlino, G; Conventi, F; Doria, A; Merola, L; Polesello, G; Vercesi, V; De Salvo, A; Di Mattia, A; Luminari, L; Nisati, A; Reale, M; Testa, M; Farilla, A; Verducci, M; Cobal, M; Santi, L; Hasegawa, Y; Ishino, M; Mashimo, T; Matsumoto, H; Sakamoto, H; Tanaka, J; Ueda, I; Bentvelsen, Stanislaus Cornelius Maria; Fornaini, A; Gorfine, G; Groep, D; Templon, J; Köster, L J; Konstantinov, A; Myklebust, T; Ould-Saada, F; Bold, T; Kaczmarska, A; Malecki, P; Szymocha, T; Turala, M; Kulchitskii, Yu A; Khoreauli, G; Gromova, N; Tsulaia, V; Minaenko, A A; Rudenko, R; Slabospitskaya, E; Solodkov, A; Gavrilenko, I; Nikitine, N; Sivoklokov, S Yu; Toms, K; Zalite, A; Zalite, Yu; Kervesan, B; Bosman, M; González, S; Sánchez, J; Salt, J; Andersson, N; Nixon, L; Eerola, Paule Anna Mari; Kónya, B; Smirnova, O G; Sandgren, A; Ekelöf, T J C; Ellert, M; Gollub, N; Hellman, S; Lipniacka, A; Corso-Radu, A; Pérez-Réale, V; Lee, S C; CLin, S C; Ren, Z L; Teng, P K; Faulkner, P J W; O'Neale, S W; Watson, A; Brochu, F; Lester, C; Thompson, S; Kennedy, J; Bouhova-Thacker, E; Henderson, R; Jones, R; Kartvelishvili, V G; Smizanska, M; Washbrook, A J; Drohan, J; Konstantinidis, N P; Moyse, E; Salih, S; Loken, J; Baines, J T M; Candlin, D; Candlin, R; Clifft, R; Li, W; McCubbin, N A; George, S; Lowe, A; Buttar, C; Dawson, I; Moraes, A; Tovey, Daniel R; Gieraltowski, J; Malon, D; May, E; LeCompte, T J; Vaniachine, A; Adams, D L; Assamagan, Ketevi A; Baker, R; Deng, W; Fine, V; Fisyak, Yu; Gibbard, B; Ma, H; Nevski, P; Paige, F; Rajagopalan, S; Smith, J; Undrus, A; Wenaus, T; Yu, D; Calafiura, P; Canon, S; Costanzo, D; Hinchliffe, Ian; Lavrijsen, W; Leggett, C; Marino, M; Quarrie, D R; Sakrejda, I; Stravopoulos, G; Tull, C; Loch, P; Youssef, S; Shank, J T; Engh, D; Frank, E; Sen-Gupta, A; Gardner, R; Meritt, F; Smirnov, Y; Huth, J; Grundhoefer, L; Luehring, F C; Goldfarb, S; Severini, H; Skubic, P L; Gao, Y; Ryan, T; De, K; Sosebee, M; McGuigan, P; Ozturk, N

    2004-01-01

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and the deployment of the software required for the production of large event samples as a worldwide distributed activity. It should be noted that it was not an option to "run the complete production at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organising and carrying out this large-scale production at a significant number of sites around the world had therefore to be faced. However, the benefits of this are manifold: apart from realising the require...

  11. ATLAS off-Grid sites (Tier 3) monitoring. From local fabric monitoring to global overview of the VO computing activities

    CERN Document Server

    PETROSYAN, A; The ATLAS collaboration; BELOV, S; ANDREEVA, J; KADOCHNIKOV, I

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data every year. The ATLAS computing model embraces the Grid paradigm and originally included three levels of computing centres in order to be able to handle such large volumes of data. With the formation of small computing centres, usually based at universities, the model was expanded to include them as Tier3 sites. The experiment supplies all the software necessary to operate a typical Grid site, but Tier3 sites either do not support the experiment's Grid services or support them only partially. Tier3 centres comprise a range of architectures and many do not possess Grid middleware; thus the monitoring of storage and analysis software available on Tier2 sites becomes unavailable to the Tier3 site system administrator, and the activity of Tier3 sites likewise remains invisible to the experiment's virtual organization. In this paper we present the ATLAS off-Grid sites monitoring software suite, which enables monitoring on sites which are not unde...
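
    One common ingredient of local fabric monitoring at cluster sites is Ganglia, whose gmond daemon publishes cluster state as XML over TCP. The sketch below parses a small inline sample of such XML and summarises one metric across hosts; the sample document, metric name and cluster layout are illustrative assumptions rather than output from a real Tier3 site or from the T3mon suite itself.

      import xml.etree.ElementTree as ET

      # Tiny stand-in for the XML a gmond daemon serves over TCP; a real
      # collector would read it from the gmond port instead of a string.
      SAMPLE = """
      <GANGLIA_XML>
        <CLUSTER NAME="tier3">
          <HOST NAME="wn01"><METRIC NAME="load_one" VAL="3.2"/></HOST>
          <HOST NAME="wn02"><METRIC NAME="load_one" VAL="0.4"/></HOST>
        </CLUSTER>
      </GANGLIA_XML>
      """

      def metric_by_host(xml_text: str, metric: str) -> dict:
          root = ET.fromstring(xml_text)
          values = {}
          for host in root.iter("HOST"):
              for m in host.iter("METRIC"):
                  if m.get("NAME") == metric:
                      values[host.get("NAME")] = float(m.get("VAL"))
          return values

      loads = metric_by_host(SAMPLE, "load_one")
      print(loads, "cluster total:", round(sum(loads.values()), 2))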

  12. PanDA: A New Paradigm for Distributed Computing in HEP Through the Lens of ATLAS and other Experiments

    CERN Document Server

    De, K; The ATLAS collaboration; Maeno, T; Nilsson, P; Wenaus, T

    2014-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide, thousands of physicists analyzing the data need remote access to hundreds of computing sites, the volume of processed data is beyond the exabyte scale, and data processing requires more than a billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of computing in HEP was discarded in favor of a far more flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at a million computing jobs per day, and processing over an exabyte of data in 2013. We will describe the design and implementation of PanDA, present data on the performance of PanDA a...

  13. Making Thinking Visible with Atlas.ti: Computer Assisted Qualitative Analysis as Textual Practices

    Directory of Open Access Journals (Sweden)

    Zdeněk Konopásek

    2008-05-01

    How is a new quality of reading, which we call "sociological understanding", created during the process of qualitative analysis? A methodological (conventional) answer to this question usually speaks of mental processes and conceptual work. This paper suggests a different view: sociological rather than methodological; or, more precisely, a view inspired by contemporary sociology of science. It describes qualitative analysis as a set of material practices. Taking grounded theory methodology and work with the computer programme Atlas.ti as an example, it is argued that thinking is inseparable from doing even in this domain. It is argued that by adopting the suggested perspective we might be better able to speak of the otherwise hardly graspable processes of qualitative analysis in more accountable and instructable ways. Further, software packages would be better understood not only as "mere tools" for coding and retrieving, but also as complex virtual environments for embodied and practice-based knowledge making. Finally, grounded theory methodology might appear in a somewhat different light: when described not in terms of methodological or theoretical concepts but rather in terms of what we practically do with the analysed data, it becomes perfectly compatible with the radical constructivist, textualist, or even post-structuralist paradigms of interpretation (from which it has allegedly departed by a long way). URN: urn:nbn:de:0114-fqs0802124

  14. Analysis of Metabolomics Datasets with High-Performance Computing and Metabolite Atlases

    Directory of Open Access Journals (Sweden)

    Yushu Yao

    2015-07-01

    Even with the widespread use of liquid chromatography mass spectrometry (LC/MS) based metabolomics, there are still a number of challenges facing this promising technique. Many, diverse experimental workflows exist; yet there is a lack of infrastructure and systems for tracking and sharing of information. Here, we describe the Metabolite Atlas framework and interface that provides highly-efficient, web-based access to raw mass spectrometry data in concert with assertions about chemicals detected to help address some of these challenges. This integration, by design, enables experimentalists to explore their raw data, specify and refine feature annotations such that they can be leveraged for future experiments. Fast queries of the data through the web using SciDB, a parallelized database for high performance computing, make this process operate quickly. By using scripting containers, such as IPython or Jupyter, to analyze the data, scientists can utilize a wide variety of freely available graphing, statistics, and information management resources. In addition, the interfaces facilitate integration with systems biology tools to ultimately link metabolomics data with biological models.
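
    As an illustration of the notebook-style workflow the abstract describes, the sketch below queries a hypothetical web endpoint for an extracted-ion chromatogram around a target m/z and summarises it with NumPy; the URL, parameters and JSON layout are assumptions, not the actual Metabolite Atlas API.

    import json
    import urllib.parse
    import urllib.request

    import numpy as np

    BASE = "https://metatlas.example.org/api/eic"  # hypothetical endpoint

    def fetch_eic(run_id, mz, ppm=10.0):
        """Return (retention_time, intensity) arrays for one LC/MS run."""
        query = urllib.parse.urlencode({"run": run_id, "mz": mz, "ppm": ppm})
        with urllib.request.urlopen(f"{BASE}?{query}", timeout=60) as resp:
            data = json.load(resp)
        return np.asarray(data["rt"]), np.asarray(data["intensity"])

    # Illustrative target m/z; a real atlas entry would supply the compound name,
    # m/z and expected retention-time window.
    rt, intensity = fetch_eic("run_0001", mz=180.0634)
    apex = np.argmax(intensity)
    print(f"apex at {rt[apex]:.2f} min, height {intensity[apex]:.3g}")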

  15. Analysis of Metabolomics Datasets with High-Performance Computing and Metabolite Atlases.

    Science.gov (United States)

    Yao, Yushu; Sun, Terence; Wang, Tony; Ruebel, Oliver; Northen, Trent; Bowen, Benjamin P

    2015-07-20

    Even with the widespread use of liquid chromatography mass spectrometry (LC/MS) based metabolomics, there are still a number of challenges facing this promising technique. Many, diverse experimental workflows exist; yet there is a lack of infrastructure and systems for tracking and sharing of information. Here, we describe the Metabolite Atlas framework and interface that provides highly-efficient, web-based access to raw mass spectrometry data in concert with assertions about chemicals detected to help address some of these challenges. This integration, by design, enables experimentalists to explore their raw data, specify and refine feature annotations such that they can be leveraged for future experiments. Fast queries of the data through the web using SciDB, a parallelized database for high performance computing, make this process operate quickly. By using scripting containers, such as IPython or Jupyter, to analyze the data, scientists can utilize a wide variety of freely available graphing, statistics, and information management resources. In addition, the interfaces facilitate integration with systems biology tools to ultimately link metabolomics data with biological models.

  16. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid, and the ATLAS High Level Trigger farm between data-taking periods. Because of specific aspects of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.
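
    The following sketch illustrates why event-level granularity suits preemptible resources: work is committed per small event range, so a spot termination or backfill eviction loses at most the range in flight. The function names and the file-based "output store" are illustrative stand-ins, not the actual ATLAS Event Service interfaces.

    import json
    import pathlib

    def process_event(event_number):
        """Placeholder for the real per-event payload (simulation, reconstruction, ...)."""
        return {"event": event_number, "status": "done"}

    def run_event_ranges(first, last, range_size=50, outdir="event_ranges"):
        out = pathlib.Path(outdir)
        out.mkdir(exist_ok=True)
        for start in range(first, last + 1, range_size):
            stop = min(start + range_size - 1, last)
            marker = out / f"range_{start}_{stop}.json"
            if marker.exists():          # finished before an earlier eviction: skip
                continue
            results = [process_event(n) for n in range(start, stop + 1)]
            marker.write_text(json.dumps(results))   # commit the finished range

    if __name__ == "__main__":
        run_event_ranges(1, 1000)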

  17. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00066086; The ATLAS collaboration; Caballero, Jose; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  18. Frontier use in ATLAS

    CERN Document Server

    Smith, D A; The ATLAS collaboration; DeStefano, J; Dewhurst, A; Donno, F; Dykstra, D; Front, D; Gallas, E; Hawkings, R; Luehring, F; Walker, R

    2010-01-01

    Frontier is a distributed database access system, including data caching, that was developed originally for the CMS experiment. This system has been in production for CMS for some time, providing world-wide access to the experiment's conditions data for all user jobs. The ATLAS experiment, which has had similar problems with global data distribution, investigated the use of the system for ATLAS jobs. After months of trials and verification, ATLAS put the Frontier system into production late in 2009. Frontier now supplies database access for ATLAS jobs at over 50 computing sites. This successful deployment of Frontier in ATLAS will be described, along with the scope of the system and necessary resources.
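
    The essential idea can be sketched in a few lines: conditions queries travel as HTTP requests, so identical requests from many jobs at a site can be answered by a local caching proxy (typically squid) instead of the central database. The server URL, query encoding and proxy address below are illustrative assumptions, not the real Frontier wire protocol.

    import urllib.parse
    import urllib.request

    FRONTIER_SERVER = "http://frontier.example.org:8000/Frontier"  # hypothetical
    SITE_SQUID = "http://squid.mysite.example:3128"                 # local cache

    def query_conditions(table, run_number):
        params = urllib.parse.urlencode({"table": table, "run": run_number})
        opener = urllib.request.build_opener(
            urllib.request.ProxyHandler({"http": SITE_SQUID}))
        with opener.open(f"{FRONTIER_SERVER}?{params}", timeout=30) as resp:
            # The proxy can reuse this response for every other job at the site
            # asking for the same table and run, which is where the scaling comes from.
            return resp.read()

    payload = query_conditions("beam_conditions", run_number=1234)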

  19. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Augustinack, Jean C.; Nguyen, Khoa

    2015-01-01

    Automated analysis of MRI data of the subregions of the hippocampus requires computational atlases built at a higher resolution than those that are typically used in current neuroimaging studies. Here we describe the construction of a statistical atlas of the hippocampal formation at the subregion level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise... an algorithm that can analyze multimodal data and adapt to variations in MRI contrast due to differences in acquisition hardware or pulse sequences. The applicability of the atlas, which we are releasing as part of FreeSurfer (version 6.0), is demonstrated with experiments on three different publicly available...
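
    The adaptive segmentation mentioned above follows the usual generative formulation of atlas-based segmentation (stated here generically, not quoting the paper): the label map L for an image I is estimated as

    \[
      \hat{L} \;=\; \arg\max_{L}\, p(L \mid I, \theta)
              \;=\; \arg\max_{L}\, p(I \mid L, \theta)\, p(L),
    \]

    where p(L) is the prior encoded by the statistical atlas and p(I | L, \theta) is an intensity likelihood (for example one Gaussian per label) whose parameters \theta are estimated from the image being segmented; re-estimating \theta per image is what allows the same atlas to adapt to different MRI contrasts and pulse sequences.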

  20. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses, have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model: The ATLAS Event Data Model (EDM) consists of several levels of detail, each targeted at a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits, thereby permitting many calibration and alignment tasks, but will only be accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  1. Computation of an MRI brain atlas from a population of Parkinson’s disease patients

    Science.gov (United States)

    Angelidakis, L.; Papageorgiou, I. E.; Damianou, C.; Psychogios, M. N.; Lingor, P.; von Eckardstein, K.; Hadjidemetriou, S.

    2017-11-01

    Parkinson’s Disease (PD) is a degenerative disorder of the brain. This study presents an MRI-based brain atlas of PD to characterize associated alterations for diagnostic and interventional purposes. The atlas standardizes primarily the implicated subcortical regions such as the globus pallidus (GP), substantia nigra (SN), subthalamic nucleus (STN), caudate nucleus (CN), thalamus (TH), putamen (PUT), and red nucleus (RN). The data were 3.0 T MRI brain images from 16 PD patients and 10 matched controls. The images used were T1-weighted (T1w) images, T2-weighted (T2w) images, and Susceptibility Weighted Images (SWI). The T1w images were the reference for the inter-subject non-rigid registration available from 3DSlicer. Anatomic labeling was achieved with BrainSuite and regions were refined with the level-set segmentation of ITK-Snap. The subcortical centers were analyzed for their volume and signal intensity. Comparison with an age-matched control group reveals a significant PD-related T1w signal loss in the striatum (CN and PUT) centers, but an approximately constant volume. The results of this study improve MRI-based PD localization and can lead to the development of novel biomarkers.
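
    The group comparison reported above can be illustrated with a short sketch: compare per-subject mean T1w intensity in a subcortical region (here the putamen) between patients and controls with Welch's t-test. The numbers are placeholders; in the study the per-subject values would come from the registered and segmented images produced with 3DSlicer, BrainSuite and ITK-Snap.

    import numpy as np
    from scipy import stats

    # Hypothetical per-subject mean ROI intensities (arbitrary units), 16 PD vs 10 controls.
    putamen_pd = np.array([412., 398., 405., 390., 401., 395., 388., 407.,
                           399., 392., 403., 386., 400., 394., 397., 391.])
    putamen_ctrl = np.array([431., 428., 436., 425., 440., 433., 429., 438., 427., 435.])

    t_stat, p_value = stats.ttest_ind(putamen_pd, putamen_ctrl, equal_var=False)
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.2g}")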

  2. Pocket atlas of sectional anatomy: computed tomography and magnetic resonance imaging. Vol. 3. Spine, extremities, joints

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.B.; Reif, E. [Caritas Hospital, Dillingen (Germany). Dept. of Radiology

    2007-07-01

    Magnetic resonance imaging (MRI) of the musculoskeletal system is an established and important component in the diagnosis of diseases of the joints, soft tissues, bones, and bone marrow. We are therefore pleased to collect together images of the joints and the spinal column in a separate volume on the musculoskeletal system. Demonstrating the growing importance of new developments in MRI in recent years, with ever-increasing resolution, many images were acquired with 3-tesla units. We are deeply grateful to the manufacturers, Siemens and Philips, for making this possible. We believe that colored atlases are the ideal medium to represent the highly detailed images achieved nowadays with improved resolution techniques. Volume 3 of the Pocket Atlas of Sectional Anatomy provides a color illustration facing each magnetic resonance image, as in the preceding volumes on the skull, thorax, and abdomen. To ensure the greatest possible precision in details, we still produce these illustrations ourselves. Each is accompanied by a sectional image and an orientation aid. Uniform color schemes ensure optimal clarity, as similar structures, such as arteries, veins, nerves, tendons, etc., are consistently represented in the same color. Individual muscle groups are represented uniformly, but differentiated from other muscle groups, so that classification is possible even when numerous groups of muscles are shown in the same image. Maximal lucidity prevails even in highly detailed representations. This is made possible by the high quality of the production and printing process that are characteristic of Thieme International. (orig.)

  3. An atlas of the (near) future: cognitive computing applications for medical imaging (Conference Presentation)

    Science.gov (United States)

    LeGrand, Anne

    2017-02-01

    The role of medical imaging in global health systems is literally fundamental. Like labs, medical images are used at one point or another in almost every high cost, high value episode of care. CT scans, mammograms, and x-rays, for example, "atlas" the body and help chart a course forward for a patient's care team. Imaging precision has improved as a result of technological advancements and breakthroughs in related medical research. Those advancements also bring with them exponential growth in medical imaging data. As IBM trains Watson to "see" medical images, Ms. Le Grand will discuss recent advances made by Watson Health and explore the potential value of "augmented intelligence" to assist healthcare providers like radiologists and cardiologists, as well as the patients they serve.

  4. A novel computed method to reconstruct the bilateral digital interarticular channel of atlas and its use on the anterior upper cervical screw fixation

    Directory of Open Access Journals (Sweden)

    Ai-Min Wu

    2016-02-01

    Purpose. To investigate a novel computed method to reconstruct the bilateral digital interarticular channel of the atlas and its potential use in anterior upper cervical screw fixation. Methods. We used reverse engineering software (image-processing software and computer-aided design software) to create the approximate and optimal digital interarticular channels of the atlas for 60 participants. Angles of the channels, diameters of inscribed circles, and long and short axes of ellipses were measured and recorded, and a gender-specific analysis was also performed. Results. The channels provided sufficient space for one or two screws, and the parameters of the channels are described. While the channels of females were smaller than those of males, no significant difference in angles between males and females was observed. Conclusion. Our study demonstrates the radiological features of the approximate and optimal digital interarticular channels of the atlas, and provides the reference trajectory of anterior transarticular screws and anterior occiput-to-axis screws. Additionally, we provide a protocol that can help make a pre-operative plan for accurate placement of anterior transarticular screws and anterior occiput-to-axis screws.
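
    One simple way to obtain the long and short axes of a roughly elliptical channel cross-section from a binary mask is a principal-axis decomposition of the in-plane voxel coordinates, sketched below; this is an illustrative stand-in, not the reverse-engineering/CAD workflow used in the study.

    import numpy as np

    def ellipse_axes(mask, spacing=(1.0, 1.0)):
        """Return (long, short) full-axis lengths in mm for a filled 2D binary mask."""
        ys, xs = np.nonzero(mask)
        pts = np.column_stack([ys * spacing[0], xs * spacing[1]])
        cov = np.cov(pts - pts.mean(axis=0), rowvar=False)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        # For a uniformly filled ellipse the variance along a principal axis is
        # (semi-axis)^2 / 4, so the full axis length is 4 * sqrt(eigenvalue).
        return 4.0 * np.sqrt(eigvals[0]), 4.0 * np.sqrt(eigvals[1])

    # Synthetic example: an ellipse with 7.5 mm and 4 mm semi-axes at 0.5 mm pixel spacing.
    yy, xx = np.mgrid[0:64, 0:64]
    mask = ((yy - 32) / 15.0) ** 2 + ((xx - 32) / 8.0) ** 2 <= 1.0
    print(ellipse_axes(mask, spacing=(0.5, 0.5)))   # approximately (15.0, 8.0)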

  5. A GRID-like computing proposal for the Tile calorimeter of the ATLAS experiment

    CERN Document Server

    Maidantchik, C; Lanza, M L D; Santelli, R; Damazio, D O

    2004-01-01

    For the hadronic calorimeter of the ATLAS detector, TileTransfer has been developed as a Web system to facilitate the transfer of data produced during calibration test-beam periods. It automatically searches, stages and provides a link to download the selected data stored at a remote file center. The system has an interface with the Run Info Database, which contains the description of all test-beam runs. In order to optimize file transmission, the system is connected to a central repository that stores information on the latest accesses. Once a client host connects to TileTransfer, it can become a file server to other users. At the servers, the selected file is split into several pieces; each piece is sent in parallel and the pieces are reassembled at the final destination. TileTransfer allows the file administration to be geographically distributed, avoiding an overload at the central repository. We also foresee the integration with analysis tools by remote Web access and the publicatio...
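
    The split-transfer mechanism described above can be sketched as follows: fetch different byte ranges of the same file from several mirrors in parallel with HTTP Range requests and reassemble them at the destination. The mirror URLs and chunk size are illustrative; the real TileTransfer was implemented as a Web system, not as this Python client.

    import concurrent.futures
    import urllib.request

    MIRRORS = [
        "http://mirror-a.example.org/testbeam/run12345.data",  # hypothetical servers
        "http://mirror-b.example.org/testbeam/run12345.data",
    ]

    def fetch_range(url, start, stop):
        """Download bytes [start, stop] of the remote file."""
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{stop}"})
        with urllib.request.urlopen(req, timeout=60) as resp:
            return start, resp.read()

    def parallel_download(size, chunk=8 * 1024 * 1024, out="run12345.data"):
        ranges = [(s, min(s + chunk - 1, size - 1)) for s in range(0, size, chunk)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(fetch_range, MIRRORS[i % len(MIRRORS)], s, e)
                       for i, (s, e) in enumerate(ranges)]
            pieces = dict(f.result() for f in concurrent.futures.as_completed(futures))
        with open(out, "wb") as fh:             # rebuild the file in the right order
            for s, _ in ranges:
                fh.write(pieces[s])

    # parallel_download(size=256 * 1024 * 1024)  # file size known from the Run Info Database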

  6. 26th February 2009 - US Google Vice President and Chief Internet Evangelist V. Cerf signing the guest book with Director for research and Computing S. Bertolucci; visiting ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    HI-0902038 05: IT Department Head, F. Hemmer; US Google Vice President and Chief Internet Evangelist V. Cerf; Computing Security Officer and Colloquium Convenor D. R. Myers; Member of the Internet Society Advisory Council F. Flückiger; Director for Research and Scientific Computing, S. Bertolucci ; Honorary Staff Member, B. Segal. HI-0902038 16: Computing Security Officer and Colloquium Convenor D. R. Myers; UC Irvine, ATLAS Deputy Spokesperson elect A. J. Lankford; US Google Vice President and Chief Internet Evangelist V. Cerf; ATLAS Collaboration Spokesperson P. Jenni; IT Department Head, F. Hemmer.

  7. Atlas Fractures and Atlas Osteosynthesis: A Comprehensive Narrative Review.

    Science.gov (United States)

    Kandziora, Frank; Chapman, Jens R; Vaccaro, Alexander R; Schroeder, Gregory D; Scholz, Matti

    2017-09-01

    Most atlas fractures are the result of compression forces. They are often combined with fractures of the axis, especially of the odontoid process. Multiple classification systems for atlas fractures have been described. For an adequate diagnosis, computed tomography is mandatory. To distinguish between a stable and an unstable atlas injury, it is necessary to evaluate the integrity of the transverse atlantal ligament (TAL) by magnetic resonance imaging and to classify the TAL lesion. Studies comparing conservative and operative management of unstable atlas fractures are unfortunately not available in the literature; neither are studies comparing different operative treatment strategies. Hence all treatment recommendations are based on low-level evidence. Most atlas fractures are stable and will be successfully managed by immobilization in a soft/hard collar. Unstable atlas fractures may be treated conservatively by halo-fixation, but nowadays more and more surgeons prefer surgery because of the potential discomfort and complications of halo-traction. Atlas fractures with a midsubstance ligamentous disruption of the TAL or severe bony ligamentous avulsion can be treated by a C1/2 fusion. Unstable atlas fractures with moderate bony ligamentous avulsion may be treated by atlas osteosynthesis. Although the evidence for the different treatment strategies of atlas fractures is low, atlas osteosynthesis has the potential to change treatment philosophies. The reasons for this are described in this review.

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction: During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real-data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps needed to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  9. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid; Reconstruction et identification des electrons dans l'experience Atlas. Participation a la mise en place d'un Tier 2 de la grille de calcul

    Energy Technology Data Exchange (ETDEWEB)

    Derue, F

    2008-03-15

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the ATLAS experiment at the Large Hadron Collider at CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the ATLAS experiment with simulated data and with data taken during the combined test beam of 2004. The analysis of ATLAS data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  10. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.
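
    The elastic-provisioning side of this work can be illustrated with a short sketch: start cloud worker VMs whose boot script launches a pilot pointed at a workload-management queue, so the cloud capacity appears to the system like any other resource. Only the boto3 EC2 call is real AWS SDK usage; the AMI, instance type, queue name and bootstrap script are placeholders, and this is not the actual ATLAS provisioning code.

    import boto3

    USER_DATA = """#!/bin/bash
    # hypothetical bootstrap: fetch and start a pilot that joins the cloud queue
    curl -sSL https://example.org/pilot/bootstrap.sh | bash -s -- --queue CLOUD_EXAMPLE
    """

    def scale_out(n_workers):
        ec2 = boto3.resource("ec2", region_name="us-east-1")
        return ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder worker image
            InstanceType="m5.large",
            MinCount=n_workers,
            MaxCount=n_workers,
            UserData=USER_DATA,
        )

    # scale_out(10)  # e.g. triggered when the queue holds more work than grid slots (illustrative policy)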

  11. Methods and computing challenges of the realistic simulation of physics events in the presence of pile-up in the ATLAS experiment

    CERN Document Server

    Chapman, J D; The ATLAS collaboration

    2014-01-01

    We are now in a regime where we observe a substantial number of proton-proton collisions within each filled LHC bunch-crossing and also multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase with increased luminosity in the near future. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a description of the standard approach used by the ATLAS experiment and details of how we manage the conflicting demands of keeping the background dataset size as small as possible while minimizing the effect of background event re-use. We also present details of the methods used to minimize the memory footprint of these digitization jobs, to keep them within the grid limit, despite combining the information from thousands of simulated events at once. We also describe an alternative approach, known as Overlay. Here, the actual detector conditions are sampled from raw data using a special zero-bias trigger, and the simulated physi...
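
    The mixing problem can be made concrete with a schematic sketch: for each signal event, draw the number of in-time pile-up interactions from a Poisson distribution with mean mu and overlay that many minimum-bias events taken from a reusable background pool; the tension between pool size and event re-use is exactly what the text above describes. This illustrates the idea only and is not the ATLAS digitization code.

    import numpy as np

    rng = np.random.default_rng(42)

    def digitize_with_pileup(signal_event, minbias_pool, mu):
        """Overlay a Poisson-distributed number of minimum-bias events on one signal event."""
        n_pileup = rng.poisson(mu)
        picks = rng.integers(0, len(minbias_pool), size=n_pileup)   # re-use is allowed
        # In the real digitization the detector response of all overlaid events is
        # summed channel by channel; here we only record which events were mixed.
        return {"signal": signal_event, "pileup": [minbias_pool[i] for i in picks]}

    minbias_pool = [f"minbias_{i:06d}" for i in range(1000)]
    mixed = [digitize_with_pileup(f"signal_{j}", minbias_pool, mu=25) for j in range(10)]
    print(sum(len(m["pileup"]) for m in mixed), "background events overlaid on 10 signal events")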

  12. A GRID-type computation for the tile calorimeter of the ATLAS experiment at CERN; Computacao tipo grid para o calorimetro de telhas do experimento ATLAS do CERN

    Energy Technology Data Exchange (ETDEWEB)

    Maidantchik, Carmen; Seixas, Jose Manoel de [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia]. E-mail: lodi@lps.ufrj.br; seixas@lps.ufrj.br; Lanza, Marcelo Luiz Drumond; Santelli, Rafael [Universidade Federal, Rio de Janeiro, RJ (Brazil). Escola de Engenharia]. E-mail: lanza@del.ufrj.br; santelli@lps.ufrj.br

    2002-07-01

    For the hadronic calorimeter of ATLAS, TileTransfer has been developed as a Web system to facilitate the transfer of data produced during calibration test-beam periods. It automatically searches, stages and provides a link to download the selected data stored at a remote file center. The system has an interface with the Run Info Database, which contains the description of all test-beam runs. It is also possible to receive a link to the files by e-mail, avoiding waiting until the process is finished. In order to optimize file transmission, the system is connected to a central repository that stores information on the latest accesses. Once a user connects to TileTransfer, he/she can become a file server to other users. Thus, at different servers, the selected file is split into several pieces; each piece is sent from one server in parallel and the pieces are reassembled at the final destination. We are currently working on version 2.0, dealing with security and efficiency requirements. The whole system runs on the Web and was developed in C, PHP and JavaScript. TileTransfer allows the file administration to be geographically distributed, avoiding an overload at the central repository. We also foresee the integration with analysis tools by remote Web access and the publication of the results to the whole community. Among the benefits of this proposal, one can highlight the effective management of data across the network of users. (author)

  13. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely, with the volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based on many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring optimised for presenting huge amounts of information. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...
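
    Dynamic job definition can be illustrated with a short sketch that splits a task's input files into jobs respecting per-job limits on total input size and file count; the limits and data structure are illustrative, and ProdSys2's real logic also folds in memory, CPU and output-size estimates.

    from dataclasses import dataclass, field

    @dataclass
    class Job:
        files: list = field(default_factory=list)
        size: int = 0

    def define_jobs(input_files, max_bytes=10 * 1024**3, max_files=20):
        """Greedily pack (filename, size) pairs into jobs under the given limits."""
        jobs, current = [], Job()
        for name, size in input_files:
            if current.files and (current.size + size > max_bytes
                                  or len(current.files) >= max_files):
                jobs.append(current)
                current = Job()
            current.files.append(name)
            current.size += size
        if current.files:
            jobs.append(current)
        return jobs

    # Example: 100 input files of about 1.5 GB each.
    inputs = [(f"AOD.{i:05d}.root", 1_500_000_000) for i in range(100)]
    print(len(define_jobs(inputs)), "jobs defined")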

  14. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  15. The ATLAS Simulation Infrastructure

    CERN Document Server

    Aad, G.; Abdallah, J.; Abdelalim, A.A.; et al. (The ATLAS Collaboration)
S.; Tsybychev, D.; Tuggle, J.M.; Turecek, D.; Turk Cakir, I.; Turlay, E.; Tuts, P.M.; Twomey, M.S.; Tylmad, M.; Tyndel, M.; Uchida, K.; Ueda, I.; Ugland, M.; Uhlenbrock, M.; Uhrmacher, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Unno, Y.; Urbaniec, D.; Urkovsky, E.; Urquijo, P.; Urrejola, P.; Usai, G.; Uslenghi, M.; Vacavant, L.; Vacek, V.; Vachon, B.; Vahsen, S.; Valente, P.; Valentinetti, S.; Valkar, S.; Valladolid Gallego, E.; Vallecorsa, S.; Valls Ferrer, J.A.; Van Berg, R.; van der Graaf, H.; van der Kraaij, E.; van der Poel, E.; van der Ster, D.; van Eldik, N.; van Gemmeren, P.; van Kesteren, Z.; van Vulpen, I.; Vandelli, W.; Vaniachine, A.; Vankov, P.; Vannucci, F.; Vari, R.; Varnes, E.W.; Varouchas, D.; Vartapetian, A.; Varvell, K.E.; Vasilyeva, L.; Vassilakopoulos, V.I.; Vazeille, F.; Vellidis, C.; Veloso, F.; Veneziano, S.; Ventura, A.; Ventura, D.; Venturi, M.; Venturi, N.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J.C.; Vetterli, M.C.; Vichou, I.; Vickey, T.; Viehhauser, G.H.A.; Villa, M.; Villani, E.G.; Villaplana Perez, M.; Vilucchi, E.; Vincter, M.G.; Vinek, E.; Vinogradov, V.B.; Viret, S.; Virzi, J.; Vitale, A.; Vitells, O.; Vivarelli, I.; Vives Vaque, F.; Vlachos, S.; Vlasak, M.; Vlasov, N.; Vogel, A.; Vokac, P.; Volpi, M.; von der Schmitt, H.; von Loeben, J.; von Radziewski, H.; von Toerne, E.; Vorobel, V.; Vorwerk, V.; Vos, M.; Voss, R.; Voss, T.T.; Vossebeld, J.H.; Vranjes, N.; Vranjes Milosavljevic, M.; Vrba, V.; Vreeswijk, M.; Vu Anh, T.; Vudragovic, D.; Vuillermet, R.; Vukotic, I.; Wagner, P.; Walbersloh, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wall, R.; Wang, C.; Wang, H.; Wang, J.; Wang, S.M.; Warburton, A.; Ward, C.P.; Warsinsky, M.; Wastie, R.; Watkins, P.M.; Watson, A.T.; Watson, M.F.; Watts, G.; Watts, S.; Waugh, A.T.; Waugh, B.M.; Weber, M.D.; Weber, M.; Weber, M.S.; Weber, P.; Weidberg, A.R.; Weingarten, J.; Weiser, C.; Wellenstein, H.; Wells, P.S.; Wen, M.; Wenaus, T.; Wendler, S.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Werth, M.; Werthenbach, U.; Wessels, M.; Whalen, K.; White, A.; White, M.J.; White, S.; Whitehead, S.R.; Whiteson, D.; Whittington, D.; Wicek, F.; Wicke, D.; Wickens, F.J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik, L.A.M.; Wildauer, A.; Wildt, M.A.; Wilkens, H.G.; Williams, E.; Williams, H.H.; Willocq, S.; Wilson, J.A.; Wilson, M.G.; Wilson, A.; Wingerter-Seez, I.; Winklmeier, F.; Wittgen, M.; Wolter, M.W.; Wolters, H.; Wosiek, B.K.; Wotschack, J.; Woudstra, M.J.; Wraight, K.; Wright, C.; Wright, D.; Wrona, B.; Wu, S.L.; Wu, X.; Wulf, E.; Wynne, B.M.; Xaplanteris, L.; Xella, S.; Xie, S.; Xu, D.; Xu, N.; Yamada, M.; Yamamoto, A.; Yamamoto, K.; Yamamoto, S.; Yamamura, T.; Yamaoka, J.; Yamazaki, T.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, U.K.; Yang, Z.; Yao, W-M.; Yao, Y.; Yasu, Y.; Ye, J.; Ye, S.; Yilmaz, M.; Yoosoofmiya, R.; Yorita, K.; Yoshida, R.; Young, C.; Youssef, S.P.; Yu, D.; Yu, J.; Yuan, L.; Yurkewicz, A.; Zaidan, R.; Zaitsev, A.M.; Zajacova, Z.; Zambrano, V.; Zanello, L.; Zaytsev, A.; Zeitnitz, C.; Zeller, M.; Zemla, A.; Zendler, C.; Zenin, O.; Zenis, T.; Zenonos, Z.; Zenz, S.; Zerwas, D.; Zevi della Porta, G.; Zhan, Z.; Zhang, H.; Zhang, J.; Zhang, Q.; Zhang, X.; Zhao, L.; Zhao, T.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, N.; Zhou, Y.; Zhu, C.G.; Zhu, H.; Zhu, Y.; Zhuang, X.; Zhuravlov, V.; Zimmermann, R.; Zimmermann, S.; Zimmermann, S.; Ziolkowski, M.; Zivkovic, L.; Zobernig, G.; Zoccoli, A.; zur Nedden, M.; Zutshi, V.

    2010-01-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions to the packages that simulate the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including the parts supporting the detector description, interfacing to the event generation, and combining the GEANT4 simulation of the responses of the individual detectors. Also described are the tools for software validation, performance testing, and validation of the simulated output against known physics processes.

  16. An atlas of functions: with equator, the atlas function calculator

    National Research Council Canada - National Science Library

    Oldham, Keith

    2008-01-01

    ... of arguments. The first edition of An Atlas of Functions, the product of collaboration between a mathematician and a chemist, appeared during an era when the programmable calculator was the workhorse for the numerical evaluation of functions. That role has now been taken over by the omnipresent computer, and therefore the second edition delegates this duty to Equator, the Atlas function calculator. This is a software program that, as well as carrying out other tasks, will calculate va...

  17. Event visualization in ATLAS

    Science.gov (United States)

    Bianchi, R. M.; Boudreau, J.; Konstantinidis, N.; Martyniuk, A. C.; Moyse, E.; Thomas, J.; Waugh, B. M.; Yallup, D. P.; ATLAS Collaboration

    2017-10-01

    At the beginning, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. Then the experiments evolved and needed to find ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. Then the case of the ATLAS experiment is considered in more detail and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. Towards the end, the other ATLAS visualization tools will be briefly presented as well. Future development plans and improvements in the ATLAS event display packages will also be discussed.

  18. Event visualization in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00211497; The ATLAS collaboration; Boudreau, Joseph; Konstantinidis, Nikolaos; Martyniuk, Alex; Moyse, Edward; Thomas, Juergen; Waugh, Ben; Yallup, David

    2017-01-01

    At the beginning, HEP experiments made use of photographic images both to record and store experimental data and to illustrate their findings. Then the experiments evolved and needed to find ways to visualize their data. With the availability of computer graphics, software packages to display event data and the detector geometry started to be developed. Here, an overview of the usage of event display tools in HEP is presented. Then the case of the ATLAS experiment is considered in more detail and two widely used event display packages are presented, Atlantis and VP1, focusing on the software technologies they employ, as well as their strengths, differences and their usage in the experiment: from physics analysis to detector development, and from online monitoring to outreach and communication. Towards the end, the other ATLAS visualization tools will be briefly presented as well. Future development plans and improvements in the ATLAS event display packages will also be discussed.

  19. A Time for Atlases and Atlases for Time

    Science.gov (United States)

    Livneh, Yoav; Mizrahi, Adi

    2009-01-01

    Advances in neuroanatomy and computational power are leading to the construction of new digital brain atlases. Atlases are rising as indispensable tools for comparing anatomical data as well as being stimulators of new hypotheses and experimental designs. Brain atlases describe nervous systems which are inherently plastic and variable. Thus, the levels of brain plasticity and stereotypy would be important to evaluate as limiting factors in the context of static brain atlases. In this review, we discuss the extent of structural changes which neurons undergo over time, and how these changes would impact the static nature of atlases. We describe the anatomical stereotypy between neurons of the same type, highlighting the differences between invertebrates and vertebrates. We review some recent experimental advances in our understanding of anatomical dynamics in adult neural circuits, and how these are modulated by the organism's experience. In this respect, we discuss some analogies between brain atlases and the sequenced genome and the emerging epigenome. We argue that variability and plasticity of neurons are substantially high, and should thus be considered as integral features of high-resolution digital brain atlases. PMID:20204142

  20. A time for atlases and atlases for time

    Directory of Open Access Journals (Sweden)

    Yoav Livneh

    2010-02-01

    Full Text Available Advances in neuroanatomy and computational power are leading to the construction of new digital brain atlases. Atlases are rising as indispensable tools for comparing anatomical data as well as being stimulators of new hypotheses and experimental designs. Brain atlases describe nervous systems which are inherently plastic and variable. Thus, the levels of brain plasticity and stereotypy would be important to evaluate as limiting factors in the context of static brain atlases. In this review, we discuss the extent of structural changes which neurons undergo over time, and how these changes would impact the static nature of atlases. We describe the anatomical stereotypy between neurons of the same type, highlighting the differences between invertebrates and vertebrates. We review some recent experimental advances in our understanding of anatomical dynamics in adult neural circuits, and how these are modulated by the organism’s experience. In this respect, we discuss some analogies between brain atlases and the sequenced genome and the emerging epigenome. We argue that variability and plasticity of neurons are substantially high, and should thus be considered as integral features of high-resolution digital brain atlases.

  1. The ATLAS distributed analysis system

    Science.gov (United States)

    Legger, F.; Atlas Collaboration

    2014-06-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.
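
    As an illustration of the "dynamically distributes popular data" behaviour mentioned above, the following is a minimal, hypothetical sketch of a popularity-driven replication decision in Python; the threshold, dataset names and site labels are invented and do not reflect the actual ATLAS distributed data management implementation.

        from collections import Counter

        def replication_candidates(access_log, threshold=100):
            """Return datasets whose recent access count exceeds a popularity threshold."""
            counts = Counter(dataset for dataset, _site in access_log)
            return [name for name, n in counts.items() if n >= threshold]

        # Hypothetical access log: (dataset, site) pairs collected over some time window.
        log = [("data12_8TeV.AOD.r1", "SITE_A")] * 120 + [("mc12_8TeV.NTUP.x2", "SITE_B")] * 30
        print(replication_candidates(log))   # -> ['data12_8TeV.AOD.r1']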

  2. Large scale digital atlases in neuroscience

    Science.gov (United States)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.
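
    Since a common coordinate framework is central to the atlases described above, a toy numpy sketch of mapping native-space point coordinates into a shared reference frame with an affine transform is given below; the scale, rotation and translation values are made up for illustration.

        import numpy as np

        # Toy affine mapping from a specimen's native space into a common atlas frame:
        # scale, then rotate about the z axis, then translate (all values are made up).
        scale = np.diag([1.1, 0.9, 1.0])
        theta = np.radians(8.0)
        rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                        [np.sin(theta),  np.cos(theta), 0],
                        [0,              0,             1]])
        shift = np.array([2.0, -1.5, 0.3])

        def to_atlas(points):
            """Map an Nx3 array of native-space coordinates into the common atlas frame."""
            return points @ (rot @ scale).T + shift

        native = np.array([[10.0, 5.0, 3.0], [0.0, 0.0, 0.0]])
        print(np.round(to_atlas(native), 2))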

  3. Automatic Testing and Assessment of Neuroanatomy Using a Digital Brain Atlas: Method and Development of Computer- and Mobile-Based Applications

    Science.gov (United States)

    Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Ananthasubramaniam, Anand; Chua, Beng Choon; Qian, Guoyu; Nowinska, Natalia G.; Marchenko, Yevgen; Volkau, Ihar

    2009-01-01

    Preparation of tests and student's assessment by the instructor are time consuming. We address these two tasks in neuroanatomy education by employing a digital media application with a three-dimensional (3D), interactive, fully segmented, and labeled brain atlas. The anatomical and vascular models in the atlas are linked to "Terminologia…

  4. Evolution of the ATLAS Nightly Build System

    CERN Document Server

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python scripting code written by about 1000 developers. Recent development has focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, test...
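
    A nightly system of this kind essentially iterates over release branches and platforms, running checkout, build and test stages and stopping a chain on the first failure. The sketch below illustrates that control flow only; the branch names, platform tag and echo placeholder commands are hypothetical and not the NICOS implementation.

        import itertools
        import subprocess

        BRANCHES = ["main", "patch-19.0.X"]          # hypothetical branch names
        PLATFORMS = ["x86_64-slc6-gcc48-opt"]        # hypothetical platform tag

        def nightly(branch, platform):
            """Run checkout, build and test for one branch/platform combination."""
            for step in (["echo", "checkout", branch],
                         ["echo", "build", branch, platform],
                         ["echo", "test", branch, platform]):
                if subprocess.call(step) != 0:       # stop the chain on the first failure
                    return False
            return True

        results = {(b, p): nightly(b, p) for b, p in itertools.product(BRANCHES, PLATFORMS)}
        print(results)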

  5. ATLAS Fact Sheet : To raise awareness of the ATLAS detector and collaboration on the LHC

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    Facts on the Detector, Calorimeters, Muon System, Inner Detector, Pixel Detector, Semiconductor Tracker, Transition Radiation Tracker, Surface hall, Cavern, Detector, Magnet system, Solenoid, Toroid, Event rates, Physics processes, Supersymmetric particles, Comparing LHC with Cosmic rays, Heavy ion collisions, Trigger and Data Acquisition (TDAQ), Computing, the LHC and the ATLAS collaboration. This fact sheet also contains images of ATLAS and the collaboration as well as a short list of videos on ATLAS available for viewing.

  6. Computer-aided evaluation as an adjunct to revised BI-RADS Atlas: improvement in positive predictive value at screening breast MRI

    Energy Technology Data Exchange (ETDEWEB)

    Gweon, Hye Mi; Cho, Nariya; Seo, Mirinae; Chu, A. Jung; Moon, Woo Kyung [Seoul National University College of Medicine and Seoul National University Hospital, Department of Radiology, Seoul (Korea, Republic of)

    2014-08-15

    To investigate whether kinetic features via magnetic resonance (MR)-computer-aided evaluation (CAE) can improve the positive predictive value (PPV) of morphological descriptors for suspicious lesions at screening breast MRI. One hundred and sixteen consecutive, suspiciously enhancing lesions detected at contralateral breast MRI screening in 116 women with newly-diagnosed breast cancers were included. Morphological descriptors according to the revised BI-RADS Atlas and kinetic features from MR-CAE were analysed. The PPV of each descriptor was analysed to identify subgroups in which PPV could be improved by the addition of MR-CAE. When biopsy recommendations were downgraded to follow-up in cases with both an absence of enhancement at a 50 % threshold and an absence of delayed washout, PPV increased from 0.328 (95 % CI, 0.249-0.417) to 0.500 (95 % CI, 0.387-0.613). Two ductal carcinoma in situ (DCIS) non-mass enhancement (NME) lesions were missed. Application of the downgrading criteria to foci or masses led to an increase in PPV from 0.310 (95 % CI, 0.216-0.419) to 0.437 (95 % CI, 0.331-0.547) without missing cancers. MR-CAE has the potential to improve the PPV of breast MR imaging by reducing the number of false positives. When suspicious mass lesions show neither enhancement at a 50 % threshold nor delayed washout, follow-up rather than biopsy can be considered. (orig.)
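
    The downgrading rule evaluated above (follow-up instead of biopsy only when a lesion shows neither enhancement at the 50 % threshold nor delayed washout) can be stated compactly in code. The field names below are hypothetical, chosen only to illustrate the logic.

        def recommend(lesion):
            """Apply the kinetic downgrading criterion to a single suspicious lesion."""
            if not lesion["enhances_at_50pct"] and not lesion["delayed_washout"]:
                return "follow-up"
            return "biopsy"

        print(recommend({"enhances_at_50pct": False, "delayed_washout": False}))  # follow-up
        print(recommend({"enhances_at_50pct": True,  "delayed_washout": False}))  # biopsy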

  7. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias

    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Scandinavia and other countries. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed...... by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the LHC Computing Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed among heterogeneous...... environment. Also, the service used for cataloging the location of data files is different from other Grids but must still be useable by DQ2 and ATLAS users to locate data within NDGF. This paper presents in detail how we solve these issues to allow seamless access worldwide to data within NDGF....

  8. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  9. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  10. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase in energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes is described, with the peculiarity that this is a distributed Tier-2 composed of three sites, whose members are involved in ATLAS computing tasks and which serves as a hub of research, innovation and education.

  11. Networks in ATLAS

    Science.gov (United States)

    McKee, Shawn; ATLAS Collaboration

    2017-10-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp-up their need for similar networking it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks. We will report on a number of networking initiatives in ATLAS including participation in the global perfSONAR network monitoring and measuring efforts of WLCG and OSG, the collaboration with the LHCOPN/LHCONE effort, the integration of network awareness into PanDA, the use of the evolving ATLAS analytics framework to better understand our networks and the changes in our DDM system to allow remote access to data. We will also discuss new efforts underway that are exploring the inclusion and use of software defined networks (SDN) and how ATLAS might benefit from:
    • Orchestration and optimization of distributed data access and data movement.
    • Better control of workflows, end to end.
    • Enabling prioritization of time-critical vs. normal tasks.
    • Improvements in the efficiency of resource usage.

  12. Networks in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00260714; The ATLAS collaboration

    2017-01-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp-up their need for similar networking it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks....

  13. Networks in ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2016-01-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp-up their need for similar networking it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks....

  14. ATLAS@Home looks for CERN volunteers

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.   The ATLAS@home outreach website. ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year's run.” ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network ...

  15. Mongolian Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Climatic atlas dated 1985, in Mongolian, with introductory material also in Russian and English. One hundred eight pages in single page PDFs.

  16. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation at the ATLAS experiment at the LHC grow continuously, as more data and more use cases emerge. For data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Recurring patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework with template definitions of the many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required modern computing technologies and approaches. We report the technical details of this development: the database implementation, the server logic and the Web user interface technologies.
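
    To make the idea of template-defined, many-task workflows concrete, here is a small hypothetical sketch of a chained workflow template being expanded into task definitions; the step names and fields are illustrative and not the actual DEFT schema.

        # Hypothetical template for a chained many-task workflow: each step consumes
        # the output dataset of the previous one.
        TEMPLATE = [
            {"step": "evgen", "output": "EVNT"},
            {"step": "simul", "output": "HITS"},
            {"step": "recon", "output": "AOD"},
        ]

        def expand(template, request_id):
            """Turn a workflow template into a list of concrete task definitions."""
            tasks, previous = [], None
            for i, step in enumerate(template):
                tasks.append({"task_id": f"{request_id}.{i}",
                              "step": step["step"],
                              "input": previous,
                              "output": f"{request_id}.{step['output']}"})
                previous = tasks[-1]["output"]
            return tasks

        for task in expand(TEMPLATE, "req0001"):
            print(task)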

  17. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division]; Hlava, K. [Environmental Science Division]; Greenwood, H. [Environmental Science Division]; Carr, A. [Environmental Science Division]

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: A description of each of the components of the Atlas; Lists of the Geographic Information System (GIS) database content and sources; and A brief introduction to the major renewable energy technologies. The Atlas includes the following: A GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  18. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  19. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  20. Alignment of the ATLAS Inner Detector tracking system

    CERN Document Server

    Moles-Valls, R; The ATLAS collaboration

    2009-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift-tube based detectors. These kinds of detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a precision of a few microns. This permits optimal measurement of the parameters of charged-particle trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large-scale computing simulation of the ATLAS detector has been performed, mimicking ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment results for several challenges (real cosmic ray data taking and computing system commissioning) will be also rep...
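
    Track-based alignment ultimately reduces to minimizing track-hit residuals with respect to the alignment parameters. The toy numpy example below estimates one translational alignment constant per module as the mean of simulated residuals; the numbers are invented, and the real ATLAS alignment solves a vastly larger, correlated least-squares problem.

        import numpy as np

        rng = np.random.default_rng(0)
        true_shifts = np.array([0.05, -0.03, 0.02])      # mm, one shift per module (toy values)
        hits_per_module = 1000

        # Simulated residuals: each hit residual = module shift + measurement noise.
        residuals = [s + rng.normal(0, 0.01, hits_per_module) for s in true_shifts]

        # The least-squares estimate of each alignment constant is just the mean residual here.
        estimated = np.array([r.mean() for r in residuals])
        print("estimated shifts (mm):", np.round(estimated, 4))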

  1. Using the Hadoop/MapReduce approach for monitoring the CERN storage system and improving the ATLAS computing model

    CERN Document Server

    Russo, Stefano Alberto; Lamanna, M

    The processing of huge amounts of data, an already fundamental task for research in elementary particle physics, is becoming more and more important also for companies operating in the Information Technology (IT) industry. In this context, if conventional approaches are adopted several problems arise, starting from the congestion of the communication channels. In the IT sector, one of the approaches designed to minimize this congestion is to exploit data locality, or in other words, to bring the computation as close as possible to where the data reside. The most common implementation of this concept is the Hadoop/MapReduce framework. In this thesis work I evaluate the usage of Hadoop/MapReduce in two areas: a standard one similar to typical IT analyses, and an innovative one related to high energy physics analyses. The first consists in monitoring the history of the storage cluster which stores the data generated by the LHC experiments, the second in the physics analysis of the latter, ...
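
    To illustrate the MapReduce pattern the thesis relies on, the following self-contained Python sketch runs a map/shuffle/reduce pass over a few invented storage-log lines; the real work used the Hadoop framework and actual CERN storage logs, so both the log format and the aggregation here are purely illustrative.

        from collections import defaultdict

        LOG = [
            "2012-05-01 read  /castor/user/a/file1 4096",
            "2012-05-01 write /castor/user/b/file2 8192",
            "2012-05-02 read  /castor/user/a/file3 1024",
        ]

        def mapper(line):
            _date, op, _path, nbytes = line.split()
            yield op, int(nbytes)                      # emit (operation, bytes)

        def reducer(key, values):
            return key, sum(values)                    # total bytes per operation

        # Shuffle: group mapper output by key, then reduce each group.
        groups = defaultdict(list)
        for line in LOG:
            for key, value in mapper(line):
                groups[key].append(value)
        print(dict(reducer(k, vs) for k, vs in groups.items()))   # {'read': 5120, 'write': 8192}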

  2. The ATLAS Glasgow Overview Week

    CERN Multimedia

    Richard Hawkings

    2007-01-01

    The ATLAS Overview Weeks always provide a good opportunity to see the status and progress throughout the experiment, and the July week at Glasgow University was no exception. The setting, amidst the traditional buildings of one of the UK's oldest universities, provided a nice counterpoint to all the cutting-edge research and technology being discussed. And despite predictions to the contrary, the weather at these northern latitudes was actually a great improvement on the previous few weeks in Geneva. The meeting sessions comprehensively covered the whole ATLAS project, from the subdetector and TDAQ systems and their commissioning, through to offline computing, analysis and physics. As a long-time ATLAS member who remembers plenary meetings in 1991 with 30 people drawing detector layouts on a whiteboard, the hardware and installation sessions were particularly impressive - to see how these dreams have been translated into 7000 tons of reality (and with attendant cabling, supports and services, which certainly...

  3. 28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

  4. 23 April 2010 - Her Majesty’s Ambassador to Switzerland and Liechtenstein, United Kingdom of Great Britain and Northern Ireland, S. Gillett CMG CVO, accompanied by Beams Department Head P. Collier, visiting the ATLAS control room with Collaboration Deputy Spokesperson, University of Birmingham, D. Charlton and signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    23 April 2010 - Her Majesty’s Ambassador to Switzerland and Liechtenstein, United Kingdom of Great Britain and Northern Ireland, S. Gillett CMG CVO, accompanied by Beams Department Head P. Collier, visiting the ATLAS control room with Collaboration Deputy Spokesperson, University of Birmingham, D. Charlton and signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

  5. 28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

    CERN Multimedia

    Gadmer, Jean-Claude

    2014-01-01

    28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

  6. 14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti,visiting the SM18 area with G. De Rijk,the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International relations F. Pauss

    CERN Multimedia

    Jean-claude Gadmer

    2011-01-01

    14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti,visiting the SM18 area with G. De Rijk,the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International relations F. Pauss

  7. 13th March 2009 - Comenius University Bratislava Rector F. Gaher visiting ALICE exhibition at Point 2 with Collaboration Spokesperson J. Schukraft and Senior physicist K. Safarik; visiting the LHC tunnel at Point 1 with ATLAS Collaboration Former Spokesperson P. Jenni; signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    13th March 2009 - Comenius University Bratislava Rector F. Gaher visiting ALICE exhibition at Point 2 with Collaboration Spokesperson J. Schukraft and Senior physicist K. Safarik; visiting the LHC tunnel at Point 1 with ATLAS Collaboration Former Spokesperson P. Jenni; signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

  8. 30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

  9. 11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Jean-Claude Gadmer

    2011-01-01

    11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

  10. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to `transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data-driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...
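
    The data-driven substep selection described above can be illustrated with a toy step graph: given an input type and the requested outputs, only the intermediate substeps actually needed are scheduled. The chain below (RAW to ESD to AOD to NTUP) follows the example in the abstract, but the code itself is a hypothetical sketch, not the ATLAS Job Transform framework.

        # Hypothetical substeps as (input type -> output type) edges of a linear chain.
        SUBSTEPS = [("RAW", "ESD"), ("ESD", "AOD"), ("AOD", "NTUP")]

        def plan(input_type, wanted_outputs):
            """Return the ordered list of substeps required to produce the wanted outputs."""
            needed, current = [], input_type
            for src, dst in SUBSTEPS:
                if set(wanted_outputs) <= {d for _, d in needed}:
                    break                              # everything requested is already produced
                if src == current:
                    needed.append((src, dst))
                    current = dst
            return needed

        print(plan("RAW", ["AOD"]))        # [('RAW', 'ESD'), ('ESD', 'AOD')]
        print(plan("ESD", ["NTUP"]))       # [('ESD', 'AOD'), ('AOD', 'NTUP')]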

  11. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to 'transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data-driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...

  12. Nova Scotia wind atlas

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2008-07-01

    In order to stimulate growth of the wind energy sector in the province of Nova Scotia and to optimize the development of an important renewable energy source in the province, the Nova Scotia Department of Energy has launched the Nova Scotia wind atlas project. The atlas provides valuable information for identifying the optimal locations to install wind farm turbines, both at the large utility scale and at the private or small business level. This article presented information on the wind atlas website and on wind resource maps. Background information on the project was presented. The wind resource maps were developed in partnership between the K.C. Irving Chair in Sustainable Development at Moncton University and the Applied Geomatics Research Group at the Nova Scotia Community College. The wind resource maps are available for viewing on the website, where users can click on a tile section to obtain enlarged versions of wind resource maps for different parts of the province of Nova Scotia. The maps were developed using computer modelling. 7 figs.

  13. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    Science.gov (United States)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both for recorded past events and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities under risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, which is a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both the temporal and the spatial spreading characteristics of each simulation remains important. The eye of the human observer remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes in many variables, including simulation end-parameters. Whenever new, improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results within a model iteration in little time. This is a significant improvement over linear processing on dedicated desktop machines or servers. It allows for accelerated visual quality-checking iterations, which in turn feed back positively into the iterative improvement of the overall model. An approach to set

  14. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
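
    For reference, the two overlap metrics reported above can be computed from binary segmentation masks as in the short numpy sketch below; the toy masks are invented.

        import numpy as np

        def jaccard(a, b):
            inter = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            return inter / union if union else 1.0

        def dice(a, b):
            inter = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 2 * inter / total if total else 1.0

        seg = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)   # toy predicted mask
        ref = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)   # toy reference mask
        print(round(jaccard(seg, ref), 3), round(dice(seg, ref), 3))   # 0.5 0.667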

  15. Production Experience with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2016-01-01

    The ATLAS Event Service (ES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the ES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Goggle Comput...
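
    The fine-grained dispatch described above amounts to splitting a job's events into small ranges and handing the next unprocessed range to whichever worker asks for work. The queue-based sketch below illustrates only that idea; the function names and range format are hypothetical, not the PanDA/Event Service protocol.

        from queue import Queue

        def make_ranges(n_events, chunk):
            """Split [0, n_events) into (first, last) event ranges of at most `chunk` events."""
            return [(i, min(i + chunk, n_events) - 1) for i in range(0, n_events, chunk)]

        ranges = Queue()
        for r in make_ranges(1000, 100):
            ranges.put(r)

        def next_range(worker_id):
            """Hand the next unprocessed event range to a worker, or None when all are done."""
            return None if ranges.empty() else ranges.get()

        print(next_range("worker-1"))   # (0, 99)
        print(next_range("worker-2"))   # (100, 199)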

  16. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  17. The ATLAS Production System Evolution

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration

    2017-01-01

    The second generation of the ATLAS Production System called ProdSys2 is a distributed workload manager that runs daily hundreds of thousands of jobs, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition based upon many criteria, such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies and by supporting different kinds of computational resources, such as GRID, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system. The Production System has a sophisticated job fault recovery mechanism, which efficiently allows running multi-terabyte tasks without human intervention. We have implemented new features which allow automatic task submission and chaining of differe...
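
    Dynamic job definition of the kind mentioned above can be pictured as packing a task's input files into jobs subject to resource limits. The sketch below groups files by hypothetical event-count and input-size limits; the limits, file metadata and splitting rule are invented for illustration.

        # Hypothetical per-file metadata for one task's input dataset.
        FILES = [{"name": f"input.{i:04d}.root", "events": 500, "size_gb": 1.2} for i in range(10)]

        def split(files, max_events=2000, max_size_gb=5.0):
            """Group input files into jobs respecting event-count and input-size limits."""
            jobs, current, ev, gb = [], [], 0, 0.0
            for f in files:
                if current and (ev + f["events"] > max_events or gb + f["size_gb"] > max_size_gb):
                    jobs.append(current)
                    current, ev, gb = [], 0, 0.0
                current.append(f["name"])
                ev += f["events"]
                gb += f["size_gb"]
            if current:
                jobs.append(current)
            return jobs

        for i, job in enumerate(split(FILES)):
            print(f"job {i}: {len(job)} files")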

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping up to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  1. SUSY (ATLAS)

    CERN Document Server

    Sopczak, Andre; The ATLAS collaboration

    2017-01-01

    During the data-taking period at the LHC (Run-II), several searches for supersymmetric particles were performed. The results from searches by the ATLAS collaboration are concisely reviewed. Model-independent and model-dependent limits on new particle production are set, and interpretations in supersymmetric models are given.

  2. ATLAS Story

    CERN Multimedia

    AUTHOR|(CDS)2108663

    2012-01-01

    This film, produced in July 2012, explains how fundamental research connects to society and what benefits a collaborative way of working can generate in the future, using the ATLAS Collaboration as a case study. The film is intellectually inspired by the book "Collisions and Collaboration" (OUP) by Max Boisot (ed.), see: collisionsandcollaboration.com. The film is directed by Andrew Millington (OMNI Communications)

  3. SUSY (ATLAS)

    CERN Document Server

    Sopczak, Andre; The ATLAS collaboration

    2017-01-01

    During the LHC Run-II data-taking period, several searches for supersymmetric particles were performed by the ATLAS collaboration. The results from these searches are concisely reviewed. Model-independent and model-dependent limits on new particle production are set, and interpretations in supersymmetric models are given.

  4. ATLAS Thesis Award 2017

    CERN Multimedia

    Anthony, Katarina

    2018-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on 22 February, 2018. They are pictured here with Karl Jakobs (ATLAS Spokesperson), Max Klein (ATLAS Collaboration Board Chair) and Katsuo Tokushuku (ATLAS Collaboration Board Deputy Chair).

  5. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, the general public, policy makers, students and teachers, and the media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb-1 of ATLAS data.

  6. ATLAS Data Preservation Policy

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The principal intent of this document is to describe the ATLAS policy ensuring that its data are maintained reliably in a form accessible to ATLAS members. A separate document describes the ATLAS policy for making its data available, and potentially useful, to scientists who are not members of ATLAS.

  7. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2017-01-01

    Changes in the trigger menu, the online algorithmic event-selection of the ATLAS experiment at the LHC, are followed by adjustments to the ATLAS trigger monitoring systems. During Run 1, and so far in Run 2, ATLAS has deployed monitoring updates with the installation of new software releases at Tier-0, the first level of the ATLAS computing grid. Having to wait for a new software release to be installed at Tier-0, in order to update ATLAS offline trigger monitoring configurations, results in a lag with respect to the modification of the trigger menu. We present the design and implementation of a `trigger menu-aware' monitoring system that aims to simplify the ATLAS operational workflows by allowing monitoring configuration changes to be made at the Tier-0 site by utilising an Oracle SQL database.
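
    The essence of the menu-aware approach is that monitoring configurations are keyed by trigger-menu name in a database rather than baked into a software release, so a Tier-0 job can pick up the configuration matching the menu it is processing. The sketch below illustrates this lookup pattern with sqlite3 standing in for the Oracle database; the schema, menu name and configuration payload are invented for the example.

      # Sketch of menu-keyed monitoring configuration stored in a database, so a
      # configuration change needs no new software release. sqlite3 stands in for
      # the Oracle database; the schema, menu name and payload are invented.
      import json
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("""CREATE TABLE menu_monitoring (
                          menu_name TEXT PRIMARY KEY,
                          config    TEXT NOT NULL)""")

      # An operator uploads an updated configuration for a given trigger menu.
      config = {"HLT_example_chain": {"histograms": ["pt", "eta"], "reference": "run_000000"}}
      conn.execute("INSERT OR REPLACE INTO menu_monitoring VALUES (?, ?)",
                   ("Physics_pp_example_menu", json.dumps(config)))
      conn.commit()

      def monitoring_config(menu_name):
          """Return the monitoring configuration matching the given trigger menu."""
          row = conn.execute("SELECT config FROM menu_monitoring WHERE menu_name = ?",
                             (menu_name,)).fetchone()
          return json.loads(row[0]) if row else {}

      # A Tier-0 monitoring job looks up the configuration for the menu used in
      # the run it is processing, instead of waiting for a release update.
      print(monitoring_config("Physics_pp_example_menu"))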

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  9. Simulation strategies for the LHC ATLAS experiment

    CERN Document Server

    Buckley, A; The ATLAS collaboration

    2010-01-01

    The ATLAS experiment, operational at the new LHC collider, is fully simulated using the Geant4 tool. The simulation program has been built within the ATLAS common framework Athena. The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. The latest developments aim to represent the reality of the detector in as much detail as possible, and provide increased functionality and robustness. The full process is constantly monitored and profiled. Increased performance guarantees the best use of available resources without any degradation in the quality and accuracy of the simulation itself. In the presentation emphasis is...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger at roughly 11 MB per event of RAW. The central collisions are more complex and...

  11. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride; edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  12. Production experience with the ATLAS Event Service

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00066086; The ATLAS collaboration; Calafiura, Paolo; Childers, John Taylor; De, Kaushik; Guan, Wen; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Wenaus, Torre

    2017-01-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comp...
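
    The core pattern of the Event Service is fine-grained dispatch: a task is split into small event ranges, workers pull ranges as capacity becomes available, and every completed output is uploaded immediately so that losing a transient resource costs at most one range. The Python sketch below illustrates that pattern only; the dispatcher, worker loop and in-memory "object store" are stand-ins, not the AES implementation.

      # Sketch of fine-grained event-range dispatch with immediate output upload.
      # The dispatcher, worker loop and in-memory "object store" are stand-ins,
      # not the actual Event Service components.
      import threading
      from queue import Queue, Empty

      def make_ranges(total_events, range_size):
          """Split a task of total_events into (first, last) event ranges."""
          return [(i, min(i + range_size, total_events) - 1)
                  for i in range(0, total_events, range_size)]

      def process_range(first, last):
          """Stand-in for the payload application processing one event range."""
          return f"output for events {first}-{last}"

      def worker(range_queue, object_store, worker_id):
          """Pull ranges until none are left, uploading each output as soon as it
          is ready so that little work is lost if the resource disappears."""
          while True:
              try:
                  first, last = range_queue.get_nowait()
              except Empty:
                  return
              key = f"worker{worker_id}/events_{first:06d}_{last:06d}"
              object_store[key] = process_range(first, last)

      if __name__ == "__main__":
          ranges = Queue()
          for r in make_ranges(total_events=100, range_size=10):
              ranges.put(r)
          store = {}                                 # stand-in for an object store
          threads = [threading.Thread(target=worker, args=(ranges, store, wid))
                     for wid in range(4)]            # e.g. four opportunistic slots
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          print(len(store), "outputs uploaded")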

  13. ATLAS Distributed Analysis Tools

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Liko, Dietrich

    2008-01-01

    The ATLAS production system has been successfully used to run production of simulation data at an unprecedented scale. Up to 10000 jobs were processed in one day. The experience obtained operating the system on several grid flavours was essential for performing user analysis using grid resources. First tests of the distributed analysis system were then performed. In the preparation phase data was registered in the LHC File Catalog (LFC) and replicated to external sites. For the main test, only a few resources were used. All these tests are only a first step towards the validation of the computing model. The ATLAS management computing board decided to integrate the collaboration's efforts in distributed analysis into a single project, GANGA. The goal is to test the reconstruction and analysis software in a large-scale data production using Grid flavours at several sites. GANGA allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid; it provides job splitting a...

  14. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase in energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that this is a distributed Tier-2 composed of three sites, whose members are involved in ATLAS computing tasks within a hub of research, innovation and education.

  15. ATLAS Recordings

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials from 2005 until this past month are available via the University of Michigan portal here. Most recent additions include the Trigger-Aware Analysis Tutorial by Monika Wielers on March 23 and the ROOT Workshop held at CERN on March 26-27. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Feedback welcome: our group is making arrangements now to record plenary sessions, tutorials, and other important ATLAS events for 2007. Your suggestions for potential recordings, as well as your feedback on existing archives, are always welcome. Please contact us at wlap@umich.edu. Thank you, and enjoy the lectures!

  16. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness of the Tier0, processing a massive number of very large files and writing to tape at high speed. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful tests prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  1. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  2. Algorithm Acceleration from GPGPUs for the ATLAS Upgrade

    CERN Document Server

    Washbrook, A; The ATLAS collaboration

    2010-01-01

    The future upgrades to the LHC are expected to increase the design luminosity by an order of magnitude, leading to new computational challenges for the ATLAS experiment. One such challenge will be the ability to handle a much higher rate of interesting physics events by the ATLAS High Level Trigger system. We will present results from the adoption of General Purpose Graphics Processing Units (GPGPUs) to provide computational acceleration for key algorithms in the ATLAS Inner Detector Trigger. The z-finder algorithm - used to determine the accurate z position of primary interactions - and the Kalman Filter based track reconstruction routine have been adapted for GPGPU execution using the CUDA parallel computing architecture. We describe the programming and benchmarking methods used and demonstrate the relative throughput performance for different trigger scenarios. Where a significant performance boost is found, we will outline how GPGPU acceleration could be exploited and incorporated into the future ATLAS comput...
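
    To make the z-finder concrete: a common approach to locating the primary interaction is to histogram the z coordinates of track candidates extrapolated to the beamline and take the most populated bin, a task that maps naturally onto GPU parallelism (one track per thread filling a shared histogram). The NumPy sketch below shows only the histogramming idea on toy data; it is not the ATLAS trigger code, and the bin width and toy parameters are arbitrary.

      # Toy illustration of a histogram-based z-finder: extrapolate tracks to the
      # beamline, histogram their z0 values, and take the peak bin as the vertex
      # estimate. This is a NumPy sketch, not the ATLAS trigger implementation.
      import numpy as np

      rng = np.random.default_rng(42)

      # Toy input: 200 tracks from a vertex at z = +35 mm plus 300 pile-up/fake tracks.
      z0_signal = rng.normal(loc=35.0, scale=0.2, size=200)
      z0_background = rng.uniform(-150.0, 150.0, size=300)
      z0 = np.concatenate([z0_signal, z0_background])

      def find_vertex_z(z0_values, z_range=(-150.0, 150.0), bin_width=1.0):
          """Return the centre of the most populated z bin."""
          edges = np.arange(z_range[0], z_range[1] + bin_width, bin_width)
          counts, edges = np.histogram(z0_values, bins=edges)
          peak = int(np.argmax(counts))
          return 0.5 * (edges[peak] + edges[peak + 1])

      print(f"estimated vertex z = {find_vertex_z(z0):.1f} mm")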

  3. Simulation of the heat transfer around the ATLAS muon chambers

    CERN Multimedia

    2005-01-01

    This 2D simulation of the ATLAS muon chambers was recently carried out by a small team of CERN engineers specialising in the numerical computation of fluid dynamics, in other words the flow of fluids and heat.

  4. ATLAS PhD Grants 2015

    CERN Multimedia

    Marcelloni De Oliveira, Claudia

    2015-01-01

    ATLAS PhD Grants - We are excited to announce the creation of a dedicated grant scheme (thanks to a donation from Fabiola Gianotti and Peter Jenni following their award from the Fundamental Physics Prize foundation) to encourage young and high-caliber doctoral students in particle physics research (including computing for physics) and permit them to obtain world class exposure, supervision and training within the ATLAS collaboration. This special PhD Grant is aimed at graduate students preparing a doctoral thesis in particle physics (incl. computing for physics), who will spend one year at CERN followed by one year of support at their home institute.

  5. ATLAS Fast Tracker Simulation Challenges

    CERN Document Server

    Adelman, Jahred; The ATLAS collaboration; Borodin, Mikhail; Chakraborty, Dhiman; García Navarro, José Enrique; Golubkov, Dmitry; Kama, Sami; Panitkin, Sergey; Smirnov, Yuri; Stewart, Graeme; Tompkins, Lauren; Vaniachine, Alexandre; Volpi, Guido

    2015-01-01

    To deal with the Big Data flood from the ATLAS detector, most events have to be rejected in the trigger system. The trigger rejection is complicated by the presence of a large number of minimum-bias events – the pileup. To limit pileup effects in the high luminosity environment of the LHC Run-2, ATLAS relies on full tracking provided by the Fast TracKer (FTK) implemented with custom electronics. The FTK data processing pipeline has to be simulated in preparation for LHC upgrades to support electronics design and develop trigger strategies at high luminosity. The simulation of the FTK - a highly parallelized system - has inherent performance bottlenecks on general-purpose CPUs. To take advantage of Grid Computing power, the FTK simulation is integrated with Monte Carlo simulations at the Production System level above the ATLAS workload management system PanDA. We report on ATLAS experience with FTK simulations on the Grid and next steps for accommodating the growing requirements for resources during the LHC R...

  6. A Lego version of ATLAS

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    There's nothing very unusual about a small child making simple objects out of Lego. But wouldn't you be surprised to learn that one six-year old has just made a life-like model of the ATLAS detector?   Bastian with his Lego ATLAS detector. © Photo provided by Kai Nicklas, Bastian's father. It all began a month ago when the boy's father was watching a video about the construction of the ATLAS detector on the Internet. He hadn't noticed that his son was watching it over his shoulder. The small boy was fascinated by what he was seeing on the computer screen and his first reaction was to exclaim: "Wow! That's a terrific machine! I think the people who built it must be really clever." The detector must have really fired his imagination because, after asking his father a few questions, he decided to make a Lego model of it. Look at the photo and you will see how closely the model he produced resembles the actual ATLAS detector. Is the little boy in question, Bastia...

  7. ATLAS Distributed Data Analysis: performance and challenges

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  8. ATLAS Distributed Data Analysis: challenges and performance

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid Computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  11. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  12. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  13. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  14. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  15. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  17. Energy Frontier Research With ATLAS: Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Butler, John [Boston Univ., MA (United States); Black, Kevin [Boston Univ., MA (United States); Ahlen, Steve [Boston Univ., MA (United States)

    2016-06-14

    The Boston University (BU) group is playing key roles across the ATLAS experiment: in detector operations, the online trigger, the upgrade, computing, and physics analysis. Our team has been critical to the maintenance and operations of the muon system since its installation. During Run 1 we led the muon trigger group and that responsibility continues into Run 2. BU maintains and operates the ATLAS Northeast Tier 2 computing center. We are actively engaged in the analysis of ATLAS data from Run 1 and Run 2. Physics analyses we have contributed to include Standard Model measurements (W and Z cross sections, tt̄ differential cross sections, WWW* production), evidence for the Higgs decaying to τ⁺τ⁻, and searches for new phenomena (technicolor, Z' and W', vector-like quarks, dark matter).

  18. EnviroAtlas

    Data.gov (United States)

    City and County of Durham, North Carolina — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  19. ATLAS experimentet

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    The film contains a great deal of information about physics and about why the LHC, together with large detectors and in particular the ATLAS Experiment, is needed. A very good film for explaining the unknown that is being investigated at CERN, in order to answer questions that people have been trying to explain for several thousand years.

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites was redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  1. Global Data Grid Efforts for ATLAS

    CERN Multimedia

    Gardner, R.

    2001-01-01

    Over the past two years computational data grids have emerged as a promising new technology for large scale, data-intensive computing required by the LHC experiments, as outlined by the recent "Hoffman" review panel that addressed the LHC computing challenge. The problem essentially is to seamlessly link physicists to petabyte-scale data and computing resources, distributed worldwide, and connected by high-bandwidth research networks. Several new collaborative initiatives in Europe, the United States, and Asia have formed to address the problem. These projects are of great interest to ATLAS physicists and software developers since their objective is to offer tools that can be integrated into the core ATLAS application framework for distributed event reconstruction, Monte Carlo simulation, and data analysis, making it possible for individuals and groups of physicists to share information, data, and computing resources in new ways and at scales not previously attempted. In addition, much of the distributed IT...

  2. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: the Atlas Physics Workshop (6-11 June 2005) and the June 2005 ATLAS Week Plenary Session. Click here to browse WLAP for all ATLAS lectures.

  3. Berliner Philarmoniker ATLAS visit

    CERN Multimedia

    ATLAS Collaboration

    2017-01-01

    The Berliner Philarmoniker is on tour through Europe. They stopped on June 27th in Geneva for a concert at the Victoria Hall. An ATLAS visit was organised the morning after, led by the ATLAS spokesperson Karl Jakobs (welcome and overview talk) and two ATLAS guides (AVC visit and 3D movie).

  4. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  5. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  6. Big Data tools as applied to ATLAS event data

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225336; The ATLAS collaboration; Gardner, Robert; Bryant, Lincoln

    2017-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Logfiles, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and associated analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data. Such modes would simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning environments and to...

  7. Big Data Analytics Tools as Applied to ATLAS Event Data

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of big data, statistical and machine learning tools...

  8. ATLAS TDAQ application gateway upgrade during LS1

    CERN Document Server

    KOROL, A; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, A C; DUBROV, S; HAFEEZ, M; LEE, C J; SCANNICCHIO, D A; TWOMEY, M; VORONKOV, A; ZAYTSEV, A

    2014-01-01

    The ATLAS Gateway service is implemented with a set of dedicated computer nodes to provide fine-grained access control between the CERN General Public Network (GPN) and the ATLAS Technical Control Network (ATCN). ATCN connects the ATLAS online farm used for ATLAS Operations and data taking, including the ATLAS TDAQ (Trigger and Data Acquisition) and DCS (Detector Control System) nodes. In particular, it provides restricted access to the web services (proxy), general login sessions (via SSH and RDP protocols), NAT and mail relay from ATCN. At the Operating System level the implementation is based on virtualization technologies. Here we report on the Gateway upgrade during the Long Shutdown 1 (LS1) period: it includes the transition to the latest production release of the CERN Linux distribution (SLC6), the migration to the centralized configuration management system (based on Puppet) and the redesign of the internal system architecture.

  9. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas
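
    The two-stage selection can be summarised as: score every atlas with a cheap metric after coarse registration, keep an augmented subset, then re-score only that subset with an expensive metric after full registration and keep the final fusion set. The sketch below captures this filter-then-refine structure on toy data; the registration/similarity functions and the subset sizes are placeholders, not the authors' implementation.

      # Sketch of two-stage atlas subset selection: a cheap pass prunes the full
      # atlas collection, an expensive pass refines the survivors. The
      # "registration" and similarity functions are placeholders for the real
      # (costly) operations, not the authors' implementation.
      import numpy as np

      rng = np.random.default_rng(0)

      def coarse_register_and_score(target, atlas):
          """Stage 1: low-cost registration plus preliminary relevance metric."""
          return -float(np.mean(np.abs(target - atlas)))        # e.g. negative MAE

      def fine_register_and_score(target, atlas):
          """Stage 2: full-fledged registration plus refined relevance metric."""
          return float(np.corrcoef(target.ravel(), atlas.ravel())[0, 1])

      def select_fusion_set(target, atlases, augmented_size, fusion_size):
          # Stage 1: rank all atlases cheaply and keep an augmented subset large
          # enough that the truly relevant atlases survive with high probability.
          coarse = sorted(range(len(atlases)),
                          key=lambda i: coarse_register_and_score(target, atlases[i]),
                          reverse=True)[:augmented_size]
          # Stage 2: expensive scoring only on the augmented subset.
          return sorted(coarse,
                        key=lambda i: fine_register_and_score(target, atlases[i]),
                        reverse=True)[:fusion_size]

      if __name__ == "__main__":
          target = rng.random((32, 32))
          atlases = [target + rng.normal(0, s, (32, 32)) for s in np.linspace(0.05, 1.0, 40)]
          print("selected atlas indices:", select_fusion_set(target, atlases, 10, 3))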

  10. ATLAS Recordings

    CERN Multimedia

    Jeremy Herr; Homer A. Neal; Mitch McLachlan

    The University of Michigan Web Archives for the 2006 ATLAS Week Plenary Sessions, as well as the first of 2007, are now online. In addition, there are a wide variety of Software and Physics Tutorial sessions, recorded over the past couple of years, to choose from. All ATLAS-specific archives are accessible here. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Shaping Collaboration 2006: The Michigan group is happy to announce a complete set of recordings from the Shaping Collaboration conference held last December at the CICG in Geneva. The event hosted a mix of Collaborative Tool experts and LHC Users, and featured presentations by the CERN Deputy Director General, Prof. Jos Engelen, the President of Internet2, and chief developers from VRVS/EVO, WLAP, and other tools...

  11. EnviroAtlas - Green Bay, WI - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Green Bay, WI Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  12. EnviroAtlas - Paterson, NJ - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Paterson, NJ Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  13. EnviroAtlas - Portland, ME - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Portland, ME Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  14. ATLAS25: Facebook Live Events

    CERN Multimedia

    CERN

    2017-01-01

    This video is a montage of the 5 Facebook Live events that were broadcast on 2nd October 2017, to celebrate ATLAS25. For more details visit: http://atlas.cern/updates/atlas-news/celebrating-25-years-discovery

  15. ATLAS & Google — "Data Ocean" R&D Project

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    ATLAS is facing several challenges with respect to its computing requirements for LHC Run-3 (2020-2023) and the HL-LHC runs (2025-2034). The challenges are not specific to ATLAS and/or the LHC, but are common to the HENP computing community. Most importantly, storage continues to be the driving cost factor and at the current growth rate cannot absorb the increased physics output of the experiment. Novel computing models with a more dynamic use of storage and computing resources need to be considered. This proposal aims to start an R&D project for evaluating and adopting novel IT technologies for HENP computing. ATLAS and Google plan to launch an R&D project to integrate Google cloud resources (Storage and Compute) into the ATLAS distributed computing environment. After a series of teleconferences, a face-to-face brainstorming meeting in Denver, CO at the Supercomputing 2017 conference resulted in this proposal for a first prototype of the "Data Ocean" project. The idea is threefold: (a) to allow ATLAS to explore the...

  16. Alignment of the ATLAS Inner Detector

    CERN Document Server

    Haertel, R

    2007-01-01

    The ATLAS experiment at the LHC is currently under construction at CERN and will start operation in summer 2008. The Inner Detector of ATLAS is designed to measure the momentum of charged particles and to reconstruct primary and secondary vertices. It consists of a silicon pixel detector, a silicon strip detector and a straw tube detector. For optimal performance of the Inner Detector the position of all active detector elements must be known with a precision of a few microns. The ultimate precision will be reached with a track-based alignment algorithm. The different alignment methods currently investigated for the ATLAS Inner Detector are presented, as well as the various computational aspects regarding track-based alignment. Results from simulation studies as well as results from testbeam and cosmic ray detector setups are shown and discussed.
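
    In its simplest "local" form, track-based alignment iterates two steps: fit tracks assuming the current detector positions, then shift each element by the mean of its track-hit residuals. The toy NumPy example below illustrates that loop for straight tracks crossing four parallel layers; it is a pedagogical sketch, not one of the ATLAS Inner Detector algorithms, and the chosen misalignment deliberately contains no overall shift or tilt, since such "weak modes" are absorbed into the track parameters and cannot be determined from residuals alone.

      # Toy illustration of iterative track-based alignment: straight tracks cross
      # four parallel layers whose x offsets are unknown; each iteration fits the
      # tracks with the current alignment and then corrects every layer by its
      # mean residual. Pedagogical sketch only, not an ATLAS algorithm.
      import numpy as np

      rng = np.random.default_rng(1)
      layer_z = np.array([1.0, 2.0, 3.0, 4.0])
      # Chosen with no net shift or tilt: such "weak modes" would be absorbed
      # into the track parameters and are invisible to residuals.
      true_misalignment = np.array([0.03, -0.03, -0.03, 0.03])   # mm

      # Simulate measured hits x = a + b*z per track, in the misaligned frame.
      n_tracks = 500
      a = rng.normal(0.0, 1.0, n_tracks)
      b = rng.normal(0.0, 0.1, n_tracks)
      hits = (a[:, None] + b[:, None] * layer_z + true_misalignment
              + rng.normal(0.0, 0.01, (n_tracks, 4)))

      alignment = np.zeros(4)                                    # current corrections
      design = np.vstack([np.ones_like(layer_z), layer_z]).T     # straight-line model
      for _ in range(5):
          corrected = hits - alignment
          params, *_ = np.linalg.lstsq(design, corrected.T, rcond=None)
          residuals = corrected - (design @ params).T
          alignment += residuals.mean(axis=0)                    # "local" alignment update

      print("true misalignment     :", true_misalignment)
      print("recovered corrections :", np.round(alignment, 3))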

  17. Using containers with ATLAS offline software

    CERN Document Server

    Vogel, Marcelo; The ATLAS collaboration

    2017-01-01

    This paper describes the deployment of ATLAS offline software in containers for software development. For this we are using Docker, which is a lightweight virtualization technology that encapsulates a piece of software inside a complete file system. The deployment of offline releases via containers removes the strict requirement of compatibility between the runtime environment needed for job execution and the configuration of worker nodes at computing sites. If these two are decoupled from each other, sites can upgrade their nodes whenever and however they see fit. In this work, ATLAS software is distributed in containers either via the CernVM File System (CVMFS) or by means of a full ATLAS offline release installation. In software development, separating the build and runtime environment from the development environment allows users to take advantage of many modern code development tools that may not be available in production runtime setups like SLC6. It also frees developers from depending on resources lik...

  18. The SysteMHC Atlas project

    DEFF Research Database (Denmark)

    Shao, Wenguang; Pedrioli, Patrick G. A.; Wolski, Witold

    2018-01-01

    to enable better collaborations among researchers, to advance the field more efficiently and to establish quality measures required for the meaningful comparison of datasets. Here we present the SysteMHC Atlas (https://systemhcatlas.org), a public database that aims at collecting, organizing, sharing......, visualizing and exploring immunopeptidomic data generated by MS. The Atlas includes raw mass spectrometer output files collected from several laboratories around the globe, a catalog of context-specific datasets of MHC class I and class II peptides, standardized MHC allele-specific peptide spectral libraries...... consisting of consensus spectra calculated from repeat measurements of the same peptide sequence, and links to other proteomics and immunology databases. The SysteMHC Atlas project was created and will be further expanded using a uniform and open computational pipeline that controls the quality of peptide...

  19. 11 March 2009 - Italian Minister of Education, University and Research M. Gelmini, visiting ATLAS and CMS underground experimental areas and LHC tunnel with Director for Research and Scientific Computing S. Bertolucci. Signature of the guest book with CERN Director-General R. Heuer and S. Bertolucci at CMS Point 5.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Members of the Ministerial delegation: Cons. Amb. Sebastiano FULCI, Diplomatic Advisor; Dott.ssa Elisa GREGORINI, Private Secretary to the Minister; Dott. Massimo ZENNARO, Head of Press Relations; Prof. Roberto PETRONZIO, President of INFN (Istituto Nazionale di Fisica Nucleare); Dott. Luciano CRISCUOLI, Director General for Research, MIUR; Dott. Andrea MARINONI, Scientific Advisor to the Minister. CERN delegation present throughout the programme: Prof. Sergio Bertolucci, Director for Research and Scientific Computing; Prof. Fabiola Gianotti, ATLAS Collaboration Spokesperson; Prof. Paolo Giubellino, ALICE Deputy Spokesperson, Universita & INFN, Torino; Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson, INFN Pisa; Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader. Guests in the ATLAS exhibition area: Dr Marcello Givoletti, President of CAEN; Dr Davide Malacalza, President of ASG Ansaldo Superconductors; and users: Prof. Clara Matteuzzi, LHCb Collaboration, Universita' d...

  20. Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    CERN Document Server

    Lopez-Perez, Juan Antonio; Salt, Jose; Ros, Eduardo

    2008-01-01

    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and some questions remain unanswered so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about $1~\mathrm{TeV}^{-1}$, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of some of such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the cou...

  1. Use of Anisotropy, 3D Segmented Atlas, and Computational Analysis to Identify Gray Matter Subcortical Lesions Common to Concussive Injury from Different Sites on the Cortex.

    Directory of Open Access Journals (Sweden)

    Praveen Kulkarni

    Traumatic brain injury (TBI) can occur anywhere along the cortical mantle. While the cortical contusions may be random and disparate in their locations, the clinical outcomes are often similar and difficult to explain. Thus a question that arises is: do concussions at different sites on the cortex affect similar subcortical brain regions? To address this question we used a fluid percussion model to concuss the right caudal or rostral cortices in rats. Five days later, diffusion tensor MRI data were acquired for indices of anisotropy (IA) for use in a novel method of analysis to detect changes in gray matter microarchitecture. IA values from over 20,000 voxels were registered into a 3D segmented, annotated rat atlas covering 150 brain areas. Comparisons between left and right hemispheres revealed a small population of subcortical sites with altered IA values. Rostral and caudal concussions were of striking similarity in the impacted subcortical locations, particularly the central nucleus of the amygdala, laterodorsal thalamus, and hippocampal complex. Subsequent immunohistochemical analysis of these sites showed significant neuroinflammation. This study presents three significant findings that advance our understanding and evaluation of TBI: (1) the introduction of a new method to identify highly localized disturbances in discrete gray matter, subcortical brain nuclei without postmortem histology; (2) the use of this method to demonstrate that separate injuries to the rostral and caudal cortex produce the same subcortical disturbances; and (3) the central nucleus of the amygdala, critical in the regulation of emotion, is vulnerable to concussion.
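
    A minimal sketch of the kind of region-wise left/right comparison described above, assuming a co-registered IA map and an integer-labelled atlas volume; the label pairing and the use of Welch's t-test are illustrative choices, not the study's exact analysis.

        # Sketch: compare indices of anisotropy (IA) between mirror-image atlas
        # regions of the two hemispheres. `ia` is a 3D IA map, `labels` a
        # co-registered 3D atlas of integer region labels, and `mirror_pairs`
        # maps each left-hemisphere label to its right-hemisphere counterpart.
        import numpy as np
        from scipy.stats import ttest_ind

        def hemispheric_ia_differences(ia, labels, mirror_pairs):
            results = {}
            for left_label, right_label in mirror_pairs.items():
                left_vals = ia[labels == left_label]
                right_vals = ia[labels == right_label]
                _, p = ttest_ind(left_vals, right_vals, equal_var=False)
                results[(left_label, right_label)] = (left_vals.mean(),
                                                      right_vals.mean(), p)
            return results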

  2. Probabilistic liver atlas construction.

    Science.gov (United States)

    Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E

    2017-01-13

    Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations only involving the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability that each voxel is covered by the liver. Furthermore, all methods to build an atlas involve previous coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration on the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
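
    For reference, the simple per-voxel estimate that the proposed generalized linear model improves upon can be sketched as below; the toy masks are placeholders for co-registered binary liver segmentations.

        # Sketch: baseline probabilistic atlas as the per-voxel fraction of
        # subjects whose (co-registered) binary mask covers that voxel.
        import numpy as np

        def empirical_probabilistic_atlas(masks):
            """masks: array of shape (n_subjects, nx, ny, nz), values in {0, 1}."""
            return np.asarray(masks, dtype=float).mean(axis=0)

        rng = np.random.default_rng(0)
        toy_masks = rng.integers(0, 2, size=(5, 4, 4, 4))   # toy data only
        atlas = empirical_probabilistic_atlas(toy_masks)    # values in [0, 1]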

  3. Renewable energy atlas of the United States.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J.A.; Hlava, K.; Greenwood, H.; Carr, A. (Environmental Science Division)

    2012-05-01

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. It is designed for the U.S. Department of Agriculture Forest Service (USFS) and other federal land management agencies to evaluate existing and proposed renewable energy projects. Much of the content of the Atlas was compiled at Argonne National Laboratory (Argonne) to support recent and current energy-related Environmental Impact Statements and studies, including the following projects: (1) West-wide Energy Corridor Programmatic Environmental Impact Statement (PEIS) (BLM 2008); (2) Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2010); (3) Supplement to the Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2011); (4) Upper Great Plains Wind Energy PEIS (WAPA/USFWS 2012, in progress); and (5) Energy Transport Corridors: The Potential Role of Federal Lands in States Identified by the Energy Policy Act of 2005, Section 368(b) (in progress). This report explains how to add the Atlas to your computer and install the associated software; describes each of the components of the Atlas; lists the Geographic Information System (GIS) database content and sources; and provides a brief introduction to the major renewable energy technologies.

  4. All 2006 ATLAS Tutorials online

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    The University of Michigan has completed its full agenda of Web Lecture recording for ATLAS for 2006. The archives include all three ATLAS Week Plenary Sessions, as well as a large variety of tutorials. They are accessible at this location. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. This is the first year our group has been asked to provide this complete service to the collaboration, so any and all feedback is welcome. We would especially like to know if you had any difficulties viewing the lectures, if you found the selection of material to be useful, and/or if you think there are any other specific events we ought to cover in 2007. Please send your comments to wlap@umich.edu. We look forward to bringing you a rich variety of new lectures in 2007, starting with the ATLAS Distributed Computing Tutorial on Feb 1, 2 in Edinburgh and concluding with the Higgs discovery talk (of course). Enjoy the Lec...

  5. Three-dimensional anatomical atlas of the human body

    OpenAIRE

    Barbeito, António Manuel Teixeira

    2016-01-01

    Anatomical atlases allow mapping the anatomical structures of the human body. Early versions of these systems consisted of analogic representations with informative text and labelled images of the human body. With the advent of computer systems, digital versions emerged and the third dimension was introduced. Consequently, these systems increased their efficiency, allowing more realistic visualizations with improved interactivity. The development of anatomical atlases in geographic informatio...

  6. Canadian ATLAS data center to support CERN's LHC

    CERN Multimedia

    2006-01-01

    "The biggest science experiment in history is currently underway at the world-famous CERN labs in Switzerland, and Canada is poised to play a critical role in its success. Thanks to a $10.5 million investment announced by the Canada Foundation for Innovation (CFI), an ultra-sophisticated computing facility -- the ATLAS Data Center -- will be created to support the ATLAS project at CERN's Large Hadron Collider (LHC)." (1 page)

  7. Methods for the computation of templates from quantitative magnetic susceptibility maps (QSM): Toward improved atlas- and voxel-based analyses (VBA).

    Science.gov (United States)

    Hanspach, Jannis; Dwyer, Michael G; Bergsland, Niels P; Feng, Xiang; Hagemeier, Jesper; Bertolino, Nicola; Polak, Paul; Reichenbach, Jürgen R; Zivadinov, Robert; Schweser, Ferdinand

    2017-11-01

    To develop and assess a method for the creation of templates for voxel-based analysis (VBA) and atlas-based approaches using quantitative magnetic susceptibility mapping (QSM). We studied four strategies for the creation of magnetic susceptibility brain templates, derived as successive extensions of the conventional template generation (CONV) based on only T1-weighted (T1w) images. One method that used only T1w images involved a minor improvement of CONV (U-CONV). One method used only magnetic susceptibility maps as input for template generation (DIRECT), and the other two used a linear combination of susceptibility and T1w images (HYBRID) and an algorithm that directly used both image modalities (MULTI), respectively. The strategies were evaluated in a group of N = 10 healthy human subjects and semiquantitatively assessed by three experienced raters. Template quality was compared statistically via worth estimates (WEs) obtained with a log-linear Bradley-Terry model. The overall quality of the templates was better for strategies including both susceptibility and T1w contrast (MULTI: WE = 0.62; HYBRID: WE = 0.21), but the best method depended on the anatomical region of interest. While methods using only one modality resulted in lower WEs, the lowest overall WEs were obtained when only T1w images were used (DIRECT: WE = 0.12; U-CONV: WE = 0.05). Template generation strategies that employ only magnetic susceptibility contrast or both magnetic susceptibility and T1w contrast produce templates with the highest quality. The optimal approach depends on the anatomical structures of interest. The established approach of using only T1w images (CONV) results in reduced image quality compared to all other approaches studied. Level of Evidence: 2. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:1474-1484. © 2017 International Society for Magnetic Resonance in Medicine.
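
    The worth estimates quoted above come from a Bradley-Terry model for paired comparisons; a minimal sketch of how such estimates can be obtained by maximum likelihood (the standard MM iteration) is given below. The win counts are invented for illustration and are not the study's rating data.

        # Sketch: maximum-likelihood Bradley-Terry "worth" estimates from a matrix
        # of pairwise preferences. wins[i, j] counts how often strategy i was
        # preferred over strategy j; the numbers below are made up.
        import numpy as np

        def bradley_terry(wins, n_iter=1000):
            wins = np.asarray(wins, dtype=float)
            n = wins.shape[0]
            comparisons = wins + wins.T            # n_ij: total i-vs-j comparisons
            w = np.ones(n)                         # initial worth estimates
            for _ in range(n_iter):
                totals = np.array([
                    sum(comparisons[i, j] / (w[i] + w[j]) for j in range(n) if j != i)
                    for i in range(n)
                ])
                w = wins.sum(axis=1) / totals      # MM update (Hunter 2004)
                w /= w.sum()                       # normalise so worths sum to 1
            return w

        # Illustrative 4-strategy example (rows/cols could be MULTI, HYBRID, DIRECT, U-CONV).
        toy_wins = np.array([[0, 6, 8, 9],
                             [4, 0, 7, 8],
                             [2, 3, 0, 6],
                             [1, 2, 4, 0]])
        print(bradley_terry(toy_wins))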

  8. Distributed analysis challenges in ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Legger, Federica; Mitterer, Christoph Anton; Walker, Rodney [Ludwig-Maximilians-Universitaet Muenchen (Germany)

    2016-07-01

    The ATLAS computing model has undergone massive changes to meet the high luminosity challenge of the second run of the Large Hadron Collider (LHC) at CERN. The production system and distributed data management have been redesigned, a new data format and event model for analysis have been introduced, and common reduction and derivation frameworks have been developed. We report on the impact these changes have on the distributed analysis system, study the various patterns of grid usage for user analysis, focusing on the differences between the first and the second LHC runs, and measure the performance of user jobs.

  9. The development of a GIS atlas of southern African freshwater fish ...

    African Journals Online (AJOL)

    The development of advanced computing and GIS technology has increased the scope of atlas projects by facilitating the integration of large amounts of spatial data to produce derived databases for many specific applications. The atlas has been developed from a database of freshwater fish, hydrological, topographical ...

  10. Bayesian longitudinal segmentation of hippocampal substructures in brain MRI using subject-specific atlases

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Van Leemput, Koen; Augustinack, Jean

    2016-01-01

    images and computational atlases, automatic segmentation of hippocampal subregions is becoming feasible in MRI scans. Here we introduce a generative model for dedicated longitudinal segmentation that relies on subject-specific atlases. The segmentations of the scans at the different time points...

  11. The Irish Wind Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R. [Univ. College Dublin, Dept. of Electronic and Electrical Engineering, Dublin (Ireland); Landberg, L. [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)

    1999-03-01

    The development work on the Irish Wind Atlas is nearing completion. The Irish Wind Atlas is an updated, improved version of the Irish section of the European Wind Atlas. A map of the Irish wind resource based on a WAsP analysis of the measured data and station descriptions of 27 measuring stations is presented. The results of previously presented WAsP/KAMM runs show good agreement with these results. (au)

  12. Future ATLAS Higgs Studies

    CERN Document Server

    Smart, Ben; The ATLAS collaboration

    2017-01-01

    The High-Luminosity LHC will prove a challenging environment to work in, with for example $\langle\mu\rangle = 200$ (average pile-up interactions per bunch crossing) expected. It will, however, also provide great opportunities for advancing studies of the Higgs boson. The ATLAS detector will be upgraded, and Higgs prospects analyses have been performed to assess the reach of ATLAS Higgs studies in the HL-LHC era. These analyses are presented, as are Run-2 ATLAS di-Higgs analyses for comparison.

  13. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Software Week Plenary, 6-10 December 2004; North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks); Physics Analysis Tools Tutorial (Tucson), 19 December 2004; Full Chain Tutorial, 21 September 2004; ATLAS Plenary Sessions, 17-18 February 2005 (17 talks). Coming soon: ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005; Software Workshop, 21-22 February 2005. Click here to browse WLAP for all ATLAS lectures.

  14. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...
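
    A schematic illustration of the difference (not the actual Gaudi/AthenaMT scheduler): several events are kept in flight at once and, within an event, algorithms without mutual data dependencies run concurrently on a shared thread pool. Algorithm names and dependencies are invented for the example.

        # Schematic only: intra-event algorithm parallelism plus several events in
        # flight, using two thread pools so event workers never starve algorithm tasks.
        from concurrent.futures import ThreadPoolExecutor, wait

        def run_algorithm(name, event):
            # Placeholder for a real trigger algorithm acting on the event store.
            return f"{name} done for event {event}"

        def process_event(event, alg_pool):
            # Two independent algorithms run in parallel; a third depends on both.
            futures = [alg_pool.submit(run_algorithm, alg, event)
                       for alg in ("TrackFinding", "CaloClustering")]
            wait(futures)
            return run_algorithm("HypoAlg", event)

        with ThreadPoolExecutor(max_workers=4) as alg_pool, \
             ThreadPoolExecutor(max_workers=2) as event_pool:
            # Two events are in flight at a time; their algorithms share one pool.
            results = list(event_pool.map(lambda ev: process_event(ev, alg_pool),
                                          range(8)))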

  15. Multi-atlas segmentation of subcortical brain structures via the AutoSeg software pipeline

    Science.gov (United States)

    Wang, Jiahui; Vachet, Clement; Rumple, Ashley; Gouttard, Sylvain; Ouziel, Clémentine; Perrot, Emilie; Du, Guangwei; Huang, Xuemei; Gerig, Guido; Styner, Martin

    2014-01-01

    Automated segmentation and labeling of individual brain anatomical regions in MRI is challenging, due to individual structural variability. Although atlas-based segmentation has shown its potential for both tissue and structure segmentation, due to the inherent natural variability as well as disease-related changes in MR appearance, a single atlas image is often inappropriate to represent the full population of datasets processed in a given neuroimaging study. As an alternative to single-atlas segmentation, the use of multiple atlases alongside label fusion techniques has been introduced, using a set of individual “atlases” that encompasses the expected variability in the studied population. In our study, we proposed a multi-atlas segmentation scheme with a novel graph-based atlas selection technique. We first paired and co-registered all atlases and the subject MR scans. A directed graph with edge weights based on intensity and shape similarity between all MR scans is then computed. The set of neighboring templates is selected via clustering of the graph. Finally, weighted majority voting is employed to create the final segmentation over the selected atlases. This multi-atlas segmentation scheme is used to extend a single-atlas-based segmentation toolkit entitled AutoSeg, which is an open-source, extensible C++ based software pipeline employing BatchMake for its pipeline scripting, developed at the Neuro Image Research and Analysis Laboratories of the University of North Carolina at Chapel Hill. AutoSeg performs N4 intensity inhomogeneity correction, rigid registration to a common template space, automated brain tissue classification based skull-stripping, and the multi-atlas segmentation. The multi-atlas-based AutoSeg has been evaluated on subcortical structure segmentation with a testing dataset of 20 adult brain MRI scans and 15 atlas MRI scans. The AutoSeg achieved mean Dice coefficients of 81.73% for the subcortical structures
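
    The label-fusion step mentioned above (weighted majority voting over the selected, already-warped atlas label maps) can be sketched as follows; array shapes and weights are assumed inputs.

        # Sketch of label fusion: weighted majority voting over the label maps of
        # the selected atlases, each already warped to the subject space.
        # `atlas_labels` has shape (n_atlases, nx, ny, nz) with integer labels and
        # `weights` holds one similarity weight per atlas.
        import numpy as np

        def weighted_majority_vote(atlas_labels, weights):
            atlas_labels = np.asarray(atlas_labels)
            weights = np.asarray(weights, dtype=float)
            labels = np.unique(atlas_labels)
            # Accumulate, per voxel, the summed weight voting for each label.
            scores = np.zeros((labels.size,) + atlas_labels.shape[1:])
            for k, lab in enumerate(labels):
                scores[k] = np.tensordot(weights, (atlas_labels == lab), axes=1)
            return labels[np.argmax(scores, axis=0)]   # winning label per voxel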

  16. Evolution of the ATLAS Nightly Build System

    Science.gov (United States)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  17. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift tube based detectors. These kinds of detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a few microns precision. This permits attaining optimal measurements of the parameters of the charged particles trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large scale computing simulation of the ATLAS detector has been performed, mimicking the ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment result for several challenges (real cosmic ray data taking and computing system commissioning) will be...

  18. The last ATLAS overview week now available on Web Lectures

    CERN Multimedia

    Jeremy Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the lectures and send us a note at wlap@umich.edu to tell us what you think. The newly available WLAP items relating to ATLAS are the following: ATLAS Week Plenary, CERN, 2-3 October 2006. All previous WLAP lectures are also available on the web.

  19. Simulation Strategies for the ATLAS Experiment at LHC

    CERN Document Server

    Rimoldi, A; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment, operational at the new LHC collider, is fully simulated using the Geant4 tool. The simulation program has been built within the ATLAS common framework Athena. The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. The latest developments aim to better represent the reality of the detector in all possible details, and provide increased functionality and robustness. The full process is constantly monitored and profiled. Increased performance guarantees the best use of available resources without any degradation in the quality and accuracy of the simulation itself. In the presentation emphasis is...

  20. Alignment of the Atlas Inner Detector tracking system

    CERN Document Server

    Lacuesta, V; The ATLAS collaboration

    2009-01-01

    The ATLAS experiment is equipped with a charged particle tracking system built on three subdetectors, which provide high precision measurements made from a fine detector granularity. The pixel and microstrip subdetectors, which use the silicon technology, are complemented with the transition radiation tracker. The alignment of the ATLAS Inner Detector tracking system requires the determination of its almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known with a few microns accuracy. This permits attaining an optimal measurement of the parameters of the charged particles trajectories, thus enabling ATLAS to achieve its ambitious physics goals. The implementation of the alignment software, its framework and the data flow will be discussed, including the selection of an alignment and calibration stream at the ATLAS Event Filter stage. The results obtained on the recent computing challenges, where large scale simulation samples have been used in order to mimic the...

  1. Migration of ATLAS PanDA to CERN

    Science.gov (United States)

    Stewart, Graeme Andrew; Klimentov, Alexei; Koblitz, Birger; Lamanna, Massimo; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; Emanuel De Castro Faria Salgado, Pedro; Wenaus, Torre

    2010-04-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  2. Production experience with the ATLAS Event Service

    Science.gov (United States)

    Benjamin, D.; Calafiura, P.; Childers, T.; De, K.; Guan, W.; Maeno, T.; Nilsson, P.; Tsulaia, V.; Van Gemmeren, P.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real time delivery of fine grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the AES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES application beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.
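
    A schematic sketch of the event-service idea (simplified stand-ins, not the actual PanDA/AES protocol): a dispatcher hands out small event ranges, and each processed range is uploaded to an object store immediately, so losing an opportunistic node only loses the range in flight.

        # Schematic sketch of fine-grained event-range dispatch and immediate
        # output streaming; the "object store" here is just a dict stand-in.
        import queue

        def make_event_ranges(first_event, last_event, range_size):
            for start in range(first_event, last_event + 1, range_size):
                yield (start, min(start + range_size - 1, last_event))

        def worker(dispatch_queue, object_store):
            while True:
                try:
                    ev_range = dispatch_queue.get_nowait()
                except queue.Empty:
                    return
                output = f"simulated events {ev_range[0]}-{ev_range[1]}"    # placeholder payload
                object_store[f"range_{ev_range[0]}_{ev_range[1]}"] = output  # immediate upload

        dispatch_queue = queue.Queue()
        for ev_range in make_event_ranges(0, 999, range_size=50):
            dispatch_queue.put(ev_range)

        object_store = {}                 # stand-in for a real object store bucket
        worker(dispatch_queue, object_store)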

  3. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Wouters, Johan [Department of Anatomy, Ghent University, Ghent (Belgium); Vercauteren, Tom; De Gersem, Werner; Duprez, Fréderic; De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2015-07-01

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.
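
    The overlap measures used above can be computed as below from co-registered binary masks; the inclusion index is implemented here as the fraction of the automatic contour contained in the reference, which is one common convention and may differ in detail from the study's definition.

        # Sketch: Dice, Jaccard and inclusion indices for co-registered binary masks
        # (numpy boolean arrays of identical shape); toy arrays stand in for contours.
        import numpy as np

        def dice(a, b):
            inter = np.logical_and(a, b).sum()
            return 2.0 * inter / (a.sum() + b.sum())

        def jaccard(a, b):
            inter = np.logical_and(a, b).sum()
            return inter / np.logical_or(a, b).sum()

        def inclusion(a, b):
            # Fraction of the automatic segmentation `a` contained in reference `b`.
            return np.logical_and(a, b).sum() / a.sum()

        auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True
        gold = np.zeros((4, 4), dtype=bool); gold[1:4, 1:4] = True
        print(dice(auto, gold), jaccard(auto, gold), inclusion(auto, gold))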

  4. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high-occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  5. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance at high-occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  6. ATLAS brochure (Polish version)

    CERN Document Server

    Lefevre, C

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  7. ATLAS TV PROJECT

    CERN Multimedia

    OMNI communication

    2005-01-01

    La Givrine, near St Cergue: cross-country skiing and fondue at Basse Ruche with M. Nordberg, P. Jenni, M. Nessi, F. Gianotti and Co. ATLAS Management fondue dinner, reviewing the state of play of the experiment. Many fun scenes from cross-country skiing; after 41 minutes of the film, the fondue dinner starts in a nice chalet with many people working on the ATLAS experiment.

  8. ATLAS-Hadronic Calorimeter

    CERN Multimedia

    2003-01-01

    Hall 180: work on the Hadronic Calorimeter. The ATLAS hadronic tile calorimeter. The Tile Calorimeter, which constitutes the central section of the ATLAS hadronic calorimeter, is a non-compensating sampling device made of iron and scintillating tiles. (IEEE Trans. Nucl. Sci. 53 (2006) 1275-81)

  9. ATLAS brochure (Catalan version)

    CERN Document Server

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  10. ATLAS Colouring Book

    CERN Multimedia

    Anthony, Katarina

    2016-01-01

    The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  11. ATLAS Thesis Awards 2015

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on Thursday 25 February. The winners also presented their work in front of members of the ATLAS Collaboration. Winners: Javier Montejo Berlingen, Barcelona (Spain), Ruth Pöttgen, Mainz (Germany), Nils Ruthmann, Freiburg (Germany), and Steven Schramm, Toronto (Canada).

  12. ATLAS brochure (Danish version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  13. ATLAS Visitors Centre

    CERN Multimedia

    Claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  14. ATLAS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  15. ATLAS Brochure (french version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  16. ATLAS Brochure (english version)

    CERN Multimedia

    2004-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  17. ATLAS brochure (German version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  18. ATLAS brochure (French version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  19. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    Budker Nuclear Physics Institute, Novosibirsk. Sequence 1: shots of the aircraft factory where machining for ATLAS is done; shots of aircraft; work on components for the ATLAS big wheel; discussions between Tikhonov and Nordberg in the workshop. Sequence 2: shots of downtown Novosibirsk, including the little church which is the mid-point of the Russian Federation. Sequence 3: interview of Yuri Tikhonov by Andrew Millington.

  20. A Slice of ATLAS

    CERN Multimedia

    2004-01-01

    An entire section of the ATLAS detector is being assembled at Prévessin. Since May the components have been tested using a beam from the SPS, giving the ATLAS team valuable experience of operating the detector as well as an opportunity to debug the system.

  1. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  2. The ATLAS tile calorimeter

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    Louis Rose-Dulcina, a technician from the ATLAS collaboration, works on the ATLAS tile calorimeter. Special manufacturing techniques were developed to mass produce the thousands of elements in this detector. Tile detectors are made in a sandwich-like structure where these scintillator tiles are placed between metal sheets.

  3. ATLAS rewards industry

    CERN Document Server

    Maximilien Brice

    2006-01-01

    For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Picture 30: representatives of the three award-winning companies after the ceremony

  4. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricity-producing wind turbine installations. The regional wind...... climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about...... 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  5. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling; Said Said, Usama; Badger, Jake

    2006-01-01

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricity-producing wind turbine installations. The regional wind...... climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about...... 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  6. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  7. ATLAS' major cooling project

    CERN Multimedia

    2005-01-01

    In 2005, a considerable effort was put into commissioning the various units of ATLAS' complex cryogenic system. This is in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. The liquid helium and nitrogen ATLAS refrigerators in USA 15. Cryogenics plays a vital role in operating massive detectors such as ATLAS. In many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground, the helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  8. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00025195; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whole trackin...

  9. Poster for the paper "A Log Service Package for the ATLAS TDAQ/DCS Group"

    CERN Document Server

    Murillo García, R; The ATLAS collaboration

    2010-01-01

    This is the poster for the paper "A new design and implementation of the ATLAS Log Service package", which has been accepted at the International Conference on Computing in High Energy and Nuclear Physics (CHEP) 2010.

  10. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; De, K; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2014-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...
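
    The chaining of processing steps into a task can be sketched schematically as below; the step functions are placeholders and the naming is illustrative, not ProdSys2's actual interface.

        # Schematic sketch of a multi-step Monte Carlo workflow: each step consumes
        # the output of the previous one, and a task applies the ordered chain of
        # steps to one dataset. Step implementations are placeholders.
        def generate(dataset):      return f"{dataset}.EVNT"
        def simulate(evnt):         return f"{evnt}.HITS"
        def digitize(hits):         return f"{hits}.RDO"
        def reconstruct(rdo):       return f"{rdo}.AOD"
        def make_ntuples(aod):      return f"{aod}.NTUP"

        MC_WORKFLOW = [generate, simulate, digitize, reconstruct, make_ntuples]

        def run_task(dataset, workflow=MC_WORKFLOW):
            """Run one workflow task: thread the dataset through every step in order."""
            product = dataset
            for step in workflow:
                product = step(product)
            return product

        print(run_task("mc.sample.12345"))   # -> mc.sample.12345.EVNT.HITS.RDO.AOD.NTUP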

  11. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  12. ATLAS Forward Detectors and Physics

    CERN Document Server

    Soni, N

    2010-01-01

    In this communication I describe the ATLAS forward physics program and the detectors LUCID, ZDC and ALFA, which have been designed to meet this experimental challenge. In addition to their primary role in the determination of the ATLAS luminosity, these detectors - in conjunction with the main ATLAS detector - will be used to study soft QCD and diffractive physics in the initial low luminosity phase of ATLAS running. Finally, I will briefly describe the ATLAS Forward Proton (AFP) project that currently represents the future of the ATLAS forward physics program.

  13. 18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

    CERN Document Server

    Samuel Morier-Genoud

    2012-01-01

    18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

  14. 10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

  15. TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, X [Elekta Inc., Maryland Heights, MO (United States)

    2016-06-15

    Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
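
    The voxel-by-voxel accuracy figure quoted above (mean absolute error in Hounsfield units) corresponds to the following computation on co-registered volumes; the toy arrays are for shape only.

        # Sketch: mean absolute error between a synthetic CT and the reference CT,
        # optionally restricted to a body mask. Values are in Hounsfield units.
        import numpy as np

        def mean_absolute_error(sct, ct, mask=None):
            diff = np.abs(np.asarray(sct, float) - np.asarray(ct, float))
            if mask is not None:
                diff = diff[mask]
            return diff.mean()

        sct = np.full((8, 8, 8), 40.0)        # toy synthetic CT
        ct = np.full((8, 8, 8), 30.0)         # toy reference CT
        print(mean_absolute_error(sct, ct))   # 10.0 HU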

  16. An ATLAS Virtual Visit connects physicists at the Town Square of Cracow and physicists of the LHC Experiment in the ATLAS control room; special participation of CERN's General Director, Rolf Heuer and the Director for Research and Scientific Computing, Sergio Bertolucci.

    CERN Multimedia

    2012-01-01

    The 12th Festival of Science "Theory-knowledge-experience...". The Fest will be located on the traditional Main Square, which is visited by thousands of citizens and tourists. The Institute of Nuclear Physics as usual participates in this annual event. Our visitors will learn the secrets of the CERN experiments at the Large Hadron Collider - ATLAS, LHCb, ALICE, CMS - and find out more about the Higgs particle, antimatter and quark-gluon plasma (being guided by our scientists and PhD students). One of the attractions will be an ATLAS Control Room Virtual Visit. Visitors will have an opportunity to see how ATLAS is controlled and operated to collect its exciting data, and to ask questions to scientists and engineers involved in the LHC program at CERN. The Institute of Nuclear Physics has also prepared several interactive demonstrations of Atomic Force Microscopy, Magnetic Resonance, Hadron Therapy and Crystal Physics.

  17. EnviroAtlas - Cleveland, OH - EnviroAtlas Community Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Cleveland, OH EnviroAtlas Community. It represents the outside edge of all the block groups included in the...

  18. ATLAS Data Preservation

    CERN Document Server

    Jones, Roger; The ATLAS collaboration

    2015-01-01

    Complementary to parallel open access and analysis preservation initiatives, ATLAS is taking steps to ensure that the data taken by the experiment during Run 1 remain accessible and available for future analysis by the collaboration. An evaluation of what is required to achieve this is underway, examining the ATLAS data production chain to establish the effort required and potential problems. Several alternatives are explored, but the favoured solution is to bring the Run 1 data and software in line with what will be used for Run 2. This will result in a coherent ATLAS dataset for the data already taken and for that to come in the future.

  19. Highlights from ATLAS

    CERN Document Server

    Charlton, D; The ATLAS collaboration

    2013-01-01

    Highlights of recent results from ATLAS were presented. The data collected to date, the detector and physics performance, and measurements of previously established Standard Model processes were reviewed briefly before summarising the latest ATLAS results in the Brout-Englert-Higgs sector, where big progress has been made in the year since the discovery. Finally, selected prospects for measurements including the data from the HL-LHC luminosity upgrade were presented, for both ATLAS and CMS. Many of the results mentioned are preliminary. These proceedings reflect only a brief summary of the material presented, and the status at the time of the conference is reported.

  20. ATLAS data sonification : a new interface for musical expression

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This poster tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed. It is a partnership between the ATLAS collaboration and the MIT multimedia lab.

  1. ATLAS Event - First Splash of Particles in ATLAS

    CERN Multimedia

    ATLAS Outreach

    2008-01-01

    A simulated event. September 10, 2008 - The ATLAS detector lit up as a flood of particles traversed the detector when the beam was occasionally directed at a target near ATLAS. This allowed ATLAS physicists to study how well the various components of the detector were functioning in preparation for the forthcoming collisions. The first ATLAS data recorded on September 10, 2008 is seen here. Running time 24 seconds

  2. Benefits and performance of ATLAS approaches to utilizing opportunistic resources

    CERN Document Server

    Filip\\v{c}i\\v{c}, Andrej; The ATLAS collaboration

    2016-01-01

    ATLAS has been extensively exploring possibilities of using computing resources beyond conventional grid sites in the WLCG fabric, to deliver as many computing cycles as possible and thereby enhance the statistical significance of the Monte Carlo samples and so deliver better physics results. The difficulties of using such opportunistic resources come from architectural differences such as the unavailability of grid services, the absence of network connectivity on worker nodes or the inability to use standard authorization protocols. Nevertheless, ATLAS has been extremely successful in running production payloads on a variety of sites, thanks largely to the job execution workflow design, in which the job assignment, input data provisioning and execution steps are clearly separated and can be offloaded to custom services. To transparently include the opportunistic sites in the ATLAS central production system, several models with supporting services have been developed to mimic the functionality of a full WLCG site. Some are e...

  3. Evolution of User Analysis on the Grid in ATLAS

    CERN Document Server

    Legger, Federica; The ATLAS collaboration

    2016-01-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Based on the experience from the first run of the LHC, substantial improvements to the ATLAS computing system have been made to optimize both production and analysis workflows. These include the re-design of the production and data management systems, a new analysis data format and event model, and the development of common reduction and analysis frameworks. The impact of such changes on the distributed analysis system is evaluated. More than 100 mill...

  4. Rate Predictions and Trigger/DAQ Resource Monitoring in ATLAS

    CERN Document Server

    Schaefer, D M; The ATLAS collaboration

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and offline storage. The LHC instantaneous luminosity conditions, the desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms, such as data request rates and CPU consumption. This framework has been used...
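    As a rough illustration of the kind of bookkeeping such a cost-monitoring framework performs, the toy Python sketch below aggregates per-algorithm call rates, mean CPU time and data-request rates from a stream of per-call samples. The record format and function name are hypothetical, not the actual ATLAS framework's interface.

        from collections import defaultdict

        def trigger_costs(samples, period_s):
            """samples: iterable of (trigger_name, cpu_ms, n_data_requests), one entry per algorithm call."""
            agg = defaultdict(lambda: {"calls": 0, "cpu_ms": 0.0, "requests": 0})
            for name, cpu_ms, nreq in samples:
                a = agg[name]
                a["calls"] += 1
                a["cpu_ms"] += cpu_ms
                a["requests"] += nreq
            report = {}
            for name, a in agg.items():
                report[name] = {
                    "call_rate_hz": a["calls"] / period_s,       # how often the algorithm ran
                    "mean_cpu_ms": a["cpu_ms"] / a["calls"],     # average CPU cost per call
                    "request_rate_hz": a["requests"] / period_s, # data requests per second
                }
            return report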

  5. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    Science.gov (United States)

    Farrell, Steven; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea; ATLAS Collaboration

    2017-10-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with details on its multi-threaded design. Then, we will present a performance analysis of the application on KNL devices and compare it to a traditional x86 platform to demonstrate the capabilities of the architecture and evaluate the benefits of utilizing KNL platforms like Cori for ATLAS production.

  6. Discriminative confidence estimation for probabilistic multi-atlas label fusion.

    Science.gov (United States)

    Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard

    2017-09-01

    Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently been shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors.
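    The final fusion step of such a confidence-based scheme can be illustrated with a few lines of Python/NumPy: given the labelmaps propagated from each atlas and a per-voxel confidence map for each atlas (however those confidences were estimated), the consensus segmentation is a confidence-weighted vote. This is only a sketch of the generic fusion rule, not the supervised confidence estimation proposed in the paper.

        import numpy as np

        def confidence_weighted_fusion(labelmaps, confidences, n_labels):
            """labelmaps: list of integer arrays (atlas labels propagated to the target space)
            confidences: list of float arrays of the same shape, one confidence per atlas per voxel"""
            votes = np.zeros((n_labels,) + labelmaps[0].shape)
            for lab, conf in zip(labelmaps, confidences):
                for l in range(n_labels):
                    votes[l] += conf * (lab == l)   # each atlas votes for its label, weighted by confidence
            return np.argmax(votes, axis=0)         # per-voxel label with the largest weighted vote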

  7. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast Tracker processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge to the trigger and data acquisition systems of all the experiments. Intensive use of tracking information at the trigger level will be important to maintain high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full-scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...

  8. ATLAS Overview Week at Brookhaven

    CERN Multimedia

    Pilcher, J

    Over 200 ATLAS participants gathered at Brookhaven National Laboratory during the first week of June for our annual overview week. Some system communities arrived early and held meetings on Saturday and Sunday, and the detector interface group (DIG) and Technical Coordination also took advantage of the time to discuss issues of interest for all detector systems. Sunday was also marked by a workshop on the possibilities for heavy ion physics with ATLAS. Beginning on Monday, and for the rest of the week, sessions were held in common in the well equipped Berkner Hall auditorium complex. Laptop computers became the norm for presentations and a wireless network kept laptop owners well connected. Most lunches and dinners were held on the lawn outside Berkner Hall. The weather was very cooperative and it was an extremely pleasant setting. This picture shows most of the participants from a view on the roof of Berkner Hall. Technical Coordination and Integration issues started the reports on Monday and became a...

  9. 24 October 2014 - President of the Republic of Ecuador R. Correa Delgado signing the guest book with Vice President L. Moreno and Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Guillaume, Jeanneret

    2014-01-01

    Visiting the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton and ATLAS user F. Monticelli; accompanied throughout by Adviser for Ecuador J. Salicio Diez and Director for Research and Scientific Computing S. Bertolucci.

  10. California Ocean Uses Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset is a result of the California Ocean Uses Atlas Project: a collaboration between NOAA's National Marine Protected Areas Center and Marine Conservation...

  11. ATLAS TV PROJECT

    CERN Multimedia

    2006-01-01

    CERN, Building 40. Interview with theorist Mr. Philip Hinchliffe (Berkeley), as well as an interview with his wife, Mrs. Hinchliffe, who is also Physics Department head at Berkeley. They both work on the ATLAS experiment.

  12. Lunar Sample Atlas

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lunar Sample Atlas provides pictures of the Apollo samples taken in the Lunar Sample Laboratory, full-color views of the samples in microscopic thin-sections,...

  13. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    ATLAS Physics Workshop at the University of Roma Tre, held from Monday 6 June 2005 to Saturday 11 June 2005. Footage includes experts setting up the workshop, posters, people milling about, shots of Peter Jenni's introduction, many audience shots, and sequences from various talks.

  14. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  15. Consolidated Lunar Atlas

    Data.gov (United States)

    National Aeronautics and Space Administration — The Consolidated Lunar Atlas is a collection of the best photographic images of the moon, including low-oblique photography, full-moon photography, and tabular and...

  16. ATLAS Cavern baseplate

    CERN Multimedia

    It-UDS-Audiovisual Services

    2002-01-01

    This video shows the incredible amount of iron used for the ATLAS cavern. Please see the related links and videos concerning the civil engineering, where you can see the cavern excavation work in even more detail.

  17. VT Planning Atlas

    Data.gov (United States)

    Vermont Center for Geographic Information — The Planning Atlas provides easy access to commonly requested land use planning data – the status of local planning and regulation, state designation boundaries and...

  18. Apollo Image Atlas

    Data.gov (United States)

    National Aeronautics and Space Administration — The Apollo Image Atlas is a comprehensive collection of Apollo-Saturn mission photography. Included are almost 25,000 lunar images, both from orbit and from the...

  19. ATLAS Metadata Task Force

    Energy Technology Data Exchange (ETDEWEB)

    ATLAS Collaboration; Costanzo, D.; Cranshaw, J.; Gadomski, S.; Jezequel, S.; Klimentov, A.; Lehmann Miotto, G.; Malon, D.; Mornacchi, G.; Nemethy, P.; Pauly, T.; von der Schmitt, H.; Barberis, D.; Gianotti, F.; Hinchliffe, I.; Mapelli, L.; Quarrie, D.; Stapnes, S.

    2007-04-04

    This document provides an overview of the metadata, which are needed to characterize ATLAS event data at different levels (a complete run, data streams within a run, luminosity blocks within a run, individual events).

  20. PeptideAtlas

    Data.gov (United States)

    U.S. Department of Health & Human Services — PeptideAtlas is a multi-organism, publicly accessible compendium of peptides identified in a large set of tandem mass spectrometry proteomics experiments. Mass...

  1. ATLAS soft QCD results

    CERN Document Server

    Sykora, Tomas; The ATLAS collaboration

    2018-01-01

    Recent results of soft QCD measurements performed by the ATLAS collaboration are reported. The measurements include total, elastic and inelastic cross sections, inclusive spectra, underlying event and particle correlations in p-p and p-Pb collisions.

  2. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: the June ATLAS Plenary Meeting; the Tutorial on Physics EDM and Tools (June); the Freiburg Overview Week; and Ketevi Assamagan's Tutorial on Analysis Tools. Browse WLAP for all ATLAS lectures.

  3. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high-performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms, and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users, subdivided into approximately 300 categories corresponding to their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  4. IT infrastructure design and implementation considerations for the ATLAS TDAQ system

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2010-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which deals with the administration of the TDAQ computing environment supporting the Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high-performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms, and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment is now serving more than 3000 users, subdivided into approximately 300 categories corresponding to their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  5. Quench modeling of the ATLAS superconducting toroids

    CERN Document Server

    Gavrilin, A V; ten Kate, H H J

    2001-01-01

    Details of the normal zone propagation and the temperature distribution in the coils of the ATLAS toroids under quench are presented. A tailor-made mathematical model and corresponding computer code yield computational results for the propagation of the normal zone over the coils in the transverse (turn-to-turn) and longitudinal directions. The slow electromagnetic diffusion into the pure aluminum stabilizer of the toroid's conductor, as well as the essentially transient heat transfer through the inter-turn insulation, is appropriately included in the model. The effect of the nonuniform distribution of the magnetic field and of the thermal links to the coil casing on the temperature gradients within the coils is analyzed in full. (5 refs.)
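    A heavily simplified, one-dimensional picture of normal-zone growth can be sketched with an explicit finite-difference heat equation in which Joule heating is switched on wherever the conductor is above its transition temperature. The material numbers below are purely illustrative placeholders, not the ATLAS toroid conductor parameters, and the model ignores the electromagnetic diffusion and turn-to-turn heat transfer treated in the paper.

        import numpy as np

        def quench_1d(n=200, dx=0.01, dt=1e-4, steps=20000,
                      k=300.0, rho_c=2.0e6, q_joule=5.0e7, t_c=9.0, t_bath=4.5):
            """Explicit 1D heat-diffusion toy model of normal-zone growth along a conductor.
            k: thermal conductivity [W/m/K], rho_c: volumetric heat capacity [J/m^3/K],
            q_joule: volumetric Joule heating in the normal zone [W/m^3], t_c: transition temp [K]."""
            T = np.full(n, t_bath)
            T[n // 2] = 20.0                      # local disturbance that starts the quench
            alpha = k / rho_c                     # thermal diffusivity
            for _ in range(steps):
                lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
                heating = np.where(T > t_c, q_joule / rho_c, 0.0)   # Joule heating only where normal
                T = T + dt * (alpha * lap + heating)
                T[0] = T[-1] = t_bath             # ends clamped at the bath temperature
            return T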

  6. ATLAS Transition Radiation Tracker

    CERN Multimedia

    ATLAS Outreach

    2006-01-01

    This colorful 3D animation is an excerpt from the film "ATLAS-Episode II, The Particles Strike Back", shot from a bug's-eye view of the inside of the detector. The viewer is taken on a tour of the inner workings of the transition radiation tracker within the ATLAS detector. Subjects covered include what the tracker is used to measure, its structure, what happens when particles pass through the tracker, and how it distinguishes between the different types of particles within it.

  7. Budker INP in ATLAS

    CERN Multimedia

    2001-01-01

    The Novosibirsk group has proposed a new design for the ATLAS liquid argon electromagnetic end-cap calorimeter with a constant thickness of absorber plates. This design has significant advantages compared to the one in the Technical Proposal and it has been accepted by the ATLAS Collaboration. The Novosibirsk group is responsible for the fabrication of the precision aluminium structure for the e.m. end-cap calorimeter.

  8. ATLAS Status and First Results

    CERN Document Server

    Lankford, AJ; The ATLAS collaboration

    2010-01-01

    The ATLAS Experiment at the CERN Large Hadron Collider will study a broad range of particle physics at the highest available laboratory energies, from measurements of the standard model to searches for new physics beyond the standard model. The status of ATLAS commissioning and the ATLAS physics program will be reported, and physics prospects for the 2010 LHC run will be discussed.

  9. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    1999-01-01

    Different phases of the work at Point 1, the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the various infrastructures for ATLAS. Real underground video; the film has the original working sound.

  10. Overview of ATLAS PanDA Workload Management

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Wenaus, T; Nilsson, P; Stewart, G A; Walker, R; Stradling, A; Caballero, J; Potekhin, M; Smith, D

    2011-01-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early datataking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase, and plans for ...

  11. Overview of ATLAS PanDA Workload Management

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Wenaus, T; Nilsson, P; Stewart, G; Walker, R; Stradling, A; Caballero, J; Potekhin, M; Smith, D

    2010-01-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase, and plans for ...

  12. Overview of ATLAS PanDA Workload Management

    Science.gov (United States)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G. A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early datataking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.

  13. A unified framework for cross-modality multi-atlas segmentation of brain MRI

    DEFF Research Database (Denmark)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-01-01

    Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when... ...interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used...

  14. ATLAS data sonification: a new interface for musical expression and public interaction

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00345031; The ATLAS collaboration; Goldfarb, Steven

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This talk tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed.

  15. 29 March 2011 - Ninth President of Israel S.Peres welcomed by CERN Director-General R. Heuer who introduces Council President M. Spiro, Director for Accelerators and Technology S. Myers, Head of International Relations F. Pauss, Physics Department Head P. Bloch, Technology Department Head F. Bordry, Human Resources Department Head A.-S. Catherin, Beams Department Head P. Collier, Information Technology Department Head F. Hemmer, Adviser for Israel J. Ellis, Legal Counsel E. Gröniger-Voss, ATLAS Collaboration Spokesperson F. Gianotti, Former ATLAS Collaboration Spokesperson P. Jenni, Weizmann Institute G. Mikenberg, CERN VIP and Protocol Officer W. Korda.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    During his visit he toured the ATLAS underground experimental area with Giora Mikenberg of the ATLAS collaboration, Weizmann Institute of Sciences and Israeli industrial liaison office, Rolf Heuer, CERN’s director-general, and Fabiola Gianotti, ATLAS spokesperson. The president also visited the CERN computing centre and met Israeli scientists working at CERN.

  16. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    J. Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the l...

  17. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Goldfarb, S.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project. A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please e...

  18. ATLAS Transition Radiation Tracker - large piece

    CERN Multimedia

    2006-01-01

    The ATLAS transition radiation tracker is made of 300'000 straw tubes, up to 144cm long. Filled with a gas mixture and threaded with a wire, each straw is a complete mini-detector in its own right. An electric field is applied between the wire and the outside wall of the straw. As particles pass through, they collide with atoms in the gas, knocking out electrons. The avalanche of electrons is detected as an electrical signal on the wire in the centre. The tracker plays two important roles. Firstly, it makes more position measurements, giving more dots for the computers to join up to recreate the particle tracks. Also, together with the ATLAS calorimeters, it distinguishes between different types of particles depending on whether they emit radiation as they make the transition from the surrounding foil into the straws.

  19. Test Management Framework for the ATLAS Experiment

    CERN Document Server

    Kazarov, Andrei; The ATLAS collaboration; Avolio, Giuseppe

    2018-01-01

    The Data Acquisition (DAQ) system of the ATLAS experiment is a large, distributed and inhomogeneous system: it consists of thousands of interconnected computers and electronics devices that operate coherently to read out and select relevant physics data. Advanced diagnostics capabilities of the TDAQ control system are a crucial feature which contributes significantly to smooth operation and fast recovery in case of problems and, finally, to the high efficiency of the whole experiment. The base layer of the verification and diagnostic functionality is a test management framework. We have developed a flexible test management system that allows experts to define and configure tests for different components, indicate follow-up actions for test failures, and describe inter-dependencies between DAQ or detector elements. This development is based on the experience gained with the previous test system that was used during the first three years of th...

  20. ATLAS Transition Radiation Tracker - small piece

    CERN Multimedia

    2006-01-01

    The ATLAS transition radiation tracker is made of 300'000 straw tubes, up to 144cm long. Filled with a gas mixture and threaded with a wire, each straw is a complete mini-detector in its own right. An electric field is applied between the wire and the outside wall of the straw. As particles pass through, they collide with atoms in the gas, knocking out electrons. The avalanche of electrons is detected as an electrical signal on the wire in the centre. The tracker plays two important roles. Firstly, it makes more position measurements, giving more dots for the computers to join up to recreate the particle tracks. Also, together with the ATLAS calorimeters, it distinguishes between different types of particles depending on whether they emit radiation as they make the transition from the surrounding foil into the straws.

  1. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  2. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  3. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications, however, concerns have been raised about the scalability of its data warehouse-like workload. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  4. Using containers with ATLAS offline software

    CERN Document Server

    Vogel, Marcelo; The ATLAS collaboration; Heinrich, Lukas; Stewart, Graeme

    2017-01-01

    Title: Using containers with ATLAS offline software. Authors: Marcelo Vogel (Bergische Universitaet Wuppertal), Graeme Stewart (University of Glasgow), Johannes Elmsheuser (Brookhaven National Laboratory), Lukas Heinrich (New York University). Abstract: This paper describes the deployment of ATLAS offline software in containers for software development, for use in production jobs on the grid - such as event generation, simulation, reconstruction and physics derivations - and in physics analysis. For this we are using Docker and Singularity, which are both lightweight virtualization technologies that encapsulate a piece of software inside a complete file system. The deployment of offline releases via containers removes the interdependence between the runtime environment needed for job execution and the configuration of a computing site’s worker nodes. Once the two are decoupled from each other, sites can upgrade their nodes whenever and however they see fit. Docker or Singularity will provide a uniform runtime environment fo...

  5. Integration of the trigger and data acquisition systems in ATLAS

    NARCIS (Netherlands)

    Riu, I.; et al., [Unknown; Vermeulen, J.

    2008-01-01

    During 2006 and spring 2007, integration and commissioning of trigger and data acquisition (TDAQ) equipment in the ATLAS experimental area has progressed. Much of the work has focused on a final prototype setup consisting of around eighty computers representing a subset of the full TDAQ system.

  6. A Fast Vertex Fitter for ATLAS Level 2 Trigger

    CERN Document Server

    Emeliyanov, D

    2007-01-01

    A vertex fitting algorithm developed for the Level 2 Trigger of the ATLAS experiment is presented. The algorithm features a Kalman filter with a decorrelating measurement transformation which reduces the computational burden of the vertex fit. The algorithm has been tested on data produced using a full Monte Carlo detector simulation. Results regarding the precision and speed of the algorithm are presented.
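    The sequential estimation idea behind a Kalman-filter vertex fit can be reduced to one dimension for illustration: each track contributes a measurement of the vertex position (here its longitudinal impact parameter z0 with uncertainty sigma), and the estimate and its variance are updated track by track. This toy Python sketch shows only the generic Kalman update, not the ATLAS Level 2 algorithm or its decorrelating measurement transformation; the function name and arguments are invented for the example.

        def kalman_vertex_z(track_z0s, track_sigmas, z_init=0.0, var_init=100.0):
            """Sequential (Kalman-style) update of a 1D vertex position estimate from
            track impact parameters z0 with Gaussian uncertainties sigma."""
            z, var = z_init, var_init
            for z0, sigma in zip(track_z0s, track_sigmas):
                gain = var / (var + sigma**2)      # Kalman gain for a direct measurement of z
                z = z + gain * (z0 - z)            # state update towards the new measurement
                var = (1.0 - gain) * var           # covariance update
            return z, var

    For example, kalman_vertex_z([0.10, -0.05, 0.02], [0.05, 0.08, 0.04]) returns an estimate that, for a weak initial prior, approaches the inverse-variance weighted mean of the track measurements together with its variance.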

  7. Discrete Event Simulation of the ATLAS Second Level Trigger

    NARCIS (Netherlands)

    Vermeulen, J.C.; Hunt, S.; Hortnagl, C.; Harris, F.; Erasov, A.; Dankers, R.J.; Bogaerts, A.

    1998-01-01

    Discrete event simulation is applied to determine the computing and networking resources needed for the ATLAS second-level trigger. This paper discusses the techniques used and some of the results obtained so far for well-defined laboratory configurations and for the full system.
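    To make the technique concrete, the following toy Python sketch uses an event queue (a heap of worker-free times) to simulate Poisson request arrivals being served by a farm of identical workers and reports the mean latency. The parameters and structure are illustrative only and are far simpler than the trigger models discussed in the paper.

        import heapq, random

        def simulate_farm(n_workers=10, arrival_rate=50.0, service_time=0.15, n_events=10000, seed=1):
            """Toy discrete event simulation: Poisson arrivals served by a farm of identical workers."""
            random.seed(seed)
            free_at = [0.0] * n_workers               # time at which each worker becomes free
            heapq.heapify(free_at)
            t, latencies = 0.0, []
            for _ in range(n_events):
                t += random.expovariate(arrival_rate)      # next arrival time
                start = max(t, heapq.heappop(free_at))     # earliest available worker
                finish = start + service_time
                heapq.heappush(free_at, finish)
                latencies.append(finish - t)               # queueing delay plus service time
            return sum(latencies) / len(latencies)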

  8. ATLAS Offline Data Quality Monitoring

    CERN Document Server

    Adelman, J; Boelaert, N; D'Onofrio, M; Frost, J A; Guyot, C; Hauschild, M; Hoecker, A; Leney, K J C; Lytken, E; Martinez-Perez, M; Masik, J; Nairz, A M; Onyisi, P U E; Roe, S; Schatzel, S; Schaetzel, S; Wilson, M G

    2010-01-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked to be free of irregularities which would render them scientifically useless. Offline data quality monitoring provides prompt feedback from the full first-pass event reconstruction at the Tier-0 computing centre and can unveil problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers, which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew, who can notify on-call experts in case of problems and in extreme cases signal an abort of data taking.
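    The automatic histogram-versus-reference comparison can be illustrated with a small Python/NumPy sketch that computes a per-bin chi-square against a statistics-normalised reference and maps the result to a traffic-light status flag. The thresholds, error model and function name are illustrative assumptions, not the actual ATLAS data quality algorithms.

        import numpy as np

        def dq_flag(hist, ref, yellow=1.5, red=3.0):
            """Compare a monitored histogram to a reference with a per-bin chi-square
            and map the result to a data-quality status flag."""
            hist, ref = np.asarray(hist, float), np.asarray(ref, float)
            scale = hist.sum() / max(ref.sum(), 1e-12)      # normalise the reference to the same statistics
            ref = ref * scale
            err2 = hist + ref + 1e-12                       # crude Poisson errors on both histograms
            chi2_ndf = np.sum((hist - ref) ** 2 / err2) / len(hist)
            if chi2_ndf < yellow:
                return "green", chi2_ndf
            return ("yellow", chi2_ndf) if chi2_ndf < red else ("red", chi2_ndf)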

  9. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of cpu power and tens of PetaBytes of data storage. PCs today are about 20-30SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...
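    A back-of-envelope version of the scale argument, assuming for illustration a requirement of two million SI95 per experiment (the article says only "millions"):

        cpu_needed_si95 = 2_000_000   # illustrative assumption: ~2 MSI95 per experiment
        pc_today_si95   = 25          # "PCs today are about 20-30 SI95"
        pc_2005_si95    = 100         # "expected to be about 100 SI95 by 2005"

        print(cpu_needed_si95 // pc_today_si95)   # -> 80000 PCs at today's performance
        print(cpu_needed_si95 // pc_2005_si95)    # -> 20000 PCs at the 2005 projection

    Even with the 2005 projection, this is tens of thousands of PCs, which is why the hardware is planned to be spread over several Regional Centres rather than concentrated in a single farm.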

  10. EnviroAtlas Community Boundaries Web Service

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundaries of all EnviroAtlas Communities. It represents the outside edge of all the block groups included in each EnviroAtlas...

  11. EnviroAtlas - Metrics for Austin, TX

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web...

  12. EnviroAtlas - Metrics for Cleveland, OH

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this web...

  13. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)

    2015-07-15

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
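    The atlas ranking criterion can be illustrated with a small Python/NumPy sketch that estimates the conditional entropy of the target image intensities given a propagated atlas labelling from their joint histogram, and ranks atlases by ascending entropy (lower meaning the labelling explains the target better). The bin count, function names and ranking interface are illustrative assumptions; the registration, joint label fusion and the authors' exact estimator are not reproduced here.

        import numpy as np

        def conditional_entropy(target, atlas_labels, n_bins=32):
            """H(target intensity | atlas label) from a joint histogram; lower is a better atlas."""
            t = np.digitize(target.ravel(), np.linspace(target.min(), target.max(), n_bins - 1))
            l = atlas_labels.ravel().astype(int)
            joint = np.zeros((l.max() + 1, n_bins + 1))
            np.add.at(joint, (l, t), 1.0)                 # joint counts of (label, intensity bin)
            joint /= joint.sum()
            p_label = joint.sum(axis=1, keepdims=True)    # marginal P(label)
            with np.errstate(divide="ignore", invalid="ignore"):
                cond = np.where(joint > 0, joint * np.log(joint / p_label), 0.0)
            return -cond.sum()

        def rank_atlases(target, propagated_labelmaps):
            scores = [conditional_entropy(target, lab) for lab in propagated_labelmaps]
            return np.argsort(scores)                     # atlas indices, best (lowest entropy) first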

  14. Continuous software quality analysis for the ATLAS experiment

    CERN Document Server

    Washbrook, Andrew; The ATLAS collaboration

    2017-01-01

    The software for the ATLAS experiment on the Large Hadron Collider at CERN has evolved over many years to meet the demands of Monte Carlo simulation, particle detector reconstruction and data analysis. At present over 3.8 million lines of C++ code (and close to 6 million total lines of code) are maintained by an active worldwide developer community. In order to run the experiment software efficiently at hundreds of computing centres it is essential to maintain a high level of software quality standards. Methods are proposed to improve software quality practices by incorporating checks into the new ATLAS software build infrastructure.

  15. Evolution of user analysis on the grid in ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00218990; The ATLAS collaboration; Dewhurst, Alastair

    2016-01-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  16. Evolution of user analysis on the grid in ATLAS

    Science.gov (United States)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  17. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be they the mandatory Production Readiness Reviews, the newly installed Production Advancement Reviews, or the increasingly requested Design Reviews, have become part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, a one-time event and should be held for each system as soon as possible to have some meaning. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. Another recognized byproduct is the increasing cross-talk between the systems, a very important ingredient that lets all the systems profit from the large collective knowledge we have in ATLAS. In the last two months, the first two PARs were organized, for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore several different design...

  18. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  19. New format for ATLAS e-news

    CERN Multimedia

    Pauline Gagnon

    ATLAS e-news got a new look! As of November 30, 2007, we have a new format for ATLAS e-news. Please go to: http://atlas-service-enews.web.cern.ch/atlas-service-enews/index.html. ATLAS e-news will now be published on a weekly basis. If you are not an ATLAS collaboration member but still want to know how the ATLAS experiment is doing, we will soon have a version of ATLAS e-news intended for the general public. Information will be sent out in due time.

  20. High-Performance Scalable Information Service for the ATLAS Experiment

    Science.gov (United States)

    Kolos, S.; Boutsioukis, G.; Hauser, R.

    2012-12-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information, which is constantly updated with an update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request, as well as distributing notifications to all the information subscribers. In the latter case, IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
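    The publish/subscribe pattern at the heart of such an information service can be illustrated with a minimal in-memory Python sketch: named items are updated by publishers and every update is pushed to registered subscriber callbacks. This is only a toy model of the pattern; the class, method names and the item name used below are invented for illustration and are unrelated to the real IS API.

        from collections import defaultdict

        class InfoService:
            """Minimal in-memory publish/subscribe store in the spirit of an information service:
            publishers update named items, subscribers are notified on every update."""
            def __init__(self):
                self._items = {}
                self._subs = defaultdict(list)

            def publish(self, name, value):
                self._items[name] = value
                for callback in self._subs[name]:
                    callback(name, value)          # notify subscribers of the update

            def subscribe(self, name, callback):
                self._subs[name].append(callback)

            def get(self, name):
                return self._items[name]

        # usage: a monitoring client following one histogram-like item
        service = InfoService()
        service.subscribe("HLT/muon_pt", lambda n, v: print(f"update {n}: {len(v)} bins"))
        service.publish("HLT/muon_pt", [0, 3, 7, 2])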

  1. A case of atlas assimilation: description of bony and soft structures.

    Science.gov (United States)

    Ciołkowski, Maciej K; Krajewski, Paweł; Ciszek, Bogdan

    2014-10-01

    A case of atlas assimilation revealed during a serial study of the suboccipital region is presented. The specimen was harvested from the body of a 31-year-old woman. Images of the computed tomography scans are correlated with classic dissection. Asymmetrical bony assimilation is accompanied by asymmetrical development of the suboccipital musculature. In the presented case, the atlantic segments of both vertebral arteries preserved their usual course between bony elements derived from the atlas and proatlas. Development of the soft tissues must be influenced by similar factors as development of the skeleton. Detailed radiologic studies, possibly with volumetric reconstructions, are necessary in cases of atlas assimilation before surgical interventions in the region of the craniovertebral junction.

  2. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies.

  3. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration

    2017-01-01

    BigPanDA monitoring is a web-based application that provides various processing and representation of the states of Production and Distributed Analysis (PanDA) system objects. Analysing hundreds of millions of computation entities such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The provided information allows users to drill down into the reason for a specific event failure or to observe the bigger picture of the system, such as tracking the performance of the computation nucleus and satellite sites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core component; commissioned in the middle of 2014, it is now the primary source of information for ATLAS users about the state of their computations and a source of decision-support information for shifters, operators and managers. In this work...

  4. ATLAS BigPanDA Monitoring

    CERN Document Server

    Padolski, Siarhei; The ATLAS collaboration; Klimentov, Alexei; Korchuganova, Tatiana

    2017-01-01

    BigPanDA monitoring is a web-based application which provides various processing and representation of the states of Production and Distributed Analysis (PanDA) system objects. Analyzing hundreds of millions of computation entities such as events or jobs, BigPanDA monitoring builds reports at different scales and levels of abstraction in real time. The provided information allows users to drill down into the reason for a specific event failure or to observe the bigger picture of the system, such as tracking the performance of the computation nucleus and satellite sites or the progress of a whole production campaign. The PanDA system was originally developed for the ATLAS experiment and today effectively manages more than 2 million jobs per day distributed over 170 computing centers worldwide. BigPanDA is its core component; commissioned in the middle of 2014, it is now the primary source of information for ATLAS users about the state of their computations and a source of decision-support information for shifters, operators and managers. In this wor...

  5. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  6. ATLAS copies its first PetaByte out of CERN

    CERN Document Server

    M. Branco; P. Salgado; L. Goossens; A. Nairz

    2006-01-01

    On 6th August ATLAS reached a major milestone for its Distributed Data Management project - copying its first PetaByte (10^15 Bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, where simulated fake data is used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers as would happen during the real data taking. The expected rate of data output from CERN when the detector is running at full trigger rate is 780 MB/s shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The idea of the exercise was to try to reach this data rate and sustain it for as long as possible. The exercise was run as part of the LCG's Service Challenges and allowed ATLAS to test successfully the integration of ATLAS software with the LCG middleware services that are used for low level cataloging and the actual data movement. When ATLAS is produ...
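
    The quoted rate and annual volume are consistent under the commonly used planning assumption of roughly 10^7 seconds of effective data taking per year; the short sketch below only reproduces the figures quoted in the record under that assumption.

```python
# Rough consistency check of the figures quoted above.
# Assumption: ~1e7 s of effective data taking per year (a common LHC planning number).
rate_mb_per_s = 780.0          # MB/s out of CERN at full trigger rate
seconds_per_year = 1.0e7       # effective data-taking seconds per year (assumption)

volume_pb_per_year = rate_mb_per_s * seconds_per_year / 1.0e9   # MB -> PB
print(f"{volume_pb_per_year:.1f} PB/year")   # ~7.8 PB/year, i.e. "around 8 PetaBytes"
```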

  7. Spinal canal stenosis at the level of Atlas

    Directory of Open Access Journals (Sweden)

    Suchanda Bhattacharjee

    2011-01-01

    Full Text Available We report here a rare case of high cervical stenosis at the level of the atlas in a patient who presented with progressively deteriorating quadriparesis and respiratory distress. A 10-year-old boy presented with the above symptoms of one year's duration, with a preceding history of trivial trauma prior to the onset of such symptoms. Cervical spine MRI revealed a significant stenosis at the level of the atlas from the posterior side, with a syrinx extending above and below. High-resolution computed tomography of the above level yielded an ill-defined osseous bar compressing the canal at the level of the C1 posterior arch, which appeared bifid in the midline. The patient was immediately taken up for surgery in view of his respiratory complaints. The child showed an excellent recovery after excision of the posterior arch of the atlas and removal of the compressing osseous structure.

  8. Big Data processing experience in the ATLAS experiment

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2014-01-01

    To improve the data quality for physics analysis, the ATLAS collaboration completed three major data reprocessing campaigns on the Grid during 2010-2012, with up to 2 PB of data being reprocessed every year. The Worldwide LHC Computing Grid provided petabytes of disk storage and tens of thousands of job slots for a faster throughput. High throughput is critical for timely completion of the reprocessing campaigns conducted in preparation for major physics conferences. In the 2011 reprocessing the throughput doubled in comparison to the 2010 campaign. To deliver new physics results for the 2013 Moriond Conference, ATLAS reprocessed twice as much data in November 2012 within the same time period as in the 2011 reprocessing, even though, due to increased LHC pile-up, the 2012 pp events required twice as much time to reconstruct as 2011 events. For a faster throughput, the number of jobs running concurrently exceeded 33k during the ATLAS reprocessing campaign in November 2012. For comparison the daily average number of runni...

  9. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2013-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...
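
    As an illustration of the "direct access" mode mentioned above, the sketch below opens a file through an XRootD redirector from Python using PyROOT. The redirector hostname, file path and tree name are placeholders invented for this example, not real ATLAS endpoints; only the root:// access pattern is taken from the record.

```python
# Minimal sketch of direct read access through an XRootD federation.
# "redirector.example.org" and the file path are placeholders (assumptions);
# the federation redirects the open request to a site that holds the file.
import ROOT

url = "root://redirector.example.org//atlas/some/dataset/file.root"
f = ROOT.TFile.Open(url)                 # remote open, handled by XRootD
if f and not f.IsZombie():
    tree = f.Get("CollectionTree")       # hypothetical tree name
    print("opened", url, "entries:", tree.GetEntries() if tree else "n/a")
```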

  10. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  11. ATLAS rewards industry

    CERN Multimedia

    2006-01-01

    Showing excellence in mechanics, electronics and cryogenics, three industries are honoured for their contributions to the ATLAS experiment. Representatives of the three award-winning companies after the ceremony. For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Close interaction with CERN was a key factor in the selection of each rewarded company, in addition to the high-quality products they delivered to the experiment. Alu Menziken Industrie AG, of Switzerland, was honoured for the production of 380,000 aluminium tubes for the Monitored Drift Tube Chambers (MDT). As Giora Mikenberg, the Muon System Project Leader, stressed, the aluminium tubes were delivered on time with extraordinary quality and precision. Between October 2000 and Jan...

  12. ATLAS B Physics Reach

    CERN Document Server

    Smizanska, M

    2004-01-01

    The current scope and status of ATLAS B-physics trigger and off-line performance studies are presented. With the initial low-luminosity LHC running, high-statistics analyses will allow sensitivity tests of possible New Physics contributions by searching for additional CP-violation effects and for increased probabilities of rare B-decay channels. In the physics of the Bs meson system there is sensitivity to mass and width differences and to a weak mixing phase beyond the SM expectation. ATLAS will also be able to access rare B decays using high-luminosity running. In beauty production, ATLAS will perform measurements sensitive to higher-order QCD terms, providing new data to investigate the present inconsistency between theory and experiment.

  13. Analyse d’atlas

    Directory of Open Access Journals (Sweden)

    2009-04-01

    Full Text Available Whether used as reference works, for reading, or to follow current events, atlases address very diverse audiences, from school to university. The Library has just received some interesting publications worth bringing to the attention of the readers of EchoGéo. The examples chosen and analysed here illustrate the formal and thematic variety of this type of document. L'atlas des atlas : le Monde vu d'ailleurs, 200 maps edited by Philippe Thureau-Dangin, Christine Chameau et al. Paris: Arthaud, 2008. 191 p (...

  14. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2014-01-01

    Physics processes involving tau leptons play a crucial role in understanding particle physics at the high energy frontier. The ability to efficiently trigger on events containing hadronic tau decays is therefore of particular importance to the ATLAS experiment. During the 2012 run, the Large Hadron Collider (LHC) reached instantaneous luminosities of nearly $10^{34} cm^{-2}s^{-1}$ with bunch crossings occurring every $50 ns$. This resulted in a huge event rate and a high probability of overlapping interactions per bunch crossing (pile-up). With this in mind, it was necessary to design an ATLAS tau trigger system that could reduce the event rate to a manageable level, while efficiently extracting the most interesting physics events in a pile-up-robust manner. In this poster the ATLAS tau trigger is described, its performance during 2012 is presented, and the outlook for LHC Run II is briefly summarized.

  15. The ATLAS Trigger System

    CERN Document Server

    Hauser, R

    2004-01-01

    ATLAS is one of two general-purpose detectors at the next generation proton-proton collider, the LHC. The high rate of interactions and the large number of read-out channels make the trigger system for ATLAS a challenging task. The initial bunch crossing rate of 40~MHz has to be reduced to about 200 Hz while preserving the physics signals against a large background. ATLAS uses a three-level trigger system, with the first level implemented in custom hardware, while the high level trigger systems are implemented in software on commodity hardware. This note describes the physics motivation, the various selection strategies for different channels as well as the physical implementation of the trigger system.

  16. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data read out from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators: implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced, to ensure a stable operational environment. LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  17. Two ATLAS suppliers honoured

    CERN Multimedia

    2007-01-01

    The ATLAS experiment has recognised the outstanding contribution of two firms to the pixel detector. Recipients of the supplier award with Peter Jenni, ATLAS spokesperson, and Maximilian Metzger, CERN Secretary-General. At a ceremony held at CERN on 28 November, the ATLAS collaboration presented awards to two of its suppliers that had produced sensor wafers for the pixel detector. The CiS Institut für Mikrosensorik of Erfurt in Germany has supplied 655 sensor wafers containing a total of 1652 sensor tiles and the firm ON Semiconductor has supplied 515 sensor wafers (1177 sensor tiles) from its foundry at Roznov in the Czech Republic. Both firms have successfully met the very demanding requirements. ATLAS’s huge pixel detector is very complicated, requiring expertise in highly specialised integrated microelectronics and precision mechanics. Pixel detector project leader Kevin Einsweiler admits that when the project was first propo...

  18. One registration multi-atlas-based pseudo-CT generation for attenuation correction in PET/MRI

    DEFF Research Database (Denmark)

    Arabi, H.; Zaidi, H.

    2016-01-01

    ...pseudo-CT generation approach. Methods: The proposed approach consists of only one online registration between the target and reference images, regardless of the number of atlas images (N), while for the remaining atlas images, the pre-computed transformation matrices to the reference image are used to align them, thus allowing us to achieve a reasonable compromise between computing time and quantitative performance. © 2016, Springer-Verlag Berlin Heidelberg.
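
    A minimal sketch of the idea of reusing pre-computed transforms is given below, using plain 4x4 affine matrices in NumPy. It illustrates only the general principle (compose each atlas-to-reference transform with the single reference-to-target registration), not the registration method actually used in the paper; all matrix values are placeholders.

```python
# Sketch: align N atlas images to a target with ONE online registration.
# Assumptions: transforms are represented as 4x4 affine matrices; the
# atlas-to-reference matrices were computed offline and stored.
import numpy as np

def compose(a_to_b, b_to_c):
    """Compose two affine transforms given as 4x4 matrices."""
    return b_to_c @ a_to_b

# Pre-computed offline: atlas_i -> reference (one matrix per atlas image).
atlas_to_ref = [np.eye(4) for _ in range(10)]      # placeholder matrices

# The only online step: register the reference image to the target image.
ref_to_target = np.eye(4)                          # placeholder result

# Every atlas image can now be mapped to the target without new registrations.
atlas_to_target = [compose(t, ref_to_target) for t in atlas_to_ref]
print(len(atlas_to_target), "atlas-to-target transforms from a single registration")
```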

  19. Developing an educational curriculum for EnviroAtlas ...

    Science.gov (United States)

    EnviroAtlas is a web-based tool developed by the EPA and its partners, which provides interactive tools and resources for users to explore the benefits that people receive from nature, often referred to as ecosystem goods and services. Ecosystem goods and services are important to human health and well-being. Using EnviroAtlas, users can access, view, and analyze diverse information to better understand the potential impacts of decisions. EnviroAtlas provides two primary tools, the Interactive Map and the Eco-Health Relationship Browser. EnviroAtlas integrates geospatial data from a variety of sources so that users can visualize the impacts of decision-making on ecosystems. The Interactive Map allows users to investigate various ecosystem elements (i.e. land cover, pollution, and community development) and compare them across localities in the United States. The best part of the Interactive Map is that it does not require specialized software for map application; rather, it requires only a computer and an internet connection. As such, it can be used as a powerful educational tool. The Eco-Health Relationship Browser is also a web-based, highly interactive tool that uses existing scientific literature to visually demonstrate the connections between the environment and human health. As an ASPPH/EPA Fellow with a background in environmental science and secondary science education, I am currently developing an educational curriculum to support the EnviroAtlas to

  20. Prompt data reconstruction at the ATLAS experiment

    CERN Document Server

    Stewart, G A; da Costa, JF; Tuggle, J; Unal, G; Boyd, Jamie; Firmino da Costa, Joao; Tuggle, Joseph; Unal, Guillaume

    2012-01-01

    The ATLAS experiment at the LHC collider recorded more than 5~fb$^{-1}$ data of $pp$ collisions at a centre-of-mass energy of 7~TeV during 2011. The recorded data are promptly reconstructed in two steps at a large computing farm at CERN to provide fast access to high quality data for physics analysis. In the first step, a subset of the data corresponding to 10~Hz is processed in parallel with data taking. Data quality, detector calibration constants, and the beam spot position are determined using the reconstructed data within 48 hours. In the second step all recorded data are processed with the updated parameters. The LHC significantly increased the instantaneous luminosity and the number of interactions per bunch crossing in 2011; the data recording rate by ATLAS exceeds 400~Hz. To cope with these challenges the performance and reliability of the ATLAS reconstruction software have been improved. In this paper we describe how the prompt data reconstruction system quickly and stably provides high quality data...

  1. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  2. Analysis of empty ATLAS pilot jobs

    CERN Document Server

    Love, Peter; The ATLAS collaboration

    2016-01-01

    The pilot model used by the ATLAS production system has been in use for many years. The model has proven to be a success, with many advantages over push models. However, one of the negative side-effects of using a pilot model is the presence of 'empty pilots' running on sites, which consume a small amount of walltime while not running a useful payload job. The impact on a site can be significant, with previous studies showing a total of 0.5% of walltime being used with no benefit to either the site or to ATLAS. Another impact is the number of empty pilots being processed by a site's Compute Element and batch system, which can be 5% of the total number of pilots being handled. In this paper we review the latest statistics using both ATLAS and site data and highlight edge cases where the number of empty pilots dominates. We also study the effect of tuning the pilot factories to reduce the number of empty pilots.

  3. Distributed Computing Beyond The Grid

    CERN Document Server

    Klimentov, A; The ATLAS collaboration

    2012-01-01

    This note summarizes the software development and operational experience and improvements of ATLAS Distributed Computing in the past years. The Grid model was successfully deployed for all HEP experiments, and after the first two years of very successful LHC data-taking and processing on the Grid we need to assess our experience and find a good balance between stability and innovation. Several Research and Development (R&D) pilot projects were launched by the ATLAS (and HEP) computing communities, namely 'cloud computing' and data storage federation. HEP experiments have also adopted a data popularity model, which allows migration from planned data placement to a dynamic model. This talk also presents an overview of the evolution of the HEP experiments' computing models and the increasing role of networking as a major resource (in addition to storage and CPU) which should be taken into account by workload management and data management systems.

  4. A generative probability model of joint label fusion for multi-atlas based brain segmentation.

    Science.gov (United States)

    Wu, Guorong; Wang, Qian; Zhang, Daoqiang; Nie, Feiping; Huang, Heng; Shen, Dinggang

    2014-08-01

    Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling
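
    The core idea of patch-based label fusion with a sparsity constraint can be illustrated with a simplified sketch. The code below is not the authors' generative model or EM procedure; it only shows the basic step of weighting atlas patches by similarity, keeping a sparse subset of them, and fusing their labels by weighted voting. All array shapes and the simple top-k sparsity rule are assumptions made for illustration.

```python
# Simplified sketch of patch-based label fusion with a sparsity constraint.
# target_patch: (P,) intensity vector around the voxel to be labelled.
# atlas_patches: (N, P) candidate patches extracted from the registered atlases.
# atlas_labels: (N,) label carried by the centre voxel of each atlas patch.
import numpy as np

def fuse_label(target_patch, atlas_patches, atlas_labels, keep=5, sigma=1.0):
    # Similarity-based weights (Gaussian of the patch distance).
    dists = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    weights = np.exp(-dists / (2.0 * sigma ** 2))

    # Sparsity: keep only the 'keep' most similar atlas patches,
    # reducing the influence of ambiguous or misleading patches.
    sparse = np.zeros_like(weights)
    top = np.argsort(weights)[-keep:]
    sparse[top] = weights[top]
    sparse /= sparse.sum()

    # Weighted voting over the candidate labels.
    votes = {}
    for w, lab in zip(sparse, atlas_labels):
        votes[lab] = votes.get(lab, 0.0) + w
    return max(votes, key=votes.get)

rng = np.random.default_rng(0)
print(fuse_label(rng.normal(size=27),
                 rng.normal(size=(20, 27)),
                 rng.integers(0, 3, size=20)))
```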

  5. Jet Physics in ATLAS

    CERN Document Server

    Sandoval, C; The ATLAS collaboration

    2012-01-01

    Measurements of hadronic jets provide tests of strong interactions which are interesting both in their own right and as backgrounds to many New Physics searches. It is also through tests of Quantum Chromodynamics that new physics may be discovered. The extensive dataset recorded with the ATLAS detector throughout the 7 TeV centre-of-mass LHC operation period allows QCD to be probed at distances never reached before. We present a review of selected ATLAS jet physics measurements. These measurements constitute precision tests of QCD in a new energy regime, and show sensitivity to the parton densities in the proton and to the value of the strong coupling, alpha_s.

  6. Analysis Preservation in ATLAS

    CERN Document Server

    Cranmer, Kyle; The ATLAS collaboration; Jones, Roger; South, David

    2015-01-01

    Long before data taking ATLAS established a policy that all analyses need to be preserved. In the initial data-taking period, this has been achieved by various tools and techniques. ATLAS is now reviewing the analysis preservation with the aim to bring coherence and robustness to the process and with a clearer view of the level of reproducibility that is reasonably achievable. The secondary aim is to reduce the load on the analysts. Once complete, this will serve for our internal preservation needs but also provide a basis for any subsequent sharing of analysis results with external parties.

  7. Atlas of Jordan

    OpenAIRE

    Ababsa, Myriam; Al-Bilbisi, Hussam; al-Muheisen, Zeydoun; al-Nahar, Maysoun; Alaime, Mathieu; Augé, Christian; Azizeh, Wael Abu; Bakhit, Adnan; De Bel-Air, Françoise; Bourke, Stephen; Courcier, Rémy; Crouzel, Isabelle; Daher, Rami; Daradkeh, Saleh Musa; Darmame, Khadija

    2014-01-01

    The ambition of this atlas is to offer the reader keys for the spatial analysis of the social, economic and political dynamics that shape Jordan, a country exemplary of the complexity of the Middle East. The product of seven years of scientific cooperation between the Ifpo, the Royal Jordanian Geographic Centre and the University of Jordan, the atlas brings together the contributions of 48 European, Jordanian and international researchers. The formation of the Jordanian territories over the long term is illuminated...

  8. South Baltic Wind Atlas

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Hahmann, Andrea N.; Hasager, Charlotte Bay

    A first version of a wind atlas for the South Baltic Sea has been developed using the WRF mesoscale model and verified by data from tall Danish and German masts. Six different boundary-layer parametrization schemes were evaluated by comparing the WRF results to the observed wind profiles at the m...

  9. ATLAS forward physics program

    CERN Document Server

    HELLER, M; The ATLAS collaboration

    2010-01-01

    The variety of forward detectors installed in the vicinity of the ATLAS experiment allows a wide range of forward physics topics to be studied. They provide good information about rapidity gaps, and the installation of very forward detectors (ALFA and AFP) will allow tagging of the leading proton(s) remaining from the different processes studied. Most of the studies have to be done at low luminosity to avoid pile-up, but the AFP project offers a really exciting future for the ATLAS forward physics program. We also present how these forward detectors can be used to measure the relative and absolute luminosity.

  10. ATLAS TV PROJECT

    CERN Multimedia

    OMNI communication

    2005-01-01

    CAMERA ON TOROID. The ATLAS barrel toroid system consists of eight coils, each of axial length 25.3 m, assembled radially and symmetrically around the beam axis. The coils are of a flat racetrack type with two double-pancake windings made of 20.5 kA aluminium-stabilized niobium-titanium superconductor. The video is about the slow lowering of the toroid down into the cavern of ATLAS, a very demanding task. The camera is placed on top of the toroid.

  11. The Herschel ATLAS

    Science.gov (United States)

    Eales, S.; Dunne, L.; Clements, D.; Cooray, A.; De Zotti, G.; Dye, S.; Ivison, R.; Jarvis, M.; Lagache, G.; Maddox, S.

    2010-01-01

    The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 570 sq deg of the extragalactic sky, 4 times larger than all the other Herschel extragalactic surveys combined, in five far-infrared and submillimeter bands. We describe the survey, the complementary multiwavelength data sets that will be combined with the Herschel data, and the six major science programs we are undertaking. Using new models based on a previous submillimeter survey of galaxies, we present predictions of the properties of the ATLAS sources in other wave bands.

  12. Improving ATLAS reprocessing software

    CERN Document Server

    Novak, Tadej

    2014-01-01

    For my CERN Summer Student programme I have been working with the ATLAS reprocessing group. Data taken at the ATLAS experiment is not only processed after being taken, but is also reprocessed multiple times afterwards. This allows applying new alignments and detector calibrations and using improved or faster algorithms. Reprocessing is usually done in campaigns for different periods of data or for different interest groups. The idea of my project was to simplify the definition of tasks and the monitoring of their progress. I created a LIST configuration file generator script in Python and a monitoring webpage for tracking current reprocessing tasks.

  13. ATLAS Fast Physics Monitoring

    CERN Document Server

    Koeneke, K; The ATLAS collaboration

    2011-01-01

    The ATLAS experiment at the LHC has been recording data from proton-proton collisions with 7 TeV center-of-mass energy since spring 2010. The integrated luminosity has grown nearly exponentially since then and continues to rise fast. The ATLAS collaboration has set up a framework to automatically run over the rapidly growing dataset and produce performance and physics plots for the most interesting analyses. The system is designed to give fast feedback. The histograms are produced within hours of data reconstruction (2-3 days after data taking). Hints of potentially interesting physics signals obtained this way are followed up by the physics groups.

  14. Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G.; Abat, E.; Abbott, B.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Acharya, Bobby Samir; Adams, D.L.; Addy, T.N.; Adorisio, C.; Adragna, P.; Adye, T.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; /SUNY, Albany /Alberta U. /Ankara U. /Annecy, LAPP /Argonne /Arizona U. /Texas U., Arlington /Athens U. /Natl. Tech. U., Athens /Baku, Inst. Phys. /Barcelona, IFAE /Belgrade U. /VINCA Inst. Nucl. Sci., Belgrade /Bergen U. /LBL, Berkeley /Humboldt U., Berlin /Bern U., LHEP /Birmingham U. /Bogazici U. /INFN, Bologna /Bologna U.

    2011-11-28

    The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale, where groundbreaking discoveries are expected. The focus is on the investigation of electroweak symmetry breaking and, linked to this, the search for the Higgs boson, as well as the search for physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of

  15. A whole brain atlas with sub-parcellation of cortical gyri using resting fMRI

    Science.gov (United States)

    Joshi, Anand A.; Choi, Soyoung; Sonkar, Gaurav; Chong, Minqi; Gonzalez-Martinez, Jorge; Nair, Dileep; Shattuck, David W.; Damasio, Hanna; Leahy, Richard M.

    2017-02-01

    The new hybrid-BCI-DNI atlas is a high-resolution MPRAGE, single-subject atlas, constructed using both anatomical and functional information to guide the parcellation of the cerebral cortex. Anatomical labeling was performed manually on coronal single-slice images guided by sulcal and gyral landmarks to generate the original (non-hybrid) BCI-DNI atlas. Functional sub-parcellations of the gyral ROIs were then generated from 40 minimally preprocessed resting fMRI datasets from the HCP database. Gyral ROIs were transferred from the BCI-DNI atlas to the 40 subjects using the HCP grayordinate space as a reference. For each subject, each gyral ROI was subdivided using the fMRI data by applying spectral clustering to a similarity matrix computed from the fMRI time-series correlations between each vertex pair. The sub-parcellations were then transferred back to the original cortical mesh to create the subparcellated hBCI-DNI atlas with a total of 67 cortical regions per hemisphere. To assess the stability of the gyral subdivisions, a separate set of 60 HCP datasets was processed as follows: 1) coregistration of the structural scans to the hBCI-DNI atlas; 2) coregistration of the anatomical BCI-DNI atlas without functional subdivisions, followed by sub-parcellation of each subject's resting fMRI data as described above. We then computed consistency between the anatomically-driven delineation of each gyral subdivision and that obtained per subject using individual fMRI data. The gyral sub-parcellations generated by atlas-based registration show variable but generally good overlap of the confidence intervals with the resting fMRI-based subdivisions. These consistency measures will provide a quantitative measure of reliability of each subdivision to users of the atlas.
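
    The sub-parcellation step described above (spectral clustering of a similarity matrix built from fMRI time-series correlations between vertex pairs) can be sketched as follows. This is a generic illustration using scikit-learn, not the authors' processing pipeline; the numbers of vertices, time points and clusters are arbitrary choices for the example.

```python
# Sketch: subdivide one gyral ROI by spectral clustering of an fMRI
# correlation-based similarity matrix (illustration only; sizes are arbitrary).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
timeseries = rng.normal(size=(500, 1200))   # 500 vertices x 1200 time points

# Similarity matrix from pairwise time-series correlations,
# shifted to [0, 1] so it can be used as a non-negative affinity.
corr = np.corrcoef(timeseries)
affinity = (corr + 1.0) / 2.0

labels = SpectralClustering(n_clusters=3,
                            affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(np.bincount(labels))                  # number of vertices per sub-parcel
```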

  16. Application of Grid technologies and search for exotics physics with the ATLAS experiment at the LHC

    CERN Document Server

    March, Luis; Ros, Eduardo

    The work presented in this thesis has been performed within the ATLAS (A Toroidal LHC ApparatuS) collaboration. Two subjects have been investigated. One subject is the Computing System Commissioning (CSC) production using an instance of the Production System (ProdSys), called Lexor, and the test of the ATLAS Distributed Analysis (ADA) using ProdSys. The other subject is the simulation and subsequent analysis of processes involving new particles predicted by the Little Higgs model within the ATLAS detector. An introduction to the Standard Model (SM), the Large Hadron Collider (LHC) and the ATLAS experiment, software and computing is given in chapter 1. The problems of the SM are discussed and some proposed solutions are reviewed. The SM introduction is followed by an overview of the LHC and ATLAS. The main ATLAS subsystems are described and the ATLAS software and computing model is discussed. Many physics processes within and beyond the Standard Model involve b-quark decays. New heavy particles, expected in mo...

  17. What Is the Most Representative Parameter for Describing the Size of the Atlas? CT Morphometric Analysis of the Atlas with Special Reference to Atlas Hypoplasia.

    Science.gov (United States)

    Yamahata, Hitoshi; Hirano, Hirofumi; Yamaguchi, Satoshi; Mori, Masanao; Niiro, Tadaaki; Tokimura, Hiroshi; Arita, Kazunori

    2017-09-15

    The spinal canal diameter (SCD) is one of the most studied factors for the assessment of cervical spinal canal stenosis. The inner anteroposterior diameter (IAP), the SCD, and the cross-sectional area (CSA) of the atlas have been used for the evaluation of the size of the atlas in patients with atlas hypoplasia, a rare form of developmental spinal canal stenosis; however, there is little information on their relationship. The aim of this study was to identify the most useful parameter for depicting the size of the atlas. The CSA, the IAP, and the SCD were measured on computed tomography (CT) images at the C1 level of 213 patients and compared in this retrospective study. These three parameters increased with increasing patient height and weight. There was a strong correlation between the IAP and the SCD (r = 0.853) or the CSA (r = 0.822), while the correlation between the SCD and the CSA (r = 0.695) was weaker than that between the IAP and the CSA. Partial correlation analysis showed that the IAP was positively correlated with the SCD (r = 0.687) and the CSA (r = 0.612) when the CSA or the SCD were controlled for. The SCD was negatively correlated with the CSA when the IAP was controlled for (r = -0.21). The IAP can therefore serve as a surrogate for the CSA in evaluating the size of the atlas ring, while the SCD does not independently correlate with the CSA. As patient height and weight affect the size of the atlas, analysis of the spinal canal at the C1 level should take physiologic patient data into account.
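
    The partial correlation figures above follow from the quoted pairwise correlations via the standard first-order partial correlation formula; the short sketch below reproduces the quoted value using only numbers given in the record.

```python
# First-order partial correlation r_xy.z: correlation of x and y after
# controlling for z, computed from the three pairwise correlations.
import math

def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Pairwise correlations quoted in the record:
# IAP-SCD = 0.853, IAP-CSA = 0.822, SCD-CSA = 0.695.
print(round(partial_corr(0.853, 0.822, 0.695), 3))  # ~0.688: IAP-SCD controlling for CSA
```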

  18. PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC

    Directory of Open Access Journals (Sweden)

    Megino Fernando Barreiro

    2016-01-01

    The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily at up to 200 thousand simultaneous cores (limited by the resources available to ATLAS), up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.

  19. ATLAS Data Challenges - A Collaborative Worldwide Activity

    CERN Multimedia

    Poulard, G

    The goals of the ATLAS Data Challenges (DC) are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. It is understood that these Data Challenges should be of increasing complexity and that their results will be used as input for a Computing TDR and for preparing an MoU in due time. A major feature of the current computing activities (DC1) in ATLAS is the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the actual production of those samples. It should be noted that it is not an option to "run everything at CERN" even if we wanted to; the resources are not available at CERN to carry out the production on a reasonable time-scale. We have therefore had to face the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, th...

  20. Searches for beyond the Standard Model physics with boosted topologies in the ATLAS experiment using the Grid-based Tier-3 facility at IFIC-Valencia

    CERN Document Server

    Villaplana Pérez, Miguel; Vos, Marcel

    Both the LHC and ATLAS have been performing well beyond expectation since the start of data taking at the end of 2009. Since then, several billion collision events have been recorded by the ATLAS experiment. With a data-taking efficiency higher than 95% and more than 99% of its channels working, ATLAS supplies data of unmatched quality. In order to analyse the data, the ATLAS Collaboration has designed a distributed computing model based on GRID technologies. The ATLAS computing model and its evolution since the start of the LHC are discussed in section 3.1. The ATLAS computing model groups the different types of computing centres of the ATLAS Collaboration in a tiered hierarchy that ranges from the Tier-0 at CERN, down to the 11 Tier-1 centres and the nearly 80 Tier-2 centres distributed worldwide. The Spanish Tier-2 activities during the first years of data taking are described in section 3.2. Tier-3s are institution-level centres, not funded or controlled by ATLAS, that participate presuma...

  1. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2000-01-01

    Different phases of realisation at Point 1: the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the different infrastructures for ATLAS. Real underground video. When passing through the walls, the ongoing work can be heard and seen. The film has original working sound.

  2. Visits to Tier-1 Computing Centres

    CERN Multimedia

    Dario Barberis

    At the beginning of 2007 it became clear that an enhanced level of communication is needed between the ATLAS computing organisation and the Tier-1 centres. Most usual meetings are ATLAS-centric and cannot address the issues of each Tier-1; therefore we decided to organise a series of visits to the Tier-1 centres and focus on site issues. For us, ATLAS computing management, it is most useful to realize how each Tier-1 centre is organised, and its relation to the associated Tier-2s; indeed their presence at these visits is also very useful. We hope it is also useful for sites... at least, we are told so! The usual participation includes, from the ATLAS side: computing management, operations, data placement, resources, accounting and database deployment coordinators; and from the Tier-1 side: computer centre management, system managers, Grid infrastructure people, network, storage and database experts, local ATLAS liaison people and representatives of the associated Tier-2s. Visiting Tier-1 centres (1-4). ...

  3. ATLAS Off-Grid sites (Tier-3) monitoring

    CERN Document Server

    Petrosyan, A S; The ATLAS collaboration

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data every year. The ATLAS Computing model embraces the Grid paradigm and originally included three levels of computing centres in order to operate on such a large volume of data. The ATLAS Distributed Computing activities have so far concentrated on the “central” part of the computing system of the experiment, namely the first 3 tiers (the CERN Tier-0, the 10 Tier-1 centres and about 50 Tier-2s). This is a coherent system to perform data processing and management on a global scale, hosting (re)processing and simulation activities down to group and user analysis. With the formation of small computing centres, usually based at universities, the model was expanded to include them as Tier-3 sites. Tier-3 centres consist of non-pledged resources, mostly dedicated to data analysis by geographically close or local scientific groups. The experiment supplies all necessary software to operate a typical Grid site, ...

  4. Multi-threaded ATLAS simulation on Intel Knights Landing processors

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00014247; The ATLAS collaboration; Calafiura, Paolo; Leggett, Charles; Tsulaia, Vakhtang; Dotti, Andrea

    2017-01-01

    The Knights Landing (KNL) release of the Intel Many Integrated Core (MIC) Xeon Phi line of processors is a potential game changer for HEP computing. With 72 cores and deep vector registers, the KNL cards promise significant performance benefits for highly-parallel, compute-heavy applications. Cori, the newest supercomputer at the National Energy Research Scientific Computing Center (NERSC), was delivered to its users in two phases with the first phase online at the end of 2015 and the second phase now online at the end of 2016. Cori Phase 2 is based on the KNL architecture and contains over 9000 compute nodes with 96GB DDR4 memory. ATLAS simulation with the multithreaded Athena Framework (AthenaMT) is a good potential use-case for the KNL architecture and supercomputers like Cori. ATLAS simulation jobs have a high ratio of CPU computation to disk I/O and have been shown to scale well in multi-threading and across many nodes. In this paper we will give an overview of the ATLAS simulation application with detai...

  5. Prime wires for ATLAS

    CERN Multimedia

    2003-01-01

    In an award ceremony on 3 September, ATLAS honoured the French company Axon Cable for its special coaxial cables, which were purpose-built for the Liquid Argon calorimeter modules. Working for CERN since the 1970s, Axon' Cable received the ATLAS supplier award last week for its contribution to the liquid argon calorimeter cables of ATLAS (LAL/Orsay, France and University of Victoria, Canada), started in 1996. Its two sets of minicoaxial cables, called harnesses "A" and "B", are designed to function in the harsh conditions in the liquid argon (at 90 Kelvin or -183°C) and under extreme radiation (up to several Mrads). The cables are mainly used for the readout of the calorimeters, and are connected to the outside world by 114 signal feedthroughs with 1920 channels each. The signal from the detectors is transmitted directly without any amplification, which imposes tight restrictions on the impedance and on the signal propagation time of the cables. Peter Jenni, ATLAS spokesperson, gives the award for best s...

  6. Taus at ATLAS

    Energy Technology Data Exchange (ETDEWEB)

    Demers, Sarah M. [Yale Univ., New Haven, CT (United States). Dept. of Physics

    2017-12-06

    The grant "Taus at ATLAS" supported the group of Sarah Demers at Yale University over a period of 8.5 months, bridging the time between her Early Career Award and her inclusion on Yale's grant cycle within the Department of Energy's Office of Science. The work supported the functioning of the ATLAS Experiment at CERN's Large Hadron Collider and the analysis of ATLAS data. The work included searching for the Higgs Boson in a particular mode of its production (with a W or Z boson) and decay (to a pair of tau leptons.) This was part of a broad program of characterizing the Higgs boson as we try to understand this recently discovered particle, and whether or not it matches our expectations within the current standard model of particle physics. In addition, group members worked with simulation to understand the physics reach of planned upgrades to the ATLAS experiment. Supported group members include postdoctoral researcher Lotte Thomsen and graduate student Mariel Pettee.

  7. Hard Probes at ATLAS

    CERN Document Server

    Citron, Z; The ATLAS collaboration

    2014-01-01

    The ATLAS collaboration has measured several hard probe observables in Pb+Pb and p+Pb collisions at the LHC. These measurements include jets which show modification in the hot dense medium of heavy ion collisions as well as color neutral electro-weak bosons. Together, they elucidate the nature of heavy ion collisions.

  8. The ATLAS event filter

    CERN Document Server

    Beck, H P; Boissat, C; Davis, R; Duval, P Y; Etienne, F; Fede, E; Francis, D; Green, P; Hemmer, F; Jones, R; MacKinnon, J; Mapelli, Livio P; Meessen, C; Mommsen, R K; Mornacchi, Giuseppe; Nacasch, R; Negri, A; Pinfold, James L; Polesello, G; Qian, Z; Rafflin, C; Scannicchio, D A; Stanescu, C; Touchard, F; Vercesi, V

    1999-01-01

    An overview of the studies for the ATLAS Event Filter is given. The architecture and the high-level design of the DAQ-1 prototype are presented. The current status of the prototypes is briefly given. Finally, future plans and milestones are given. (11 refs).

  9. A thermosiphon for ATLAS

    CERN Multimedia

    Rosaria Marraffino

    2013-01-01

    A new thermosiphon cooling system, designed for the ATLAS silicon detectors by CERN’s EN-CV team in collaboration with the experiment, will replace the current system in the next LHC run in 2015. Using the basic properties of density difference and making gravity do the hard work, the thermosiphon promises to be a very reliable solution that will ensure the long-term stability of the whole system.   Former compressor-based cooling system of the ATLAS inner detectors. The system is currently being replaced by the innovative thermosiphon. (Photo courtesy of Olivier Crespo-Lopez). Reliability is the major issue for the present cooling system of the ATLAS silicon detectors. The system was designed 13 years ago using a compressor-based cooling cycle. “The current cooling system uses oil-free compressors to avoid fluid pollution in the delicate parts of the silicon detectors,” says Michele Battistin, EN-CV-PJ section leader and project leader of the ATLAS thermosiphon....

  10. ATLAS Experiment Brochure

    CERN Multimedia

    AUTHOR|(INSPIRE)INSPIRE-00085461

    2016-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  11. ATLAS fast physics monitoring

    Indian Academy of Sciences (India)

    The ATLAS Collaboration has set up a framework to automatically process the rapidly growing dataset and produce performance and physics plots for the most interesting analyses. The system is designed to give fast feedback. The histograms are produced within hours of data reconstruction (2–3 days after data taking).

  12. ATLAS PDF Results

    CERN Document Server

    Stockton, Mark; The ATLAS collaboration

    2015-01-01

    Uncertainties from parton distribution functions can limit our measurements of new cross sections and searches beyond the SM. Results are presented on recent ATLAS measurements which are sensitive to parton distribution functions. These cover a wide range of cross section measurements, including those from: jets, photons, $W$/$Z$ bosons and top quarks.

  13. ATLAS starts moving in

    CERN Multimedia

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1 March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day.

  14. Prototype ATLAS straw tracker

    CERN Multimedia

    Laurent Guiraud

    1998-01-01

    This is an early prototype of the straw tracking device for the ATLAS detector at CERN. This detector will be part of the LHC project, scheduled to start operation in 2008. The straw tracker will consist of thousands of gas-filled straws, each containing a wire, allowing the tracks of particles to be followed.

  15. An Icelandic wind atlas

    Science.gov (United States)

    Nawri, Nikolai; Nína Petersen, Gudrun; Bjornsson, Halldór; Arason, Þórður; Jónasson, Kristján

    2013-04-01

    While Iceland has ample wind, its use for energy production has been limited. Electricity in Iceland is generated from renewable hydro- and geothermal sources, and adding wind energy has not been considered practical or even necessary. However, adding wind to the energy mix is becoming a more viable option as opportunities for new hydro or geothermal power installations become limited. In order to obtain an estimate of the wind energy potential of Iceland, a wind atlas has been developed as part of the Nordic project "Improved Forecast of Wind, Waves and Icing" (IceWind). The atlas is based on mesoscale model runs produced with the Weather Research and Forecasting (WRF) Model and high-resolution regional analyses obtained through the Wind Atlas Analysis and Application Program (WAsP). The wind atlas shows that the wind energy potential is considerable. The regions with the strongest average wind are nevertheless impractical for wind farms, due to distance from road infrastructure and the power grid as well as the harsh winter climate. However, even in easily accessible regions the wind energy potential of Iceland, as measured by annual average power density, is among the highest in Western Europe. There is a strong seasonal cycle, with wintertime power densities throughout the island being at least a factor of two higher than during summer. Calculations show that a modest wind farm of ten medium-size turbines would produce more energy throughout the year than a small hydro power plant, making wind energy a viable additional option.

  16. The observer's sky atlas

    CERN Document Server

    Karkoschka, E

    2007-01-01

    This title includes a short introduction to observing, a thorough description of the star charts and tables, a glossary and much more. It is perfect for both the beginner and the seasoned observer. It is a fully revised edition of a best-selling and highly praised sky atlas.

  17. ATLAS solenoid operates underground

    CERN Document Server

    2006-01-01

    A new phase for the ATLAS collaboration started with the first operation of a completed sub-system: the Central Solenoid. Teams monitoring the cooling and powering of the ATLAS solenoid in the control room. The solenoid was cooled down to 4.5 K from 17 to 23 May. The first current was established the same evening that the solenoid became cold and superconductive. 'This makes the ATLAS Central Solenoid the very first cold and superconducting magnet to be operated in the LHC underground areas!', said Takahiko Kondo, professor at KEK. Though the current was limited to 1 kA, the cool-down and powering of the solenoid was a major milestone for all of the control, cryogenic, power and vacuum systems-a milestone reached by the hard work and many long evenings invested by various teams from ATLAS, all of CERN's departments and several large and small companies. Since the Central Solenoid and the barrel liquid argon (LAr) calorimeter share the same cryostat vacuum vessel, this achievement was only possible in perfe...

  18. ATLAS Experiment Brochure - French

    CERN Document Server

    2018-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  19. ATLAS Experiment Brochure - Serbian

    CERN Document Server

    2018-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  20. ATLAS Experiment Brochure - Italian

    CERN Multimedia

    2018-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  1. Taking ATLAS to new heights

    CERN Document Server

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  2. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  3. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data the Information Service (IS) facility has been developed within the ATLAS TDAQ project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated with an update interval varying from a second to a few tens of seconds. IS ...
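
    As a schematic illustration of the publish/merge pattern such an information service supports, here is a minimal sketch in which providers publish named information objects and per-node histograms are summed into a farm-wide view. The class, method names and histogram contents are invented; this is not the TDAQ IS API.

        # Minimal sketch of an information-service style publish/merge interface.
        from collections import defaultdict
        import threading

        class InfoService:
            def __init__(self):
                self._lock = threading.Lock()
                self._store = defaultdict(dict)          # partition -> {name: value}

            def publish(self, partition, name, value):
                with self._lock:
                    self._store[partition][name] = value

            def merge_histograms(self, partition, prefix):
                # Sum per-node histograms (lists of bin counts) whose names share a
                # common prefix, mimicking the farm-wide integration described above.
                with self._lock:
                    hists = [v for k, v in self._store[partition].items()
                             if k.startswith(prefix)]
                merged = [0] * max((len(h) for h in hists), default=0)
                for h in hists:
                    for i, count in enumerate(h):
                        merged[i] += count
                return merged

        svc = InfoService()
        svc.publish("ATLAS", "node0001/HLT/pt_hist", [5, 2, 1])
        svc.publish("ATLAS", "node0002/HLT/pt_hist", [3, 4, 0])
        print(svc.merge_histograms("ATLAS", prefix="node"))   # -> [8, 6, 1]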

  4. The Hatfield Lunar Atlas Digitally Re-Mastered Edition

    CERN Document Server

    Cook, Anthony Charles

    2012-01-01

    The Hatfield Lunar Atlas has become an amateur lunar observer's bible since it was first published in 1968. A major update of the atlas was made in 1998, using the same wonderful photographs that Commander Henry Hatfield made with his purpose-built 12-inch (300 mm) telescope, but bringing the lunar nomenclature up to date and changing the units from Imperial to S.I. metric. However, with modern telescope optics, digital imaging equipment and computer enhancement new pictures can easily surpass what was achieved with Henry Hatfield's 12-inch telescope and a film camera. This limits the usefulness of the original atlas to visual observing or imaging with rather small amateur telescopes. The new, digitally re-mastered edition vastly improves the clarity and definition of the original photographs - significantly beyond the resolution limits of the photographic grains present in earlier atlas versions - while preserving the layout and style of the original publications. This has been achieved by merging computer-v...

  5. Implementation of the ATLAS trigger within the multi-threaded software framework AthenaMT

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00225867; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger, HLT, that provides parallel execution of trigger algorithms within the ATLAS multithreaded software framework, AthenaMT. This development will enable the ATLAS HLT to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the HLT input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that each execute algorithms sequentially for different events. AthenaMT will provide a fully multi-threaded environment that will additionally enable concurrent ...
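
    As a rough illustration of the inter-event parallelism that a multi-threaded framework provides, the sketch below runs a fixed sequence of toy "trigger algorithms" over many events with a thread pool, so several events are in flight at once. The algorithm names, event content and thread count are invented; this is not AthenaMT code.

        # Toy inter-event parallelism: each event is processed by the full algorithm
        # sequence, and different events run concurrently on different threads.
        from concurrent.futures import ThreadPoolExecutor

        def algo_calo(event):                    # toy "calorimeter" reconstruction step
            event["et"] = sum(event["cells"])
            return event

        def algo_hypo(event, threshold=20.0):    # toy hypothesis/selection step
            event["accepted"] = event["et"] > threshold
            return event

        def process(event):
            return algo_hypo(algo_calo(event))

        events = [{"id": i, "cells": [i * 1.5, 10.0, 3.0]} for i in range(8)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            results = list(pool.map(process, events))

        print("accepted events:", [e["id"] for e in results if e["accepted"]])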

  6. Crustal and lithospheric imaging of the Atlas Mountains of Morocco inferred from magnetotelluric data

    Science.gov (United States)

    Kiyan, D.; Jones, A. G.; Fullea, J.; Hogg, C.; Ledo, J.; Sinischalchi, A.; Campanya, J.; Picasso Phase II Team

    2010-12-01

    The Atlas System of Morocco is an intra-continental mountain belt extending for more than 2,000 km along the NW African plate with a predominant NE-SW trend. The System comprises three main branches: the High Atlas, the Middle Atlas, and the Anti Atlas. We present the results of a very recent multi-institutional magnetotelluric (MT) experiment across the Atlas Mountains region that started in September 2009 and ended in February 2010, comprising acquisition of broadband and long-period MT data. The experiment consisted of two profiles: (1) a N-S oriented profile crossing the Middle Atlas through the Central High Atlas to the east and (2) a NE-SW profile crossing the western High Atlas towards the Anti Atlas to the west. The MT measurements are part of the PICASSO (Program to Investigate Convective Alboran Sea System Overturn) and the concomitant TopoMed (Plate re-organization in the western Mediterranean: Lithospheric causes and topographic consequences - an ESF EUROCORES TOPO-EUROPE project) projects, to develop a better understanding of the internal structure and evolution of the crust and lithosphere of the Atlas Mountains. The MT data have been processed with robust remote reference methods and subjected to comprehensive strike and dimensionality analysis. Two clearly depth-differentiated strike directions are apparent for the crustal (5-35 km) and lithospheric (50-150 km) depth ranges. These two orientations are roughly consistent with the NW-SE Africa-Eurasia convergence acting since the late Cretaceous, and with the NNE-SSW Middle Atlas, where Miocene to recent alkaline volcanism is present. Two-dimensional (2-D) smooth electrical resistivity models were computed independently for both 50 degrees and 20 degrees E of N strike directions. At the crustal scale, our preliminary results reveal a middle to lower-crustal conductive layer stretching from the Middle Atlas southward towards the High Moulouya basin. The most resistive (and therefore potentially thickest

  7. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric analysis cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is
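
    A minimal sketch of the controller/integrator idea described above, assuming invented system models and coefficients: analyst parameters are passed to simple model functions, and an integrator rolls mass and cost up over a multi-year campaign. None of the numbers come from ATLAS.

        # Toy workbook chain: models -> controller parameters -> campaign roll-up.
        def launcher_model(payload_t):
            return {"launch_mass_t": payload_t * 12.0, "cost_musd": 90 + 4.0 * payload_t}

        def habitat_model(crew):
            return {"launch_mass_t": 8.0 + 2.5 * crew, "cost_musd": 150 + 30.0 * crew}

        def integrate(campaign):
            # Accumulate mass and cost over every element of every mission.
            total = {"launch_mass_t": 0.0, "cost_musd": 0.0}
            for mission in campaign:
                for element in mission:
                    for key in total:
                        total[key] += element[key]
            return total

        campaign = [
            [launcher_model(payload_t=20), habitat_model(crew=4)],   # year 1 mission
            [launcher_model(payload_t=25)],                          # year 2 mission
        ]
        print(integrate(campaign))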

  8. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    CERN Document Server

    Campana, S

    2010-01-01

    The ATLAS experiment at CERN developed an automated system for the distribution of simulated and detector data. This system, which consists in part of various ATLAS-specific services, relies strongly on the WLCG infrastructure, at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution describes the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience of distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the co...

  9. Managing ATLAS data on a petabyte-scale with DQ2

    CERN Document Server

    Branco, M; Gaidioz, B; Garonne, V; Koblitz, B; Lassnig, M; Rocha, R; Salgado, P; Wenaus, T

    2008-01-01

    The ATLAS detector at CERN's Large Hadron Collider presents data handling requirements on an unprecedented scale. From 2008 on, the ATLAS distributed data management system, Don Quijote2 (DQ2), must manage tens of petabytes of experiment data per year, distributed globally via the LCG, OSG and NDGF computing grids, now commonly known as the WLCG. Since its inception in 2005 DQ2 has continuously managed all experiment data for the ATLAS collaboration, which now comprises over 3000 scientists participating from more than 150 universities and laboratories in 34 countries. Fulfilling its primary requirement of providing a highly distributed, fault-tolerant and scalable architecture, DQ2 was successfully upgraded from managing data on a terabyte scale to managing data on a petabyte scale. We present improvements and enhancements to DQ2 based on the increasing demands for ATLAS data management. We describe performance issues, architectural changes and implementation decisions, the current state of deployment in test ...

  10. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysi...
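
    A toy version of such a scalability ramp, assuming a simulated query latency rather than a real database: the number of concurrent "jobs" issuing conditions queries is increased step by step, and the measured throughput shows where the service would stop scaling.

        # Concurrency ramp: throughput vs number of concurrent jobs.
        import time
        from concurrent.futures import ThreadPoolExecutor

        def conditions_query():
            time.sleep(0.01)                   # stand-in for a database round trip
            return 1

        def measure(concurrency, queries_per_job=20):
            start = time.time()
            with ThreadPoolExecutor(max_workers=concurrency) as pool:
                jobs = [pool.submit(lambda: sum(conditions_query()
                                                for _ in range(queries_per_job)))
                        for _ in range(concurrency)]
                total = sum(j.result() for j in jobs)
            return total / (time.time() - start)   # queries per second

        for n in (1, 5, 10, 20, 40):
            print(f"{n:3d} concurrent jobs -> {measure(n):7.1f} queries/s")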

  11. ATLAS BigPanDA Monitoring and Its Evolution

    CERN Document Server

    Wenaus, Torre; The ATLAS collaboration; Korchuganova, Tatiana

    2016-01-01

    BigPanDA is the latest generation of the monitoring system for the Production and Distributed Analysis (PanDA) system. The BigPanDA monitor is a core component of PanDA and also serves the monitoring needs of the new ATLAS Production System Prodsys-2. BigPanDA has been developed to serve the growing computation needs of the ATLAS Experiment and the wider applications of PanDA beyond ATLAS. Through a system-wide job database, the BigPanDA monitor provides a comprehensive and coherent view of the tasks and jobs executed by the system, from high level summaries to detailed drill-down job diagnostics. The system has been in production and has remained in continuous development since mid 2014, today effectively managing more than 2 million jobs per day distributed over 150 computing centers worldwide. BigPanDA also delivers web-based analytics and system state views to groups of users including distributed computing systems operators, shifters, physicist end-users, computing managers and accounting services. Provi...

  12. Virtualization of the ATLAS software environment on a shared HPC system

    CERN Document Server

    Schnoor, Ulrike; The ATLAS collaboration

    2017-01-01

    High-Performance Computing (HPC) and other research cluster computing resources provided by universities can be useful supplements to the collaboration’s own WLCG computing resources for data analysis and production of simulated event samples. The shared HPC cluster "NEMO" at the University of Freiburg has been made available to local ATLAS users through the provisioning of virtual machines incorporating the ATLAS software environment analogously to a WLCG center. The talk describes the concept and implementation of virtualizing the ATLAS software environment to run both data analysis and production on the HPC host system which is connected to the existing Tier-3 infrastructure. Main challenges include the integration into the NEMO and Tier-3 schedulers in a dynamic, on-demand way, the scalability of the OpenStack infrastructure, as well as the automatic generation of a fully functional virtual machine image providing access to the local user environment, the dCache storage element and the parallel file sys...

  13. Lowering the first ATLAS toroid

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The ATLAS detector on the LHC at CERN will consist of eight toroid magnets, the first of which was lowered into the cavern in these images on 26 October 2004. The coils are supported on platforms where they will be attached to form a giant torus. The platforms will hold about 300 tonnes of ATLAS' muon chambers and will envelop the inner detectors.

  14. ATLAS recognises its best suppliers

    CERN Multimedia

    2002-01-01

    The ATLAS Collaboration has recently rewarded two of its suppliers in the construction of very major detector components, fabricated in Japan. The ATLAS Supplier Award in recognition of excellent supplier performance has just been attributed to Kawasaki Heavy Industries, while Toshiba Corporation received the award two months ago at their headquarters in Japan.

  15. ATLAS: civil engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2000-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are busy finishing the various infrastructure for ATLAS. Real underground footage. A nice view from the surface down to the cavern from the pit side - all the big machines look very small. The film has the original working sound.

  16. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    Ancu, Lucian Stefan; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whol...

  17. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    Directory of Open Access Journals (Sweden)

    Scott Mark

    2005-03-01

    Full Text Available Abstract Background Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system architecture dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. Results We report the development of a freely available Java based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion We conclude that Java provides an appropriate environment for efficient development of these tools and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  18. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases.

    Science.gov (United States)

    Feng, Guangjie; Burton, Nick; Hill, Bill; Davidson, Duncan; Kerwin, Janet; Scott, Mark; Lindsay, Susan; Baldock, Richard

    2005-03-09

    Many three-dimensional (3D) images are routinely collected in biomedical research and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system architecture dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. We report the development of a freely available Java based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. We conclude that Java provides an appropriate environment for efficient development of these tools and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.
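
    The "arbitrary re-sectioning" mentioned above amounts to sampling the 3D volume on an arbitrarily oriented plane. A minimal sketch using NumPy/SciPy interpolation on a synthetic volume follows; the real viewer delegates this to the Woolz library via JNI, so the code is purely illustrative.

        # Sample a 3D volume on an oblique plane to produce a 2D section.
        import numpy as np
        from scipy.ndimage import map_coordinates

        volume = np.random.rand(64, 64, 64)      # stand-in for a 3D biomedical image

        def oblique_section(vol, origin, u, v, size=64):
            # Build a grid of 3D sample points origin + i*u + j*v and interpolate.
            u, v = np.asarray(u, float), np.asarray(v, float)
            i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
            pts = (np.asarray(origin, float)[:, None, None]
                   + u[:, None, None] * i + v[:, None, None] * j)
            return map_coordinates(vol, pts, order=1, mode="nearest")

        # A plane through the volume centre, tilted 45 degrees in the x-y plane.
        section = oblique_section(volume, origin=(32, 0, 0),
                                  u=(0, 0, 1), v=(0.707, 0.707, 0))
        print(section.shape)                     # (64, 64)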

  19. ATLAS Award for Difficult Task

    CERN Multimedia

    2004-01-01

    Two Russian companies were honoured with an ATLAS Award, for supply of the ATLAS Inner Detector barrel support structure elements, last week. On 23 March the Russian company ORPE Technologiya and its subcontractor, RSP Khrunitchev, were jointly presented with an ATLAS Supplier Award. Since 1998, ORPE Technologiya has been actively involved in the development of the carbon-fibre reinforced plastic elements of the ATLAS Inner Detector barrel support structure. After three years of joint research and development, CERN and ORPE Technologiya launched the manufacturing contract. It had a tight delivery schedule and very demanding specifications in terms of mechanical tolerance and stability. The contract was successfully completed with the arrival of the last element of the structure at CERN on 8 January 2004. The delivery of this key component of the Inner Detector deserves an ATLAS Award given the difficulty of manufacturing the end-frames, which very few companies in the world would have been able to do at an ...

  20. Jet physics in ATLAS

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Measurements of hadronic jets provide tests of strong interactions which are interesting both in their own right and as backgrounds to many New Physics searches. It is also through tests of Quantum Chromodynamics that new physics may be discovered. The extensive dataset recorded with the ATLAS detector throughout the 7 TeV centre-of-mass LHC operation period allows QCD to be probed at distances never reached before. We present a review of selected ATLAS jet performance and physics measurements, together with results from new physics searches using the 2011 dataset. They include studies of the underlying event and fragmentation models, measurements of the inclusive jet, dijet and multijet cross sections, parton density functions, heavy flavours, jet shape, mass and substructure. Searches for new physics in monojet, dijet and photon-jet final states are also presented.

  1. Jet Physics in ATLAS

    CERN Document Server

    Sandoval, C; The ATLAS collaboration

    2012-01-01

    Measurements of hadronic jets provide tests of strong interactions which are interesting both in their own right and as backgrounds to many New Physics searches. It is also through tests of Quantum Chromodynamics that new physics may be discovered. The extensive dataset recorded with the ATLAS detector throughout the 7 TeV and 8 TeV centre-of-mass LHC operation periods allows QCD to be probed at distances never reached before. We present a review of selected ATLAS jet physics measurements. These measurements constitute precision tests of QCD in a new energy regime, and show sensitivity to the parton densities in the proton and to the value of the strong coupling, alpha_s.

  2. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2013-01-01

    The tau lepton plays a crucial role in understanding particle physics at the Tera scale. One of the most promising probes of the Higgs boson coupling to fermions is with detector signatures involving taus. In addition, many theories beyond the Standard Model, such as supersymmetry and exotic particles (Wʹ and Zʹ), predict new physics with large couplings to taus. The ability to trigger on hadronic tau decays is therefore critical to achieving the physics goals of the ATLAS experiment. The higher instantaneous luminosities of proton-proton collisions achieved by the Large Hadron Collider (LHC) in 2012 resulted in a larger probability of overlap (pile-up) between bunch crossings, and so it was critical for ATLAS to have an effective tau trigger strategy. The details of this strategy are summarized in this paper, and the results of the latest performance measurements are presented.

  3. The ATLAS Tau Trigger

    CERN Document Server

    Rados, PK; The ATLAS collaboration

    2013-01-01

    The tau lepton plays a crucial role in understanding particle physics at the Tera scale. One of the most promising probes of the Higgs boson coupling to fermions is with detector signatures involving taus. In addition, many theories beyond the Standard Model, such as supersymmetry and exotic particles (Wʹ and Zʹ), predict new physics with large couplings to taus. The ability to trigger on hadronic tau decays is therefore critical to achieving the physics goals of the ATLAS experiment. The higher instantaneous luminosities of proton-proton collisions achieved by the Large Hadron Collider (LHC) in 2012 resulted in a larger probability of overlap (pile-up) between bunch crossings, and so it was critical for ATLAS to have an effective tau trigger strategy. The details of this strategy are summarized in this poster, and the latest performance measurements are presented.

  4. ATLAS IBL operational experience

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00237659; The ATLAS collaboration

    2016-01-01

    The Insertable B-Layer (IBL) is the innermost pixel layer in the ATLAS experiment, installed at a radius of 3.3 cm from the beam axis in 2014 to improve the tracking performance. To cope with the high radiation and hit occupancy due to the proximity to the interaction point, a new read-out chip and two different silicon sensor technologies (planar and 3D) have been developed for the IBL. After the long shutdown period over 2013 and 2014, the ATLAS experiment started data-taking in May 2015 for Run 2 of the Large Hadron Collider (LHC). The IBL has been operated successfully since the beginning of Run 2 and shows excellent performance, with a low dead-module fraction, high data-taking efficiency and improved tracking capability. The experience and challenges in the operation of the IBL are described, as well as its performance.

  5. Jet substructure in ATLAS

    CERN Document Server

    Miller, David W

    2011-01-01

    Measurements are presented of the jet invariant mass and substructure in proton-proton collisions at $\sqrt{s} = 7$ TeV with the ATLAS detector using an integrated luminosity of 37 pb$^{-1}$. These results exercise the tools for distinguishing the signatures of new boosted massive particles in the hadronic final state. Two "fat" jet algorithms are used, along with the filtering jet grooming technique that was pioneered in ATLAS. New jet substructure observables are compared for the first time to data at the LHC. Finally, a sample of candidate boosted top quark events collected in the 2010 data is analyzed in detail for the jet substructure properties of hadronic "top-jets" in the final state. These measurements demonstrate not only our excellent understanding of QCD in a new energy regime but open the path to using complex jet substructure observables in the search for new physics.

  6. ATLAS latest results

    CERN Document Server

    Perez-Reale, V; The ATLAS collaboration

    2010-01-01

    With the LHC start-up and the first runs at 900 GeV, 2.36 TeV and 7 TeV centre-of-mass energy in the years 2009 and 2010, the ATLAS detector started to record its first collision events. The integrated luminosity has now reached one inverse picobarn. These data have been used to perform detailed studies of the performance of the detector, including measurements of charged and neutral particle mass resonances and studies of QCD cross-sections. The data have already made it possible to commission and calibrate the various ATLAS subdetectors, and to understand their performance in detail. The first observation of Standard Model electroweak processes, in particular mass resonances, is also being used as a benchmark for validating the analysis and simulation tools. The status and performance of the detector will be briefly reviewed, the latest physics results will be summarized and limits on new physics will be given.

  7. Experience with CORBA communication middleware in the ATLAS DAQ.

    CERN Document Server

    Kolos, S; Amorim, A; Badescu, E; Burckhart-Chromek, Doris; Caprini, M; Dobson, M; Fiuza de Barros, N; Flammer, J; Jones, R; Kazarov, A; Klose, D; Korobov, S; Kotov, V; Liko, D; Mapelli, L; Mineev, M; Pedro, L; Ryabov, Yu; Soloviev, I; Computing In High Energy Physics

    2005-01-01

    As modern High Energy Physics (HEP) experiments require more distributed computing power to fulfill their demands, the need for efficient distributed online services for control, configuration and monitoring in such experiments becomes increasingly important. This paper describes the experience of using standard Common Object Request Broker Architecture (CORBA) middleware to provide high-performance and scalable software, which will be used for online control, configuration and monitoring in the ATLAS Data Acquisition (DAQ) system. It also recounts the experience gained from using several CORBA implementations together and from replacing one CORBA broker with another. Finally, the paper presents the results of large-scale tests demonstrating the performance and scalability of the ATLAS DAQ online services. These results show that standard CORBA is well suited to highly efficient online distributed computing in HEP experiments.

  8. Exotics searches in ATLAS

    CERN Document Server

    Wang, Renjie; The ATLAS collaboration

    2017-01-01

    Many theories beyond the Standard Model predict new physics accessible by the LHC. The ATLAS experiment has a rigorous ongoing search program with the aim of finding indications of new physics, using state-of-the-art analysis techniques. This talk reports on new results obtained using the pp collision data sample collected in 2015 and 2016 at the LHC at a centre-of-mass energy of 13 TeV.

  9. Highlights from ATLAS

    CERN Document Server

    Bellagamba, Lorenzo; The ATLAS collaboration

    2017-01-01

    This report presents an overview of some of the most recent results obtained by the ATLAS Collaboration using pp and heavy-ion collisions at LHC. The review is not intended to be comprehensive and includes recent updates on the Higgs boson properties, precision Standard Model measurements, as well as searches for new physics. Most of the results exploit the data collected in the last LHC run, providing pp collisions at a centre of mass energy of 13 TeV.

  10. The ATLAS Experiment Movie

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  11. L'esperimento ATLAS

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  12. El experimento ATLAS

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  13. Supersymmetry searches in ATLAS

    CERN Document Server

    Torro Pastor, Emma; The ATLAS collaboration

    2016-01-01

    Weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles. Weak and strong production in both R-Parity conserving and R-Parity violating SUSY scenarios are considered. The searches involved final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures.

  14. Higgs results from ATLAS

    Directory of Open Access Journals (Sweden)

    Chen Xin

    2016-01-01

    Full Text Available The updated Higgs measurements in various search channels with ATLAS Run 1 data are reviewed. Both the Standard Model (SM) Higgs results, such as H → γγ, ZZ, WW, ττ, μμ, bb̄, and Beyond Standard Model (BSM) results, such as the charged Higgs, Higgs invisible decay and tensor couplings, are summarized. Prospects for future Higgs searches are briefly discussed.

  15. Higgs results from ATLAS

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00145153; The ATLAS collaboration

    2015-01-01

    The updated Higgs measurements in various search channels with ATLAS Run 1 data are reviewed. Both the Standard Model (SM) Higgs results, such as $H\to\gamma\gamma, ZZ, WW, \tau\tau, \mu\mu, b\bar{b}$, and Beyond Standard Model (BSM) results, such as the charged Higgs, Higgs invisible decay and tensor couplings, are summarized. Prospects for future Higgs searches are briefly discussed.

  16. The ATLAS Trigger System

    CERN Document Server

    Owen, Rhys Edward; The ATLAS collaboration

    2018-01-01

    The ATLAS experiment employs a complex trigger system to enable the collaboration's physics program. The LHC is now well into its second running period, delivering proton-proton collisions at $\sqrt{s}=13$ TeV with high instantaneous luminosity. This talk will describe the two-level hardware and software trigger used to select events in this environment, including recent improvements and the latest performance results.

  17. Overview of ATLAS results

    CERN Document Server

    Grabowska-Bold, Iwona; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at the Large Hadron Collider has undertaken a broad physics program to probe and characterize the hot nuclear matter created in relativistic lead-lead collisions. This talk presents recent results based on Run 2 data on the production of jets, electroweak bosons and quarkonia, electromagnetic processes in ultra-peripheral collisions, and bulk particle collectivity in PbPb, pPb and pp collisions.

  18. ATLAS overview week highlights

    CERN Document Server

    D. Froidevaux

    2005-01-01

    A warm and early October afternoon saw the beginning of the 2005 ATLAS overview week, which took place Rue de La Montagne Sainte-Geneviève in the heart of the Quartier Latin in Paris. All visitors had been warned many times by the ATLAS management and the organisers that the premises would be the subject of strict security clearance because of the "plan Vigipirate", which remains at some level of alert in all public buildings across France. The public building in question is now part of the Ministère de La Recherche, but used to host one of the so-called French "Grandes Ecoles", called l'Ecole Polytechnique (in France there is only one Ecole Polytechnique, whereas there are two in Switzerland) until the end of the seventies, a little while after it opened its doors also to women. In fact, the setting chosen for this ATLAS overview week by our hosts from LPNHE Paris has turned out to be ideal and the security was never an ordeal. For those seeing Paris for the first time, there we...

  19. ATLAS Detector Upgrade Prospects

    CERN Document Server

    Dobre, Monica; The ATLAS collaboration

    2016-01-01

    After the successful operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a center-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extens...

  20. ATLAS Upgrade Plans

    CERN Document Server

    Hopkins, W; The ATLAS collaboration

    2014-01-01

    After the successful LHC operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The final goal is to extend the dataset from the few hundred fb−1 expected for LHC running to 3000 fb−1 by around 2035 for ATLAS and CMS. In parallel, the experiments need to keep in lockstep with the accelerator to accommodate running beyond the nominal luminosity this decade. Current planning in ATLAS envisions significant upgrades to the detector during the consolidation of the LHC to reach full LHC energy, and further upgrades thereafter. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new...

  1. Clean tracks for ATLAS

    CERN Multimedia

    2006-01-01

    First cosmic ray tracks in the integrated ATLAS barrel SCT and TRT tracking detectors. A snapshot of a cosmic ray event seen in the different layers of both the SCT and TRT detectors. The ATLAS Inner Detector Integration Team celebrated a major success recently, when clean tracks of cosmic rays were detected in the completed semiconductor tracker (SCT) and transition radiation tracker (TRT) barrels. These tracking tests come just months after the successful insertion of the SCT into the TRT (see Bulletin 09/2006). The cosmic ray test is important for the experiment because, after 15 years of hard work, it is the last test performed on the fully assembled barrel before it is lowered into the ATLAS cavern. The two trackers work together to provide millions of channels so that particles' tracks can be identified and measured with great accuracy. According to the team, the preliminary results were very encouraging. After first checks of noise levels in the final detectors, a critical goal was to study their re...

  2. An image of an event in which a microscopic-black-hole was produced in the collision of two protons in a computer generated image of the ATLAS detector.

    CERN Multimedia

    Joao Pequenao

    2008-01-01

    In some theories, microscopic black holes may be produced in particle collisions that occur when very-high-energy cosmic rays hit particles in our atmosphere. These microscopic black holes would decay into ordinary particles in a tiny fraction of a second and would be very difficult to observe in our atmosphere. The ATLAS Experiment offers the exciting possibility to study them in the lab (if they exist). The simulated collision event shown is viewed along the beampipe. The event is one in which a microscopic black hole was produced in the collision of two protons (not shown). The microscopic black hole decayed immediately into many particles. The colors of the tracks show different types of particles emerging from the collision (at the center).

  3. Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy.

    Science.gov (United States)

    Narayanan, R; Werahera, P N; Barqawi, A; Crawford, E D; Shinohara, K; Simoneau, A R; Suri, J S

    2008-10-21

    Due to the lack of imaging modalities to identify prostate cancer in vivo, current TRUS-guided prostate biopsies are taken randomly. Consequently, many important cancers are missed during initial biopsies. The purpose of this study was to determine the potential clinical utility of a high-speed registration algorithm for a 3D prostate cancer atlas. This 3D prostate cancer atlas provides voxel-level likelihood of cancer and optimized biopsy locations on a template space (Zhan et al 2007). The atlas was constructed from 158 expert-annotated, 3D reconstructed radical prostatectomy specimens outlined for cancers (Shen et al 2004). For successful clinical implementation, the prostate atlas needs to be registered to each patient's TRUS image with high registration accuracy in a time-efficient manner. This is implemented in a two-step procedure: the segmentation of the prostate gland from a patient's TRUS image followed by the registration of the prostate atlas. We have developed a fast registration algorithm suitable for clinical applications of this prostate cancer atlas. The registration algorithm was implemented on a graphical processing unit (GPU) to meet the critical processing speed requirements for atlas-guided biopsy. A color overlay of the atlas superposed on the TRUS image was presented to help pick statistically likely regions known to harbor cancer. We validated our fast registration algorithm using computer simulations of two optimized 7- and 12-core biopsy protocols to maximize the overall detection rate. Using a GPU, the patient's TRUS image segmentation and atlas registration took less than 12 s. The prostate cancer atlas-guided 7- and 12-core biopsy protocols had cancer detection rates of 84.81% and 89.87% respectively when validated on the same set of data, whereas the sextant biopsy approach without the 3D cancer atlas detected only 70.5% of the cancers using the same histology data. We estimate a 10-20% increase in prostate cancer detection rates
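
    As a toy stand-in for the registration step, here is a minimal sketch that aligns an "atlas" image to a "patient" image by searching over translations for the best normalised cross-correlation; the real algorithm is a deformable registration running on a GPU, and the images and search range here are synthetic.

        # Toy rigid registration: recover the translation that aligns atlas and patient.
        import numpy as np

        rng = np.random.default_rng(0)
        patient = rng.random((64, 64))
        atlas = np.roll(patient, shift=(3, -2), axis=(0, 1))   # atlas = shifted patient

        def ncc(a, b):
            # Normalised cross-correlation between two images.
            a, b = a - a.mean(), b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

        best = max(
            ((dx, dy, ncc(np.roll(atlas, (dx, dy), axis=(0, 1)), patient))
             for dx in range(-5, 6) for dy in range(-5, 6)),
            key=lambda t: t[2],
        )
        print("recovered shift:", best[:2], "ncc:", round(best[2], 3))   # expect (-3, 2)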

  4. Data federation strategies for ATLAS using XRootD

    Science.gov (United States)

    Gardner, Robert; Campana, Simone; Duckeck, Guenter; Elmsheuser, Johannes; Hanushevsky, Andrew; Hönig, Friedrich G.; Iven, Jan; Legger, Federica; Vukotic, Ilija; Yang, Wei; Atlas Collaboration

    2014-06-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
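
    A simplified version of the brokering idea behind the cost-of-access matrix: given rough bandwidth and reuse assumptions for a site (all numbers invented), compare reading a file directly over the WAN with staging it to local disk first and pick the cheaper mode.

        # Toy cost model for choosing between federated access modes.
        def access_cost(size_gb, bandwidth_mbps, latency_penalty, reread_count=1):
            transfer_s = size_gb * 8000.0 / bandwidth_mbps
            return transfer_s * latency_penalty * reread_count

        def choose_mode(size_gb, site):
            direct = access_cost(size_gb, site["wan_mbps"], site["wan_penalty"],
                                 reread_count=site["rereads"])
            staged = (access_cost(size_gb, site["wan_mbps"], 1.0)
                      + access_cost(size_gb, site["disk_mbps"], 1.0,
                                    reread_count=site["rereads"]))
            return ("direct" if direct <= staged else "stage-in"), direct, staged

        site = {"wan_mbps": 800, "wan_penalty": 1.6, "disk_mbps": 4000, "rereads": 3}
        mode, d, s = choose_mode(size_gb=2.0, site=site)
        print(f"{mode} (direct {d:.1f}s vs staged {s:.1f}s)")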

  5. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2016-01-01

    Changes in the trigger menu, the online algorithmic event selection of the ATLAS experiment at the LHC, in response to luminosity and detector changes are followed by adjustments to the monitoring system. This is done to ensure that the collected data are useful and can be properly reconstructed at Tier-0, the first level of the computing grid. During Run 1, ATLAS deployed monitoring updates with the installation of new software releases at Tier-0. This created unnecessary overhead for developers and operators, and unavoidably led to different releases for the data-taking and the monitoring setup. We present a "trigger menu-aware" monitoring system designed for ATLAS Run 2 data-taking. The new monitoring system aims to simplify the ATLAS operational workflows, and allows for easy and flexible monitoring configuration changes at the Tier-0 site via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the ne...

  6. Persistent ATLAS Data Structures and Reclustering of Event Data

    CERN Document Server

    Schaller, Martin

    1999-01-01

    The ATLAS experiment will start to take data in the year 2005. The amount of experimental data forms a serious challenge for data processing and data storage. About 1 PB (10^15 bytes) per year has to be processed and stored. Currently, a paradigm shift in High-Energy Physics (HEP) computing is taking place. It is planned that software is written in object-oriented languages (mainly C++). For data storage the usage of object-oriented database management systems (ODBMSs) is foreseen. This thesis investigates the usage of an ODBMS in the ATLAS experiment. Work was done in several connected areas. First, we present exhaustive benchmarks of the commercial ODBMS Objectivity/DB, which is today the most promising candidate for the storage system. We describe the ATLAS 1 TB milestone that was performed to investigate the reliability and performance of an ODBMS storage solution coupled to a mass storage system. Second, we report on the design and implementation of the persistent ATLAS data structures, both in the detec...

  7. Job optimization in ATLAS TAG-based distributed analysis

    Science.gov (United States)

    Mambelli, M.; Cranshaw, J.; Gardner, R.; Maeno, T.; Malon, D.; Novak, M.

    2010-04-01

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ("skimming", "slimming" and "thinning") as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.
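
    The sketch below illustrates the TAG-based workflow in miniature: selection cuts are applied to per-event metadata records to build an event list, which is then partitioned into chunks for parallel processing. The tag fields, cut values and chunk count are invented for illustration.

        # Apply TAG-style cuts to event metadata, then split the work into jobs.
        tags = [
            {"run": 200000, "event": i, "n_jets": i % 6, "missing_et": 12.5 * (i % 9)}
            for i in range(1000)
        ]

        # Selection cut on quantities of interest, as one would apply to TAG data.
        selected = [t for t in tags if t["n_jets"] >= 2 and t["missing_et"] > 40.0]

        def split(items, n_chunks):
            # Round-robin partitioning of the selected events into n_chunks jobs.
            return [items[i::n_chunks] for i in range(n_chunks)]

        jobs = split(selected, n_chunks=4)
        print(len(selected), "events selected;", [len(j) for j in jobs], "per job")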

  8. Job optimization in ATLAS TAG-based distributed analysis

    Energy Technology Data Exchange (ETDEWEB)

    Mambelli, M; Gardner, R [University of Chicago, 5640 S Ellis, Chicago, IL 60637 (United States); Cranshaw, J; Malon, D [Argonne National Laboratory, Argonne, IL 60439 (United States); Maeno, T; Novak, M, E-mail: marco@hep.uchicago.ed [Brookhaven National Laboratory, Brookhaven, NY 10000 (United States)

    2010-04-01

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ('skimming', 'slimming' and 'thinning') as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.

  9. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It allows standard ATLAS production jobs to run on unused (backfill) resources on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
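
    The "lightweight MPI wrapper" approach can be illustrated with mpi4py: one MPI job spans many nodes, and each rank independently runs a single-node payload, so serial workloads can fill otherwise idle backfill slots. The payload command and launch line below are placeholders, not the actual PanDA pilot.

        # Run with e.g.:  mpirun -n 16 python wrapper.py   (requires mpi4py)
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Independent single-node work units; in reality these would be PanDA
        # payloads fetched by the pilot rather than simple echo commands.
        payloads = [f"echo simulating-events --seed {i}" for i in range(comm.Get_size())]

        result = subprocess.run(payloads[rank].split(), capture_output=True, text=True)
        print(f"rank {rank}: {result.stdout.strip()}")

        comm.Barrier()     # wait for all ranks before the batch allocation ends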

  10. Jet energy calibration in ATLAS

    CERN Document Server

    Schouten, Doug

    A correct energy calibration for jets is essential to the success of the ATLAS experiment. In this thesis I study a method for deriving an in situ jet energy calibration for the ATLAS detector. In particular, I show the applicability of the missing transverse energy projection fraction method. This method is shown to set the correct mean energy for jets. Pileup effects due to the high luminosities at ATLAS are also studied. I study the correlations in lateral distributions of pileup energy, as well as the luminosity dependence of the in situ calibration metho
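
    A minimal numerical illustration of the missing transverse energy projection fraction idea, assuming the common definition in which the missing transverse momentum is projected onto the axis of a well-measured reference object such as a photon; the kinematic values are invented.

        # Toy MPF response: R = 1 + (E_T^miss . n_ref) / p_T^ref.
        import math

        def mpf_response(ref_pt, ref_phi, met, met_phi):
            met_dot_ref = met * math.cos(met_phi - ref_phi)   # projection on ref axis
            return 1.0 + met_dot_ref / ref_pt

        # A 100 GeV photon recoiling against a jet measured 10% too low gives
        # 10 GeV of missing E_T pointing opposite the photon, so R = 0.9.
        print(round(mpf_response(ref_pt=100.0, ref_phi=0.0,
                                 met=10.0, met_phi=math.pi), 3))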

  11. The new European wind atlas

    DEFF Research Database (Denmark)

    Lundtang Petersen, Erik; Troen, Ib; Ejsing Jørgensen, Hans

    2014-01-01

    European Wind Atlas” aiming at reducing overall uncertainties in determining wind conditions; standing on three legs: A data bank from a series of intensive measuring campaigns; a thorough examination and redesign of the model chain from global, mesoscale to microscale models and creation of the wind atlas...... database. Although the project participants will come from the 27 member states it is envisioned that the project will be opened for global participation through test benches for model development and sharing of data – climatologically as well as experimental. Experiences from national wind atlases...... will be utilized, such as the Indian, the South African, the Finnish, the German, the Canadian atlases and others....

  12. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  13. Dynamic updating atlas for heart segmentation with a nonlinear field-based model.

    Science.gov (United States)

    Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng

    2017-09-01

    Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon the atlas images, and the selection of those reference images is a key step. The goal of this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm based on a nonlinear deformation field scheme. The proposed method is based on features shared among double-source CT (DSCT) slices; extracting these features provides the basis for constructing an average model, and the resulting reference atlas image is updated during the registration process. A nonlinear field-based model was used to implement an effective 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences, and the algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model with a dynamic atlas-updating strategy, provides an effective and accurate way to perform whole-heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge in the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.
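
    The core loop of such an atlas-based scheme, selecting the reference image closest to the target and folding it back into an average model, can be sketched in a few lines. The snippet below is a toy illustration only, assuming a normalized cross-correlation similarity and a fixed blending weight; it is not the authors' algorithm, and the nonlinear registration step is only indicated by a comment.

        # Toy sketch of the "dynamic updating atlas" idea described above: pick the
        # reference atlas most similar to the target slice, then blend it into a
        # running average model. Normalized cross-correlation and the fixed update
        # weight are illustrative assumptions, not the paper's exact scheme.
        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation between two images of equal shape."""
            a = (a - a.mean()) / (a.std() + 1e-8)
            b = (b - b.mean()) / (b.std() + 1e-8)
            return float((a * b).mean())

        def select_atlas(target, atlases):
            """Return the index of the atlas image closest to the target."""
            return int(np.argmax([ncc(target, atlas) for atlas in atlases]))

        def update_average_model(average, registered_atlas, weight=0.2):
            """Blend a (registered) atlas into the running average model."""
            return (1.0 - weight) * average + weight * registered_atlas

        # Toy usage with random arrays standing in for DSCT slices.
        rng = np.random.default_rng(0)
        atlases = [rng.random((64, 64)) for _ in range(5)]
        target = atlases[2] + 0.05 * rng.random((64, 64))
        best = select_atlas(target, atlases)
        average = atlases[best].copy()
        # In the real pipeline the selected atlas would first be registered to the
        # target with a nonlinear deformation field before this update step.
        average = update_average_model(average, target)
        print("selected atlas:", best)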

  14. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...
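
    Conceptually, such a functional-test suite boils down to periodically submitting short test jobs of each template to every site and summarizing the outcomes, as in the hedged sketch below; submit_test_job and fetch_result stand in for the real grid submission and monitoring interfaces, and the site and template names are purely illustrative.

        # Sketch of a periodic functional-test cycle in the spirit of the suite
        # described above: light-weight test jobs go to every site, results are
        # collected, and a simple pass-rate summary is produced per site.
        import time
        from collections import defaultdict

        SITES = ["SITE_A", "SITE_B", "SITE_C"]                # illustrative names
        TEST_TEMPLATES = ["user_analysis_short", "production_short"]

        def submit_test_job(site, template):
            """Hypothetical submission call; returns a job identifier."""
            return f"{site}:{template}:{int(time.time())}"

        def fetch_result(job_id):
            """Hypothetical status lookup; returns 'done' or 'failed'."""
            return "done"

        def run_test_cycle():
            outcomes = defaultdict(lambda: {"done": 0, "failed": 0})
            jobs = [(site, submit_test_job(site, tmpl))
                    for site in SITES for tmpl in TEST_TEMPLATES]
            for site, job_id in jobs:
                outcomes[site][fetch_result(job_id)] += 1
            # A site with too many failures would be flagged for exclusion.
            for site, counts in outcomes.items():
                total = counts["done"] + counts["failed"]
                print(f"{site}: {counts['done']}/{total} test jobs succeeded")

        if __name__ == "__main__":
            run_test_cycle()   # in production this would run on a fixed schedule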

  15. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...

  16. EnviroAtlas - Cleveland, OH - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Cleveland, OH EnviroAtlas community. The block groups are from the US Census Bureau and are included/excluded...

  17. EnviroAtlas - Metrics for Pittsburgh, PA

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  18. EnviroAtlas - Woodbine, IA - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Woodbine, IA EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  19. EnviroAtlas - Durham, NC - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Durham, NC EnviroAtlas Area. The block groups are from the US Census Bureau and are included/excluded based on...

  20. EnviroAtlas - Austin, TX - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Austin, TX EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  1. Argonne Tandem Linac Accelerator System (ATLAS)

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a national user facility at Argonne National Laboratory in Argonne, Illinois. The ATLAS facility is a leading facility for nuclear structure research in the...

  2. Women of ATLAS - International Women's Day 2016

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Women play key roles in the ATLAS Experiment: from young physicists at the start of their careers to analysis group leaders and spokespersons of the collaboration. Celebrate International Women's Day by meeting a few of these inspiring ATLAS researchers.

  3. EnviroAtlas - Metrics for Portland, OR

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http://www.epa.gov/enviroatlas). The layers in these web...

  4. EnviroAtlas - Metrics for Phoenix, AZ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  5. EnviroAtlas - Metrics for Milwaukee, WI

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http://www.epa.gov/enviroatlas). The layers in these web...

  6. EnviroAtlas - Metrics for Memphis, TN

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  7. EnviroAtlas - Metrics for Tampa, FL

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  8. EnviroAtlas - Metrics for Woodbine, IA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  9. EnviroAtlas - Metrics for Durham, NC

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas ). The layers in these web...

  10. EnviroAtlas - Metrics for Paterson, NJ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  11. EnviroAtlas - Metrics for Fresno, CA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  12. EnviroAtlas - Metrics for Portland, ME

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web service...

  13. ATLAS : civil engineering at Point 1

    CERN Multimedia

    2002-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the various infrastructures for ATLAS. Real underground video.

  14. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...

  15. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Herr, J.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as the medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. The WLAP model is spreading. This summer, CERN's High School Teachers program used WLAP's system to record several physics lectures directed toward a broad audience. A new project called MScribe, which is essentially the WLAP system coupled with an infrared tracking camera, is being used by the University of Michigan to record several University courses this academic year. All lectures can be viewed on any major platform with any common internet browser...

  16. The ATLAS data management software engineering process

    Science.gov (United States)

    Lassnig, M.; Garonne, V.; Stewart, G. A.; Barisits, M.; Beermann, T.; Vigne, R.; Serfon, C.; Goossens, L.; Nairz, A.; Molfetas, A.; Atlas Collaboration

    2014-06-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also highlight the social aspects of an environment where every action is subject to detailed scrutiny.
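
    The test-driven element of this process can be illustrated with a small, generic example: the reviewed test is written first and the implementation follows until it passes. The helper below is hypothetical and not part of the Rucio API; it only shows the shape of such a test-first unit.

        # Generic illustration of the test-first workflow described above: the test
        # is written and reviewed first, then the implementation is added until it
        # passes (run with pytest). The replica-sorting helper is a hypothetical
        # example, not part of the Rucio API.
        def sort_replicas_by_priority(replicas):
            """Sort replicas by ascending latency, preferring disk over tape."""
            medium_rank = {"disk": 0, "tape": 1}
            return sorted(replicas,
                          key=lambda r: (medium_rank[r["medium"]], r["latency_ms"]))

        def test_sort_replicas_prefers_disk_then_latency():
            replicas = [
                {"site": "A", "medium": "tape", "latency_ms": 5},
                {"site": "B", "medium": "disk", "latency_ms": 40},
                {"site": "C", "medium": "disk", "latency_ms": 10},
            ]
            ordered = sort_replicas_by_priority(replicas)
            assert [r["site"] for r in ordered] == ["C", "B", "A"]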

  17. The ATLAS Level-2 Trigger Pilot Project

    CERN Document Server

    Blair, R; Haberichter, W N; Schlereth, J L; Bock, R; Bogaerts, A; Boosten, M; Dobinson, Robert W; Dobson, M; Ellis, Nick; Elsing, M; Giacomini, F; Knezo, E; Martin, B; Shears, T G; Tapprogge, Stefan; Werner, P; Hansen, J R; Wäänänen, A; Korcyl, K; Lokier, J; George, S; Green, B; Strong, J; Clarke, P; Cranfield, R; Crone, G J; Sherwood, P; Wheeler, S; Hughes-Jones, R E; Kolya, S; Mercer, D; Hinkelbein, C; Kornmesser, K; Kugel, A; Männer, R; Müller, M; Sessler, M; Simmler, H; Singpiel, H; Abolins, M; Ermoline, Y; González-Pineiro, B; Hauser, R; Pope, B; Sivoklokov, S Yu; Boterenbrood, H; Jansweijer, P; Kieft, G; Scholte, R; Slopsema, R; Vermeulen, J C; Baines, J T M; Belias, A; Botterill, David R; Middleton, R; Wickens, F J; Falciano, S; Bystrický, J; Calvet, D; Gachelin, O; Huet, M; Le Dû, P; Mandjavidze, I D; Levinson, L; González, S; Wiedenmann, W; Zobernig, H

    2002-01-01

    The Level-2 Trigger Pilot Project of ATLAS, one of the two general-purpose LHC experiments, is part of the ongoing program to develop the ATLAS high-level triggers (HLT). The Level-2 Trigger will receive events at up to 100 kHz, a rate which has to be reduced to the order of 1 kHz, suitable for full event-building. To reduce the data collection bandwidth and processing power required for the challenging Level-2 task, it is planned to use Region of Interest guidance (from Level-1) and sequential processing. The Pilot Project included the construction and use of testbeds of up to 48 processing nodes, the development of optimized components, and computer simulations of a full system. It has shown how the required performance can be achieved, using largely commodity components and operating systems, and has validated an architecture for the Level-2 system. This paper describes the principal achievements and conclusions of this project. (28 refs).
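
    The two ideas named above, Region-of-Interest guidance and sequential processing, amount to requesting only the detector data inside each Level-1 RoI and running cheap selection steps before expensive ones, rejecting the event as early as possible. The following sketch illustrates that control flow under those assumptions; the step functions and data-access call are illustrative, not the real ATLAS HLT interfaces.

        # Sketch of RoI guidance with sequential processing: only data inside
        # Level-1 RoIs is retrieved, and selection steps run in order of increasing
        # cost with early rejection. retrieve_roi_data() and the step list are
        # illustrative stand-ins, not the actual ATLAS trigger interfaces.
        def retrieve_roi_data(event, roi):
            """Hypothetical retrieval of just the detector data inside one RoI."""
            return event["fragments"].get(roi, {})

        def level2_accept(event, rois, steps):
            """Run selection steps per RoI; reject as soon as one step fails."""
            for roi in rois:
                data = retrieve_roi_data(event, roi)
                for step in steps:               # cheapest step first
                    if not step(data):
                        return False             # early rejection saves CPU and bandwidth
            return True

        # Illustrative steps of increasing cost.
        steps = [
            lambda d: d.get("calo_et", 0.0) > 20.0,   # fast calorimeter confirmation
            lambda d: d.get("n_tracks", 0) >= 1,      # more expensive tracking step
        ]
        event = {"fragments": {"roi0": {"calo_et": 35.0, "n_tracks": 2}}}
        print(level2_accept(event, ["roi0"], steps))  # -> True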

  18. Federating Distributed Storage For Clouds In ATLAS

    CERN Document Server

    Berghaus, Frank; The ATLAS collaboration

    2017-01-01

    Input data for applications that run in cloud computing centres can be stored at distant repositories, often with multiple copies of the popular data stored at many sites. Locating and retrieving the remote data can be challenging, and we believe that federating the storage can address this problem. A federation would locate the closest copy of the data, currently on the basis of GeoIP information. We are currently using the DynaFed data federation software solution developed by CERN IT. DynaFed supports several industry-standard connection protocols, such as Amazon's S3, Microsoft's Azure, WebDAV, and HTTP. Protocol-dependent authentication is hidden from the user by using their X509 certificate. We have set up an instance of DynaFed and integrated it into the ATLAS Data Distribution Management system. We report on the challenges faced during the installation and integration. We have tested ATLAS analysis jobs submitted by the PanDA production system and we report on our first experiences with its op...
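
    Selecting the closest copy on the basis of GeoIP information can be illustrated with a simple distance calculation: resolve the client and the storage endpoints to coordinates, then pick the endpoint with the smallest great-circle distance. The sketch below makes that concrete under stated assumptions; it is not DynaFed code, and the endpoint list and coordinates are invented for illustration.

        # Sketch of GeoIP-style replica selection: given the client's coordinates
        # and a list of replica endpoints, return the geographically closest copy.
        # The endpoint URLs and coordinates are illustrative only; a real system
        # would obtain the coordinates from a GeoIP database lookup.
        import math

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two points in kilometres."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = (math.sin(dphi / 2) ** 2
                 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
            return 2 * 6371.0 * math.asin(math.sqrt(a))

        def closest_replica(client_lat, client_lon, replicas):
            """Pick the replica endpoint closest to the client's location."""
            return min(replicas,
                       key=lambda r: haversine_km(client_lat, client_lon,
                                                  r["lat"], r["lon"]))

        # Illustrative endpoints (coordinates approximate, names invented).
        replicas = [
            {"url": "https://storage.example-eu/data/file1", "lat": 46.2, "lon": 6.1},
            {"url": "https://storage.example-us/data/file1", "lat": 41.8, "lon": -88.3},
        ]
        print(closest_replica(46.0, 6.0, replicas)["url"])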

  19. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year's discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000 members of the ATLAS collaboration will be taking time to share the excitement of this exploration with you. On surface, no restricted access. The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80 m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  20. Diffractive measurements in ATLAS

    CERN Document Server

    Grafstrom, P; The ATLAS collaboration

    2011-01-01

    Several diffractive measurements in ATLAS are discussed. Using a diffractive-enhanced event sample, the diffractive fraction of the inelastic cross section is determined to be in the range 25-30%, depending on which model is used. Rapidity gap studies give similar percentages. The differential cross section as a function of the rapidity gap size has been determined at the hadron level. The diffractive cross section is roughly 1 mb per unit of gap size for gap sizes larger than 3.5 units.