WorldWideScience

Sample records for atlas computers

  1. ATLAS Distributed Computing Automation

    CERN Document Server

    Schovancova, J; The ATLAS collaboration; Borrego, C; Campana, S; Di Girolamo, A; Elmsheuser, J; Hejbal, J; Kouba, T; Legger, F; Magradze, E; Medrano Llamas, R; Negri, G; Rinaldi, L; Sciacca, G; Serfon, C; Van Der Ster, D C

    2012-01-01

    The ATLAS Experiment benefits from computing resources distributed worldwide at more than 100 WLCG sites. The ATLAS Grid sites provide over 100k CPU job slots and over 100 PB of storage space on disk and tape. Monitoring the status of such a complex infrastructure is essential. The ATLAS Grid infrastructure is monitored 24/7 by two teams of shifters distributed worldwide, by the ATLAS Distributed Computing experts, and by site administrators. In this paper we summarize automation efforts performed within the ATLAS Distributed Computing team in order to reduce manpower costs and improve the reliability of the system. Different aspects of the automation process are described: from the ATLAS Grid site topology provided by the ATLAS Grid Information System, via automatic site testing with HammerCloud, to automatic exclusion from production or analysis activities.
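
    As an illustration of the exclusion step described above, the following minimal sketch (plain Python, with hard-coded stand-ins for the information-system topology and the functional-test history, and a hypothetical failure threshold) shows the kind of per-site decision the automation has to make; it is not the production code.

        # Illustrative sketch only: site data and test outcomes are stand-ins for
        # the AGIS topology and HammerCloud functional-test history.
        FAILURE_THRESHOLD = 0.5   # hypothetical: exclude if >50% of recent tests failed

        sites = {
            "SITE_A": [True, True, True, False],    # recent test outcomes (True = passed)
            "SITE_B": [False, False, True, False],
        }

        def failure_rate(outcomes):
            """Fraction of failed tests in the recent window."""
            return sum(1 for ok in outcomes if not ok) / len(outcomes)

        for site, outcomes in sites.items():
            if failure_rate(outcomes) > FAILURE_THRESHOLD:
                print(f"{site}: exclude from analysis ({failure_rate(outcomes):.0%} failures)")
            else:
                print(f"{site}: keep online")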

  2. New ATLAS Software & Computing Organization

    CERN Multimedia

    Barberis, D

    Following the election by the ATLAS Collaboration Board of Dario Barberis (Genoa University/INFN) as Computing Coordinator and David Quarrie (LBNL) as Software Project Leader, it was considered necessary to modify the organization of the ATLAS Software & Computing ("S&C") project. The new organization is based upon the following principles: separation of the responsibilities for computing management from those of software development, with the appointment of a Computing Coordinator and a Software Project Leader who are both members of the Executive Board; hierarchical structure of responsibilities and reporting lines; coordination at all levels between TDAQ, S&C and Physics working groups; integration of the subdetector software development groups with the central S&C organization. A schematic diagram of the new organization can be seen in Fig. 1 [Figure 1: new ATLAS Software & Computing organization]. Two Management Boards will help the Computing Coordinator and the Software Project...

  3. Analytics Platform for ATLAS Computing Services

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data, database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and this analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of machine learning tools like Spark, Jupyter, R, S...
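
    A hypothetical sketch of the event-level access mode discussed above, using Spark (one of the tools named in the abstract). The input path and the column names (run, event, lead_lepton_pt) are assumptions made for this example, not an actual ATLAS data layout.

        # Illustrative event-level skim with PySpark; path and columns are invented.
        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("atlas-event-skim").getOrCreate()

        events = spark.read.parquet("/analytics/atlas/open_events.parquet")  # assumed path

        # Select a small slice of events and deliver only the columns a user asked for
        selected = (events
                    .filter(F.col("lead_lepton_pt") > 25.0)
                    .select("run", "event", "lead_lepton_pt"))

        selected.write.mode("overwrite").json("/analytics/atlas/skim_output")
        spark.stop()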

  4. Volunteer Computing Experience with ATLAS@Home

    CERN Document Server

    Cameron, David; The ATLAS collaboration; Bourdarios, Claire; Lançon, Eric

    2016-01-01

    ATLAS@Home is a volunteer computing project which allows the public to contribute to computing for the ATLAS experiment through their home or office computers. The project has grown continuously since its creation in mid-2014 and now counts almost 100,000 volunteers. The combined volunteers' resources make up a sizable fraction of overall resources for ATLAS simulation. This paper takes stock of the experience gained so far and describes the next steps in the evolution of the project. These improvements include running natively on Linux to ease deployment on, for example, university clusters, using multiple cores inside one job to reduce the memory requirements, and running different types of workload such as event generation. In addition to technical details, the success of ATLAS@Home as an outreach tool is evaluated.

  5. ATLAS Cloud Computing R&D project

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano Llamas, R; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2013-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  6. Exploiting Virtualization and Cloud Computing in ATLAS

    Science.gov (United States)

    Harald Barreiro Megino, Fernando; Benjamin, Doug; De, Kaushik; Gable, Ian; Hendrix, Val; Panitkin, Sergey; Paterson, Michael; De Silva, Asoka; van der Ster, Daniel; Taylor, Ryan; Vitillo, Roberto A.; Walker, Rod

    2012-12-01

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R&D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.
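
    To make the "cloud factory" idea concrete, here is a minimal, purely illustrative scaling loop: it compares the number of waiting jobs in a queue with the VMs already running and decides how many instances to boot or retire. The helper functions, constants and queue name are assumptions; a real factory would back them with the PanDA status interface and a cloud SDK.

        # Illustrative "cloud factory" scaling loop. queued_jobs() and running_vms()
        # are hypothetical stand-ins for queries to PanDA and to a cloud API.
        def queued_jobs(queue):
            return {"CLOUD_QUEUE": 120}.get(queue, 0)   # jobs waiting (stand-in)

        def running_vms(queue):
            return 10                                   # VMs currently serving the queue

        JOBS_PER_VM = 8     # assumed job slots provided by one VM
        MAX_VMS = 50        # assumed quota on the cloud

        def desired_vms(queue):
            """Scale the VM pool to the waiting workload, capped at the quota."""
            need = -(-queued_jobs(queue) // JOBS_PER_VM)    # ceiling division
            return min(need, MAX_VMS)

        def reconcile(queue):
            target, current = desired_vms(queue), running_vms(queue)
            if target > current:
                print(f"boot {target - current} VMs for {queue}")
            elif target < current:
                print(f"retire {current - target} idle VMs for {queue}")

        reconcile("CLOUD_QUEUE")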

  7. Exploiting Virtualization and Cloud Computing in ATLAS

    International Nuclear Information System (INIS)

    The ATLAS Computing Model was designed around the concept of grid computing; since the start of data-taking, this model has proven very successful in the federated operation of more than one hundred Worldwide LHC Computing Grid (WLCG) sites for offline data distribution, storage, processing and analysis. However, new paradigms in computing, namely virtualization and cloud computing, present improved strategies for managing and provisioning IT resources that could allow ATLAS to more flexibly adapt and scale its storage and processing workloads on varied underlying resources. In particular, ATLAS is developing a “grid-of-clouds” infrastructure in order to utilize WLCG sites that make resources available via a cloud API. This work will present the current status of the Virtualization and Cloud Computing R&D project in ATLAS Distributed Computing. First, strategies for deploying PanDA queues on cloud sites will be discussed, including the introduction of a “cloud factory” for managing cloud VM instances. Next, performance results when running on virtualized/cloud resources at CERN LxCloud, StratusLab, and elsewhere will be presented. Finally, we will present the ATLAS strategies for exploiting cloud-based storage, including remote XROOTD access to input data, management of EC2-based files, and the deployment of cloud-resident LCG storage elements.

  8. ATLAS distributed computing: experience and evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2014-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  9. ATLAS Distributed Computing: Experience and Evolution

    CERN Document Server

    Nairz, A; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centers around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics program including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2014 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, e...

  10. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Cordeiro, Cristovao; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; Leblanc, Matthew Edgar; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing infrastructure as a service (IaaS) resources is discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for ma...

  11. The Evolution of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; The ATLAS collaboration; Berghaus, Frank; Love, Peter; Leblanc, Matthew Edgar; Di Girolamo, Alessandro; Paterson, Michael; Gable, Ian; Sobie, Randall; Field, Laurence

    2015-01-01

    The ATLAS experiment has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This work will describe the overall evolution of cloud computing in ATLAS. The current status of the VM management systems used for harnessing IaaS resources will be discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, ...

  12. ATLAS Distributed Computing in LHC Run2

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of Monte Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access; and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defin...
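
    The sketch below illustrates the lifetime-based data management decision mentioned above: a dataset whose declared lifetime has expired and which has not been accessed recently becomes a deletion candidate. The dataset records, field names and grace-period rule are illustrative assumptions, not the production Rucio policy.

        # Sketch of a lifetime-based cleanup decision; all values are invented.
        from datetime import datetime, timedelta

        datasets = [
            {"name": "mc15_13TeV.sample_A", "expires": datetime(2015, 6, 1), "last_access": datetime(2015, 5, 20)},
            {"name": "data15_13TeV.sample_B", "expires": datetime(2016, 1, 1), "last_access": datetime(2015, 9, 1)},
        ]

        GRACE = timedelta(days=30)   # hypothetical grace period after the declared lifetime

        def deletion_candidates(datasets, now):
            for ds in datasets:
                if now > ds["expires"] + GRACE and ds["last_access"] < ds["expires"]:
                    yield ds["name"]

        print(list(deletion_candidates(datasets, now=datetime(2015, 9, 15))))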

  13. ATLAS distributed computing: experience and evolution

    Science.gov (United States)

    Nairz, A.; Atlas Collaboration

    2014-06-01

    The ATLAS experiment has just concluded its first running period which commenced in 2010. After two years of remarkable performance from the LHC and ATLAS, the experiment has accumulated more than 25 fb-1 of data. The total volume of beam and simulated data products exceeds 100 PB distributed across more than 150 computing centres around the world, managed by the experiment's distributed data management system. These sites have provided up to 150,000 computing cores to ATLAS's global production and analysis processing system, enabling a rich physics programme including the discovery of the Higgs-like boson in 2012. The wealth of accumulated experience in global data-intensive computing at this massive scale, and the considerably more challenging requirements of LHC computing from 2015 when the LHC resumes operation, are driving a comprehensive design and development cycle to prepare a revised computing model together with data processing and management systems able to meet the demands of higher trigger rates, energies and event complexities. An essential requirement will be the efficient utilisation of current and future processor technologies as well as a broad range of computing platforms, including supercomputing and cloud resources. We will report on experience gained thus far and our progress in preparing ATLAS computing for the future.

  14. ATLAS computing on CSCS HPC

    CERN Document Server

    Hostettler, Michael Artur; The ATLAS collaboration; Haug, Sigve; Walker, Rodney; Weber, Michele

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest-ranked European system on the TOP500 list, and it also features GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Furthermore, some GPU acceleration of the Geant4 detector simulations has been implemented to justify the allocation request for this machine.

  15. ATLAS computing on CSCS HPC

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration; Weber, Michele; Walker, Rodney; Hostettler, Michael Artur

    2015-01-01

    The Piz Daint Cray XC30 HPC system at CSCS, the Swiss National Supercomputing Centre, was in 2014 the highest-ranked European system on the TOP500 list, and it also features GPU accelerators. Event generation and detector simulation for the ATLAS experiment have been enabled for this machine. We report on the technical solutions, performance, HPC policy challenges and possible future opportunities for HEP on extreme HPC systems. In particular, a custom-made integration with the ATLAS job submission system has been developed via the Advanced Resource Connector (ARC) middleware. Further, some GPU acceleration of the Geant4 detector simulations was implemented to justify the allocation request for this machine.

  16. The Evolution of Cloud Computing in ATLAS

    Science.gov (United States)

    Taylor, Ryan P.; Berghaus, Frank; Brasolin, Franco; Domingues Cordeiro, Cristovao Jose; Desmarais, Ron; Field, Laurence; Gable, Ian; Giordano, Domenico; Di Girolamo, Alessandro; Hover, John; LeBlanc, Matthew; Love, Peter; Paterson, Michael; Sobie, Randall; Zaytsev, Alexandr

    2015-12-01

    The ATLAS experiment at the LHC has successfully incorporated cloud computing technology and cloud resources into its primarily grid-based model of distributed computing. Cloud R&D activities continue to mature and transition into stable production systems, while ongoing evolutionary changes are still needed to adapt and refine the approaches used, in response to changes in prevailing cloud technology. In addition, completely new developments are needed to handle emerging requirements. This paper describes the overall evolution of cloud computing in ATLAS. The current status of the virtual machine (VM) management systems used for harnessing Infrastructure as a Service resources is discussed. Monitoring and accounting systems tailored for clouds are needed to complete the integration of cloud resources within ATLAS' distributed computing framework. We are developing and deploying new solutions to address the challenge of operation in a geographically distributed multi-cloud scenario, including a system for managing VM images across multiple clouds, a system for dynamic location-based discovery of caching proxy servers, and the usage of a data federation to unify the worldwide grid of storage elements into a single namespace and access point. The usage of the experiment's high level trigger farm for Monte Carlo production, in a specialized cloud environment, is presented. Finally, we evaluate and compare the performance of commercial clouds using several benchmarks.

  17. Automating usability of ATLAS Distributed Computing resources

    CERN Document Server

    "Tupputi, S A; The ATLAS collaboration

    2013-01-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective, a crucial case is the automatic exclusion/recovery of storage resources at ATLAS computing sites, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources, which feature non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the outcomes of site-by-site SAM (Service Availability Monitoring) SRM tests. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites.

  18. ATLAS and LHC computing on CRAY

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Access to and exploitation of large-scale computing resources, such as those offered by general-purpose HPC centres, is one important measure for ATLAS and the other Large Hadron Collider experiments in order to meet the challenge posed by the full exploitation of the future data within the constraints of flat budgets. We report on the effort of moving the Swiss WLCG Tier-2 computing, serving ATLAS, CMS and LHCb, from a dedicated cluster to the large Cray systems at the Swiss National Supercomputing Centre CSCS. These systems not only offer very efficient hardware, cooling and highly competent operators, but also large backfill potential due to their size and multidisciplinary usage, and potential gains from economies of scale. Technical solutions, performance, expected return and future plans are discussed.

  19. Consolidation of Cloud Computing in ATLAS

    CERN Document Server

    Taylor, Ryan P.; The ATLAS collaboration; Di Girolamo, Alessandro; Hover, John

    2016-01-01

    Throughout the first year of LHC Run 2, ATLAS Cloud Computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS Cloud Computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vac resources, streamlined usage of the High Level Trigger cloud for simulation and reconstruction, extreme scaling on Amazon EC2, and procurement of commercial cloud capacity in Europe. Building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems. ...

  20. ATLAS Distributed Computing in LHC Run2

    Science.gov (United States)

    Campana, Simone

    2015-12-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access; and the network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been defined to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented.

  1. Automating usability of ATLAS Distributed Computing resources

    Science.gov (United States)

    Tupputi, S. A.; Di Girolamo, A.; Kouba, T.; Schovancová, J.; Atlas Collaboration

    2014-06-01

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of storage resources at ATLAS computing sites, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.
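
    The following minimal sketch illustrates the kind of blacklisting decision over a history of storage-test outcomes that the abstract describes; the window size, thresholds and state names are assumptions made for the example, not the actual SAAB inference algorithm.

        # Illustrative blacklisting logic over recent storage-test outcomes.
        from collections import deque

        WINDOW = 6            # number of most recent test results considered (assumed)
        BLACKLIST_AT = 4      # failures in the window that trigger blacklisting (assumed)
        WHITELIST_AT = 0      # failures in the window that allow recovery (assumed)

        def next_state(current_state, history):
            recent = deque(history, maxlen=WINDOW)
            failures = sum(1 for ok in recent if not ok)
            if current_state == "online" and failures >= BLACKLIST_AT:
                return "blacklisted"
            if current_state == "blacklisted" and failures <= WHITELIST_AT:
                return "online"
            return current_state

        # A storage area failing most recent tests gets excluded, then returns
        # automatically once tests pass again.
        history = [True, False, False, False, True, False]
        print(next_state("online", history))                  # -> blacklisted
        print(next_state("blacklisted", [True] * WINDOW))     # -> online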

  2. Automating usability of ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    The automation of ATLAS Distributed Computing (ADC) operations is essential to reduce manpower costs and allow performance-enhancing actions, which improve the reliability of the system. In this perspective, a crucial case is the automatic handling of outages of storage resources at ATLAS computing sites, which are continuously exploited at the edge of their capabilities. It is challenging to adopt unambiguous decision criteria for storage resources of non-homogeneous types, sizes and roles. The recently developed Storage Area Automatic Blacklisting (SAAB) tool has provided a suitable solution, by employing an inference algorithm which processes the history of storage monitoring test outcomes. SAAB accomplishes both the task of providing global monitoring and that of performing automatic operations on single sites. The implementation of the SAAB tool has been the first step in a comprehensive review of storage area monitoring and central management at all levels. This review has involved the reordering and optimization of SAM test deployment and the inclusion of SAAB results in the ATLAS Site Status Board with both dedicated metrics and views. The resulting structure allows the status of storage resources to be monitored with fine time granularity and automatic actions to be taken in foreseen cases, like automatic outage handling and notifications to sites. Hence, human actions are restricted to reporting and following up problems, where and when needed. In this work we show SAAB working principles and features. We also present the decrease in human interventions achieved within the ATLAS Computing Operation team. The automation results in a prompt reaction to failures, which leads to the optimization of resource exploitation.

  3. The ATLAS Distributed Computing: the challenges of the future

    CERN Document Server

    Sakamoto, H; The ATLAS collaboration

    2013-01-01

    The ATLAS experiment has collected more than 25 fb-1 of data since the LHC started its operation in 2010. Tens of petabytes of collision events and Monte-Carlo simulations are stored at more than 150 computing centers all over the world. The data processing is performed on grid sites providing more than 100,000 computing cores and is orchestrated by the ATLAS in-house developed job and data management services. The discovery of the Higgs-like boson in 2012 would not have been possible without the excellent performance of ATLAS Distributed Computing. The future operation of the ATLAS experiment with increased LHC beam energy and luminosity, foreseen for 2014, imposes a significant increase in the computing demands that ATLAS Distributed Computing needs to satisfy. Therefore, development of new data-processing, storage and data-distribution systems has been started in order to use the computing resources efficiently, exploiting current and future technologies of distributed computing.

  4. Data analytics in the ATLAS Distributed Computing

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Bryant, Lincoln

    2015-01-01

    The ATLAS Data analytics effort is focused on creating systems which provide the ATLAS ADC with new capabilities for understanding distributed systems and overall operational performance. These capabilities include: warehousing information from multiple systems (the production and distributed analysis system - PanDA, the distributed data management system - Rucio, the file transfer system, various monitoring services, etc.); providing a platform to execute arbitrary data mining and machine learning algorithms over aggregated data; satisfying a variety of use cases for different user roles; and hosting new third-party analytics services on a scalable compute platform. We describe the implemented system, where: data sources are existing RDBMS (Oracle) and Flume collectors; a Hadoop cluster is used to store the data; native Hadoop and Apache Pig scripts are used for data aggregation; and R is used for in-depth analytics. Part of the data is indexed in ElasticSearch so both simpler investigations and complex dashboards can be made ...
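
    As a small illustration of the kind of query such a platform supports, the sketch below runs a terms/avg aggregation against an ElasticSearch index over its REST API; the endpoint, index name and field names (site, throughput_mbps) are assumptions made for the example, not the actual ADC index layout.

        # Hypothetical ElasticSearch aggregation: average transfer throughput per site.
        import json
        import requests

        query = {
            "size": 0,
            "aggs": {
                "by_site": {
                    "terms": {"field": "site", "size": 10},
                    "aggs": {"avg_throughput": {"avg": {"field": "throughput_mbps"}}},
                }
            },
        }

        resp = requests.post("http://analytics-cluster:9200/transfers/_search",  # assumed endpoint
                             data=json.dumps(query),
                             headers={"Content-Type": "application/json"})

        for bucket in resp.json()["aggregations"]["by_site"]["buckets"]:
            print(bucket["key"], bucket["avg_throughput"]["value"])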

  5. ATLAS Computing on the Swiss Cloud SWITCHengines

    CERN Document Server

    Haug, Sigve; The ATLAS collaboration

    2016-01-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performance achieved when running ATLAS production on SWITCHengines. SWITCHengines is the new cloud infrastructure offered to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performance are general, the financial considerations and policies, which we also report on, are country specific.

  6. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers, among others.

  7. Evolving ATLAS Computing For Today’s Networks

    CERN Document Server

    Campana, S; The ATLAS collaboration; Jezequel, S; Negri, G; Serfon, C; Ueda, I

    2012-01-01

    The ATLAS computing infrastructure was designed many years ago based on the assumption of rather limited network connectivity between computing centres. ATLAS sites have been organized in a hierarchical model, where only a static subset of all possible network links can be exploited and a static subset of well-connected sites (CERN and the T1s) can cover important functional roles such as hosting master copies of the data. The pragmatic adoption of such a simplified approach, compared with a more relaxed scenario interconnecting all sites, was very beneficial during the commissioning of the ATLAS distributed computing system and essential in reducing the operational cost during the first two years of LHC data taking. In the meantime, networks have evolved far beyond this initial scenario: while a few countries are still poorly connected with the rest of the WLCG infrastructure, most of the ATLAS computing centres are now efficiently interlinked. Our operational experience in running the computing infrastructure in ...

  8. ATLAS@Home: Harnessing Volunteer Computing for HEP

    Science.gov (United States)

    Adam-Bourdarios, C.; Cameron, D.; Filipčič, A.; Lancon, E.; Wu, W.; ATLAS Collaboration

    2015-12-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum possible statistics for Monte Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  9. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Bourdarios, Claire; Filipcic, Andrej; Lancon, Eric; Wu, Wenjing

    2015-01-01

    A recent common theme in HEP computing is the exploitation of opportunistic resources in order to provide the maximum possible statistics for Monte-Carlo simulation. Volunteer computing has been used over the last few years in many other scientific fields and by CERN itself to run simulations of the LHC beams. The ATLAS@Home project was started to allow volunteers to run simulations of collisions in the ATLAS detector. So far many thousands of members of the public have signed up to contribute their spare CPU cycles for ATLAS, and there is potential for volunteer computing to provide a significant fraction of ATLAS computing resources. Here we describe the design of the project, the lessons learned so far and the future plans.

  10. The Next Generation ARC Middleware and ATLAS Computing Model

    CERN Document Server

    Filipcic, A; The ATLAS collaboration; Smirnova, O; Konstantinov, A; Karpenko, D

    2012-01-01

    The distributed NDGF Tier-1 and associated NorduGrid clusters are well integrated into the ATLAS computing model but follow a slightly different paradigm than other ATLAS resources. The current strategy does not divide the sites as in the commonly used hierarchical model, but rather treats them as a single storage endpoint and a pool of distributed computing nodes. The next generation ARC middleware, with its several new technologies, provides new possibilities for the development of the ATLAS computing model, such as pilot jobs with pre-cached input files, automatic job migration between sites, integration of remote sites without connected storage elements, and automatic brokering for jobs with non-standard resource requirements. ARC's data transfer model provides an automatic way for the computing sites to participate in ATLAS' global task management system without requiring centralised brokering or data transfer services. The powerful API combined with Python and Java bindings can easily be used to build new ...

  11. Common accounting system for monitoring the ATLAS distributed computing resources

    International Nuclear Information System (INIS)

    This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within the ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources; either generic or ATLAS specific. This set of tools provides quality and scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.

  12. ATLAS computing activities and developments in the Italian Grid cloud

    International Nuclear Information System (INIS)

    The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) center, located in Bologna, four secondary (Tier-2) centers, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to evolution of the ATLAS Computing Model.

  13. The December 2006 ATLAS Computing & Software Workshop

    CERN Multimedia

    Fred Luehring

    The 29th ATLAS Computing & Software Workshop was held on December 11-15 at CERN. With the rapidly approaching onset of data taking, the workshop participants had an air of urgency about them. There was considerable discussion on hot topics such as physics validation of the software, data analysis, actual software production on the GRID, and the schedule of work for 2007 including the Final Dress Rehearsal (FDR). However, don't be fooled: the workshop was not all work - there were also two social events which were greatly enjoyed by the attendees. The workshop welcomed Wouter Verkerke as the new Physics Validation Coordinator (replacing Davide Costanzo). Most recent validation work has centered on the 12.0.X release series that will be used for the Computing System Commissioning (CSC) exercise. The validation is now a big job because it needs to be done over a variety of conditions (magnetic field on/off, aligned/misaligned geometry) for every candidate release. Luckily there have been a large number of pe...

  14. ATLAS@Home: Harnessing Volunteer Computing for HEP

    CERN Document Server

    Cameron, David; The ATLAS collaboration

    2015-01-01

    The ATLAS collaboration has set up a volunteer computing project called ATLAS@home. Volunteers running Monte-Carlo simulation on their personal computers provide significant computing resources, but also belong to a community potentially interested in HEP. Four types of contributors have been identified, whose questions range from advanced technical details to the reason why simulation is needed, how computing is organized and how it relates to society. The creation of relevant outreach material for simulation, event visualization and distributed production will be described, as well as lessons learned while interacting with the BOINC volunteer community.

  15. ATLAS Distributed Computing Challenges and Plans for the Future

    CERN Document Server

    Klimentov, A; The ATLAS collaboration

    2011-01-01

    The following topics will be addressed: Data Model and Data Placement evolution, and the evaluation of new software technologies such as cloud computing for LHC computing. The ATLAS collaboration has been interested in cloud computing since commercial clouds like Amazon EC2 became available. We launched an R&D project (together with WLCG) to study cloud computing for ATLAS, and then to design and implement cloud awareness in the Distributed Data Management system, in production and distributed analysis (PanDA), and in related tools and services.

  16. ATLAS distributed computing operations in the GridKa cloud

    International Nuclear Information System (INIS)

    The ATLAS Grid Computing resources in Germany, Poland, the Czech Republic, Austria, and Switzerland consist of a cloud of 12 Tier-2 computing centers grouped around the Tier-1 center GridKa at the Steinbuch Centre for Computing at KIT. While the Tier-1 center serves as a hub for data management in the cloud and is the principal resource for reprocessing and custodial storage of raw ATLAS data, the Tier-2 centers provide the resources for user analysis and production of simulated events. During the first full year of data taking at the LHC, the GridKa cloud has successfully contributed to the overall ATLAS computing effort, enabling physicists to quickly analyze the large volume of new incoming data and the corresponding simulated events. This talk covers the computing operations in the GridKa cloud with focus on performance and experiences at both the Tier-1 and Tier-2 centers.

  17. ATLAS distributed computing operations in the GridKa cloud

    Energy Technology Data Exchange (ETDEWEB)

    Duckeck, Guenter; Serfon, Cedric; Walker, Rodney [Ludwig-Maximilians-Universitaet, Garching (Germany); Harenberg, Torsten; Kalinin, Sergey; Schultes, Joachim [Bergische Universitaet, Wuppertal (Germany); Kawamura, Gen [Johannes-Gutenberg-Universitaet, Mainz (Germany); Leffhalm, Kai [DESY, Zeuthen (Germany); Meyer, Joerg [Georg-August-Universitaet, Goettingen (Germany); Petzold, Andreas [Karlsruher Institut fuer Technologie (Germany); Sundermann, Jan Erik [Albert-Ludwigs-Universitaet, Freiburg (Germany)

    2011-07-01

    The ATLAS Grid Computing resources in Germany, Poland, the Czech Republic, Austria, and Switzerland consist of a cloud of 12 Tier-2 computing centers grouped around the Tier-1 center GridKa at the Steinbuch Centre for Computing at KIT. While the Tier-1 center serves as a hub for data management in the cloud and is the principal resource for reprocessing and custodial storage of raw ATLAS data, the Tier-2 centers provide the resources for user analysis and production of simulated events. During the first full year of data taking at the LHC, the GridKa cloud has successfully contributed to the overall ATLAS computing effort, enabling physicists to quickly analyze the large volume of new incoming data and the corresponding simulated events. This talk covers the computing operations in the GridKa cloud with focus on performance and experiences at both the Tier-1 and Tier-2 centers.

  18. The ATLAS computing model and distributed computing evolution

    CERN Document Server

    Jones, Roger W L

    2009-01-01

    Despite only a brief availability of beam-related data, the typical usage patterns and operational requirements of the ATLAS computing model have been exercised, and the model as originally constructed remains remarkably unchanged. Resource requirements have been revised, and cosmic-ray running has exercised much of the model in both duration and volume. The operational model has been adapted in several ways to increase performance and meet the as-delivered functionality of the available middleware. There are also changes reflecting the emerging roles of the different data formats. The model continues to evolve with a heightened focus on end-user performance; the key tools developed in the operational system are outlined, with an emphasis on those under recent development.

  19. Computing shifts to monitor ATLAS distributed computing infrastructure and operations

    CERN Document Server

    Adam Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS Distributed Computing (ADC) group established a new Computing Run Coordinator (CRC) shift at the start of LHC Run2 in 2015. The main goal was to rely on a person with a good overview of the ADC activities to ease the ADC experts' workload. The CRC shifter keeps track of ADC tasks related to their fields of expertise and responsibility. At the same time, the shifter maintains a global view of the day-to-day operations of the ADC system. During Run1, this task was accomplished by the ADC Manager on Duty (AMOD), a position that was removed during the shutdown period due to the reduced number and availability of ADC experts foreseen for Run2. The CRC position was proposed to cover some of the AMOD’s former functions, while allowing more people involved in computing to participate. In this way, CRC shifters help train future ADC experts. The CRC shifters coordinate daily ADC shift operations, including tracking open issues, reporting, and representing ADC in relevant meetings. The CRC also facilitates ...

  20. Next generation database relational solutions for ATLAS distributed computing

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Garonne, V

    2014-01-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions t...

  1. Next generation database relational solutions for ATLAS distributed computing

    CERN Document Server

    Dimitrov, G; The ATLAS collaboration; Garonne, V

    2013-01-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions t...

  2. Distributed computing operations in the German ATLAS cloud

    Energy Technology Data Exchange (ETDEWEB)

    Boehler, Michael; Gamel, Anton; Sundermann, Jan Erik [Universitaet Freiburg, Freiburg im Breisgau (Germany); Petzold, Andreas [KIT, Karlsruhe (Germany); Kawamura, Gen [Universitaet Mainz (Germany); Leffhalm, Kai [DESY (Germany); Sandhoff, Marisa; Harenberg, Torsten [Bergische Universitaet Wuppertal (Germany); Walker, Rod; Duckeck, Guenter [LMU Muenchen (Germany)

    2013-07-01

    Before the discovery of a Higgs-like boson could be announced on the 4th of July 2012, a huge amount of data had to be distributed around the world and analysed. Moreover, to have well-optimised analyses with solid background estimates, Monte Carlo simulated event samples needed to be generated. All of this - data distribution, Monte Carlo production, and also data reprocessing - is performed by the Worldwide LHC Computing Grid. The ATLAS grid computing resources in Austria, the Czech Republic, Germany, Poland, and Switzerland are organized in the GridKa cloud, which is one of 10 ATLAS computing clouds. It consists of the Tier-1 centre at KIT in Karlsruhe, which serves as a hub for data management and stores raw ATLAS data, and the Tier-2 centres that provide the resources for user analysis and Monte Carlo sample production. This talk gives an overview of the ATLAS grid computing operations in 2012, focusing on the performance and experiences at both the Tier-1 and Tier-2 centres, and it summarises the prospects and requirements for grid computing during and after the long shutdown of the LHC in 2013/2014.

  3. Distributed computing operations in the German ATLAS cloud

    International Nuclear Information System (INIS)

    Before the discovery of a Higgs-like boson could be announced on the 4th of July 2012, a huge amount of data had to be distributed around the world and analysed. Moreover, to have well-optimised analyses with solid background estimates, Monte Carlo simulated event samples needed to be generated. All of this - data distribution, Monte Carlo production, and also data reprocessing - is performed by the Worldwide LHC Computing Grid. The ATLAS grid computing resources in Austria, the Czech Republic, Germany, Poland, and Switzerland are organized in the GridKa cloud, which is one of 10 ATLAS computing clouds. It consists of the Tier-1 centre at KIT in Karlsruhe, which serves as a hub for data management and stores raw ATLAS data, and the Tier-2 centres that provide the resources for user analysis and Monte Carlo sample production. This talk gives an overview of the ATLAS grid computing operations in 2012, focusing on the performance and experiences at both the Tier-1 and Tier-2 centres, and it summarises the prospects and requirements for grid computing during and after the long shutdown of the LHC in 2013/2014.

  4. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  5. ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking

    CERN Document Server

    Schovancová, J; The ATLAS collaboration

    2012-01-01

    This paper details a variety of monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the Tier-0 facility at CERN after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centres distributed world-wide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities ranging from network performance and data transfer throughput, through data processing and readiness of the computing services at the ATLAS computing centres, to the reliability and usability of the ATLAS computing centres. The described tools provide monitoring for issues of different levels of criticality: from spotting issues with the instant online monitoring to the long-term accounting information.

  6. ATLAS Distributed Computing Monitoring tools after full 2 years of LHC data taking

    Science.gov (United States)

    Schovancová, Jaroslava

    2012-12-01

    This paper details a variety of Monitoring tools used within ATLAS Distributed Computing during the first 2 years of LHC data taking. We discuss tools used to monitor data processing from the very first steps performed at the CERN Analysis Facility after data is read out of the ATLAS detector, through data transfers to the ATLAS computing centres distributed worldwide. We present an overview of monitoring tools used daily to track ATLAS Distributed Computing activities ranging from network performance and data transfer throughput, through data processing and readiness of the computing services at the ATLAS computing centres, to the reliability and usability of the ATLAS computing centres. The described tools provide monitoring for issues of varying levels of criticality: from identifying issues with the instant online monitoring to long-term accounting information.

  7. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    Filipčič, Andrej; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of the most powerful supercomputers, are integrated into a common infrastructure providing coherent access to users through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute of High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-1A and ERA. These two centers have been the pilots for ATLAS Monte C...
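
    As a purely illustrative companion to the bridge described above, the sketch below posts a job description to a REST-based HPC front-end. The URL, token handling and payload fields are invented for this example and do not reflect the actual SCEAPI specification or the ARC CE implementation.

        # Hypothetical REST job submission to an SCEAPI-like front-end.
        import json
        import requests

        job = {
            "name": "atlas_sim_0001",          # illustrative job description
            "executable": "run_sim.sh",
            "cores": 24,
            "walltime_minutes": 720,
            "input_files": ["EVNT.pool.root"],
        }

        resp = requests.post(
            "https://sceapi.example.cn/jobs",              # hypothetical endpoint
            data=json.dumps(job),
            headers={"Authorization": "Bearer <token>",    # placeholder credential
                     "Content-Type": "application/json"},
        )
        print(resp.status_code)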

  8. ATLAS Distributed Computing Shift Operation in the first 2 full years of LHC data taking

    CERN Document Server

    Schovancová, J; The ATLAS collaboration; Elmsheuser, J; Jézéquel, S; Negri, G; Ozturk, N; Sakamoto, H; Slater, M; Smirnov, Y; Ueda, I; Van Der Ster, D C

    2012-01-01

    ATLAS Distributed Computing organized 3 teams to support data processing at the Tier-0 facility at CERN, data reprocessing, data management operations, Monte Carlo simulation production, and physics analysis at the ATLAS computing centers located world-wide. In this paper we describe how these teams ensure that the ATLAS experiment data are delivered to the ATLAS physicists in a timely manner in the glamorous era of LHC data taking. We describe experience with ways to improve degraded service performance, and we detail the Distributed Analysis support over the exciting period of the computing model evolution.

  9. Next generation database relational solutions for ATLAS distributed computing

    International Nuclear Information System (INIS)

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability in order to be ready for deployment and operation in 2014.

  10. Next generation database relational solutions for ATLAS distributed computing

    Science.gov (United States)

    Dimitrov, G.; Maeno, T.; Garonne, V.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing (ADC) project delivers production tools and services for ATLAS offline activities such as data placement and data processing on the Grid. The system has been capable of sustaining with high efficiency the needed computing activities during the first run of LHC data taking, and has demonstrated flexibility in reacting promptly to new challenges. Databases are a vital part of the whole ADC system. The Oracle Relational Database Management System (RDBMS) has been addressing a majority of the ADC database requirements for many years. Much expertise was gained through the years and without a doubt will be used as a good foundation for the next generation PanDA (Production ANd Distributed Analysis) and DDM (Distributed Data Management) systems. In this paper we present the current production ADC database solutions and notably the planned changes on the PanDA system, and the next generation ATLAS DDM system called Rucio. Significant work was performed on studying different solutions to arrive at the best relational and physical database model for performance and scalability in order to be ready for deployment and operation in 2014.
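
    As a rough illustration of what a relational model for such bookkeeping looks like, the sketch below creates minimal job and dataset/replica tables with the kind of indexes needed for scalable status and locality queries. It is not the actual PanDA or Rucio schema; all table and column names are invented.

        # Illustrative relational sketch (not the PanDA/Rucio schema): minimal job
        # and dataset/replica tables with indexes for common status queries.
        import sqlite3

        ddl = """
        CREATE TABLE jobs (
            job_id   INTEGER PRIMARY KEY,
            task_id  INTEGER NOT NULL,
            status   TEXT    NOT NULL,
            site     TEXT,
            modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
        CREATE INDEX idx_jobs_task_status ON jobs (task_id, status);

        CREATE TABLE datasets (
            dataset_id INTEGER PRIMARY KEY,
            scope      TEXT NOT NULL,
            name       TEXT NOT NULL,
            UNIQUE (scope, name)
        );
        CREATE TABLE replicas (
            dataset_id INTEGER NOT NULL REFERENCES datasets(dataset_id),
            rse        TEXT    NOT NULL,   -- storage endpoint
            state      TEXT    NOT NULL,
            PRIMARY KEY (dataset_id, rse)
        );
        """

        conn = sqlite3.connect(":memory:")
        conn.executescript(ddl)
        conn.execute("INSERT INTO datasets (scope, name) VALUES (?, ?)",
                     ("mc15", "example.dataset"))
        print(conn.execute("SELECT * FROM datasets").fetchall())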

  11. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Bauce, Matteo; Dankel, Maik; Howard, Jacob; Kama, Sami

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. These data are processed by in-house-built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predates co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this talk we will present the studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction as well as their integration into a multiple process based Athena frame...
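
    A toy example of the paradigm shift described above, under the assumption that the per-hit calculation can be expressed over whole arrays: the same quantity is computed in a serial loop and in a data-parallel form that maps naturally onto vector units or a GPGPU kernel. This is not Athena code; it only illustrates the programming-model change (NumPy stands in for the co-processor).

        # Toy illustration of serial vs. data-parallel formulation of a per-hit
        # calculation; the hit coordinates are randomly generated.
        import numpy as np

        rng = np.random.default_rng(42)
        x, y = rng.normal(size=100_000), rng.normal(size=100_000)   # fake hit coords

        # Serial style: one hit at a time (frequency-scaling era).
        def radii_serial(xs, ys):
            out = []
            for xi, yi in zip(xs, ys):
                out.append((xi * xi + yi * yi) ** 0.5)
            return out

        # Data-parallel style: whole arrays at once; the same expression maps
        # naturally onto a GPU kernel or CPU vector units.
        def radii_parallel(xs, ys):
            return np.hypot(xs, ys)

        assert np.allclose(radii_serial(x[:10], y[:10]), radii_parallel(x[:10], y[:10]))
        print(radii_parallel(x, y)[:3])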

  12. Use of hardware accelerators for ATLAS computing

    CERN Document Server

    Dankel, Maik; The ATLAS collaboration; Howard, Jacob; Bauce, Matteo; Boing, Rene

    2015-01-01

    Modern HEP experiments produce tremendous amounts of data. This data is processed by in-house built software frameworks which have lifetimes longer than the detector itself. Such frameworks were traditionally based on serial code and relied on advances in CPU technologies, mainly clock frequency, to cope with increasing data volumes. With the advent of many-core architectures and GPGPUs this paradigm has to shift to parallel processing and has to include the use of co-processors. However, since the design of most existing frameworks is based on the assumption of frequency scaling and predates co-processors, parallelisation and integration of co-processors are not an easy task. The ATLAS experiment is an example of such a big experiment with a big software framework called Athena. In this proceedings we will present the studies on parallelisation and co-processor (GPGPU) use in data preparation and tracking for trigger and offline reconstruction as well as their integration into a multiple process based...

  13. The Future of PanDA in ATLAS Distributed Computing

    CERN Document Server

    De, Kaushik; The ATLAS collaboration; Maeno, Tadashi; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyze the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favor of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addi...

  14. Validating a new computed tomography atlas for grading ankle osteoarthritis.

    Science.gov (United States)

    Cohen, Michael M; Vela, Nathan D; Levine, Jason E; Barnoy, Eran A

    2015-01-01

    As the most common joint disease, osteoarthritis (OA) is a significant source of pain and disability. It can be defined by classic radiographic findings, particular symptoms, or a combination of the 2. Although specific grading scales have been developed to evaluate OA in various joints, such as the shoulder, hip, and knee, no definitive classification system is available for grading OA in the ankle. The purpose of the present study was to create and validate a standardized atlas for grading (or staging) ankle osteoarthritis using computed tomography (CT) and "hallmark" findings noted on coronal, sagittal, and axial views extrapolated from the Kellgren-Lawrence radiographic scale. The CT scans of 226 patients at the Miami Veterans Affairs Medical Center were reviewed. An atlas was derived from a retrospective review of 30 remaining CT scans taken from July 2008 to November 2011. After this review, 3 orthogonal static CT images, obtained from 11 remaining patients, were chosen to represent the various stages on the OA scale and were used to test the validity of the atlas developed by 2 of us (M.M.C. and N.D.V.). A multispecialty panel of 9 examiners, excluding ourselves, independently rated the 11 CT scan subjects. The differences among examiners and specialties were calculated, including an intra-examiner agreement for 2 separate readings spaced 9 months apart. Although the small number of subspecialty examiners made the intraspecialty comparisons difficult to validate, the findings nevertheless indicated excellent agreement among all specialty groups, with good intra-investigational (intraclass correlation coefficient 0.962 and 1) and inter-investigational (intraclass correlation coefficient 0.851) values. These results appeared to validate the CT ankle OA atlas, which we believe will be a valuable clinical and research tool, one that will likely be more beneficial than less relevant generalized OA grading scales in use today.
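
    For readers unfamiliar with the agreement statistic quoted above, the following sketch computes a Shrout-Fleiss ICC(2,1) (two-way random effects, absolute agreement) for a made-up subjects-by-raters grade matrix; the numbers are invented and are not the study's data.

        # Minimal sketch of an inter-rater agreement statistic (ICC(2,1)).
        import numpy as np

        def icc_2_1(ratings):
            """ratings: (n subjects) x (k raters) array of ordinal grades."""
            r = np.asarray(ratings, dtype=float)
            n, k = r.shape
            grand = r.mean()
            row_means, col_means = r.mean(axis=1), r.mean(axis=0)
            msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between subjects
            msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between raters
            resid = r - row_means[:, None] - col_means[None, :] + grand
            mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))          # residual
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        grades = [[1, 1, 2], [2, 2, 2], [3, 3, 4], [4, 4, 4], [0, 1, 1]]  # 5 scans, 3 raters
        print(round(icc_2_1(grades), 3))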

  15. The future of PanDA in ATLAS distributed computing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
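
    One of the planned features mentioned above is folding network metrics into decision making. The sketch below is a minimal, hypothetical illustration of such brokerage: sites are ranked by free job slots weighted by an assumed measured throughput from the input data source. It is not PanDA code; all site data and weights are invented.

        # Illustrative brokerage sketch: combine free slots with a network metric.
        def rank_sites(sites, input_source):
            """Prefer sites with free slots and a good network path from the data source."""
            def score(site):
                free = site["slots"] - site["queued"]
                # Assumed metric: measured throughput (MB/s) from the input source.
                throughput = site["throughput"].get(input_source, 1.0)
                return free * throughput
            return sorted(sites, key=score, reverse=True)

        sites = [
            {"name": "SITE_A", "slots": 2000, "queued": 1800,
             "throughput": {"CERN": 120.0}},
            {"name": "SITE_B", "slots": 500, "queued": 100,
             "throughput": {"CERN": 40.0}},
        ]
        print([s["name"] for s in rank_sites(sites, "CERN")])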

  16. The ATLAS Rome Production Experience on the LHC Computing Grid

    CERN Document Server

    Campana, Simone

    2006-01-01

    The Large Hadron Collider at CERN will start data acquisition in 2007. The ATLAS (A Toroidal LHC ApparatuS) experiment is preparing for the data handling and analysis via a series of Data Challenges and production exercises to validate its computing model and to provide useful samples of data for detector and physics studies. The last Data Challenge, which began in June 2004 and ended in early 2005, was the first performed completely in a Grid environment. Immediately afterwards, a new production activity was necessary in order to provide the event samples for the ATLAS physics workshop, taking place in June 2005 in Rome. This exercise offered a unique opportunity to assess the improvements achieved and to continue the validation of the computing model. In this contribution we discuss the experience of the “Rome production” on the LHC Computing Grid infrastructure, describing the achievements, the improvements with respect to the previous Data Challenge and the problems observed, together with the lessons lear...

  17. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This paper surveys the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  18. ATLAS computing challenges before the next LHC run

    CERN Document Server

    Barberis, D; The ATLAS collaboration

    2014-01-01

    ATLAS software and computing is in a period of intensive evolution. The current long shutdown presents an opportunity to assimilate lessons from the very successful Run 1 (2009-2013) and to prepare for the substantially increased computing requirements for Run 2 (from spring 2015). Run 2 will bring a near doubling of the energy and the data rate, high event pile-up levels, and higher event complexity from detector upgrades, meaning the number and complexity of events to be analyzed will increase dramatically. At the same time operational loads must be reduced through greater automation, a wider array of opportunistic resources must be supported, costly storage must be used with greater efficiency, a sophisticated new analysis model must be integrated, and concurrency features of new processors must be exploited. This presentation will survey the distributed computing aspects of the upgrade program and the plans for 2014 to exercise the new capabilities in a large scale Data Challenge.

  19. ATLAS Experience with HEP Software at the Argonne Leadership Computing Facility

    CERN Document Server

    LeCompte, T; The ATLAS collaboration; Benjamin, D

    2014-01-01

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  20. ATLAS experience with HEP software at the Argonne leadership computing facility

    International Nuclear Information System (INIS)

    A number of HEP software packages used by the ATLAS experiment, including GEANT4, ROOT and ALPGEN, have been adapted to run on the IBM Blue Gene supercomputers at the Argonne Leadership Computing Facility. These computers use a non-x86 architecture and have a considerably less rich operating environment than in common use in HEP, but also represent a computing capacity an order of magnitude beyond what ATLAS is presently using via the LCG. The status and potential for making use of leadership-class computing, including the status of integration with the ATLAS production system, is discussed.

  1. ATLAS Great Lakes Tier-2 Computing and Muon Calibration Center Commissioning

    CERN Document Server

    McKee, Shawn

    2009-01-01

    Large-scale computing in ATLAS is based on a grid-linked system of tiered computing centers. The ATLAS Great Lakes Tier-2 came online in September 2006 and is now being commissioned at full capacity to provide significant computing power and services to the USATLAS community. Our Tier-2 center also hosts the Michigan Muon Calibration Center, which is responsible for daily calibrations of the ATLAS Monitored Drift Tubes for the ATLAS endcap muon system. During the first LHC beam period in 2008 and the following ATLAS global cosmic-ray data-taking period, the Calibration Center received a large data stream from the muon detector to derive the drift tube timing offsets and time-to-space functions with a turn-around time of 24 hours. We will present the Calibration Center commissioning status and our plan for the first LHC beam collisions in 2009.
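
    As a toy illustration of the timing-offset step mentioned above, the sketch below estimates a drift-tube t0 as the rising edge of a simulated drift-time spectrum. The real calibration uses far more detailed fits; all numbers here are invented.

        # Toy t0 estimate: locate the rising edge of a simulated drift-time spectrum.
        import numpy as np

        rng = np.random.default_rng(1)
        true_t0 = 150.0                                   # ns, invented
        drift_times = (true_t0
                       + rng.uniform(0.0, 700.0, 50_000)  # uniform drift-time spectrum
                       + rng.normal(0.0, 3.0, 50_000))    # resolution smearing

        counts, edges = np.histogram(drift_times, bins=200)
        plateau = counts[counts > 0.5 * counts.max()].mean()   # typical plateau height
        rising = np.argmax(counts > 0.5 * plateau)              # first bin above half-plateau
        t0_estimate = edges[rising]
        print(f"estimated t0 ~ {t0_estimate:.1f} ns (true {true_t0} ns)")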

  2. The Architecture and Administration of the ATLAS Online Computing System

    CERN Document Server

    Dobson, M; Ertorer, E; Garitaonandia, H; Leahu, L; Leahu, M; Malciu, I M; Panikashvili, E; Topurov, A; Ünel, G; Computing In High Energy and Nuclear Physics

    2006-01-01

    The needs of the ATLAS experiment at the upcoming LHC accelerator at CERN, in terms of data transmission rates and processing power, require a large cluster of computers (of the order of thousands) administered and exploited in a coherent and optimal manner. Requirements like stability, robustness and fast recovery in case of failure impose a server-client system architecture with servers distributed in a tree-like structure and clients booted from the network. For security reasons, the system should be accessible only through an application gateway and, also to ensure the autonomy of the system, the network services should be provided internally by dedicated machines in synchronization with the central services of the CERN IT department. The paper describes a small-scale implementation of the system architecture that fits the given requirements and constraints. Emphasis will be put on the mechanisms and tools used to net-boot the clients via the "Boot With Me" project and to synchronize information within the cluster via t...

  3. The ATLAS computing challenge for HL-LHC

    CERN Document Server

    Campana, Simone; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment successfully commissioned a software and computing infrastructure to support the physics program during LHC Run 2. The next phases of the accelerator upgrade will present new challenges in the offline area. In particular, at the High Luminosity LHC (also known as Run 4) the data taking conditions will be very demanding in terms of computing resources: between 5 and 10 kHz of event rate from the HLT to be reconstructed (and possibly further reprocessed) with an average pile-up of up to 200 events per collision, and an equivalent number of simulated samples to be produced. The same parameters for the current run are lower by up to an order of magnitude. While processing and storage resources would need to scale accordingly, the funding situation allows one at best to consider a flat budget over the next few years for offline computing needs. In this paper we present a study quantifying the challenge in terms of computing resources for HL-LHC and present ideas about the possible evolution of the ...
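
    A back-of-envelope sketch of the scale implied by the figures quoted above (an HLT output rate of up to 10 kHz), combined with an assumed per-event reconstruction time and assumed live time per year; the two assumed numbers are for illustration only.

        # Rough order-of-magnitude estimate of prompt-reconstruction CPU needs.
        hlt_rate_hz = 10_000          # upper value quoted above
        sec_per_event = 100.0         # assumed CPU-seconds per event at pile-up ~200
        live_seconds_per_year = 7e6   # assumed LHC live time per year

        events_per_year = hlt_rate_hz * live_seconds_per_year
        cpu_seconds = events_per_year * sec_per_event
        cores_needed = cpu_seconds / (365 * 24 * 3600)   # fully-efficient core-years

        print(f"{events_per_year:.1e} events/year -> ~{cores_needed:,.0f} cores "
              "for prompt reconstruction alone (before simulation and reprocessing)")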

  4. Evolution of the ATLAS Distributed Computing during the LHC long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2013-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  5. Evolution of the ATLAS Distributed Computing system during the LHC Long shutdown

    CERN Document Server

    Campana, S; The ATLAS collaboration

    2014-01-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the WLCG distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileu...

  6. The ATLAS Computing Agora: a resource web site for citizen science projects

    CERN Document Server

    Bourdarios, Claire; The ATLAS collaboration

    2016-01-01

    The ATLAS collaboration has recently set up a number of citizen science projects which have a strong IT component and could not have been envisaged without the growth of general public computing resources and network connectivity: event simulation through volunteer computing, algorithm improvement via Machine Learning challenges, event display analysis on citizen science platforms, use of open data, etc. Most of the interactions with volunteers are handled through message boards, but specific outreach material was also developed, giving enhanced visibility to the ATLAS software and computing techniques, challenges and community. In this talk the ATLAS Computing Agora (ACA) web platform will be presented, as well as some of the specific material developed for some of the projects.

  7. The ATLAS Distributed Computing project for LHC Run-2 and beyond.

    CERN Document Server

    Di Girolamo, Alessandro; The ATLAS collaboration

    2015-01-01

    The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run 2. An increased data rate and the computing demands of the Monte Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (ProdSys2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward the flexible computing model. Flexible computing utilization exploiting opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access; and the network topology and performance are deeply integrated into the core of the system. Moreover a new data management strategy, based on a defined lifetime for each dataset, has been defin...
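
    The lifetime-based strategy mentioned above can be illustrated with a minimal sketch: each dataset carries an expiry date, and expired, unlocked replicas become candidates for deletion. The records and field names below are invented; this is not Rucio code.

        # Illustrative lifetime-based cleanup decision for dataset replicas.
        from datetime import datetime, timedelta

        now = datetime(2015, 6, 1)
        datasets = [
            {"name": "data.raw.periodA", "locked": True,
             "expires": now - timedelta(days=10)},
            {"name": "user.derived.v1",  "locked": False,
             "expires": now - timedelta(days=3)},
            {"name": "mc.simul.v2",      "locked": False,
             "expires": now + timedelta(days=90)},
        ]

        def eligible_for_deletion(ds, when):
            """A replica may be cleaned up once its lifetime has passed and it is not locked."""
            return (not ds["locked"]) and ds["expires"] < when

        print([d["name"] for d in datasets if eligible_for_deletion(d, now)])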

  8. Scalable Database Access Technologies for ATLAS Distributed Computing

    CERN Document Server

    Vaniachine, A

    2009-01-01

    ATLAS event data processing requires access to non-event data (detector conditions, calibrations, etc.) stored in relational databases. The database-resident data are crucial for the event data reconstruction processing steps and often required for user analysis. A main focus of ATLAS database operations is on the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. Since Conditions DB access is critical for operations with real data, we have developed a system in which a different technology can be used as a redundant backup. The redundant database operations infrastructure fully satisfies the requirements of ATLAS reprocessing, which has been proven on a scale of one billion database queries during two reprocessing campaigns of 0.5 PB of single-beam and cosmics data on the Grid. To collect experience and provide input for a best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. We pre...
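
    A minimal sketch of the redundancy idea described above: conditions data is read from a primary service and, if that fails, from a backup implemented with a different technology, so that processing jobs keep running. The sources, the tag name and the payload are invented.

        # Failover sketch: primary conditions source with a redundant backup.
        class PrimaryDown(Exception):
            pass

        def read_primary(tag):
            # Stand-in for the primary database service; here it simulates an outage.
            raise PrimaryDown("primary conditions service unreachable")

        def read_backup(tag):
            # Stand-in for a replica of the same payload held in a different technology.
            return {"tag": tag, "payload": "calibration constants", "source": "backup"}

        def get_conditions(tag):
            try:
                return read_primary(tag)
            except PrimaryDown:
                return read_backup(tag)   # redundant technology keeps jobs running

        print(get_conditions("EXAMPLE-COND-TAG-01"))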

  9. ATLAS

    Data.gov (United States)

    Federal Laboratory Consortium — ATLAS is a particle physics experiment at the Large Hadron Collider at CERN, the European Organization for Nuclear Research. Scientists from Brookhaven have played...

  10. Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

    CERN Document Server

    Fernandez, A; The ATLAS collaboration; AMOROS, G; VILLAPLANA, M; FASSI, F; KACI, M; LAMAS, A; OLIVER, E; SALT, J; SANCHEZ, J; SANCHEZ, V

    2012-01-01

    During the last years the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud could be connected. The evolution of ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more effic...

  11. ATLAS computing operations within the GridKa Cloud

    International Nuclear Information System (INIS)

    The organisation and operations model of the ATLAS T1-T2 federation/Cloud associated with the GridKa T1 in Karlsruhe is described. Attention is paid to Cloud-level services and the experience gained during the last years of operation. The ATLAS GridKa Cloud is large and diverse, spanning 5 countries and 2 ROCs, and currently comprises 13 core sites. A well-defined and tested operations model in such a Cloud is of the utmost importance. We have defined the core Cloud services required by the ATLAS experiment and ensured that they are performed in a managed and sustainable manner. Services such as Distributed Data Management involving data replication, deletion and consistency checks, Monte Carlo Production, software installation and data reprocessing are described in greater detail. In addition to providing these central services we have undertaken several Cloud-level stress tests and developed monitoring tools to aid with Cloud diagnostics. Furthermore we have defined good channels of communication between ATLAS, the T1 and the T2s, and have pro-active contributions from the T2 manpower. A brief introduction to the GridKa Cloud is provided, followed by a more detailed discussion of the operations model and ATLAS services within the Cloud.

  12. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    Science.gov (United States)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.
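
    One of the developments mentioned above is event-level (rather than file- or dataset-level) workload handling. The sketch below shows the basic splitting step under that idea: a contiguous event range is divided into fixed-size chunks that can be dispatched independently of file boundaries. The chunk size is an arbitrary illustrative choice.

        # Toy event-level work splitting: divide an event range into chunks.
        def event_chunks(first_event, last_event, chunk_size):
            """Yield (start, end) event ranges, inclusive, of at most chunk_size events."""
            start = first_event
            while start <= last_event:
                end = min(start + chunk_size - 1, last_event)
                yield (start, end)
                start = end + 1

        print(list(event_chunks(1, 2_500, 1_000)))   # [(1, 1000), (1001, 2000), (2001, 2500)]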

  13. ATLAS

    CERN Multimedia

    2002-01-01

    Barrel and End-Cap Toroids: In order to produce a powerful magnetic field to bend the paths of the muons, the ATLAS detector uses an exceptionally large system of air-core toroids arranged outside the calorimeter volumes. The large-volume magnetic field has a wide angular coverage and strengths of up to 4.7 tesla. The toroid system contains over 100 km of superconducting wire and has a design current of 20 500 amperes. (ATLAS brochure: The Technical Challenges)

  14. Tools and strategies to monitor the ATLAS online computing farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Darlea, G L; Dumitru, I; Scannicchio, DA; Twomey, M S; Valsan, M L; Zaytsev, A

    2012-01-01

    In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm consisting of nearly 3000 PCs with various characteristics. To assure the correct and optimal working conditions the whole online system must be constantly monitored. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the assessment of a new monitoring and alerting system based on Icinga. This is an open source monitoring system derived from Nagios, granting backward compatibility with already known configurations, plugins and add-ons, while providing new features. We also report on the evaluation of different data gathering systems and visualization interfaces.
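
    A sketch of the kind of per-metric check such a Nagios/Icinga-based system executes: the plugin prints one status line and signals OK/WARNING/CRITICAL through its exit code. The metric and thresholds below are invented for illustration.

        # Minimal Nagios/Icinga-style check plugin: exit code 0/1/2 = OK/WARNING/CRITICAL.
        import sys

        def check_metric(name, value, warn, crit):
            if value >= crit:
                print(f"CRITICAL - {name} = {value} (>= {crit})")
                return 2
            if value >= warn:
                print(f"WARNING - {name} = {value} (>= {warn})")
                return 1
            print(f"OK - {name} = {value}")
            return 0

        if __name__ == "__main__":
            # e.g. fraction of farm nodes currently unreachable (made-up metric and thresholds)
            sys.exit(check_metric("unreachable_node_fraction", 0.01, warn=0.05, crit=0.20))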

  15. Tools and strategies to monitor the ATLAS online computing farm

    Science.gov (United States)

    Ballestrero, S.; Brasolin, F.; Dârlea, G.–L.; Dumitru, I.; Scannicchio, D. A.; Twomey, M. S.; Vâlsan, M. L.; Zaytsev, A.

    2012-12-01

    In the ATLAS experiment the collection, processing, selection and conveyance of event data from the detector front-end electronics to mass storage is performed by the ATLAS online farm consisting of nearly 3000 PCs with various characteristics. To assure the correct and optimal working conditions the whole online system must be constantly monitored. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the assessment of a new monitoring and alerting system based on Icinga. This is an open source monitoring system derived from Nagios, granting backward compatibility with already known configurations, plugins and add-ons, while providing new features. We also report on the evaluation of different data gathering systems and visualization interfaces.

  16. ATLAS Grid computing activities within the Gridka cloud

    International Nuclear Information System (INIS)

    The WLCG Tier1 at GridKa in Karlsruhe, Germany, has a number of Tier2 sites associated with it. Together the Tier2s, located in Germany, Austria, the Czech Republic, Poland and Switzerland, and the T1 at GridKa form the ATLAS GridKa cloud. Like other clouds in WLCG, the main activities within this cloud are running Monte Carlo production jobs, distributed data management (DDM) issues and operations, tape-reading tests with data reprocessing in view, and monitoring of the transfer efficiencies, throughputs and network status between sites. An overview talk will be presented showing the activity, progress and current status in each of the named areas, and also an evaluation of the cloud's readiness for ATLAS data taking in mid-2008.

  17. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up environment of the LHC, advanced techniques of analyzing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track-finding implementation at hardware level that is designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed, and it is now under installation in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory, AM06). In a first stage coarse-resolution hits are matched against the patterns, and the accepted hits u...
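
    A software toy of the first-stage matching idea described above: hits are coarsened into per-layer "superstrips" and the resulting tuple is looked up in a small bank of stored patterns (the role played in hardware, massively in parallel, by the AM ASICs). The granularity, pattern bank and hit positions are invented.

        # Toy coarse-resolution pattern matching against a stored pattern bank.
        SUPERSTRIP = 32          # coarse granularity in arbitrary position units
        N_LAYERS = 4

        def superstrip(position):
            return int(position) // SUPERSTRIP

        # Tiny "pattern bank": one superstrip per layer -> pattern (road) id.
        pattern_bank = {
            (3, 3, 4, 4): "road_001",
            (7, 8, 8, 9): "road_002",
        }

        def match(hits_per_layer):
            """hits_per_layer: list of N_LAYERS hit positions along the candidate track."""
            key = tuple(superstrip(h) for h in hits_per_layer)
            return pattern_bank.get(key)

        print(match([100.2, 120.9, 130.0, 150.5]))   # -> "road_001"
        print(match([10.0, 20.0, 30.0, 40.0]))       # -> None (no stored pattern)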

  18. Evolution of the ATLAS data and computing model for a Tier2 in the EGI infrastructure

    CERN Document Server

    Fernández Casaní, A; The ATLAS collaboration; González de la Hoz, S; Salt Cairols, J; Fassi, F; Kaci, M; Lamas, A; Oliver, E; Sánchez, J; Sánchez, V

    2012-01-01

    Since the start of the LHC pp collisions in 2010, the ATLAS computing model has moved from a more strict design, where every Tier2 had a liaison and a network dependence on a Tier1, to a more meshed approach where every cloud could be connected. The evolution of ATLAS data models requires changes in the ATLAS Tier2 policy for data replication, dynamic data caching and remote data access. It also requires rethinking the network infrastructure to enable any Tier2 and associated Tier3 to easily connect to any Tier1 or Tier2. Tier2s are becoming more and more important in the ATLAS computing model as it allows more data to be readily accessible for analysis jobs to all users, independently of their geographical location. The Tier2 disk space has been reserved for real, simulated, calibration and alignment, group, and user data. A buffer disk space is needed for input and output data for simulation jobs. Tier2s are going to be used more efficiently. In this way Tier1s and Tier2s are becoming more equivalent for t...

  19. ATLAS Distributed Computing experience and performance during the LHC Run-2

    CERN Document Server

    Filipcic, Andrej; The ATLAS collaboration

    2016-01-01

    ATLAS Distributed Computing during LHC Run-1 was challenged by steadily increasing computing, storage and network requirements. In addition, the complexity of processing task workflows and their associated data management requirements led to a new paradigm in the ATLAS computing model for Run-2, accompanied by extensive evolution and redesign of the workflow and data management systems. The new systems were put into production at the end of 2014, and gained robustness and maturity during 2015 data taking. ProdSys2, the new request and task interface; JEDI, the dynamic job execution engine developed as an extension to PanDA; and Rucio, the new data management system, form the core of the Run-2 ATLAS distributed computing engine. One of the big changes for Run-2 was the adoption of the Derivation Framework, which moves the chaotic CPU and data intensive part of the user analysis into the centrally organized train production, delivering derived AOD datasets to user groups for final analysis. The effectiveness of...

  20. Design and performance of the virtualization platform for offline computing on the ATLAS TDAQ Farm

    Science.gov (United States)

    Ballestrero, S.; Batraneanu, S. M.; Brasolin, F.; Contescu, C.; Di Girolamo, A.; Lee, C. J.; Pozo Astigarraga, M. E.; Scannicchio, D. A.; Twomey, M. S.; Zaytsev, A.

    2014-06-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 there is an opportunity to use the computing resources of the experiments' large trigger farms for other data processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU- and not I/O-bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for the MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
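
    A back-of-envelope sketch of the opportunistic capacity involved, using the roughly 1500 compute nodes quoted above together with an assumed core count per node and an assumed fraction kept free for TDAQ; both assumptions are for illustration only.

        # Rough estimate of single-core job slots offered by the virtualized farm.
        farm_nodes = 1500            # quoted above
        cores_per_node = 12          # assumed
        reserved_for_tdaq = 0.10     # assumed fraction kept free for TDAQ activities

        vm_slots = int(farm_nodes * cores_per_node * (1 - reserved_for_tdaq))
        print(f"~{vm_slots:,} single-core slots available for Monte Carlo production")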

  1. ATLAS

    CERN Multimedia

    Akhnazarov, V; Canepa, A; Bremer, J; Burckhart, H; Cattai, A; Voss, R; Hervas, L; Kaplon, J; Nessi, M; Werner, P; Ten kate, H; Tyrvainen, H; Vandelli, W; Krasznahorkay, A; Gray, H; Alvarez gonzalez, B; Eifert, T F; Rolando, G; Oide, H; Barak, L; Glatzer, J; Backhaus, M; Schaefer, D M; Maciejewski, J P; Milic, A; Jin, S; Von torne, E; Limbach, C; Medinnis, M J; Gregor, I; Levonian, S; Schmitt, S; Waananen, A; Monnier, E; Muanza, S G; Pralavorio, P; Talby, M; Tiouchichine, E; Tocut, V M; Rybkin, G; Wang, S; Lacour, D; Laforge, B; Ocariz, J H; Bertoli, W; Malaescu, B; Sbarra, C; Yamamoto, A; Sasaki, O; Koriki, T; Hara, K; Da silva gomes, A; Carvalho maneira, J; Marcalo da palma, A; Chekulaev, S; Tikhomirov, V; Snesarev, A; Buzykaev, A; Maslennikov, A; Peleganchuk, S; Sukharev, A; Kaplan, B E; Swiatlowski, M J; Nef, P D; Schnoor, U; Oakham, G F; Ueno, R; Orr, R S; Abouzeid, O; Haug, S; Peng, H; Kus, V; Vitek, M; Temming, K K; Dang, N P; Meier, K; Schultz-coulon, H; Geisler, M P; Sander, H; Schaefer, U; Ellinghaus, F; Rieke, S; Nussbaumer, A; Liu, Y; Richter, R; Kortner, S; Fernandez-bosman, M; Ullan comes, M; Espinal curull, J; Chiriotti alvarez, S; Caubet serrabou, M; Valladolid gallego, E; Kaci, M; Carrasco vela, N; Lancon, E C; Besson, N E; Gautard, V; Bracinik, J; Bartsch, V C; Potter, C J; Lester, C G; Moeller, V A; Rosten, J; Crooks, D; Mathieson, K; Houston, S C; Wright, M; Jones, T W; Harris, O B; Byatt, T J; Dobson, E; Hodgson, P; Hodgkinson, M C; Dris, M; Karakostas, K; Ntekas, K; Oren, D; Duchovni, E; Etzion, E; Oren, Y; Ferrer, L M; Testa, M; Doria, A; Merola, L; Sekhniaidze, G; Giordano, R; Ricciardi, S; Milazzo, A; Falciano, S; De pedis, D; Dionisi, C; Veneziano, S; Cardarelli, R; Verzegnassi, C; Soualah, R; Ochi, A; Ohshima, T; Kishiki, S; Linde, F L; Vreeswijk, M; Werneke, P; Muijs, A; Vankov, P H; Jansweijer, P P M; Dale, O; Lund, E; Bruckman de renstrom, P; Dabrowski, W; Adamek, J D; Wolters, H; Micu, L; Pantea, D; Tudorache, V; Mjoernmark, J; Klimek, P J; Ferrari, A; Abdinov, O; Akhoundov, A; Hashimov, R; Shelkov, G; Khubua, J; Ladygin, E; Lazarev, A; Glagolev, V; Dedovich, D; Lykasov, G; Zhemchugov, A; Zolnikov, Y; Ryabenko, M; Sivoklokov, S; Vasilyev, I; Shalimov, A; Lobanov, M; Paramoshkina, E; Mosidze, M; Bingul, A; Nodulman, L J; Guarino, V J; Yoshida, R; Drake, G R; Calafiura, P; Haber, C; Quarrie, D R; Alonso, J R; Anderson, C; Evans, H; Lammers, S W; Baubock, M; Anderson, K; Petti, R; Suhr, C A; Linnemann, J T; Richards, R A; Tollefson, K A; Holzbauer, J L; Stoker, D P; Pier, S; Nelson, A J; Isakov, V; Martin, A J; Adelman, J A; Paganini, M; Gutierrez, P; Snow, J M; Pearson, B L; Cleland, W E; Savinov, V; Wong, W; Goodson, J J; Li, H; Lacey, R A; Gordeev, A; Gordon, H; Lanni, F; Nevski, P; Rescia, S; Kierstead, J A; Liu, Z; Yu, W W H; Bensinger, J; Hashemi, K S; Bogavac, D; Cindro, V; Hoeferkamp, M R; Coelli, S; Iodice, M; Piegaia, R N; Alonso, F; Wahlberg, H P; Barberio, E L; Limosani, A; Rodd, N L; Jennens, D T; Hill, E C; Pospisil, S; Smolek, K; Schaile, D A; Rauscher, F G; Adomeit, S; Mattig, P M; Wahlen, H; Volkmer, F; Calvente lopez, S; Sanchis peris, E J; Pallin, D; Podlyski, F; Says, L; Boumediene, D E; Scott, W; Phillips, P W; Greenall, A; Turner, P; Gwilliam, C B; Kluge, T; Wrona, B; Sellers, G J; Millward, G; Adragna, P; Hartin, A; Alpigiani, C; Piccaro, E; Bret cano, M; Hughes jones, R E; Mercer, D; Oh, A; Chavda, V S; Carminati, L; Cavasinni, V; Fedin, O; Patrichev, S; Ryabov, Y; Nesterov, S; Grebenyuk, O; Sasso, J; Mahmood, H; Polsdofer, E; Dai, T; 
Ferretti, C; Liu, H; Hegazy, K H; Benjamin, D P; Zobernig, G; Ban, J; Brooijmans, G H; Keener, P; Williams, H H; Le geyt, B C; Hines, E J; Fadeyev, V; Schumm, B A; Law, A T; Kuhl, A D; Neubauer, M S; Shang, R; Gagliardi, G; Calabro, D; Conta, C; Zinna, M; Jones, G; Li, J; Stradling, A R; Hadavand, H K; Mcguigan, P; Chiu, P; Baldelomar, E; Stroynowski, R A; Kehoe, R L; De groot, N; Timmermans, C; Lach-heb, F; Addy, T N; Nakano, I; Moreno lopez, D; Grosse-knetter, J; Tyson, B; Rude, G D; Tafirout, R; Benoit, P; Danielsson, H O; Elsing, M; Fassnacht, P; Froidevaux, D; Ganis, G; Gorini, B; Lasseur, C; Lehmann miotto, G; Kollar, D; Aleksa, M; Sfyrla, A; Duehrssen-debling, K; Fressard-batraneanu, S; Van der ster, D C; Bortolin, C; Schumacher, J; Mentink, M; Geich-gimbel, C; Yau wong, K H; Lafaye, R; Crepe-renaudin, S; Albrand, S; Hoffmann, D; Pangaud, P; Meessen, C; Hrivnac, J; Vernay, E; Perus, A; Henrot versille, S L; Le dortz, O; Derue, F; Piccinini, M; Polini, A; Terada, S; Arai, Y; Ikeno, M; Fujii, H; Nagano, K; Ukegawa, F; Aguilar saavedra, J A; Conde muino, P; Castro, N F; Eremin, V; Kopytine, M; Sulin, V; Tsukerman, I; Korol, A; Nemethy, P; Bartoldus, R; Glatte, A; Chelsky, S; Van nieuwkoop, J; Bellerive, A; Sinervo, J K; Battaglia, A; Barbier, G J; Pohl, M; Rosselet, L; Alexandre, G B; Prokoshin, F; Pezoa rivera, R A; Batkova, L; Kladiva, E; Stastny, J; Kubes, T; Vidlakova, Z; Esch, H; Homann, M; Herten, L G; Zimmermann, S U; Pfeifer, B; Stenzel, H; Andrei, G V; Wessels, M; Buescher, V; Kleinknecht, K; Fiedler, F M; Schroeder, C D; Fernandez, E; Mir martinez, L; Vorwerk, V; Bernabeu verdu, J; Salt, J; Civera navarrete, J V; Bernard, R; Berriaud, C P; Chevalier, L P; Hubbard, R; Schune, P; Nikolopoulos, K; Batley, J R; Brochu, F M; Phillips, A W; Teixeira-dias, P J; Rose, M B D; Buttar, C; Buckley, A G; Nurse, E L; Larner, A B; Boddy, C; Henderson, J; Costanzo, D; Tarem, S; Maccarrone, G; Laurelli, P F; Alviggi, M; Chiaramonte, R; Izzo, V; Palumbo, V; Fraternali, M; Crosetti, G; Marchese, F; Yamaguchi, Y; Hessey, N P; Mechnich, J M; Liebig, W; Kastanas, K A; Sjursen, T B; Zalieckas, J; Cameron, D G; Banka, P; Kowalewska, A B; Dwuznik, M; Mindur, B; Boldea, V; Hedberg, V; Smirnova, O; Sellden, B; Allahverdiyev, T; Gornushkin, Y; Koultchitski, I; Tokmenin, V; Chizhov, M; Gongadze, A; Khramov, E; Sadykov, R; Krasnoslobodtsev, I; Smirnova, L; Kramarenko, V; Minaenko, A; Zenin, O; Beddall, A J; Ozcan, E V; Hou, S; Wang, S; Moyse, E; Willocq, S; Chekanov, S; Le compte, T J; Love, J R; Ciocio, A; Hinchliffe, I; Tsulaia, V; Gomez, A; Luehring, F; Zieminska, D; Huth, J E; Gonski, J L; Oreglia, M; Tang, F; Shochet, M J; Costin, T; Mcleod, A; Uzunyan, S; Martin, S P; Pope, B G; Schwienhorst, R H; Brau, J E; Ptacek, E S; Milburn, R H; Sabancilar, E; Lauer, R; Saleem, M; Mohamed meera lebbai, M R; Lou, X; Reeves, K B; Rijssenbeek, M; Novakova, P N; Rahm, D; Steinberg, P A; Wenaus, T J; Paige, F; Ye, S; Kotcher, J R; Assamagan, K A; Oliveira damazio, D; Maeno, T; Henry, A; Dushkin, A; Costa, G; Meroni, C; Resconi, S; Lari, T; Biglietti, M; Lohse, T; Gonzalez silva, M L; Monticelli, F G; Saavedra, A F; Patel, N D; Ciodaro xavier, T; Asevedo nepomuceno, A; Lefebvre, M; Albert, J E; Kubik, P; Faltova, J; Turecek, D; Solc, J; Schaile, O; Ebke, J; Losel, P J; Zeitnitz, C; Sturm, P D; Barreiro alonso, F; Modesto alapont, P; Soret medel, J; Garzon alama, E J; Gee, C N; Mccubbin, N A; Sankey, D; Emeliyanov, D; Dewhurst, A L; Houlden, M A; Klein, M; Burdin, S; Lehan, A K; Eisenhandler, E; Lloyd, S; Traynor, D 
P; Ibbotson, M; Marshall, R; Pater, J; Freestone, J; Masik, J; Haughton, I; Manousakis katsikakis, A; Sampsonidis, D; Krepouri, A; Roda, C; Sarri, F; Fukunaga, C; Nadtochiy, A; Kara, S O; Timm, S; Alam, S M; Rashid, T; Goldfarb, S; Espahbodi, S; Marley, D E; Rau, A W; Dos anjos, A R; Haque, S; Grau, N C; Havener, L B; Thomson, E J; Newcomer, F M; Hansl-kozanecki, G; Deberg, H A; Takeshita, T; Goggi, V; Ennis, J S; Olness, F I; Kama, S; Ordonez sanz, G; Koetsveld, F; Elamri, M; Mansoor-ul-islam, S; Lemmer, B; Kawamura, G; Bindi, M; Schulte, S; Kugel, A; Kretz, M P; Kurchaninov, L; Blanchot, G; Chromek-burckhart, D; Di girolamo, B; Francis, D; Gianotti, F; Nordberg, M Y; Pernegger, H; Roe, S; Boyd, J; Wilkens, H G; Pauly, T; Fabre, C; Tricoli, A; Bertet, D; Ruiz martinez, M A; Arnaez, O L; Lenzi, B; Boveia, A J; Gillberg, D I; Davies, J M; Zimmermann, R; Uhlenbrock, M; Kraus, J K; Narayan, R T; John, A; Dam, M; Padilla aranda, C; Bellachia, F; Le flour chollet, F M; Jezequel, S; Dumont dayot, N; Fede, E; Mathieu, M; Gensolen, F D; Alio, L; Arnault, C; Bouchel, M; Ducorps, A; Kado, M M; Lounis, A; Zhang, Z P; De vivie de regie, J; Beau, T; Bruni, A; Bruni, G; Grafstrom, P; Romano, M; Lasagni manghi, F; Massa, L; Shaw, K; Ikegami, Y; Tsuno, S; Kawanishi, Y; Benincasa, G; Blagov, M; Fedorchuk, R; Shatalov, P; Romaniouk, A; Belotskiy, K; Timoshenko, S; Hooft van huysduynen, L; Lewis, G H; Wittgen, M M; Mader, W F; Rudolph, C J; Gumpert, C; Mamuzic, J; Rudolph, G; Schmid, P; Corriveau, F; Belanger-champagne, C; Yarkoni, S; Leroy, C; Koffas, T; Harack, B D; Weber, M S; Beck, H; Leger, A; Gonzalez sevilla, S; Zhu, Y; Gao, J; Zhang, X; Blazek, T; Rames, J; Sicho, P; Kouba, T; Sluka, T; Lysak, R; Ristic, B; Kompatscher, A E; Von radziewski, H; Groll, M; Meyer, C P; Oberlack, H; Stonjek, S M; Cortiana, G; Werthenbach, U; Ibragimov, I; Czirr, H S; Cavalli-sforza, M; Puigdengoles olive, C; Tallada crespi, P; Marti i garcia, S; Gonzalez de la hoz, S; Guyot, C; Meyer, J; Schoeffel, L O; Garvey, J; Hawkes, C; Hillier, S J; Staley, R J; Salvatore, P F; Santoyo castillo, I; Carter, J; Yusuff, I B; Barlow, N R; Berry, T S; Savage, G; Wraight, K G; Steele, G E; Hughes, G; Walder, J W; Love, P A; Crone, G J; Waugh, B M; Boeser, S; Sarkar, A M; Holmes, A; Massey, R; Pinder, A; Nicholson, R; Korolkova, E; Katsoufis, I; Maltezos, S; Tsipolitis, G; Leontsinis, S; Levinson, L J; Shoa, M; Abramowicz, H E; Bella, G; Gershon, A; Urkovsky, E; Taiblum, N; Gatti, C; Della pietra, M; Lanza, A; Negri, A; Flaminio, V; Lacava, F; Petrolo, E; Pontecorvo, L; Rosati, S; Zanello, L; Pasqualucci, E; Di ciaccio, A; Giordani, M; Yamazaki, Y; Jinno, T; Nomachi, M; De jong, P J; Ferrari, P; Homma, J; Van der graaf, H; Igonkina, O B; Stugu, B S; Buanes, T; Pedersen, M; Turala, M; Olszewski, A J; Koperny, S Z; Onofre, A; Castro nunes fiolhais, M; Alexa, C; Cuciuc, C M; Akesson, T P A; Hellman, S L; Milstead, D A; Bondyakov, A; Pushnova, V; Budagov, Y; Minashvili, I; Romanov, V; Sniatkov, V; Tskhadadze, E; Kalinovskaya, L; Shalyugin, A; Tavkhelidze, A; Rumyantsev, L; Karpov, S; Soloshenko, A; Vostrikov, A; Borissov, E; Solodkov, A; Vorob'ev, A; Sidorov, S; Malyaev, V; Lee, S; Grudzinski, J J; Virzi, J S; Vahsen, S E; Lys, J; Penwell, J W; Yan, Z; Bernard, C S; Barreiro guimaraes da costa, J P; Oliver, J N; Merritt, F S; Brubaker, E M; Kapliy, A; Kim, J; Zutshi, V V; Burghgrave, B O; Abolins, M A; Arabidze, G; Caughron, S A; Frey, R E; Radloff, P T; Schernau, M; Murillo garcia, R; Porter, R A; Mccormick, C A; Karn, P J; Sliwa, K J; Demers 
konezny, S M; Strauss, M G; Mueller, J A; Izen, J M; Klimentov, A; Lynn, D; Polychronakos, V; Radeka, V; Sondericker, J I I I; Bathe, S; Duffin, S; Chen, H; De castro faria salgado, P E; Kersevan, B P; Lacker, H M; Schulz, H; Kubota, T; Tan, K G; Yabsley, B D; Nunes de moura junior, N; Pinfold, J; Soluk, R A; Ouellette, E A; Leitner, R; Sykora, T; Solar, M; Sartisohn, G; Hirschbuehl, D; Huning, D; Fischer, J; Terron cuadrado, J; Glasman kuguel, C B; Lacasta llacer, C; Lopez-amengual, J; Calvet, D; Chevaleyre, J; Daudon, F; Montarou, G; Guicheney, C; Calvet, S P J; Tyndel, M; Dervan, P J; Maxfield, S J; Hayward, H S; Beck, G; Cox, B; Da via, C; Paschalias, P; Manolopoulou, M; Ragusa, F; Cimino, D; Ezzi, M; Fiuza de barros, N F; Yildiz, H; Ciftci, A K; Turkoz, S; Zain, S B; Tegenfeldt, F; Chapman, J W; Panikashvili, N; Bocci, A; Altheimer, A D; Martin, F F; Fratina, S; Jackson, B D; Grillo, A A; Seiden, A; Watts, G T; Mangiameli, S; Johns, K A; O'grady, F T; Errede, D R; Darbo, G; Ferretto parodi, A; Leahu, M C; Farbin, A; Ye, J; Liu, T; Wijnen, T A; Naito, D; Takashima, R; Sandoval usme, C E; Zinonos, Z; Moreno llacer, M; Agricola, J B; Mcgovern, S A; Sakurai, Y; Trigger, I M; Qing, D; De silva, A S; Butin, F; Dell'acqua, A; Hawkings, R J; Lamanna, M; Mapelli, L; Passardi, G; Rembser, C; Tremblet, L; Andreazza, W; Dobos, D A; Koblitz, B; Bianco, M; Dimitrov, G V; Schlenker, S; Armbruster, A J; Rammensee, M C; Romao rodrigues, L F; Peters, K; Pozo astigarraga, M E; Yi, Y; Desch, K K; Huegging, F G; Muller, K K; Stillings, J A; Schaetzel, S; Xella, S; Hansen, J D; Colas, J; Daguin, G; Wingerter, I; Ionescu, G D; Ledroit, F; Lucotte, A; Clement, B E; Stark, J; Clemens, J; Djama, F; Knoops, E; Coadou, Y; Vigeolas-choury, E; Feligioni, L; Iconomidou-fayard, L; Imbert, P; Schaffer, A C; Nikolic, I; Trincaz-duvoid, S; Warin, P; Camard, A F; Ridel, M; Pires, S; Giacobbe, B; Spighi, R; Villa, M; Negrini, M; Sato, K; Gavrilenko, I; Akimov, A; Khovanskiy, V; Talyshev, A; Voronkov, A; Hakobyan, H; Mallik, U; Shibata, A; Konoplich, R; Barklow, T L; Koi, T; Straessner, A; Stelzer, B; Robertson, S H; Vachon, B; Stoebe, M; Keyes, R A; Wang, K; Billoud, T R V; Strickland, V; Batygov, M; Krieger, P; Palacino caviedes, G D; Gay, C W; Jiang, Y; Han, L; Liu, M; Zenis, T; Lokajicek, M; Staroba, P; Tasevsky, M; Popule, J; Svatos, M; Seifert, F; Landgraf, U; Lai, S T; Schmitt, K H; Achenbach, R; Schuh, N; Kiesling, C; Macchiolo, A; Nisius, R; Schacht, P; Von der schmitt, J G; Kortner, O; Atlay, N B; Segura sole, E; Grinstein, S; Neissner, C; Bruckner, D M; Oliver garcia, E; Boonekamp, M; Perrin, P; Gaillot, F M; Wilson, J A; Thomas, J P; Thompson, P D; Palmer, J D; Falk, I E; Chavez barajas, C A; Sutton, M R; Robinson, D; Kaneti, S A; Wu, T; Robson, A; Shaw, C; Buzatu, A; Qin, G; Jones, R; Bouhova-thacker, E V; Viehhauser, G; Weidberg, A R; Gilbert, L; Johansson, P D C; Orphanides, M; Vlachos, S; Behar harpaz, S; Papish, O; Lellouch, D J H; Turgeman, D; Benary, O; La rotonda, L; Vena, R; Tarasio, A; Marzano, F; Gabrielli, A; Di stante, L; Liberti, B; Aielli, G; Oda, S; Nozaki, M; Takeda, H; Hayakawa, T; Miyazaki, K; Maeda, J; Sugimoto, T; Pettersson, N E; Bentvelsen, S; Groenstege, H L; Lipniacka, A; Vahabi, M; Ould-saada, F; Chwastowski, J J; Hajduk, Z; Kaczmarska, A; Olszowska, J B; Trzupek, A; Staszewski, R P; Palka, M; Constantinescu, S; Jarlskog, G; Lundberg, B L A; Pearce, M; Ellert, M F; Bannikov, A; Fechtchenko, A; Iambourenko, V; Kukhtin, V; Pozdniakov, V; Topilin, N; Vorozhtsov, S; Khassanov, A; 
Fliaguine, V; Kharchenko, D; Nikolaev, K; Kotenov, K; Kozhin, A; Zenin, A; Ivashin, A; Golubkov, D; Beddall, A; Su, D; Dallapiccola, C J; Cranshaw, J M; Price, L; Stanek, R W; Gieraltowski, G; Zhang, J; Gilchriese, M; Shapiro, M; Ahlen, S; Morii, M; Taylor, F E; Miller, R J; Phillips, F H; Torrence, E C; Wheeler, S J; Benedict, B H; Napier, A; Hamilton, S F; Petrescu, T A; Boyd, G R J; Jayasinghe, A L; Smith, J M; Mc carthy, R L; Adams, D L; Le vine, M J; Zhao, X; Patwa, A M; Baker, M; Kirsch, L; Krstic, J; Simic, L; Filipcic, A; Seidel, S C; Cantore-cavalli, D; Baroncelli, A; Kind, O M; Scarcella, M J; Maidantchik, C L L; Seixas, J; Balabram filho, L E; Vorobel, V; Spousta, M; Strachota, P; Vokac, P; Slavicek, T; Bergmann, B L; Biebel, O; Kersten, S; Srinivasan, M; Trefzger, T; Vazeille, F; Insa, C; Kirk, J; Middleton, R; Burke, S; Klein, U; Morris, J D; Ellis, K V; Millward, L R; Giokaris, N; Ioannou, P; Angelidakis, S; Bouzakis, K; Andreazza, A; Perini, L; Chtcheguelski, V; Spiridenkov, E; Yilmaz, M; Kaya, U; Ernst, J; Mahmood, A; Saland, J; Kutnink, T; Holler, J; Kagan, H P; Wang, C; Pan, Y; Xu, N; Ji, H; Willis, W J; Tuts, P M; Litke, A; Wilder, M; Rothberg, J; Twomey, M S; Rizatdinova, F; Loch, P; Rutherfoord, J P; Varnes, E W; Barberis, D; Osculati-becchi, B; Brandt, A G; Turvey, A J; Benchekroun, D; Nagasaka, Y; Thanakornworakij, T; Quadt, A; Nadal serrano, J; Magradze, E; Nackenhorst, O; Musheghyan, H; Kareem, M; Chytka, L; Perez codina, E; Stelzer-chilton, O; Brunel, B; Henriques correia, A M; Dittus, F; Hatch, M; Haug, F; Hauschild, M; Huhtinen, M; Lichard, P; Schuh-erhard, S; Spigo, G; Avolio, G; Tsarouchas, C; Ahmad, I; Backes, M P; Barisits, M; Gadatsch, S; Cerv, M; Sicoe, A D; Nattamai sekar, L P; Fazio, D; Shan, L; Sun, X; Gaycken, G F; Hemperek, T; Petersen, T C; Alonso diaz, A; Moynot, M; Werlen, M; Hryn'ova, T; Gallin-martel, M; Wu, M; Touchard, F; Menouni, M; Fougeron, D; Le guirriec, E; Chollet, J C; Veillet, J; Barrillon, P; Prat, S; Krasny, M W; Roos, L; Boudarham, G; Lefebvre, G; Boscherini, D; Valentinetti, S; Acharya, B S; Miglioranzi, S; Kanzaki, J; Unno, Y; Yasu, Y; Iwasaki, H; Tokushuku, K; Maio, A; Rodrigues fernandes, B J; Pinto figueiredo raimundo ribeiro, N M; Bot, A; Shmeleva, A; Zaidan, R; Djilkibaev, R; Mincer, A I; Salnikov, A; Aracena, I A; Schwartzman, A G; Silverstein, D J; Fulsom, B G; Anulli, F; Kuhn, D; White, M J; Vetterli, M J; Stockton, M C; Mantifel, R L; Azuelos, G; Shoaleh saadi, D; Savard, P; Clark, A; Ferrere, D; Gaumer, O P; Diaz gutierrez, M A; Liu, Y; Dubnickova, A; Sykora, I; Strizenec, P; Weichert, J; Zitek, K; Naumann, T; Goessling, C; Klingenberg, R; Jakobs, K; Rurikova, Z; Werner, M W; Arnold, H R; Buscher, D; Hanke, P; Stamen, R; Dietzsch, T A; Kiryunin, A; Salihagic, D; Buchholz, P; Pacheco pages, A; Sushkov, S; Porto fernandez, M D C; Cruz josa, R; Vos, M A; Schwindling, J; Ponsot, P; Charignon, C; Kivernyk, O; Goodrick, M J; Hill, J C; Green, B J; Quarman, C V; Bates, R L; Allwood-spiers, S E; Quilty, D; Chilingarov, A; Long, R E; Barton, A E; Konstantinidis, N; Simmons, B; Davison, A R; Christodoulou, V; Wastie, R L; Gallas, E J; Cox, J; Dehchar, M; Behr, J K; Pickering, M A; Filippas, A; Panagoulias, I; Tenenbaum katan, Y D; Roth, I; Pitt, M; Citron, Z H; Benhammou, Y; Amram, N Y N; Soffer, A; Gorodeisky, R; Antonelli, M; Chiarella, V; Curatolo, M; Esposito, B; Nicoletti, G; Martini, A; Sansoni, A; Carlino, G; Del prete, T; Bini, C; Vari, R; Kuna, M; Pinamonti, M; Itoh, Y; Colijn, A P; Klous, S; Garitaonandia elejabarrieta, 
H; Rosendahl, P L; Taga, A V; Malecki, P; Malecki, P; Wolter, M W; Kowalski, T; Korcyl, G M; Caprini, M; Caprini, I; Dita, P; Olariu, A; Tudorache, A; Lytken, E; Hidvegi, A; Aliyev, M; Alexeev, G; Bardin, D; Kakurin, S; Lebedev, A; Golubykh, S; Chepurnov, V; Gostkin, M; Kolesnikov, V; Karpova, Z; Davkov, K I; Yeletskikh, I; Grishkevich, Y; Rud, V; Myagkov, A; Nikolaenko, V; Starchenko, E; Zaytsev, A; Fakhrutdinov, R; Cheine, I; Istin, S; Sahin, S; Teng, P; Chu, M L; Trilling, G H; Heinemann, B; Richoz, N; Degeorge, C; Youssef, S; Pilcher, J; Cheng, Y; Purohit, M V; Kravchenko, A; Calkins, R E; Blazey, G; Hauser, R; Koll, J D; Reinsch, A; Brost, E C; Allen, B W; Lankford, A J; Ciobotaru, M D; Slagle, K J; Haffa, B; Mann, A; Loginov, A; Cummings, J T; Loyal, J D; Skubic, P L; Boudreau, J F; Lee, B E; Redlinger, G; Wlodek, T; Carcassi, G; Sexton, K A; Yu, D; Deng, W; Metcalfe, J E; Panitkin, S; Sijacki, D; Mikuz, M; Kramberger, G; Tartarelli, G F; Farilla, A; Stanescu, C; Herrberg, R; Alconada verzini, M J; Brennan, A J; Varvell, K; Marroquim, F; Gomes, A A; Do amaral coutinho, Y; Gingrich, D; Moore, R W; Dolejsi, J; Valkar, S; Broz, J; Jindra, T; Kohout, Z; Kral, V; Mann, A W; Calfayan, P P; Langer, T; Hamacher, K; Sanny, B; Wagner, W; Flick, T; Redelbach, A R; Ke, Y; Higon-rodriguez, E; Donini, J N; Lafarguette, P; Adye, T J; Baines, J; Barnett, B; Wickens, F J; Martin, V J; Jackson, J N; Prichard, P; Kretzschmar, J; Martin, A J; Walker, C J; Potter, K M; Kourkoumelis, C; Tzamarias, S; Houiris, A G; Iliadis, D; Fanti, M; Bertolucci, F; Maleev, V; Sultanov, S; Rosenberg, E I; Krumnack, N E; Bieganek, C; Diehl, E B; Mc kee, S P; Eppig, A P; Harper, D R; Liu, C; Schwarz, T A; Mazor, B; Looper, K A; Wiedenmann, W; Huang, P; Stahlman, J M; Battaglia, M; Nielsen, J A; Zhao, T; Khanov, A; Kaushik, V S; Vichou, E; Liss, A M; Gemme, C; Morettini, P; Parodi, F; Passaggio, S; Rossi, L; Kuzhir, P; Ignatenko, A; Ferrari, R; Spairani, M; Pianori, E; Sekula, S J; Firan, A I; Cao, T; Hetherly, J W; Gouighri, M; Vassilakopoulos, V; Long, M C; Shimojima, M; Sawyer, L H; Brummett, R E; Losada, M A; Schorlemmer, A L; Mantoani, M; Bawa, H S; Mornacchi, G; Nicquevert, B; Palestini, S; Stapnes, S; Veness, R; Kotamaki, M J; Sorde, C; Iengo, P; Campana, S; Goossens, L; Zajacova, Z; Pribyl, L; Poveda torres, J; Marzin, A; Conti, G; Carrillo montoya, G D; Kroseberg, J; Gonella, L; Velz, T; Schmitt, S; Lobodzinska, E M; Lovschall-jensen, A E; Galster, G; Perrot, G; Cailles, M; Berger, N; Barnovska, Z; Delsart, P; Lleres, A; Tisserant, S; Grivaz, J; Matricon, P; Bellagamba, L; Bertin, A; Bruschi, M; De castro, S; Semprini cesari, N; Fabbri, L; Rinaldi, L; Quayle, W B; Truong, T N L; Kondo, T; Haruyama, T; Ng, C; Do valle wemans, A; Almeida veloso, F M; Konovalov, S; Ziegler, J M; Su, D; Lukas, W; Prince, S; Ortega urrego, E J; Teuscher, R J; Knecht, N; Pretzl, K; Borer, C; Gadomski, S; Koch, B; Kuleshov, S; Brooks, W K; Antos, J; Kulkova, I; Chudoba, J; Chyla, J; Tomasek, L; Bazalova, M; Messmer, I; Tobias, J; Sundermann, J E; Kuehn, S S; Kluge, E; Scharf, V L; Barillari, T; Kluth, S; Menke, S; Weigell, P; Schwegler, P; Ziolkowski, M; Casado lechuga, P M; Garcia, C; Sanchez, J; Costa mezquita, M J; Valero biot, J A; Laporte, J; Nikolaidou, R; Virchaux, M; Nguyen, V T H; Charlton, D; Harrison, K; Slater, M W; Newman, P R; Parker, A M; Ward, P; Mcgarvie, S A; Kilvington, G J; D'auria, S; O'shea, V; Mcglone, H M; Fox, H; Henderson, R; Kartvelishvili, V; Davies, B; Sherwood, P; Fraser, J T; Lancaster, M A; Tseng, J C; 
Hays, C P; Apolle, R; Dixon, S D; Parker, K A; Gazis, E; Papadopoulou, T; Panagiotopoulou, E; Karastathis, N; Hershenhorn, A D; Milov, A; Groth-jensen, J; Bilokon, H; Miscetti, S; Canale, V; Rebuzzi, D M; Capua, M; Bagnaia, P; De salvo, A; Gentile, S; Safai tehrani, F; Solfaroli camillocci, E; Sasao, N; Tsunada, K; Massaro, G; Magrath, C A; Van kesteren, Z; Beker, M G; Van den wollenberg, W; Bugge, L; Buran, T; Read, A L; Gjelsten, B K; Banas, E A; Turnau, J; Derendarz, D K; Kisielewska, D; Chesneanu, D; Rotaru, M; Maurer, J B; Wong, M L; Lund-jensen, B; Asman, B; Jon-and, K B; Silverstein, S B; Johansen, M; Alexandrov, I; Iatsounenko, I; Krumshteyn, Z; Peshekhonov, V; Rybaltchenko, K; Samoylov, V; Cheplakov, A; Kekelidze, G; Lyablin, M; Teterine, V; Bednyakov, V; Kruchonak, U; Shiyakova, M M; Demichev, M; Denisov, S P; Fenyuk, A; Djobava, T; Salukvadze, G; Cetin, S A; Brau, B P; Pais, P R; Proudfoot, J; Van gemmeren, P; Zhang, Q; Beringer, J A; Ely, R; Leggett, C; Pengg, F X; Barnett, M R; Quick, R E; Williams, S; Gardner jr, R W; Huston, J; Brock, R; Wanotayaroj, C; Unel, G N; Taffard, A C; Frate, M; Baker, K O; Tipton, P L; Hutchison, A; Walsh, B J; Norberg, S R; Su, J; Tsybyshev, D; Caballero bejar, J; Ernst, M U; Wellenstein, H; Vudragovic, D; Vidic, I; Gorelov, I V; Toms, K; Alimonti, G; Petrucci, F; Kolanoski, H; Smith, J; Jeng, G; Watson, I J; Guimaraes ferreira, F; Miranda vieira xavier, F; Araujo pereira, R; Poffenberger, P; Sopko, V; Elmsheuser, J; Wittkowski, J; Glitza, K; Gorfine, G W; Ferrer soria, A; Fuster verdu, J A; Sanchis lozano, A; Reinmuth, G; Busato, E; Haywood, S J; Mcmahon, S J; Qian, W; Villani, E G; Laycock, P J; Poll, A J; Rizvi, E S; Foster, J M; Loebinger, F; Forti, A; Plano, W G; Brown, G J A; Kordas, K; Vegni, G; Ohsugi, T; Iwata, Y; Cherkaoui el moursli, R; Sahin, M; Akyazi, E; Carlsen, A; Kanwal, B; Cochran jr, J H; Aronnax, M V; Lockner, M J; Zhou, B; Levin, D S; Weaverdyck, C J; Grom, G F; Rudge, A; Ebenstein, W L; Jia, B; Yamaoka, J; Jared, R C; Wu, S L; Banerjee, S; Lu, Q; Hughes, E W; Alkire, S P; Degenhardt, J D; Lipeles, E D; Spencer, E N; Savine, A; Cheu, E C; Lampl, W; Veatch, J R; Roberts, K; Atkinson, M J; Odino, G A; Polesello, G; Martin, T; White, A P; Stephens, R; Grinbaum sarkisyan, E; Vartapetian, A; Yu, J; Sosebee, M; Thilagar, P A; Spurlock, B; Bonde, R; Filthaut, F; Klok, P; Hoummada, A; Ouchrif, M; Pellegrini, G; Rafi tatjer, J M; Navarro, G A; Blumenschein, U; Weingarten, J C; Mueller, D; Graber, L; Gao, Y; Bode, A; Capeans garrido, M D M; Carli, T; Wells, P; Beltramello, O; Vuillermet, R; Dudarev, A; Salzburger, A; Torchiani, C I; Serfon, C L G; Sloper, J E; Duperrier, G; Lilova, P T; Knecht, M O; Lassnig, M; Anders, G; Deviveiros, P; Young, C; Sforza, F; Shaochen, C; Lu, F; Wermes, N; Wienemann, P; Schwindt, T; Hansen, P H; Hansen, J B; Pingel, A M; Massol, N; Elles, S L; Hallewell, G D; Rozanov, A; Vacavant, L; Fournier, D A; Poggioli, L; Puzo, P M; Tanaka, R; Escalier, M A; Makovec, N; Rezynkina, K; De cecco, S; Cavalleri, P G; Massa, I; Zoccoli, A; Tanaka, S; Odaka, S; Mitsui, S; Tomasio pina, J A; Santos, H F; Satsounkevitch, I; Harkusha, S; Baranov, S; Nechaeva, P; Kayumov, F; Kazanin, V; Asai, M; Mount, R P; Nelson, T K; Smith, D; Kenney, C J; Malone, C M; Kobel, M; Friedrich, F; Grohs, J P; Jais, W J; O'neil, D C; Warburton, A T; Vincter, M; Mccarthy, T G; Groer, L S; Pham, Q T; Taylor, W J; La marra, D; Perrin, E; Wu, X; Bell, W H; Delitzsch, C M; Feng, C; Zhu, C; Tokar, S; Bruncko, D; Kupco, A; Marcisovsky, M; Jakoubek, T; 
Bruneliere, R; Aktas, A; Narrias villar, D I; Tapprogge, S; Mattmann, J; Kroha, H; Crespo, J; Korolkov, I; Cavallaro, E; Cabrera urban, S; Mitsou, V; Kozanecki, W; Mansoulie, B; Pabot, Y; Etienvre, A; Bauer, F; Chevallier, F; Bouty, A R; Watkins, P; Watson, A; Faulkner, P J W; Curtis, C J; Murillo quijada, J A; Grout, Z J; Chapman, J D; Cowan, G D; George, S; Boisvert, V; Mcmahon, T R; Doyle, A T; Thompson, S A; Britton, D; Smizanska, M; Campanelli, M; Butterworth, J M; Loken, J; Renton, P; Barr, A J; Issever, C; Short, D; Crispin ortuzar, M; Tovey, D R; French, R; Rozen, Y; Alexander, G; Kreisel, A; Conventi, F; Raulo, A; Schioppa, M; Susinno, G; Tassi, E; Giagu, S; Luci, C; Nisati, A; Cobal, M; Ishikawa, A; Jinnouchi, O; Bos, K; Verkerke, W; Vermeulen, J; Van vulpen, I B; Kieft, G; Mora, K D; Olsen, F; Rohne, O M; Pajchel, K; Nilsen, J K; Wosiek, B K; Wozniak, K W; Badescu, E; Jinaru, A; Bohm, C; Johansson, E K; Sjoelin, J B R; Clement, C; Buszello, C P; Huseynova, D; Boyko, I; Popov, B; Poukhov, O; Vinogradov, V; Tsiareshka, P; Skvorodnev, N; Soldatov, A; Chuguev, A; Gushchin, V; Yazici, E; Lutz, M S; Malon, D; Vanyashin, A; Lavrijsen, W; Spieler, H; Biesiada, J L; Bahr, M; Kong, J; Tatarkhanov, M; Ogren, H; Van kooten, R J; Cwetanski, P; Butler, J M; Shank, J T; Chakraborty, D; Ermoline, I; Sinev, N; Whiteson, D O; Corso radu, A; Huang, J; Werth, M P; Kastoryano, M; Meirose da silva costa, B; Namasivayam, H; Hobbs, J D; Schamberger jr, R D; Guo, F; Potekhin, M; Popovic, D; Gorisek, A; Sokhrannyi, G; Hofsajer, I W; Mandelli, L; Ceradini, F; Graziani, E; Giorgi, F; Zur nedden, M E G; Grancagnolo, S; Volpi, M; Nunes hanninger, G; Rados, P K; Milesi, M; Cuthbert, C J; Black, C W; Fink grael, F; Fincke-keeler, M; Keeler, R; Kowalewski, R V; Berghaus, F O; Qi, M; Davidek, T; Tas, P; Jakubek, J; Duckeck, G; Walker, R; Mitterer, C A; Harenberg, T; Sandvoss, S A; Del peso, J; Llorente merino, J; Gonzalez millan, V; Irles quiles, A; Crouau, M; Gris, P L Y; Liauzu, S; Romano saez, S M; Gallop, B J; Jones, T J; Austin, N C; Morris, J; Duerdoth, I; Thompson, R J; Kelly, M P; Leisos, A; Garas, A; Pizio, C; Venda pinto, B A; Kudin, L; Qian, J; Wilson, A W; Mietlicki, D; Long, J D; Sang, Z; Arms, K E; Rahimi, A M; Moss, J J; Oh, S H; Parker, S I; Parsons, J; Cunitz, H; Vanguri, R S; Sadrozinski, H; Lockman, W S; Martinez-mc kinney, G; Goussiou, A; Jones, A; Lie, K; Hasegawa, Y; Olcese, M; Gilewsky, V; Harrison, P F; Janus, M; Spangenberg, M; De, K; Ozturk, N; Pal, A K; Darmora, S; Bullock, D J; Oviawe, O; Derkaoui, J E; Rahal, G; Sircar, A; Frey, A S; Stolte, P; Rosien, N; Zoch, K; Li, L; Schouten, D W; Catinaccio, A; Ciapetti, M; Delruelle, N; Ellis, N; Farthouat, P; Hoecker, A; Klioutchnikova, T; Macina, D; Malyukov, S; Spiwoks, R D; Unal, G P; Vandoni, G; Petersen, B A; Pommes, K; Nairz, A M; Wengler, T; Mladenov, D; Solans sanchez, C A; Lantzsch, K; Schmieden, K; Jakobsen, S; Ritsch, E; Sciuccati, A; Alves dos santos, A M; Ouyang, Q; Zhou, M; Brock, I C; Janssen, J; Katzy, J; Anders, C F; Nilsson, B S; Bazan, A; Di ciaccio, L; Yildizkaya, T; Collot, J; Malek, F; Trocme, B S; Breugnon, P; Godiot, S; Adam bourdarios, C; Coulon, J; Duflot, L; Petroff, P G; Zerwas, D; Lieuvin, M; Calderini, G; Laporte, D; Ocariz, J; Gabrielli, A; Ohska, T K; Kurochkin, Y; Kantserov, V; Vasilyeva, L; Speransky, M; Smirnov, S; Antonov, A; Bulekov, O; Tikhonov, Y; Sargsyan, L; Vardanyan, G; Budick, B; Kocian, M L; Luitz, S; Young, C C; Grenier, P J; Kelsey, M; Black, J E; Kneringer, E; Jussel, P; Horton, A J; Beaudry, J; 
Chandra, A; Ereditato, A; Topfel, C M; Mathieu, R; Bucci, F; Muenstermann, D; White, R M; He, M; Urban, J; Straka, M; Vrba, V; Schumacher, M; Parzefall, U; Mahboubi, K; Sommer, P O; Koepke, L H; Bethke, S; Moser, H; Wiesmann, M; Walkowiak, W A; Fleck, I J; Martinez-perez, M; Sanchez sanchez, C A; Jorgensen roca, S; Accion garcia, E; Sainz ruiz, C A; Valls ferrer, J A; Amoros vicente, G; Vives torrescasana, R; Ouraou, A; Formica, A; Hassani, S; Watson, M F; Cottin buracchio, G F; Bussey, P J; Saxon, D; Ferrando, J E; Collins-tooth, C L; Hall, D C; Cuhadar donszelmann, T; Dawson, I; Duxfield, R; Argyropoulos, T; Brodet, E; Livneh, R; Shougaev, K; Reinherz, E I; Guttman, N; Beretta, M M; Vilucchi, E; Aloisio, A; Patricelli, S; Caprio, M; Cevenini, F; De vecchi, C; Livan, M; Rimoldi, A; Vercesi, V; Ayad, R; Mastroberardino, A; Ciapetti, G; Luminari, L; Rescigno, M; Santonico, R; Salamon, A; Del papa, C; Kurashige, H; Homma, Y; Tomoto, M; Horii, Y; Sugaya, Y; Hanagaki, K; Bobbink, G; Kluit, P M; Koffeman, E N; Van eijk, B; Lee, H; Eigen, G; Dorholt, O; Strandlie, A; Strzempek, P B; Dita, S; Stoicea, G; Chitan, A; Leven, S S; Moa, T; Brenner, R; Ekelof, T J C; Olshevskiy, A; Roumiantsev, V; Chlachidze, G; Zimine, N; Gusakov, Y; Grigalashvili, N; Mineev, M; Potrap, I; Barashkou, A; Shoukavy, D; Shaykhatdenov, B; Pikelner, A; Gladilin, L; Ammosov, V; Abramov, A; Arik, M; Sahinsoy, M; Uysal, Z; Azizi, K; Hotinli, S C; Zhou, S; Berger, E; Blair, R; Underwood, D G; Einsweiler, K; Garcia-sciveres, M A; Siegrist, J L; Kipnis, I; Dahl, O; Holland, S; Barbaro galtieri, A; Smith, P T; Parua, N; Franklin, M; Mercurio, K M; Tong, B; Pod, E; Cole, S G; Hopkins, W H; Guest, D H; Severini, H; Marsicano, J J; Abbott, B K; Wang, Q; Lissauer, D; Ma, H; Takai, H; Rajagopalan, S; Protopopescu, S D; Snyder, S S; Undrus, A; Popescu, R N; Begel, M A; Blocker, C A; Amelung, C; Mandic, I; Macek, B; Tucker, B H; Citterio, M; Troncon, C; Orestano, D; Taccini, C; Romeo, G L; Dova, M T; Taylor, G N; Gesualdi manhaes, A; Mcpherson, R A; Sobie, R; Taylor, R P; Dolezal, Z; Kodys, P; Slovak, R; Sopko, B; Vacek, V; Sanders, M P; Hertenberger, R; Meineck, C; Becks, K; Kind, P; Sandhoff, M; Cantero garcia, J; De la torre perez, H; Castillo gimenez, V; Ros, E; Hernandez jimenez, Y; Chadelas, R; Santoni, C; Washbrook, A J; O'brien, B J; Wynne, B M; Mehta, A; Vossebeld, J H; Landon, M; Teixeira dias castanheira, M; Cerrito, L; Keates, J R; Fassouliotis, D; Chardalas, M; Manousos, A; Grachev, V; Seliverstov, D; Sedykh, E; Cakir, O; Ciftci, R; Edson, W; Prell, S A; Rosati, M; Stroman, T; Jiang, H; Neal, H A; Li, X; Gan, K K; Smith, D S; Kruse, M C; Ko, B R; Leung fook cheong, A M; Cole, B; Angerami, A R; Greene, Z S; Kroll, J I; Van berg, R P; Forbush, D A; Lubatti, H; Raisher, J; Shupe, M A; Wolin, S; Oshita, H; Gaudio, G; Das, R; Konig, A C; Croft, V A; Harvey, A; Maaroufi, F; Melo, I; Greenwood jr, Z D; Shabalina, E; Mchedlidze, G; Drechsler, E; Rieger, J K; Blackston, M; Colombo, T

    2002-01-01

    ATLAS is a general-purpose experiment for recording proton-proton collisions at the LHC. The ATLAS collaboration consists of 144 participating institutions (June 1998) with more than 1750 physicists and engineers (700 from non-Member States). The detector design has been optimized to cover the largest possible range of LHC physics: searches for Higgs bosons and alternative schemes for the spontaneous symmetry-breaking mechanism; searches for supersymmetric particles, new gauge bosons, leptoquarks, and quark and lepton compositeness indicating extensions to the Standard Model and new physics beyond it; studies of the origin of CP violation via high-precision measurements of CP-violating B-decays; high-precision measurements of the third quark family such as the top-quark mass and decay properties, rare decays of B-hadrons, spectroscopy of rare B-hadrons, and $B^0_s$-mixing. The ATLAS detector, shown in the figure, includes an inner tracking detector inside a 2 T solenoid providing an axial...

  2. PanDA for ATLAS Distributed Computing in the Next Decade

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2016-01-01

    The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarde...

  3. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2014-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of ATLAS experiment the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run the large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  4. Design and Performance of the Virtualization Platform for Offline computing on the ATLAS TDAQ Farm

    CERN Document Server

    Ballestrero, S; The ATLAS collaboration; Brasolin, F; Contescu, C; Di Girolamo, A; Lee, C J; Pozo Astigarraga, M E; Scannicchio, D A; Twomey, M S; Zaytsev, A

    2013-01-01

    With the LHC collider at CERN currently going through the period of Long Shutdown 1 (LS1) there is a remarkable opportunity to use the computing resources of the large trigger farms of the experiments for other data processing activities. In the case of ATLAS experiment the TDAQ farm, consisting of more than 1500 compute nodes, is particularly suitable for running Monte Carlo production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of all the stages of Sim@P1 project dedicated to the design and deployment of a virtualized platform running on the ATLAS TDAQ computing resources and using it to run the large groups of CernVM based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to avoid interference with TDAQ usage of the farm and to guarantee the security and the usability of the ATLAS private network; Openstack has been chosen to provide a cloud management layer. The approaches to organizing support for the sustained operation of...

  5. Evolution of the ATLAS PanDA workload management system for exascale computational science

    International Nuclear Information System (INIS)

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.
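
    The abstract highlights "automated dynamic brokering of diverse workloads across distributed computing resources". As a toy illustration of that idea (not PanDA code), the sketch below routes a task to the site with the best combination of free CPU slots and locally resident input data; the site attributes, dataset names and scoring weights are invented for the example.

        # Hypothetical brokering sketch: prefer sites that already hold the inputs,
        # with a smaller bonus for available CPU slots.
        def broker(task, sites):
            def score(site):
                locality = len(task["inputs"] & site["datasets"]) / max(len(task["inputs"]), 1)
                return 0.7 * locality + 0.3 * min(site["free_slots"] / 1000.0, 1.0)
            return max(sites, key=score)["name"]

        sites = [
            {"name": "SITE_A", "free_slots": 1200, "datasets": {"data15", "mc16a"}},
            {"name": "SITE_B", "free_slots": 300,  "datasets": {"mc16a", "mc16d"}},
        ]
        task = {"inputs": {"mc16a", "mc16d"}}
        print(broker(task, sites))   # -> SITE_B: better data locality despite fewer slots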

  6. Evolution of the ATLAS PanDA workload management system for exascale computational science

    Science.gov (United States)

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration

    2014-06-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  7. Using Cloud Computing To Create A Multi-Wavelength Atlas Of The Galactic Plane

    Science.gov (United States)

    Berriman, G. B.; Good, J.; Rynge, M.; Juve, G.; Deelman, E.; Kinney, J.; Merrihew, A.

    2014-01-01

    We describe by example how to optimize cloud-computing resources offered by Amazon Web Services (AWS) to create and curate new datasets at scale. We are producing a co-registered atlas of the Galactic Plane at 16 wavelengths from 1 micron to 24 microns with a spatial sampling of 1 arcsec. The atlas is being created by using the Montage mosaic engine to generate co-registered mosaics of images released by the major surveys WISE, 2MASS, ADASS, GLIMPSE and MIPSGAL. The Atlas, when complete, will be 45 TB in size, composed of over 9,600 5 deg x 5 deg tiles with one degree overlap between them. The dataset will be housed on Amazon S3, designed for at-scale storage with access via web protocols. It will be publicly accessible through an API that will support access to the data and creation of cutouts according to the users’ specifications. The processing, which is estimated to require 340,000 compute hours for completion, has exploited virtual clusters created and managed on AWS platforms through the Pegasus workflow management system. We will describe the optimization methods, compute time and processing costs, as a guide for others wishing to exploit cloud platforms for processing and data creation.
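
    To make the tiling scheme concrete, the following back-of-the-envelope sketch estimates how many 5 deg x 5 deg tiles with one degree of overlap (an effective stride of 4 degrees) are needed to cover a Galactic-plane strip; the assumed latitude extent is illustrative and not a figure from the project.

        import math

        def tile_count(lon_extent_deg, lat_extent_deg, tile_deg=5.0, overlap_deg=1.0):
            """Number of overlapping tiles covering a lon x lat strip."""
            stride = tile_deg - overlap_deg
            return math.ceil(lon_extent_deg / stride) * math.ceil(lat_extent_deg / stride)

        tiles_per_band = tile_count(360.0, 20.0)   # assumed +/-10 deg latitude coverage
        n_bands = 16                                # 1 to 24 micron wavelengths (from the abstract)
        print(tiles_per_band, "tiles per band,", tiles_per_band * n_bands, "tiles in total")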

  8. Monitoring of computing resource utilization of the ATLAS experiment

    CERN Document Server

    Rousseau, D; The ATLAS collaboration; Vukotic, I; Aidel, O; Schaffer, RD; Albrand, S

    2012-01-01

    Due to the good performance of the LHC accelerator, the ATLAS experiment has seen higher than anticipated levels for both the event rate and the average number of interactions per bunch crossing. In order to respond to these changing requirements, the current and future usage of CPU, memory and disk resources has to be monitored, understood and acted upon. This requires data collection at a fairly fine level of granularity: the performance of each object written and each algorithm run, as well as a dozen per-job variables, are gathered for the different processing steps of Monte Carlo generation and simulation and the reconstruction of both data and Monte Carlo. We present a system to collect and visualize the data from both the online Tier-0 system and distributed grid production jobs. Around 40 GB of performance data are expected from up to 200k jobs per day, thus making performance optimization of the underlying Oracle database of utmost importance.
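
    A minimal, hypothetical sketch of the kind of roll-up such a monitoring system performs is shown below: per-job records carrying per-algorithm CPU times are aggregated into totals and averages. The record layout is invented for illustration and is not the actual ATLAS schema.

        from collections import defaultdict

        def aggregate(job_records):
            """job_records: iterable of dicts like {"algorithm": str, "cpu_s": float}."""
            totals = defaultdict(lambda: {"cpu_s": 0.0, "calls": 0})
            for rec in job_records:
                entry = totals[rec["algorithm"]]
                entry["cpu_s"] += rec["cpu_s"]
                entry["calls"] += 1
            return {alg: {"total_cpu_s": v["cpu_s"], "mean_cpu_s": v["cpu_s"] / v["calls"]}
                    for alg, v in totals.items()}

        print(aggregate([{"algorithm": "TrackFinder", "cpu_s": 12.3},
                         {"algorithm": "TrackFinder", "cpu_s": 11.7},
                         {"algorithm": "CaloClusterMaker", "cpu_s": 4.2}]))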

  9. Integrated monitoring of the ATLAS online computing farm

    CERN Document Server

    Ballestrero, Sergio; The ATLAS collaboration

    2016-01-01

    The online farm of the ATLAS experiment at the LHC, consisting of nearly 4000 PCs with various characteristics, provides configuration and control of the detector and performs the collection, processing, selection and conveyance of event data from the front-end electronics to mass storage. The status and health of every host must be constantly monitored to ensure the correct and reliable operation of the whole online system. This is the first line of defense, which should not only promptly provide alerts in case of failure but, whenever possible, warn of impending issues. The monitoring system should be able to check up to 100000 health parameters and provide alerts on a selected subset. In this paper we present the implementation and validation of our new monitoring and alerting system based on Icinga 2 and Ganglia. We describe how the load distribution and high availability features of Icinga 2 allowed us to have a centralised but scalable system, with a configuration model that allows full flexibility whil...
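
    Icinga 2 executes Nagios-compatible check plugins on monitored hosts: exit code 0 is OK, 1 WARNING, 2 CRITICAL, and performance data follows a "|" in the output. The sketch below is a minimal example of such a plugin reporting the 1-minute load average; the thresholds are illustrative, not the values used on the ATLAS farm.

        #!/usr/bin/env python3
        import os
        import sys

        WARN, CRIT = 8.0, 16.0   # assumed thresholds for this example

        def main():
            load1, _, _ = os.getloadavg()
            perfdata = f"load1={load1:.2f};{WARN};{CRIT}"
            if load1 >= CRIT:
                print(f"CRITICAL - load {load1:.2f} | {perfdata}")
                return 2
            if load1 >= WARN:
                print(f"WARNING - load {load1:.2f} | {perfdata}")
                return 1
            print(f"OK - load {load1:.2f} | {perfdata}")
            return 0

        if __name__ == "__main__":
            sys.exit(main())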

  10. Computational mouse atlases and their application to automatic assessment of craniofacial dysmorphology caused by the Crouzon mutation Fgfr2

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Darvann, Tron Andre; Hermann, Nuno V.;

    2007-01-01

    Crouzon syndrome is characterised by premature fusion of sutures and synchondroses. Recently the first mouse model of the syndrome was generated, having the mutation Cys342Tyr in Fgfr2c, equivalent to the most common human Crouzon/Pfeiffer syndrome mutation. In this study, a set of Micro CT scannings of the skulls of wild-type mice and Crouzon mice were analysed with respect to the dysmorphology caused by Crouzon syndrome. A computational craniofacial atlas was built automatically from the set of wild-type mouse Micro CT volumes using (i) affine and (ii) nonrigid image registration. ... The displacements obtained when registering the nonrigid wild-type atlas to a nonrigid Crouzon mouse atlas were determined on the surface of the wild-type atlas. This revealed a 0.6 mm bending in the nasal region and a 0.8 mm shortening of the zygoma, which are similar to characteristics previously reported...

  11. A step towards a computing grid for the LHC experiments: ATLAS Data Challenge 1

    Energy Technology Data Exchange (ETDEWEB)

    Sturrock, R.; Bischof, R.; Epp, B.; Ghete, V.M.; Kuhn, D.; Mello, A.G.; Caron, B.; Vetterli, M.C.; Karapetian, G.; Martens, K.; Agarwal, A.; Poffenberger, P.; McPherson, R.A.; Sobie, R.J.; Armstrong, S.; Benekos, N.; Boisvert, V.; Boonekamp, M.; Brandt, S.; Casado, P.; Elsing, M.; Gianotti, F.; Goossens, L.; Grote, M.; Jansen, J.B.; Mair, K.; Nairz, A.; Padilla, C.; Poppleton, A.; Poulard, G.; Richter-Was, E.; Rosati, S.; Schoerner-Sadenius, T.; Wengler, T.; Xu, G.F.; Ping, J.L.; Chudoba, J.; Kosina, J.; Lokajicek, M.; Svec, J.; Tas, P.; Hansen, J.R.; Lytken, E.; Nielsen, J.L.; Waananen, A.; Tapprogge, S.; Calvet, D.; Albrand, S.; Collot, J.; Fulachier, J.; Ledroit-Guillon, F.; Ohlsson-Malek, S.; Viret, S.; Wielers, M.; Bernardet, K.; Correard, S.; Rozanov, A.; de Vivie de Regie, J-B.; Arnault, C.; Bourdarios, C.; Hrivnac, J.; Lechowski, M.; Parrour, G.; Perus, A.; Rousseau, D.; Schaffer, A.; Unal, G.; Derue, F.; Chevalier, L.; Hassani, S.; Laporte, J-F.; Nicolaidou, R.; Pomarede, D.; Virchaux, M.; Nesvadba, N.; Baranov, Sergei; Putzer, A.; Khonich, A.; Duckeck, G.; Schieferdecker, P.; Kiryunin, A.; Schieck, J.; Lagouri, Th.; Duchovni, E.; Levinson, L.; Schrager, D.; Negri, G.; Bilokon, H.; Spogli, L.; Barberis, D.; Parodi, F.; Cataldi, G.; Gorini, E.; Primavera, M.; Spagnolo, S.; Cavalli, D.; Heldmann, M.; Lari, T.; Perini, L.; Rebatto, D.; Resconi, S.; Tartarelli, F.; Vaccarossa, L.; Biglietti, M.; Carlino, G.; Conventi, F.; Doria, A.; Merola, L.; Polesello, G.; Vercesi, V.; De Salvo, A.; Di Mattia, A.; Luminari, L.; Nisati, A.; Reale, M.; Testa, M.; Farilla, A.; Verducci, M.; Cobal, M.; Santi, L.; Hasegawa, Y.; Ishino, M.; Mashimo, T.; Matsumoto, H.; Sakamoto, H.; Tanaka, J.; Ueda, I.; Bentvelsen, S.; Fornaini, A.; Gorfine, G.; Groep, D.; Templon, J.; Koster, J.; Konstantinov, A.; Myklebust, T.; Ould-Saada, F.; Bold, T.; Kaczmarska, A.; Malecki, P.; Szymocha, T.; Turala, M.; Kulchitsky, Y.; Khoreauli, G.; Gromova, N.; Tsulaia, V.; et al.

    2004-04-23

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002 whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge was the preparation and the deployment of the software required for the production of large event samples as a worldwide-distributed activity. It should be noted that it was not an option to "run everything at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. The great challenge of organizing and then carrying out this large-scale production at a significant number of sites around the world had therefore to be faced. However, the benefits of this are manifold: apart from realizing the required computing resources, this exercise created worldwide momentum for ATLAS computing as a whole. This report describes in detail the main steps carried out in DC1 and what has been learned from them as a step towards a computing Grid for the LHC experiments.

  12. Monitoring of Computing Resource Use of Active Software Releases in ATLAS

    CERN Document Server

    Limosani, Antonio; The ATLAS collaboration

    2016-01-01

    The LHC is the world's most powerful particle accelerator, colliding protons at a centre-of-mass energy of 13 TeV. As the energy and frequency of collisions have grown in the search for new physics, so too has demand for computing resources needed for event reconstruction. We will report on the evolution of resource usage in terms of CPU and RAM in key ATLAS offline reconstruction workflows at the Tier-0 at CERN and on the WLCG. Monitoring of workflows is achieved using the ATLAS PerfMon package, which is the standard ATLAS performance monitoring system running inside Athena jobs. Systematic daily monitoring has recently been expanded to include all workflows beginning at Monte Carlo generation through to end user physics analysis, beyond that of event reconstruction. Moreover, the move to a multiprocessor mode in production jobs has facilitated the use of tools, such as "MemoryMonitor", to measure the memory shared across processors in jobs. Resource consumption is broken down into software domains and displayed...
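
    As a sketch of how shared memory in a multi-process job can be assessed on Linux (this is an illustration, not the ATLAS MemoryMonitor itself), the proportional set size (PSS) reported in /proc/<pid>/smaps_rollup charges shared pages fractionally to each process, so summing PSS over workers stays meaningful while summing RSS double-counts shared pages. Requires a reasonably recent Linux kernel.

        def rss_pss_kb(pid="self"):
            """Read RSS and PSS (in kB) for one process from /proc."""
            rss = pss = 0
            with open(f"/proc/{pid}/smaps_rollup") as f:
                for line in f:
                    if line.startswith("Rss:"):
                        rss = int(line.split()[1])
                    elif line.startswith("Pss:"):
                        pss = int(line.split()[1])
            return rss, pss

        if __name__ == "__main__":
            rss, pss = rss_pss_kb()
            print(f"RSS={rss} kB  PSS={pss} kB  shared-adjusted saving={rss - pss} kB")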

  13. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Göttingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2010-01-01

    GoeGrid is a grid resource center located in Goettingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center will be presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster will be detailed. The benefits are an efficient use of computer and manpower resources. Further interdisciplinary projects include jointly organized courses for students of all fields to support education on grid computing.

  14. A Computer-Based Atlas of a Rat Dissection.

    Science.gov (United States)

    Quentin-Baxter, Megan; Dewhurst, David

    1990-01-01

    A hypermedia computer program that uses text, graphics, sound, and animation with associative information linking techniques to teach the functional anatomy of a rat is described. The program includes a nonintimidating tutor, to which the student may turn. (KR)

  15. ATLAS distributed computing operation shift teams experience during the discovery year and beginning of the long shutdown 1

    International Nuclear Information System (INIS)

    ATLAS Distributed Computing Operation Shifts evolve to meet new requirements. New monitoring tools as well as operational changes lead to modifications in organization of shifts. In this paper we describe the structure of shifts, the roles of different shifts in ATLAS computing grid operation, the influence of a Higgs-like particle discovery on shift operation, the achievements in monitoring and automation that allowed extra focus on the experiment priority tasks, and the influence of the Long Shutdown 1 and operational changes related to the no beam period.

  16. ATLAS Tier-2 at the Compute Resource Center GoeGrid in Goettingen

    CERN Document Server

    Meyer, J; The ATLAS collaboration; Weber, P

    2011-01-01

    GoeGrid is a grid resource center located in Göttingen, Germany. The resources are commonly used, funded, and maintained by communities doing research in the fields of grid development, computer science, biomedicine, high energy physics, theoretical physics, astrophysics, and the humanities. For the high energy physics community GoeGrid serves as a Tier-2 center for the ATLAS experiment as part of the world-wide LHC computing grid (WLCG). The status and performance of the Tier-2 center is presented with a focus on the interdisciplinary setup and administration of the cluster. Given the various requirements of the different communities on the hardware and software setup, the challenge of the common operation of the cluster is detailed. The benefits are an efficient use of computer and manpower resources.

  17. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid

    International Nuclear Information System (INIS)

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider of CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performances, such as the precision of the energy reconstruction or the efficiency to identify particles. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment with simulated data and data taken during the combined test beam of 2004. The analysis of the Atlas data implies the use of a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  18. Integrating Network Awareness in ATLAS Distributed Computing Using the ANSE Project

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Petrosyan, Artem; Batista, Jorge Horacio; Mc Kee, Shawn Patrick

    2015-01-01

    A crucial contributor to the success of the massively scaled global computing system that delivers the analysis needs of the LHC experiments is the networking infrastructure upon which the system is built. The experiments have been able to exploit excellent high-bandwidth networking in adapting their computing models for the most efficient utilization of resources. New advanced networking technologies now becoming available such as software defined networking hold the potential of further leveraging the network to optimize workflows and dataflows, through proactive control of the network fabric on the part of high level applications such as experiment workload management and data management systems. End to end monitoring of networks using perfSONAR combined with data flow performance metrics further allows applications to adapt based on real time conditions. We will describe efforts underway in ATLAS on integrating network awareness at the application level, particularly in workload management, building upon ...
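
    A simplified sketch of the network-aware decision described above: given recent throughput measurements between site pairs (as perfSONAR-style metrics might provide), a transfer or job is assigned the source replica with the highest measured bandwidth to the destination. All site names and numbers are illustrative.

        def pick_source(destination, replicas, throughput_mbps):
            """replicas: sites holding the data; throughput_mbps: {(src, dst): Mb/s}."""
            return max(replicas, key=lambda src: throughput_mbps.get((src, destination), 0.0))

        throughput = {("CERN", "MWT2"): 800.0, ("BNL", "MWT2"): 2500.0}
        print(pick_source("MWT2", ["CERN", "BNL"], throughput))   # -> BNL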

  19. SynapSense Wireless Environmental Monitoring System of the RHIC & ATLAS Computing Facility at BNL

    Science.gov (United States)

    Casella, K.; Garcia, E.; Hogue, R.; Hollowell, C.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, A.

    2014-06-01

    RHIC & ATLAS Computing Facility (RACF) at BNL is a 15000 sq. ft. facility hosting the IT equipment of the BNL ATLAS WLCG Tier-1 site, offline farms for the STAR and PHENIX experiments operating at the Relativistic Heavy Ion Collider (RHIC), the BNL Cloud installation, various Open Science Grid (OSG) resources, and many other small physics research oriented IT installations. The facility originated in 1990 and grew steadily up to the present configuration with 4 physically isolated IT areas with the maximum rack capacity of about 1000 racks and the total peak power consumption of 1.5 MW. In June 2012 a project was initiated with the primary goal to replace several environmental monitoring systems deployed earlier within RACF with a single commercial hardware and software solution by SynapSense Corporation based on wireless sensor groups and proprietary SynapSense™ MapSense™ software that offers a unified solution for monitoring the temperature and humidity within the rack/CRAC units as well as pressure distribution underneath the raised floor across the entire facility. The deployment was completed successfully in 2013. The new system also supports a set of additional features such as capacity planning based on measurements of total heat load, power consumption monitoring and control, CRAC unit power consumption optimization based on feedback from the temperature measurements and overall power usage efficiency estimations that are not currently implemented within RACF but may be deployed in the future.
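
    The power usage effectiveness (PUE) estimate mentioned at the end of the abstract is simply total facility power divided by IT equipment power; a value of 1.0 would mean every watt goes to computing. The figures below are made up for illustration and are not RACF measurements.

        def pue(it_kw, cooling_kw, other_kw):
            """Power usage effectiveness: total facility power over IT power."""
            return (it_kw + cooling_kw + other_kw) / it_kw

        print(f"PUE = {pue(it_kw=1000.0, cooling_kw=450.0, other_kw=50.0):.2f}")   # 1.50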

  20. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2014-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  1. Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science

    CERN Document Server

    Maeno, T; The ATLAS collaboration; Klimentov, A; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T; Yu, D

    2013-01-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated a...

  2. Computing challenges in the certification of ATLAS Tile Calorimeter front-end electronics during maintenance periods

    CERN Document Server

    Solans, C; The ATLAS collaboration; Kim, H Y; Moreno, P; Reed, R; Sandrock, C; Ruan, X; Shalyugin, A; Schettino, V; Souza, J; Usai, G; Valero, A

    2013-01-01

    After two years of operation of the LHC, the ATLAS Tile Calorimeter is undergoing the consolidation process of its front-end electronics. The first layer of certification of the repairs is performed in the experimental area with a portable test-bench which is capable of controlling and reading out all the inputs and outputs of one front-end module through dedicated cables. This test-bench has been redesigned to improve the quality assessment of the data until the end of Phase I. It is now possible to identify low occurrence errors due to its increased read-out bandwidth and perform more sophisticated quality checks due to its enhanced computing power. Improved results provide fast and reliable feedback to the user.

  3. Tier-1 reprocessing and other key grid computing activities within the ATLAS-Gridka cloud

    International Nuclear Information System (INIS)

    Computing in ATLAS is organized in so-called Tier-1 clouds. The Tier-1 provides crucial services for DDM and production, which have been developed and extensively tested over the last years. A further key activity of a Tier-1 is data reprocessing, which requires bulk reading of RAW data from tape and is therefore I/O intensive. An efficient performance of the tape system I/O is thus very important, and tape reading tests have been done with the aim of optimizing the system. The talk presents the progress made and the current status compared with the expected performance. An overview of the current status and progress in the other areas is also given.
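
    One kind of optimization that helps bulk RAW recalls from tape is sketched below (an illustration, not the actual site tooling): requested files are grouped by cartridge and sorted by their position on the tape, so each cartridge is mounted once and read sequentially. The file metadata layout is hypothetical.

        from collections import defaultdict

        def recall_plan(files):
            """files: iterable of dicts with 'name', 'tape', 'position'."""
            by_tape = defaultdict(list)
            for f in files:
                by_tape[f["tape"]].append(f)
            return {tape: sorted(group, key=lambda f: f["position"])
                    for tape, group in by_tape.items()}

        plan = recall_plan([
            {"name": "raw.0003", "tape": "T101", "position": 42},
            {"name": "raw.0001", "tape": "T101", "position": 7},
            {"name": "raw.0002", "tape": "T205", "position": 3},
        ])
        for tape, group in plan.items():
            print(tape, [f["name"] for f in group])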

  4. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    Science.gov (United States)

    Öhman, Henrik; Panitkin, Sergey; Hendrix, Valerie; Atlas Collaboration

    2014-06-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such is the contextualization of computing resources with regard to requirements of the user and his experiment. In particular on Google's new cloud platform Google Compute Engine (GCE) upload of user's virtual machine images is not possible. This precludes application of ready to use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.
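
    A rough sketch of the approach: because custom images cannot be uploaded, a plain base image is contextualized at boot through a startup script that installs Puppet and applies a manifest. The package name, manifest URL and the use of a GCE "startup-script" metadata key are assumptions for illustration, not a tested recipe from the paper.

        # Render a boot-time startup script that installs Puppet and applies a
        # (hypothetical) node manifest; the script would then be attached to new
        # instances as instance metadata.
        STARTUP_SCRIPT = """#!/bin/bash
        apt-get update && apt-get install -y puppet
        curl -o /tmp/worker.pp https://example.org/manifests/atlas_worker.pp
        puppet apply /tmp/worker.pp
        """

        with open("startup-script.sh", "w") as f:
            f.write(STARTUP_SCRIPT)
        # e.g. gcloud compute instances create worker-1 \
        #        --metadata-from-file startup-script=startup-script.sh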

  5. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    International Nuclear Information System (INIS)

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. The new cloud technologies also come with new challenges, and one such is the contextualization of computing resources with regard to requirements of the user and his experiment. In particular on Google's new cloud platform Google Compute Engine (GCE) upload of user's virtual machine images is not possible. This precludes application of ready to use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration and dynamic resource scaling.

  6. Voxelwise atlas rating for computer assisted diagnosis: Application to congenital heart diseases of the great arteries.

    Science.gov (United States)

    Zuluaga, Maria A; Burgos, Ninon; Mendelson, Alex F; Taylor, Andrew M; Ourselin, Sébastien

    2015-12-01

    Atlas-based analysis methods rely on the morphological similarity between the atlas and target images, and on the availability of labelled images. Problems can arise when the deformations introduced by pathologies affect the similarity between the atlas and a patient's image. The aim of this work is to exploit the morphological dissimilarities between atlas databases and pathological images to diagnose the underlying clinical condition, while avoiding the dependence on labelled images. We propose a voxelwise atlas rating approach (VoxAR) relying on multiple atlas databases, each representing a particular condition. Using a local image similarity measure to assess the morphological similarity between the atlas and target images, a rating map displaying for each voxel the condition of the atlases most similar to the target is defined. The final diagnosis is established by assigning the condition of the database the most represented in the rating map. We applied the method to diagnose three different conditions associated with dextro-transposition of the great arteries, a congenital heart disease. The proposed approach outperforms other state-of-the-art methods using annotated images, with an accuracy of 97.3% when evaluated on a set of 60 whole heart MR images containing healthy and pathological subjects using cross validation.
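
    A compact numpy sketch of the voxelwise rating idea described above (not the authors' implementation): for every voxel the atlas most similar to the target is found, the voxel is labelled with that atlas's condition, and the diagnosis is the condition that dominates the rating map. Absolute intensity difference stands in here for the local image similarity measure, and the toy volumes are random placeholders.

        import numpy as np

        def voxar_diagnose(target, atlases, conditions):
            """target: 3D array; atlases: co-registered 3D arrays; conditions: label per atlas."""
            dissim = np.stack([np.abs(target - a) for a in atlases])   # (n_atlas, x, y, z)
            best = np.argmin(dissim, axis=0)                            # most similar atlas per voxel
            labels = np.asarray(conditions)[best]                       # voxelwise rating map
            values, counts = np.unique(labels, return_counts=True)
            return values[np.argmax(counts)]

        rng = np.random.default_rng(0)
        healthy = rng.normal(0.0, 1.0, (8, 8, 8))
        pathological = healthy + 2.0
        target = healthy + rng.normal(0.0, 0.1, (8, 8, 8))
        print(voxar_diagnose(target, [healthy, pathological], ["healthy", "pathological"]))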

  7. PanDA: A New Paradigm for Distributed Computing in HEP Through the Lens of ATLAS and other Experiments

    CERN Document Server

    De, K; The ATLAS collaboration; Maeno, T; Nilsson, P; Wenaus, T

    2014-01-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide, thousands of physicists analyzing the data need remote access to hundreds of computing sites, the volume of processed data is beyond the exabyte scale, and data processing requires more than a billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of computing in HEP was discarded in favor of a far more flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at a million computing jobs per day, and processing over an exabyte of data in 2013. We will describe the design and implementation of PanDA, present data on the performance of PanDA a...

  8. Using Puppet to contextualize computing resources for ATLAS analysis on Google Compute Engine

    CERN Document Server

    Öhman, H; The ATLAS collaboration; Hendrix, V

    2014-01-01

    With the advent of commercial as well as institutional and national clouds, new opportunities for on-demand computing resources for the HEP community become available. With the new cloud technologies come also new challenges, and one such is the contextualization of cloud resources with regard to requirements of the user and his experiment. In particular on Google's new cloud platform Google Compute Engine (GCE) upload of user's virtual machine images is not possible, which precludes application of ready to use technologies like CernVM and forces users to build and contextualize their own VM images from scratch. We investigate the use of Puppet to facilitate contextualization of cloud resources on GCE, with particular regard to ease of configuration, dynamic resource scaling, and high degree of scalability.

  9. Analysis of Metabolomics Datasets with High-Performance Computing and Metabolite Atlases

    Directory of Open Access Journals (Sweden)

    Yushu Yao

    2015-07-01

    Even with the widespread use of liquid chromatography mass spectrometry (LC/MS) based metabolomics, there are still a number of challenges facing this promising technique. Many, diverse experimental workflows exist, yet there is a lack of infrastructure and systems for tracking and sharing of information. Here, we describe the Metabolite Atlas framework and interface that provides highly-efficient, web-based access to raw mass spectrometry data in concert with assertions about chemicals detected, to help address some of these challenges. This integration, by design, enables experimentalists to explore their raw data and to specify and refine feature annotations so that they can be leveraged for future experiments. Fast queries of the data through the web using SciDB, a parallelized database for high performance computing, make this process operate quickly. By using scripting containers, such as IPython or Jupyter, to analyze the data, scientists can utilize a wide variety of freely available graphing, statistics, and information management resources. In addition, the interfaces facilitate integration with systems biology tools to ultimately link metabolomics data with biological models.


  10. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI

    DEFF Research Database (Denmark)

    Iglesias, Juan Eugenio; Augustinack, Jean C.; Nguyen, Khoa;

    2015-01-01

    level using ultra-high resolution, ex vivo MRI. Fifteen autopsy samples were scanned at 0.13 mm isotropic resolution (on average) using customized hardware. The images were manually segmented into 13 different hippocampal substructures using a protocol specifically designed for this study; precise...... from the in vivo and ex vivo data were combined into a single computational atlas of the hippocampal formation with a novel atlas building algorithm based on Bayesian inference. The resulting atlas can be used to automatically segment the hippocampal subregions in structural MRI images, using...... datasets with different types of MRI contrast. The results show that the atlas and companion segmentation method: 1) can segment T1 and T2 images, as well as their combination, 2) replicate findings on mild cognitive impairment based on high-resolution T2 data, and 3) can discriminate between Alzheimer...

  11. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, Edison Cray XC30 supercomputer, backfill at Tier 2 and Tier 3 sites, opportunistic resources at the Open Science Grid (OSG), and ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.
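
    A toy sketch of why finer granularity helps on preemptible resources: the work is split into small event ranges, each committed as soon as it is done, so a preempted job loses at most the range in flight instead of a whole multi-hour job. The range size and structure below are illustrative, not the Event Service implementation.

        def event_ranges(first_event, n_events, range_size=50):
            """Yield (start, end) event ranges covering [first_event, first_event + n_events)."""
            stop = first_event + n_events
            for start in range(first_event, stop, range_size):
                yield (start, min(start + range_size, stop))

        ranges = list(event_ranges(0, 10000, range_size=50))
        print(len(ranges), ranges[:2], ranges[-1])   # 200 small, independently recoverable units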

  12. Scaling up ATLAS Event Service to production levels on opportunistic computing platforms

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration; Ernst, Michael; Guan, Wen; Hover, John; Lesny, David; Maeno, Tadashi; Nilsson, Paul; Tsulaia, Vakhtang; van Gemmeren, Peter; Vaniachine, Alexandre; Wang, Fuquan; Wenaus, Torre

    2016-01-01

    Continued growth in public cloud and HPC resources is on track to exceed the dedicated resources available for ATLAS on the WLCG. Examples of such platforms are Amazon AWS EC2 Spot Instances, the Edison Cray XC30 supercomputer, backfill at the Tier-2 and Tier-3 sites, opportunistic resources at the Open Science Grid, and the ATLAS High Level Trigger farm between the data taking periods. Because of specific aspects of opportunistic resources, such as preemptive job scheduling and data I/O, their efficient usage requires workflow innovations provided by the ATLAS Event Service. Thanks to the finer granularity of the Event Service data processing workflow, the opportunistic resources are used more efficiently. We report on our progress in scaling opportunistic resource usage to double-digit levels in ATLAS production.

  13. The ATLAS Analysis Model

    CERN Multimedia

    Amir Farbin

    The ATLAS Analysis Model is a continually developing vision of how to reconcile physics analysis requirements with the ATLAS offline software and computing model constraints. In the past year this vision has influenced the evolution of the ATLAS Event Data Model, the Athena software framework, and physics analysis tools. These developments, along with the October Analysis Model Workshop and the planning for CSC analyses have led to a rapid refinement of the ATLAS Analysis Model in the past few months. This article introduces some of the relevant issues and presents the current vision of the future ATLAS Analysis Model. Event Data Model The ATLAS Event Data Model (EDM) consists of several levels of details, each targeted for a specific set of tasks. For example the Event Summary Data (ESD) stores calorimeter cells and tracking system hits thereby permitting many calibration and alignment tasks, but will be only accessible at particular computing sites with potentially large latency. In contrast, the Analysis...

  14. Computational neuroanatomy: mapping cell-type densities in the mouse brain, simulations from the Allen Brain Atlas

    Science.gov (United States)

    Grange, Pascal

    2015-09-01

    The Allen Brain Atlas of the adult mouse (ABA) consists of digitized expression profiles of thousands of genes in the mouse brain, co-registered to a common three-dimensional template (the Allen Reference Atlas). This brain-wide, genome-wide data set has triggered a renaissance in neuroanatomy. Its voxelized version (with cubic voxels of side 200 microns) is available for desktop computation in MATLAB. On the other hand, brain cells exhibit a great phenotypic diversity (in terms of size, shape and electrophysiological activity), which has inspired the names of some well-studied cell types, such as granule cells and medium spiny neurons. However, no exhaustive taxonomy of brain cells is available. A genetic classification of brain cells is being undertaken, and some cell types have been characterized by their transcriptome profiles. However, given a cell type characterized by its transcriptome, it is not clear where else in the brain similar cells can be found. The ABA can be used to solve this region-specificity problem in a data-driven way: rewriting the brain-wide expression profiles of all genes in the atlas as a sum of cell-type-specific transcriptome profiles is equivalent to solving a quadratic optimization problem at each voxel in the brain. However, the estimated brain-wide densities of 64 cell types published recently were based on one series of co-registered coronal in situ hybridization (ISH) images per gene, whereas the online ABA contains several image series per gene, including sagittal ones. In the presented work, we simulate the variability of cell-type densities in a Monte Carlo way by repeatedly drawing a random image series for each gene and solving the optimization problem. This yields error bars on the region-specificity of cell types.
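
    The per-voxel problem described above can be illustrated with a small non-negative least-squares fit: the expression vector of one voxel across all genes is written as a non-negative combination of cell-type transcriptome profiles. The dimensions and data below are random placeholders, not ABA values.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        n_genes, n_cell_types = 200, 5
        profiles = rng.random((n_genes, n_cell_types))        # cell-type transcriptome profiles
        true_density = np.array([0.0, 1.2, 0.0, 0.4, 0.0])     # densities to recover
        voxel = profiles @ true_density + 0.01 * rng.random(n_genes)

        density, residual = nnls(profiles, voxel)              # solve min ||P d - v||, d >= 0
        print(np.round(density, 2), round(residual, 3))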

  15. Computing infrastructure for ATLAS data analysis in the Italian Grid cloud

    International Nuclear Information System (INIS)

    ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, the operational experience after 10 months of LHC data, and discusses site performances.

  16. Computed tomography of the retroperitoneum: an anatomical and pathological atlas with emphasis on the fascial planes

    International Nuclear Information System (INIS)

    The aim of this thesis is to provide a descriptive clinical pathological CT atlas of a range of conditions involving retroperitoneum and neighbouring organs and structures (excluding the pelvic part of the retroperitoneum). Chapter 1 describes the patient material studied, some aspects of CT techniques and patient handling. Chapter 2 describes the anatomy of the renal fascia based upon reports derived from the literature and is followed by our CT observations in more than 5000 abdominal CT examinations. In short it is an anatomical CT atlas. Chapters 3, 4 and 5 deal with reactions of the fascial structures in different pathological conditions caused by major disease entities. The patients were scanned for these diseases, of which anatomical topographical appearances and spread are described in the general considerations, followed by CT findings and illustrative cases, combined with abstracted experience from other workers. (Auth.)

  17. Computing challenges in the certification of ATLAS Tile Calorimeter front-end electronics during maintenance periods

    International Nuclear Information System (INIS)

    After two years of operation of the LHC, the ATLAS Tile calorimeter is undergoing a consolidation process of its front-end electronics. The certification is performed in the experimental area with a portable test-bench which is capable of controlling and reading out one front-end module through dedicated cables. This test-bench has been redesigned to improve the tests of the electronics functionality and the quality assessment of the data until the end of Phase I.

  18. Computing challenges in the certification of ATLAS Tile Calorimeter front-end electronics during maintenance periods

    CERN Document Server

    Solans, C; The ATLAS collaboration; Kim, H Y; Moreno, P; Reed, R; Sandrock, C; Ruan, X; Shalyugin, A; Schettino, V; Souza, J; Usai, G; Valero, A

    2014-01-01

    After two years of operation of the LHC, the ATLAS Tile calorimeter is undergoing the consolidation process of its front-end electronics. The certification is performed in the experimental area with a portable test-bench which is capable of controlling and reading out all the inputs and outputs of one front-end module through dedicated cables. This test-bench has been redesigned to improve the quality assessment of the data until the end of Phase I.

  19. Pocket atlas of sectional anatomy: computed tomography and magnetic resonance imaging. Vol. 3. Spine, extremities, joints

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, T.B.; Reif, E. [Caritas Hospital, Dillingen (Germany). Dept. of Radiology

    2007-07-01

    Magnetic resonance imaging (MRI) of the musculoskeletal system is an established and important component in the diagnosis of diseases of the joints, soft tissues, bones, and bone marrow. We are therefore pleased to collect together images of the joints and the spinal column in a separate volume on the musculoskeletal system. Demonstrating the growing importance of new developments in MRI in recent years, with ever-increasing resolution, many images were acquired with 3-tesla units. We are deeply grateful to the manufacturers, Siemens and Philips, for making this possible. We believe that colored atlases are the ideal medium to represent the highly detailed images achieved nowadays with improved resolution techniques. Volume 3 of the Pocket Atlas of Sectional Anatomy provides a color illustration facing each magnetic resonance image, as in the preceding volumes on the skull, thorax, and abdomen. To ensure the greatest possible precision in details, we still produce these illustrations ourselves. Each is accompanied by a sectional image and an orientation aid. Uniform color schemes ensure optimal clarity, as similar structures, such as arteries, veins, nerves, tendons, etc., are consistently represented in the same color. Individual muscle groups are represented uniformly, but differentiated from other muscle groups, so that classification is possible even when numerous groups of muscles are shown in the same image. Maximal lucidity prevails even in highly detailed representations. This is made possible by the high quality of the production and printing processes that are characteristic of Thieme International. (orig.)

  20. 26th February 2009 - US Google Vice President and Chief Internet Evangelist V. Cerf signing the guest book with Director for research and Computing S. Bertolucci; visiting ATLAS control room and experimental area with Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    HI-0902038 05: IT Department Head, F. Hemmer; US Google Vice President and Chief Internet Evangelist V. Cerf; Computing Security Officer and Colloquium Convenor D. R. Myers; Member of the Internet Society Advisory Council F. Flückiger; Director for Research and Scientific Computing, S. Bertolucci; Honorary Staff Member, B. Segal. HI-0902038 16: Computing Security Officer and Colloquium Convenor D. R. Myers; UC Irvine, ATLAS Deputy Spokesperson elect A. J. Lankford; US Google Vice President and Chief Internet Evangelist V. Cerf; ATLAS Collaboration Spokesperson P. Jenni; IT Department Head, F. Hemmer.

  1. Anatomic atlas for computed tomography in the mesaticephalic dog: caudal abdomen and pelvis

    International Nuclear Information System (INIS)

    The purpose of this study was to produce a comprehensive anatomic atlas of CT anatomy of the dog for use by veterinary radiologists, clinicians, and surgeons. Whole-body CT images of two mature beagle dogs were made with the dogs supported in sternal recumbency and using a slice thickness of 13 mm. At the end of the CT session, each dog was euthanized, and while carefully maintaining the same position, the body was frozen. The body was then sectioned at 13-mm intervals, with the cuts matched as closely as possible to the CT slices. The frozen sections were cleaned, photographed, and radiographed using xeroradiography. Each CT image was studied and compared with its corresponding xeroradiograph and anatomic section to assist in the accurate identification of specific structures. Clinically relevant anatomic structures were identified and labeled in the three corresponding photographs (CT image, xeroradiograph, and anatomic section). In previous papers, the head and neck, and the thorax and cranial abdomen of the mesaticephalic (beagle) dog were presented. In this paper, the caudal part of the abdomen and pelvis of the bitch and male dog are presented.

  2. Anatomic atlas for computed tomography in the mesaticephalic dog: head and neck

    International Nuclear Information System (INIS)

    The purpose of this study was to produce a comprehensive anatomic atlas of CT anatomy of the dog for use by veterinary radiologists, clinicians, and surgeons. Whole-body CT images of two mature beagle dogs were made with the dogs supported in sternal recumbency and using a slice thickness of 13 mm. The head was scanned using high-resolution imaging with a slice thickness of 8 mm. At the end of the CT session, each dog was euthanized, and while carefully maintaining the same position, the body was placed in a walk-in freezer until completely frozen. The body was then sectioned at 13-mm (head at 8-mm) intervals, with the cuts matched as closely as possible to the CT slices. The frozen sections were cleaned, photographed, and radiographed using xeroradiography. Each CT image was studied and compared with its corresponding xeroradiograph and anatomic section to assist in the accurate identification of specific structures. Intact, sagittally sectioned, and disarticulated dog skulls were used as reference models. Clinically relevant anatomic structures were identified and labeled in the three corresponding photographs (CT image, xeroradiograph, and anatomic section). In this paper, the CT anatomy of the head and neck of the mesaticephalic dog is presented.

  3. The evolution of the trigger and data acquisition system in the ATLAS experiment (CHEP2013: 20. international conference on computing in high energy and nuclear physics)

    International Nuclear Information System (INIS)

    The ATLAS experiment, which records the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of this upgrade is to add robustness and flexibility to the selection and the conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. While the TDAQ system successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions. With higher luminosities, the required number and complexity of Level-1 triggers will increase in order to satisfy the physics goals of ATLAS, while keeping the total Level-1 rates at or below 100 kHz. The Central Trigger Processor will be upgraded to increase the number of manageable inputs and accommodate additional hardware for improved performance, and a new Topological Processor will be included. A single homogeneous high level trigger system will be deployed: the current second and third trigger levels will be executed together on a single hardware node. This design has many advantages: the radical simplification of the architecture, the flexible and automatically balanced distribution of the computing resources, and the sharing of code and services on nodes. In this paper, we report on the design and the development status of the upgraded TDAQ system, with particular attention to the tests currently ongoing to identify the required performance and to spot its possible limitations.

  4. ATLAS cloud R and D

    International Nuclear Information System (INIS)

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R and D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R and D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R and D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R and D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  5. ATLAS Cloud R&D

    Science.gov (United States)

    Panitkin, Sergey; Barreiro Megino, Fernando; Caballero Bejar, Jose; Benjamin, Doug; Di Girolamo, Alessandro; Gable, Ian; Hendrix, Val; Hover, John; Kucharczyk, Katarzyna; Medrano Llamas, Ramon; Love, Peter; Ohman, Henrik; Paterson, Michael; Sobie, Randall; Taylor, Ryan; Walker, Rodney; Zaytsev, Alexander; Atlas Collaboration

    2014-06-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained a significant insight into the cloud computing landscape and has identified points that still need to be addressed in order to fully utilize this technology. This contribution will explain the cloud integration models that are being evaluated and will discuss ATLAS' learning during the collaboration with leading commercial and academic cloud providers.

  6. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid; Reconstruction et identification des electrons dans l'experience Atlas. Participation a la mise en place d'un Tier 2 de la grille de calcul

    Energy Technology Data Exchange (ETDEWEB)

    Derue, F

    2008-03-15

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider of CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment with simulated data and data taken during the combined test beam of 2004. The analysis of the Atlas data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  7. The ATLAS Distributed Data Management project: Past and Future

    CERN Document Server

    Garonne, V; The ATLAS collaboration

    2012-01-01

    ATLAS has recorded almost 8PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 90PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All this data is managed by the ATLAS Distributed Data Management system, called Don Quijote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs, and to help ATLAS physicists get access to this data. In this paper, we describe new and improved DQ2 services, and the experience of data management operation in ATLAS computing, showing how these services enable the management of petabyte-scale computing operations. We also present the concepts of the new version of the ATLAS Distributed Data Management (DDM) system, Rucio.

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  9. ATLAS production system

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Golubkov, Dmitry; Maeno, Tadashi; Mashinistov, Ruslan; Wenaus, Torre; Padolski, Siarhei

    2016-01-01

    The second generation of the ATLAS production system, called ProdSys2, is a distributed workload manager used by thousands of physicists to analyze data remotely, with the volume of processed data beyond the exabyte scale, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition, based on criteria such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as grid sites, clouds, supercomputers and volunteer computers. Besides job definition, the Production System also includes a flexible web user interface, which implements a user-friendly environment for the main ATLAS workflows, e.g. a simple way of combining different data flows, and real-time monitoring, optimised for presenting large amounts of information. We present an overview of the ATLAS Production System major components: job and task definition, workflow manager web user i...
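
    As a purely illustrative sketch of the dynamic job definition idea, under assumed names and thresholds (Job, define_jobs and route are hypothetical and not ProdSys2 or PanDA interfaces), input size, memory and CPU estimates can drive both the splitting of a task into jobs and the choice of resource class:

```python
# Purely illustrative sketch of dynamic job definition and resource routing;
# the names below are hypothetical, not ProdSys2/PanDA APIs.
from dataclasses import dataclass


@dataclass
class Job:
    files: list        # input files assigned to this job
    mem_mb: int        # expected memory footprint
    cpu_hours: float   # expected CPU consumption


def define_jobs(input_files, bytes_per_file, max_bytes_per_job,
                mem_mb_per_job, cpu_hours_per_byte):
    """Greedily pack input files into jobs that respect an input-size limit."""
    jobs, batch, batch_bytes = [], [], 0
    for f in input_files:
        if batch and batch_bytes + bytes_per_file[f] > max_bytes_per_job:
            jobs.append(Job(batch, mem_mb_per_job,
                            cpu_hours_per_byte * batch_bytes))
            batch, batch_bytes = [], 0
        batch.append(f)
        batch_bytes += bytes_per_file[f]
    if batch:
        jobs.append(Job(batch, mem_mb_per_job, cpu_hours_per_byte * batch_bytes))
    return jobs


def route(job):
    """Toy scheduling policy: match a job to a class of resources."""
    if job.mem_mb > 4000:
        return "high-memory / HPC"
    if job.cpu_hours < 1:
        return "opportunistic (cloud, volunteer computing)"
    return "grid"
```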

  10. ATLAS DQ2 DELETION SERVICE

    CERN Document Server

    Oleynik, D; The ATLAS collaboration; Garonne, V; Campana, S

    2012-01-01

    The ATLAS DQ2 Deletion service is a subsystem of the ATLAS Distributed Data Management (DDM) project DQ2. DDM DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 130 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The ATLAS DQ2 Deletion service is responsible for serving deletion requests on the grid by interacting with grid middleware and the DQ2 catalogues. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this talk, special attention is paid to the technical details used to achieve the high performance of the service, accomplished without overloading either site storage, catalogues or other DQ2 components. Specific features of the database backend implementation will also be described. A special section will be devoted to the deletion monitoring service, which gives operators a detailed view of the working system.
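
    A hypothetical sketch of how such a service can combine bounded retries with check-pointing, with delete_replica standing in for the interaction with grid middleware and the DQ2 catalogues (all names here are assumptions, not the actual DQ2 code):

```python
# Hypothetical sketch of a deletion worker with bounded retries and
# check-pointing; delete_replica is a stand-in for the grid-middleware call.
import json
import time


def process_requests(requests, delete_replica, checkpoint_path,
                     max_retries=3, backoff_s=5):
    """Serve deletion requests, retrying transient failures and recording
    finished work so that a restart does not repeat it."""
    try:
        with open(checkpoint_path) as f:
            done = set(json.load(f))
    except FileNotFoundError:
        done = set()

    failed = []
    for req in requests:
        rid = req["replica_id"]
        if rid in done:
            continue                      # already deleted before a restart
        for attempt in range(1, max_retries + 1):
            try:
                delete_replica(req["site"], rid)
                done.add(rid)
                break
            except Exception:
                if attempt == max_retries:
                    failed.append(rid)    # give up; leave it for the next cycle
                else:
                    time.sleep(backoff_s * attempt)   # simple backoff
        with open(checkpoint_path, "w") as f:
            json.dump(sorted(done), f)    # checkpoint after each request
    return failed
```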

  11. Methods and computing challenges of the realistic simulation of physics events in the presence of pile-up in the ATLAS experiment

    CERN Document Server

    Chapman, J D; The ATLAS collaboration

    2014-01-01

    We are now in a regime where we observe a substantial number of multiple proton-proton collisions within each filled LHC bunch-crossing and also multiple filled bunch-crossings within the sensitive time window of the ATLAS detector. This will increase further with the higher luminosities expected in the near future. Including these effects in Monte Carlo simulation poses significant computing challenges. We present a description of the standard approach used by the ATLAS experiment and details of how we manage the conflicting demands of keeping the background dataset size as small as possible while minimizing the effect of background event re-use. We also present details of the methods used to minimize the memory footprint of these digitization jobs, to keep them within the grid limit, despite combining the information from thousands of simulated events at once. We also describe an alternative approach, known as Overlay. Here, the actual detector conditions are sampled from raw data using a special zero-bias trigger, and the simulated physi...
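
    The re-use problem can be illustrated with a toy mixing loop: the number of overlaid minimum-bias events per signal event is Poisson-distributed, and drawing them from a finite cache makes some re-use inevitable. A sketch under these assumptions (not the ATLAS digitization code):

```python
# Illustrative sketch of pile-up mixing: the number of in-time interactions
# per bunch crossing is Poisson-distributed with mean mu, and background
# events are drawn from a finite minimum-bias cache, so some re-use of
# background events is unavoidable and worth tracking.
import numpy as np


def mix_pileup(n_signal_events, mu, cache_size, seed=0):
    """Return, for each signal event, the indices of the overlaid background
    events, plus the largest number of times any cached event was re-used."""
    rng = np.random.default_rng(seed)
    reuse = np.zeros(cache_size, dtype=int)
    overlays = []
    for _ in range(n_signal_events):
        n_pu = rng.poisson(mu)                       # in-time pile-up count
        picks = rng.integers(cache_size, size=n_pu)  # draw from the cache
        np.add.at(reuse, picks, 1)
        overlays.append(picks)
    return overlays, int(reuse.max())
```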

  12. Distributed analysis in ATLAS

    Science.gov (United States)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  13. Distributed analysis in ATLAS

    CERN Document Server

    Legger, Federica; The ATLAS collaboration

    2015-01-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We r...

  14. ATLAS Cloud R&D

    CERN Document Server

    Panitkin, S; The ATLAS collaboration; Caballero Bejar, J; Benjamin, D; DiGirolamo, A; Gable, I; Hendrix, V; Hover, J; Kucharczuk, K; Medrano LLamas, R; Love, P; Ohman, H; Paterson, M; Sobie, R; Taylor, R; Walker, R; Zaytsev, A

    2014-01-01

    The computing model of the ATLAS experiment was designed around the concept of grid computing and, since the start of data taking, this model has proven very successful. However, new cloud computing technologies bring attractive features to improve the operations and elasticity of scientific distributed computing. ATLAS sees grid and cloud computing as complementary technologies that will coexist at different levels of resource abstraction, and two years ago created an R&D working group to investigate the different integration scenarios. The ATLAS Cloud Computing R&D has been able to demonstrate the feasibility of offloading work from grid to cloud sites and, as of today, is able to integrate transparently various cloud resources into the PanDA workload management system. The ATLAS Cloud Computing R&D is operating various PanDA queues on private and public resources and has provided several hundred thousand CPU days to the experiment. As a result, the ATLAS Cloud Computing R&D group has gained...

  15. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based BP autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases were taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, average similarity indices of the morphometrically selected atlas group were significantly higher than the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy.
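
    For reference, the overlap measures quoted above can be computed from two binary masks as follows; the DSC and Jaccard formulas are standard, while the inclusion-index definition used here (overlap divided by the gold-standard volume) is an assumption:

```python
# Sketch of the three overlap measures for two binary segmentation masks
# (automatic vs gold standard), given as numpy arrays of equal shape.
# The inclusion-index definition below is an assumption.
import numpy as np


def overlap_scores(auto_mask, gold_mask):
    auto_mask = auto_mask.astype(bool)
    gold_mask = gold_mask.astype(bool)
    intersection = np.logical_and(auto_mask, gold_mask).sum()
    union = np.logical_or(auto_mask, gold_mask).sum()
    dsc = 2.0 * intersection / (auto_mask.sum() + gold_mask.sum())
    jaccard = intersection / union
    inclusion = intersection / gold_mask.sum()
    return dsc, jaccard, inclusion
```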

  16. The ATLAS Simulation Infrastructure

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; Abi, Babak; Abolins, Maris; Abramowicz, Halina; Abreu, Henso; Acharya, Bobby Samir; Adams, David; Addy, Tetteh; Adelman, Jahred; Adorisio, Cristina; Adragna, Paolo; Adye, Tim; Aefsky, Scott; Aguilar-Saavedra, Juan Antonio; Aharrouche, Mohamed; Ahlen, Steven; Ahles, Florian; Ahmad, Ashfaq; Ahmed, Hossain; Ahsan, Mahsana; Aielli, Giulio; Akdogan, Taylan; Åkesson, Torsten Paul Ake; Akimoto, Ginga; Akimov , Andrei; Aktas, Adil; Alam, Mohammad; Alam, Muhammad Aftab; Albrand, Solveig; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexandre, Gauthier; Alexopoulos, Theodoros; Alhroob, Muhammad; Aliev, Malik; Alimonti, Gianluca; Alison, John; Aliyev, Magsud; Allport, Phillip; Allwood-Spiers, Sarah; Almond, John; Aloisio, Alberto; Alon, Raz; Alonso, Alejandro; Alviggi, Mariagrazia; Amako, Katsuya; Amelung, Christoph; Amorim, Antonio; Amorós, Gabriel; Amram, Nir; Anastopoulos, Christos; Andeen, Timothy; Anders, Christoph Falk; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Anduaga, Xabier; Angerami, Aaron; Anghinolfi, Francis; Anjos, Nuno; Annovi, Alberto; Antonaki, Ariadni; Antonelli, Mario; Antonelli, Stefano; Antos, Jaroslav; Antunovic, Bijana; Anulli, Fabio; Aoun, Sahar; Arabidze, Giorgi; Aracena, Ignacio; Arai, Yasuo; Arce, Ayana; Archambault, John-Paul; Arfaoui, Samir; Arguin, Jean-Francois; Argyropoulos, Theodoros; Arik, Metin; Armbruster, Aaron James; Arnaez, Olivier; Arnault, Christian; Artamonov, Andrei; Arutinov, David; Asai, Makoto; Asai, Shoji; Silva, José; Asfandiyarov, Ruslan; Ask, Stefan; Åsman, Barbro; Asner, David; Asquith, Lily; Assamagan, Ketevi; Astbury, Alan; Astvatsatourov, Anatoli; Atoian, Grigor; Auerbach, Benjamin; Augsten, Kamil; Aurousseau, Mathieu; Austin, Nicholas; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, David; Ay, Cano; Azuelos, Georges; Azuma, Yuya; Baak, Max; Bach, Andre; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Badescu, Elisabeta; Bagnaia, Paolo; Bai, Yu; Bain, Travis; Baines, John; Baker, Mark; Baker, Oliver Keith; Baker, Sarah; Baltasar Dos Santos Pedrosa, Fernando; Banas, Elzbieta; Banerjee, Piyali; Banerjee, Swagato; Banfi, Danilo; Bangert, Andrea Michelle; Bansal, Vikas; Baranov, Sergey; Baranov, Sergei; Barashkou, Andrei; Barber, Tom; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Bardin, Dmitri; Barillari, Teresa; Barisonzi, Marcello; Barklow, Timothy; Barlow, Nick; Barnett, Bruce; Barnett, Michael; Baroncelli, Antonio; Barr, Alan; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Barrillon, Pierre; Bartoldus, Rainer; Bartsch, Detlef; Bates, Richard; Batkova, Lucia; Batley, Richard; Battaglia, Andreas; Battistin, Michele; Bauer, Florian; Bawa, Harinder Singh; Bazalova, Magdalena; Beare, Brian; Beau, Tristan; Beauchemin, Pierre-Hugues; Beccherle, Roberto; Becerici, Neslihan; Bechtle, Philip; Beck, Graham; Beck, Hans Peter; Beckingham, Matthew; Becks, Karl-Heinz; Beddall, Ayda; Beddall, Andrew; Bednyakov, Vadim; Bee, Christopher; Begel, Michael; Behar Harpaz, Silvia; Behera, Prafulla; Beimforde, Michael; Belanger-Champagne, Camille; Bell, Paul; Bell, William; Bella, Gideon; Bellagamba, Lorenzo; Bellina, Francesco; Bellomo, Massimiliano; Belloni, Alberto; Belotskiy, Konstantin; Beltramello, Olga; Ben Ami, Sagi; Benary, Odette; Benchekroun, Driss; Bendel, Markus; Benedict, Brian Hugues; Benekos, Nektarios; Benhammou, Yan; Benincasa, Gianpaolo; Benjamin, Douglas; Benoit, Mathieu; Bensinger, 
James; Benslama, Kamal; Bentvelsen, Stan; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Berghaus, Frank; Berglund, Elina; Beringer, Jürg; Bernat, Pauline; Bernhard, Ralf; Bernius, Catrin; Berry, Tracey; Bertin, Antonio; Besana, Maria Ilaria; Besson, Nathalie; Bethke, Siegfried; Bianchi, Riccardo-Maria; Bianco, Michele; Biebel, Otmar; Biesiada, Jed; Biglietti, Michela; Bilokon, Halina; Bindi, Marcello; Binet, Sebastien; Bingul, Ahmet; Bini, Cesare; Biscarat, Catherine; Bitenc, Urban; Black, Kevin; Blair, Robert; Blanchard, Jean-Baptiste; Blanchot, Georges; Blocker, Craig; Blondel, Alain; Blum, Walter; Blumenschein, Ulrike; Bobbink, Gerjan; Bocci, Andrea; Boehler, Michael; Boek, Jennifer; Boelaert, Nele; Böser, Sebastian; Bogaerts, Joannes Andreas; Bogouch, Andrei; Bohm, Christian; Bohm, Jan; Boisvert, Veronique; Bold, Tomasz; Boldea, Venera; Bondarenko, Valery; Bondioli, Mario; Boonekamp, Maarten; Bordoni, Stefania; Borer, Claudia; Borisov, Anatoly; Borissov, Guennadi; Borjanovic, Iris; Borroni, Sara; Bos, Kors; Boscherini, Davide; Bosman, Martine; Boterenbrood, Hendrik; Bouchami, Jihene; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boulahouache, Chaouki; Bourdarios, Claire; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozovic-Jelisavcic, Ivanka; Bracinik, Juraj; Braem, André; Branchini, Paolo; Brandenburg, George; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Bratzler, Uwe; Brau, Benjamin; Brau, James; Braun, Helmut; Brelier, Bertrand; Bremer, Johan; Brenner, Richard; Bressler, Shikma; Britton, Dave; Brochu, Frederic; Brock, Ian; Brock, Raymond; Brodet, Eyal; Bromberg, Carl; Brooijmans, Gustaaf; Brooks, William; Brown, Gareth; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruneliere, Renaud; Brunet, Sylvie; Bruni, Alessia; Bruni, Graziano; Bruschi, Marco; Bucci, Francesca; Buchanan, James; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Budick, Burton; Büscher, Volker; Bugge, Lars; Bulekov, Oleg; Bunse, Moritz; Buran, Torleiv; Burckhart, Helfried; Burdin, Sergey; Burgess, Thomas; Burke, Stephen; Busato, Emmanuel; Bussey, Peter; Buszello, Claus-Peter; Butin, Françcois; Butler, Bart; Butler, John; Buttar, Craig; Butterworth, Jonathan; Byatt, Tom; Caballero, Jose; Cabrera Urbán, Susana; Caforio, Davide; Cakir, Orhan; Calafiura, Paolo; Calderini, Giovanni; Calfayan, Philippe; Calkins, Robert; Caloba, Luiz; Calvet, David; Camarri, Paolo; Cameron, David; Campana, Simone; Campanelli, Mario; Canale, Vincenzo; Canelli, Florencia; Canepa, Anadi; Cantero, Josu; Capasso, Luciano; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Caputo, Regina; Caramarcu, Costin; Cardarelli, Roberto; Carli, Tancredi; Carlino, Gianpaolo; Carminati, Leonardo; Caron, Bryan; Caron, Sascha; Carrillo Montoya, German D.; Carron Montero, Sebastian; Carter, Antony; Carter, Janet; Carvalho, João; Casadei, Diego; Casado, Maria Pilar; Cascella, Michele; Castaneda Hernandez, Alfredo Martin; Castaneda-Miranda, Elizabeth; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Cataldi, Gabriella; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Cattani, Giordano; Caughron, Seth; Cauz, Diego; Cavalleri, Pietro; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Ceradini, Filippo; Cerqueira, Augusto Santiago; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Kevin; Chapman, John Derek; Chapman, John Wehrley; Chareyre, Eve; Charlton, Dave; Chavda, Vikash; Cheatham, Susan; Chekanov, Sergei; 
Chekulaev, Sergey; Chelkov, Gueorgui; Chen, Hucheng; Chen, Shenjian; Chen, Xin; Cheplakov, Alexander; Chepurnov, Vladimir; Cherkaoui El Moursli, Rajaa; Tcherniatine, Valeri; Chesneanu, Daniela; Cheu, Elliott; Cheung, Sing-Leung; Chevalier, Laurent; Chevallier, Florent; Chiarella, Vitaliano; Chiefari, Giovanni; Chikovani, Leila; Childers, John Taylor; Chilingarov, Alexandre; Chiodini, Gabriele; Chizhov, Mihail; Choudalakis, Georgios; Chouridou, Sofia; Christidi, Illectra-Athanasia; Christov, Asen; Chromek-Burckhart, Doris; Chu, Ming-Lee; Chudoba, Jiri; Ciapetti, Guido; Ciftci, Abbas Kenan; Ciftci, Rena; Cinca, Diane; Cindro, Vladimir; Ciobotaru, Matei Dan; Ciocca, Claudia; Ciocio, Alessandra; Cirilli, Manuela; Citterio, Mauro; Clark, Allan G.; Clark, Philip James; Cleland, Bill; Clemens, Jean-Claude; Clement, Benoit; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H.; Coggeshall, James; Cogneras, Eric; Colijn, Auke-Pieter; Collard, Caroline; Collins, Neil; Collins-Tooth, Christopher; Collot, Johann; Colon, German; Conde Muiño, Patricia; Coniavitis, Elias; Consonni, Michele; Constantinescu, Serban; Conta, Claudio; Conventi, Francesco; Cooke, Mark; Cooper, Ben; Cooper-Sarkar, Amanda; Cooper-Smith, Neil; Copic, Katherine; Cornelissen, Thijs; Corradi, Massimo; Corriveau, Francois; Corso-Radu, Alina; Cortes-Gonzalez, Arely; Cortiana, Giorgio; Costa, Giuseppe; Costa, María José; Costanzo, Davide; Costin, Tudor; Côté, David; Coura Torres, Rodrigo; Courneyea, Lorraine; Cowan, Glen; Cowden, Christopher; Cox, Brian; Cranmer, Kyle; Cranshaw, Jack; Cristinziani, Markus; Crosetti, Giovanni; Crupi, Roberto; Crépé-Renaudin, Sabine; Cuenca Almenar, Cristóbal; Cuhadar Donszelmann, Tulay; Curatolo, Maria; Curtis, Chris; Cwetanski, Peter; Czyczula, Zofia; D'Auria, Saverio; D'Onofrio, Monica; D'Orazio, Alessia; Da Via, Cinzia; Dabrowski, Wladyslaw; Dai, Tiesheng; Dallapiccola, Carlo; Dallison, Steve; Daly, Colin; Dam, Mogens; Danielsson, Hans Olof; Dannheim, Dominik; Dao, Valerio; Darbo, Giovanni; Darlea, Georgiana Lavinia; Davey, Will; Davidek, Tomas; Davidson, Nadia; Davidson, Ruth; Davies, Merlin; Davison, Adam; Dawson, Ian; Daya, Rozmin; De, Kaushik; de Asmundis, Riccardo; De Castro, Stefano; De Castro Faria Salgado, Pedro; De Cecco, Sandro; de Graat, Julien; De Groot, Nicolo; de Jong, Paul; De Mora, Lee; De Oliveira Branco, Miguel; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vivie De Regie, Jean-Baptiste; De Zorzi, Guido; Dean, Simon; Dedovich, Dmitri; Degenhardt, James; Dehchar, Mohamed; Del Papa, Carlo; Del Peso, Jose; Del Prete, Tarcisio; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delsart, Pierre-Antoine; Deluca, Carolina; Demers, Sarah; Demichev, Mikhail; Demirkoz, Bilge; Deng, Jianrong; Deng, Wensheng; Denisov, Sergey; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deviveiros, Pier-Olivier; Dewhurst, Alastair; DeWilde, Burton; Dhaliwal, Saminder; Dhullipudi, Ramasudhakar; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Domenico, Antonio; Di Girolamo, Alessandro; Di Girolamo, Beniamino; Di Luise, Silvestro; Di Mattia, Alessandro; Di Nardo, Roberto; Di Simone, Andrea; Di Sipio, Riccardo; Diaz, Marco Aurelio; Diblen, Faruk; Diehl, Edward; Dietrich, Janet; Dietzsch, Thorsten; Diglio, Sara; Dindar Yagci, Kamile; Dingfelder, Jochen; Dionisi, Carlo; Dita, Petre; Dita, Sanda; Dittus, Fridolin; Djama, Fares; Djilkibaev, Rashid; Djobava, Tamar; do Vale, Maria 
Aline Barros; Do Valle Wemans, André; Doan, Thi Kieu Oanh; Dobos, Daniel; Dobson, Ellie; Dobson, Marc; Doglioni, Caterina; Doherty, Tom; Dolejsi, Jiri; Dolenc, Irena; Dolezal, Zdenek; Dolgoshein, Boris; Dohmae, Takeshi; Donega, Mauro; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dos Anjos, Andre; Dotti, Andrea; Dova, Maria-Teresa; Doxiadis, Alexander; Doyle, Tony; Drasal, Zbynek; Dris, Manolis; Dubbert, Jörg; Duchovni, Ehud; Duckeck, Guenter; Dudarev, Alexey; Dudziak, Fanny; Dührssen , Michael; Duflot, Laurent; Dufour, Marc-Andre; Dunford, Monica; Duran Yildiz, Hatice; Dushkin, Andrei; Duxfield, Robert; Dwuznik, Michal; Düren, Michael; Ebenstein, William; Ebke, Johannes; Eckweiler, Sebastian; Edmonds, Keith; Edwards, Clive; Egorov, Kirill; Ehrenfeld, Wolfgang; Ehrich, Thies; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Eisenhandler, Eric; Ekelof, Tord; El Kacimi, Mohamed; Ellert, Mattias; Elles, Sabine; Ellinghaus, Frank; Ellis, Katherine; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Engelmann, Roderich; Engl, Albert; Epp, Brigitte; Eppig, Andrew; Erdmann, Johannes; Ereditato, Antonio; Eriksson, Daniel; Ermoline, Iouri; Ernst, Jesse; Ernst, Michael; Ernwein, Jean; Errede, Deborah; Errede, Steven; Ertel, Eugen; Escalier, Marc; Escobar, Carlos; Espinal Curull, Xavier; Esposito, Bellisario; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Fabbri, Laura; Fabre, Caroline; Facius, Katrine; Fakhrutdinov, Rinat; Falciano, Speranza; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farley, Jason; Farooque, Trisha; Farrington, Sinead; Farthouat, Philippe; Fassnacht, Patrick; Fassouliotis, Dimitrios; Fatholahzadeh, Baharak; Fayard, Louis; Fayette, Florent; Febbraro, Renato; Federic, Pavol; Fedin, Oleg; Fedorko, Woiciech; Feligioni, Lorenzo; Felzmann, Ulrich; Feng, Cunfeng; Feng, Eric; Fenyuk, Alexander; Ferencei, Jozef; Ferland, Jonathan; Fernandes, Bruno; Fernando, Waruna; Ferrag, Samir; Ferrando, James; Ferrara, Valentina; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferrer, Antonio; Ferrer, Maria Lorenza; Ferrere, Didier; Ferretti, Claudio; Fiascaris, Maria; Fiedler, Frank; Filipčič, Andrej; Filippas, Anastasios; Filthaut, Frank; Fincke-Keeler, Margret; Fiolhais, Miguel; Fiorini, Luca; Firan, Ana; Fischer, Gordon; Fisher, Matthew; Flechl, Martin; Fleck, Ivor; Fleckner, Johanna; Fleischmann, Philipp; Fleischmann, Sebastian; Flick, Tobias; Flores Castillo, Luis; Flowerdew, Michael; Fonseca Martin, Teresa; Formica, Andrea; Forti, Alessandra; Fortin, Dominique; Fournier, Daniel; Fowler, Andrew; Fowler, Ken; Fox, Harald; Francavilla, Paolo; Franchino, Silvia; Francis, David; Franklin, Melissa; Franz, Sebastien; Fraternali, Marco; Fratina, Sasa; Freestone, Julian; French, Sky; Froeschl, Robert; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fullana Torregrosa, Esteban; Fuster, Juan; Gabaldon, Carolina; Gabizon, Ofir; Gadfort, Thomas; Gadomski, Szymon; Gagliardi, Guido; Gagnon, Pauline; Galea, Cristina; Gallas, Elizabeth; Gallas, Manuel; Gallo, Valentina Santina; Gallop, Bruce; Gallus, Petr; Galyaev, Eugene; Gan, K K; Gao, Yongsheng; Gaponenko, Andrei; Garcia-Sciveres, Maurice; García, Carmen; García Navarro, José Enrique; Gardner, Robert; Garelli, Nicoletta; Garitaonandia, Hegoi; Garonne, Vincent; Gatti, Claudio; Gaudio, Gabriella; Gautard, Valerie; Gauzzi, Paolo; Gavrilenko, Igor; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Ge, Peng; Gee, Norman; Geich-Gimbel, Christoph; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; Gentile, 
Simonetta; Georgatos, Fotios; George, Simon; Gershon, Avi; Ghazlane, Hamid; Ghodbane, Nabil; Giacobbe, Benedetto; Giagu, Stefano; Giakoumopoulou, Victoria; Giangiobbe, Vincent; Gianotti, Fabiola; Gibbard, Bruce; Gibson, Adam; Gibson, Stephen; Gilbert, Laura; Gilchriese, Murdock; Gilewsky, Valentin; Gingrich, Douglas; Ginzburg, Jonatan; Giokaris, Nikos; Giordani, MarioPaolo; Giordano, Raffaele; Giorgi, Francesco Michelangelo; Giovannini, Paola; Giraud, Pierre-Francois; Girtler, Peter; Giugni, Danilo; Giusti, Paolo; Gjelsten, Børge Kile; Gladilin, Leonid; Glasman, Claudia; Glazov, Alexandre; Glitza, Karl-Walter; Glonti, George; Godfrey, Jennifer; Godlewski, Jan; Goebel, Martin; Göpfert, Thomas; Goeringer, Christian; Gössling, Claus; Göttfert, Tobias; Goggi, Virginio; Goldfarb, Steven; Goldin, Daniel; Golling, Tobias; Gomes, Agostinho; Gomez Fajardo, Luz Stella; Gonçcalo, Ricardo; Gonella, Laura; Gong, Chenwei; González de la Hoz, Santiago; Gonzalez Silva, Laura; Gonzalez-Sevilla, Sergio; Goodson, Jeremiah Jet; Goossens, Luc; Gordon, Howard; Gorelov, Igor; Gorfine, Grant; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Gornicki, Edward; Gosdzik, Bjoern; Gosselink, Martijn; Gostkin, Mikhail Ivanovitch; Gough Eschrich, Ivo; Gouighri, Mohamed; Goujdami, Driss; Goulette, Marc Phillippe; Goussiou, Anna; Goy, Corinne; Grabowska-Bold, Iwona; Grafström, Per; Grahn, Karl-Johan; Grancagnolo, Sergio; Grassi, Valerio; Gratchev, Vadim; Grau, Nathan; Gray, Heather; Gray, Julia Ann; Graziani, Enrico; Green, Barry; Greenshaw, Timothy; Greenwood, Zeno Dixon; Gregor, Ingrid-Maria; Grenier, Philippe; Griesmayer, Erich; Griffiths, Justin; Grigalashvili, Nugzar; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Grishkevich, Yaroslav; Groh, Manfred; Groll, Marius; Gross, Eilam; Grosse-Knetter, Joern; Groth-Jensen, Jacob; Grybel, Kai; Guicheney, Christophe; Guida, Angelo; Guillemin, Thibault; Guler, Hulya; Gunther, Jaroslav; Guo, Bin; Gupta, Ambreesh; Gusakov, Yury; Gutierrez, Andrea; Gutierrez, Phillip; Guttman, Nir; Gutzwiller, Olivier; Guyot, Claude; Gwenlan, Claire; Gwilliam, Carl; Haas, Andy; Haas, Stefan; Haber, Carl; Hadavand, Haleh Khani; Hadley, David; Haefner, Petra; Härtel, Roland; Hajduk, Zbigniew; Hakobyan, Hrachya; Haller, Johannes; Hamacher, Klaus; Hamilton, Andrew; Hamilton, Samuel; Han, Liang; Hanagaki, Kazunori; Hance, Michael; Handel, Carsten; Hanke, Paul; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, John Renner; Hansen, Peter Henrik; Hansl-Kozanecka, Traudl; Hansson, Per; Hara, Kazuhiko; Hare, Gabriel; Harenberg, Torsten; Harrington, Robert; Harris, Orin; Harrison, Karl; Hartert, Jochen; Hartjes, Fred; Harvey, Alex; Hasegawa, Satoshi; Hasegawa, Yoji; Hashemi, Kevan; Hassani, Samira; Haug, Sigve; Hauschild, Michael; Hauser, Reiner; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayakawa, Takashi; Hayward, Helen; Haywood, Stephen; Head, Simon; Hedberg, Vincent; Heelan, Louise; Heim, Sarah; Heinemann, Beate; Heisterkamp, Simon; Helary, Louis; Heller, Mathieu; Hellman, Sten; Helsens, Clement; Hemperek, Tomasz; Henderson, Robert; Henke, Michael; Henrichs, Anna; Henriques Correia, Ana Maria; Henrot-Versille, Sophie; Hensel, Carsten; Henß, Tobias; Hernández Jiménez, Yesenia; Hershenhorn, Alon David; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Hessey, Nigel; Higón-Rodriguez, Emilio; Hill, John; Hiller, Karl Heinz; Hillert, Sonja; Hillier, Stephen; Hinchliffe, Ian; Hines, Elizabeth; Hirose, Minoru; Hirsch, Florian; Hirschbuehl, Dominic; Hobbs, John; Hod, Noam; 
Hodgkinson, Mark; Hodgson, Paul; Hoecker, Andreas; Hoeferkamp, Martin; Hoffman, Julia; Hoffmann, Dirk; Hohlfeld, Marc; Holy, Tomas; Holzbauer, Jenny; Homma, Yasuhiro; Horazdovsky, Tomas; Hori, Takuya; Horn, Claus; Horner, Stephan; Hostachy, Jean-Yves; Hou, Suen; Hoummada, Abdeslam; Howe, Travis; Hrivnac, Julius; Hryn'ova, Tetiana; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Huang, Guang Shun; Hubacek, Zdenek; Hubaut, Fabrice; Huegging, Fabian; Hughes, Emlyn; Hughes, Gareth; Hurwitz, Martina; Husemann, Ulrich; Huseynov, Nazim; Huston, Joey; Huth, John; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Idarraga, John; Iengo, Paolo; Igonkina, Olga; Ikegami, Yoichi; Ikeno, Masahiro; Ilchenko, Yuri; Iliadis, Dimitrios; Ince, Tayfun; Ioannou, Pavlos; Iodice, Mauro; Irles Quiles, Adrian; Ishikawa, Akimasa; Ishino, Masaya; Ishmukhametov, Renat; Isobe, Tadaaki; Issakov, Vladimir; Issever, Cigdem; Istin, Serhat; Itoh, Yuki; Ivashin, Anton; Iwanski, Wieslaw; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jackson, Brett; Jackson, John; Jackson, Paul; Jaekel, Martin; Jain, Vivek; Jakobs, Karl; Jakobsen, Sune; Jakubek, Jan; Jana, Dilip; Jansen, Eric; Jantsch, Andreas; Janus, Michel; Jared, Richard; Jarlskog, Göran; Jeanty, Laura; Jen-La Plante, Imai; Jenni, Peter; Jež, Pavel; Jézéquel, Stéphane; Ji, Weina; Jia, Jiangyong; Jiang, Yi; Jimenez Belenguer, Marcos; Jin, Shan; Jinnouchi, Osamu; Joffe, David; Johansen, Marianne; Johansson, Erik; Johansson, Per; Johnert, Sebastian; Johns, Kenneth; Jon-And, Kerstin; Jones, Graham; Jones, Roger; Jones, Tim; Jorge, Pedro; Joseph, John; Juranek, Vojtech; Jussel, Patrick; Kabachenko, Vasily; Kaci, Mohammed; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaiser, Steffen; Kajomovitz, Enrique; Kalinin, Sergey; Kalinovskaya, Lidia; Kalinowski, Artur; Kama, Sami; Kanaya, Naoko; Kaneda, Michiru; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kapliy, Anton; Kaplon, Jan; Kar, Deepak; Karagounis, Michael; Karagoz, Muge; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kashif, Lashkar; Kasmi, Azzedine; Kass, Richard; Kastanas, Alex; Kastoryano, Michael; Kataoka, Mayuko; Kataoka, Yousuke; Katsoufis, Elias; Katzy, Judith; Kaushik, Venkatesh; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kayl, Manuel; Kayumov, Fred; Kazanin, Vassili; Kazarinov, Makhail; Keates, James Robert; Keeler, Richard; Keener, Paul; Kehoe, Robert; Keil, Markus; Kekelidze, George; Kelly, Marc; Kenyon, Mike; Kepka, Oldrich; Kerschen, Nicolas; Kerševan, Borut Paul; Kersten, Susanne; Kessoku, Kohei; Khakzad, Mohsen; Khalil-zada, Farkhad; Khandanyan, Hovhannes; Khanov, Alexander; Kharchenko, Dmitri; Khodinov, Alexander; Khomich, Andrei; Khoriauli, Gia; Khovanskiy, Nikolai; Khovanskiy, Valery; Khramov, Evgeniy; Khubua, Jemal; Kim, Hyeon Jin; Kim, Min Suk; Kim, Peter; Kim, Shinhong; Kind, Oliver; Kind, Peter; King, Barry; Kirk, Julie; Kirsch, Guillaume; Kirsch, Lawrence; Kiryunin, Andrey; Kisielewska, Danuta; Kittelmann, Thomas; Kiyamura, Hironori; Kladiva, Eduard; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klemetti, Miika; Klier, Amit; Klimentov, Alexei; Klingenberg, Reiner; Klinkby, Esben; Klioutchnikova, Tatiana; Klok, Peter; Klous, Sander; Kluge, Eike-Erik; Kluge, Thomas; Kluit, Peter; Klute, Markus; Kluth, Stefan; Knecht, Neil; Kneringer, Emmerich; Ko, Byeong Rok; Kobayashi, Tomio; Kobel, Michael; Koblitz, Birger; Kocian, Martin; Kocnar, Antonin; Kodys, Peter; Köneke, Karsten; König, Adriaan; Koenig, Sebastian; Köpke, Lutz; Koetsveld, Folkert; 
Koevesarki, Peter; Koffas, Thomas; Koffeman, Els; Kohn, Fabian; Kohout, Zdenek; Kohriki, Takashi; Kolanoski, Hermann; Kolesnikov, Vladimir; Koletsou, Iro; Koll, James; Kollar, Daniel; Kolos, Serguei; Kolya, Scott; Komar, Aston; Komaragiri, Jyothsna Rani; Kondo, Takahiko; Kono, Takanori; Konoplich, Rostislav; Konovalov, Serguei; Konstantinidis, Nikolaos; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korolkov, Ilya; Korolkova, Elena; Korotkov, Vladislav; Kortner, Oliver; Kortner, Sandra; Kostka, Peter; Kostyukhin, Vadim; Kotov, Serguei; Kotov, Vladislav; Kotov, Konstantin; Kourkoumelis, Christine; Koutsman, Alex; Kowalewski, Robert Victor; Kowalski, Henri; Kowalski, Tadeusz; Kozanecki, Witold; Kozhin, Anatoly; Kral, Vlastimil; Kramarenko, Viktor; Kramberger, Gregor; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Kreisel, Arik; Krejci, Frantisek; Kretzschmar, Jan; Krieger, Nina; Krieger, Peter; Kroeninger, Kevin; Kroha, Hubert; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumshteyn, Zinovii; Kubota, Takashi; Kuehn, Susanne; Kugel, Andreas; Kuhl, Thorsten; Kuhn, Dietmar; Kukhtin, Victor; Kulchitsky, Yuri; Kuleshov, Sergey; Kummer, Christian; Kuna, Marine; Kunkle, Joshua; Kupco, Alexander; Kurashige, Hisaya; Kurata, Masakazu; Kurchaninov, Leonid; Kurochkin, Yurii; Kus, Vlastimil; Kwee, Regina; La Rotonda, Laura; Labbe, Julien; Lacasta, Carlos; Lacava, Francesco; Lacker, Heiko; Lacour, Didier; Lacuesta, Vicente Ramón; Ladygin, Evgueni; Lafaye, Rémi; Laforge, Bertrand; Lagouri, Theodota; Lai, Stanley; Lamanna, Massimo; Lampen, Caleb; Lampl, Walter; Lancon, Eric; Landgraf, Ulrich; Landon, Murrough; Lane, Jenna; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Laplace, Sandrine; Lapoire, Cecile; Laporte, Jean-Francois; Lari, Tommaso; Larner, Aimee; Lassnig, Mario; Laurelli, Paolo; Lavrijsen, Wim; Laycock, Paul; Lazarev, Alexandre; Lazzaro, Alfio; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Menedeu, Eve; Le Vine, Micheal; Lebedev, Alexander; Lebel, Céline; LeCompte, Thomas; Ledroit-Guillon, Fabienne Agnes Marie; Lee, Hurng-Chun; Lee, Jason; Lee, Shih-Chang; Lefebvre, Michel; Legendre, Marie; LeGeyt, Benjamin; Legger, Federica; Leggett, Charles; Lehmacher, Marc; Lehmann Miotto, Giovanna; Lei, Xiaowen; Leitner, Rupert; Lellouch, Daniel; Lellouch, Jeremie; Lendermann, Victor; Leney, Katharine; Lenz, Tatiana; Lenzen, Georg; Lenzi, Bruno; Leonhardt, Kathrin; Leroy, Claude; Lessard, Jean-Raphael; Lester, Christopher; Leung Fook Cheong, Annabelle; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Leyton, Michael; Li, Haifeng; Li, Shumin; Li, Xuefei; Liang, Zhihua; Liang, Zhijun; Liberti, Barbara; Lichard, Peter; Lichtnecker, Markus; Lie, Ki; Liebig, Wolfgang; Lilley, Joseph; Lim, Heuijin; Limosani, Antonio; Limper, Maaike; Lin, Simon; Linnemann, James; Lipeles, Elliot; Lipinsky, Lukas; Lipniacka, Anna; Liss, Tony; Lissauer, David; Lister, Alison; Litke, Alan; Liu, Chuanlei; Liu, Dong; Liu, Hao; Liu, Jianbei; Liu, Minghui; Liu, Tiankuan; Liu, Yanwen; Livan, Michele; Lleres, Annick; Lloyd, Stephen; Lobodzinska, Ewelina; Loch, Peter; Lockman, William; Lockwitz, Sarah; Loddenkoetter, Thomas; Loebinger, Fred; Loginov, Andrey; Loh, Chang Wei; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Robin Eamonn; Lopes, Lourenco; Lopez Mateos, David; Losada, Marta; Loscutoff, Peter; Lou, Xinchou; Lounis, Abdenour; Loureiro, Karina; Lovas, Lubomir; Love, Jeremy; Love, Peter; Lowe, Andrew; Lu, Feng; Lubatti, Henry; Luci, 
Claudio; Lucotte, Arnaud; Ludwig, Andreas; Ludwig, Dörthe; Ludwig, Inga; Luehring, Frederick; Luisa, Luca; Lumb, Debra; Luminari, Lamberto; Lund, Esben; Lund-Jensen, Bengt; Lundberg, Björn; Lundberg, Johan; Lundquist, Johan; Lynn, David; Lys, Jeremy; Lytken, Else; Ma, Hong; Ma, Lian Liang; Macana Goia, Jorge Andres; Maccarrone, Giovanni; Macchiolo, Anna; Maček, Boštjan; Machado Miguens, Joana; Mackeprang, Rasmus; Madaras, Ronald; Mader, Wolfgang; Maenner, Reinhard; Maeno, Tadashi; Mättig, Peter; Mättig, Stefan; Magalhaes Martins, Paulo Jorge; Magradze, Erekle; Mahalalel, Yair; Mahboubi, Kambiz; Mahmood, A.; Maiani, Camilla; Maidantchik, Carmen; Maio, Amélia; Majewski, Stephanie; Makida, Yasuhiro; Makouski, Mikhail; Makovec, Nikola; Malecki, Piotr; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Maltezos, Stavros; Malyshev, Vladimir; Malyukov, Sergei; Mambelli, Marco; Mameghani, Raphael; Mamuzic, Judita; Mandelli, Luciano; Mandić, Igor; Mandrysch, Rocco; Maneira, José; Mangeard, Pierre-Simon; Manjavidze, Ioseb; Manning, Peter; Manousakis-Katsikakis, Arkadios; Mansoulie, Bruno; Mapelli, Alessandro; Mapelli, Livio; March , Luis; Marchand, Jean-Francois; Marchese, Fabrizio; Marchiori, Giovanni; Marcisovsky, Michal; Marino, Christopher; Marroquim, Fernando; Marshall, Zach; Marti-Garcia, Salvador; Martin, Alex; Martin, Andrew; Martin, Brian; Martin, Brian; Martin, Franck Francois; Martin, Jean-Pierre; Martin, Tim; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martini, Agnese; Martyniuk, Alex; Marzano, Francesco; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Massa, Ignazio; Massol, Nicolas; Mastroberardino, Anna; Masubuchi, Tatsuya; Matricon, Pierre; Matsunaga, Hiroyuki; Matsushita, Takashi; Mattravers, Carly; Maxfield, Stephen; Mayne, Anna; Mazini, Rachid; Mazur, Michael; Mazzanti, Marcello; Mc Donald, Jeffrey; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Robert; McCubbin, Norman; McFarlane, Kenneth; McGlone, Helen; Mchedlidze, Gvantsa; McMahon, Steve; McPherson, Robert; Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Menke, Sven; Meoni, Evelin; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W. 
Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Migas, Sylwia; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Miller, David; Mills, Corrinne; Mills, Bill; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Miñano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Misawa, Shigeki; Miscetti, Stefano; Misiejuk, Andrzej; Mitrevski, Jovan; Mitsou, Vasiliki A.; Miyagawa, Paul; Mjörnmark, Jan-Ulf; Mladenov, Dimitar; Moa, Torbjoern; Moed, Shulamit; Moeller, Victoria; Mönig, Klaus; Möser, Nicolas; Mohr, Wolfgang; Mohrdieck-Möck, Susanne; Moles-Valls, Regina; Molina-Perez, Jorge; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Moore, Roger; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morel, Julien; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morley, Anthony Keith; Mornacchi, Giuseppe; Morozov, Sergey; Morris, John; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Müller, Thomas; Muenstermann, Daniel; Muir, Alex; Munwes, Yonathan; Murillo Garcia, Raul; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakamura, Koji; Nakano, Itsuo; Nakatsuka, Hiroki; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Nderitu, Simon Kirichu; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nelson, Andrew; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newcomer, Mitchel; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicoletti, Giovanni; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Nikiforov, Andriy; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nordberg, Markus; Nordkvist, Bjoern; Notz, Dieter; Novakova, Jana; Nozaki, Mitsuaki; Nožička, Miroslav; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olchevski, Alexander; Oliveira, Miguel Alfonso; Oliveira Damazio, Denis; Oliver, John; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, António; Onyisi, Peter; Oram, Christopher; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Ottersbach, John; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Oyarzun, Alejandro; Ozcan, Veysi Erkcan; Ozone, Kenji; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Pahl, Christoph; Paige, Frank; Pajchel, Katarina; Palestini, 
Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Papadopoulou, Theodora; Park, Su-Jung; Park, Woochun; Parker, Andy; Parker, Sherwood; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pásztor , Gabriella; Pataraia, Sophio; Pater, Joleen; Patricelli, Sergio; Patwa, Abid; Pauly, Thilo; Peak, Lawrence; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Perez Codina, Estel; Pérez García-Estañ, María Teresa; Perez Reale, Valeria; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Persembe, Seda; Perus, Antoine; Peshekhonov, Vladimir; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Piacquadio, Giacinto; Piccinini, Maurizio; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, João Antonio; Pinamonti, Michele; Pinfold, James; Pinto, Belmiro; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Pleier, Marc-Andre; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poffenberger, Paul; Poggioli, Luc; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomeroy, Daniel; Pommès, Kathy; Ponsot, Patrick; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Popule, Jiri; Portell Bueso, Xavier; Porter, Robert; Pospelov, Guennady; Pospisil, Stanislav; Potekhin, Maxim; Potrap, Igor; Potter, Christina; Potter, Christopher; Potter, Keith; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Pribyl, Lukas; Price, Darren; Price, Lawrence; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Puigdengoles, Carles; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qi, Ming; Qian, Jianming; Qian, Weiming; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radeka, Veljko; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rajagopalan, Srinivasan; Rammensee, Michael; Rammes, Marcus; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renkel, Peter; Rescia, Sergio; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richards, Alexander; Richards, Ronald; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rizatdinova, Flera; Rizvi, Eram; Roa Romero, Diego Alejandro; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Shaun; Røhne, Ole; Rojo, Victoria; Rolli, Simona; 
Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rosenbaum, Gabriel; Rosselet, Laurent; Rossetti, Valerio; Rossi, Leonardo Paolo; Rotaru, Marina; Rothberg, Joseph; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Rühr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rumyantsev, Leonid; Rurikova, Zuzana; Rusakovich, Nikolai; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryan, Patrick; Rybkin, Grigori; Rzaeva, Sevda; Saavedra, Aldo; Sadrozinski, Hartmut; Sadykov, Renat; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvachua Ferrando, Belén; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Björn Hallvard; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, Marisa; Sandhu, Pawan; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sanny, Bernd; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Saraiva, João; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sasaki, Osamu; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Savard, Pierre; Savine, Alexandre; Savinov, Vladimir; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schäfer, Uli; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R.~Dean; Schamov, Andrey; Schegelsky, Valery; Scheirich, Daniel; Schernau, Michael; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schmidt, Evelyn; Schmieden, Kristof; Schmitt, Christian; Schmitz, Martin; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schreiner, Alexander; Schroeder, Christian; Schroer, Nicolai; Schroers, Marcel; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, José; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shimojima, Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjölin, Jörgen; Sjursen, Therese; Skovpen, Kirill; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloper, John erik; Sluka, Tomas; Smakhtin, Vladimir; Smirnov, Sergei; Smirnov, Yuri; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; 
Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Soluk, Richard; Sondericker, John; Sopko, Vit; Sopko, Bruno; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spanò, Francesco; Spencer, Edwin; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St. Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stancu, Stefan Nicolae; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Stastny, Jan; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Su, Dong; Soh, Dart-yin; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Takuya; Suzuki, Yu; Sykora, Ivan; Sykora, Tomas; Szymocha, Tadeusz; Sánchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Ryan P.; Taylor, Wendy; Teixeira-Dias, Pedro; Ten Kate, Herman; Teng, Ping-Kun; Tennenbaum-Katan, Yaniv-David; Terada, Susumu; Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Stan; Thompson, Emily; Thompson, Peter; Thompson, Paul; Thompson, Ray; Thomson, Evelyn; Thun, Rudolf; Tic, Tomas; Tikhomirov, Vladimir; Tikhonov, Yury; Tipton, Paul; Tique Aires Viegas, Florbela De Jes; Tisserant, Sylvain; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tollefson, Kirsten; Tomasek, Lukas; Tomasek, Michal; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torrence, Eric; Torró Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Trefzger, Thomas; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trivedi, Arjun; Trocmé, Benjamin; Troncon, Clara; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tuggle, Joseph; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Tuts, Michael; Twomey, Matthew Shaun; 
Tylmad, Maja; Tyndel, Mike; Uchida, Kirika; Ueda, Ikuo; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vahsen, Sven; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Valls Ferrer, Juan Antonio; Van Berg, Richard; van der Graaf, Harry; van der Kraaij, Erik; van der Poel, Egge; van der Ster, Daniel; van Eldik, Niels; van Gemmeren, Peter; van Kesteren, Zdenko; van Vulpen, Ivo; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasilyeva, Lidia; Vassilakopoulos, Vassilios; Vazeille, Francois; Vellidis, Constantine; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Villa, Mauro; Villani, Giulio; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Viret, Sébastien; Virzi, Joseph; Vitale , Antonio; Vitells, Ofer; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Matteo; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vu Anh, Tuan; Vudragovic, Dusan; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Walbersloh, Jorg; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Wang, Chiho; Wang, Haichen; Wang, Jin; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Wastie, Roy; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Marc; Weber, Manuel; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Werthenbach, Ulrich; Wessels, Martin; Whalen, Kathleen; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wildauer, Andreas; Wildt, Martin Andre; Wilkens, Henric George; Williams, Eric; Williams, Hugh; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wright, Dennis; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wulf, Evan; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xu, Da; Xu, Neng; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Zhaoyu; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ye, Jingbo; Ye, 
Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yuan, Li; Yurkewicz, Adam; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zambrano, Valentina; Zanello, Lucia; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zemla, Andrzej; Zendler, Carolin; Zenin, Oleg; Ženiš, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi della Porta, Giovanni; Zhan, Zhichao; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Qizhi; Zhang, Xueyao; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; zur Nedden, Martin; Zutshi, Vishnu

    2010-01-01

    The simulation software for the ATLAS Experiment at the Large Hadron Collider is used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions to the packages that simulate the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including the components supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools for software validation, performance testing, and validation of the simulated output against known physics processes.

  17. Evolution of the ATLAS Nightly Build System

    CERN Document Server

    Undrus, A

    2012-01-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. Over more than 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verifying patches to existing software, and migrating to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million lines of C++ and 1.4 million lines of Python written by about 1000 developers. Recent development has focused on the integration of the ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated, and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, test...
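    The record above describes a fully automated cycle of building, testing and distributing releases across many branches and platforms. As a purely illustrative sketch (not the actual NICOS tool), the following loop shows the shape of such a nightly driver; the branch names, platform tags and helper scripts are all hypothetical.

```python
# Hypothetical sketch of a nightly-build driver loop; not the actual NICOS tool.
import subprocess
from datetime import date

BRANCHES = ["main", "patch-21.0", "migration-gcc12"]            # illustrative branch names
PLATFORMS = ["x86_64-el9-gcc13-opt", "x86_64-el9-gcc13-dbg"]    # illustrative platform tags

def run(cmd):
    """Run a shell command and return True on success."""
    return subprocess.run(cmd, shell=True).returncode == 0

def nightly_cycle():
    stamp = date.today().isoformat()
    for branch in BRANCHES:
        for platform in PLATFORMS:
            tag = f"{branch}_{platform}_{stamp}"
            built = run(f"./build_release.sh {branch} {platform}")   # hypothetical build step
            tested = built and run(f"./run_unit_tests.sh {tag}")     # hypothetical test step
            if built and tested:
                run(f"./publish_to_grid.sh {tag}")                   # hypothetical distribution step
            else:
                run(f"./notify_developers.sh {tag}")                 # hypothetical notification step

if __name__ == "__main__":
    nightly_cycle()
```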

  18. The ATLAS distributed analysis system

    International Nuclear Information System (INIS)

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of Grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high and steadily improving; Grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters provides user support and communicates user problems to the sites. Both the user support techniques and the direct feedback of users have been effective in improving the success rate and user experience when utilizing the distributed computing environment. In this contribution a description of the main components, activities and achievements of ATLAS distributed analysis is given. Several future improvements being undertaken will be described.
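    The continual validation of grid sites against standard tests, mentioned above, amounts to a simple exclusion rule. The sketch below illustrates the idea with a hypothetical success-rate threshold; it is not the actual ATLAS site-exclusion machinery.

```python
# Illustrative sketch of validating sites against standard tests and excluding
# poorly performing ones from analysis; thresholds and site names are hypothetical.
def update_blacklist(test_results, min_success_rate=0.8):
    """test_results maps site name -> list of booleans (one per standard test)."""
    blacklist = set()
    for site, results in test_results.items():
        success_rate = sum(results) / len(results) if results else 0.0
        if success_rate < min_success_rate:
            blacklist.add(site)
    return blacklist

# Example usage with made-up site names and test outcomes:
results = {
    "SITE_A": [True, True, True, True],
    "SITE_B": [True, False, False, True],
}
print(update_blacklist(results))   # -> {'SITE_B'}
```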

  19. ATLAS Fact Sheet : To raise awareness of the ATLAS detector and collaboration on the LHC

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    Facts on the Detector, Calorimeters, Muon System, Inner Detector, Pixel Detector, Semiconductor Tracker, Transition Radiation Tracker, Surface hall, Cavern, Detector, Magnet system, Solenoid, Toroid, Event rates, Physics processes, Supersymmetric particles, Comparing LHC with Cosmic rays, Heavy ion collisions, Trigger and Data Acquisition TDAQ, Computing, the LHC and the ATLAS collaboration. This fact sheet also contains images of ATLAS and the collaboration as well as a short list of videos on ATLAS available for viewing.

  20. ATLAS status and physics program

    International Nuclear Information System (INIS)

    Full text: The ATLAS detector will observe proton collisions in the Large Hadron Collider (LHC) at CERN, which is scheduled for commissioning in 2007. When operational the LHC will collide protons at a centre-of-mass energy of 14 TeV with nominally 2 × 10^8 collisions per second at each of four beam-crossing points. ATLAS has been optimised for the detection of the hypothesised Higgs Boson, the only missing component of the otherwise experimentally well-verified electro-weak theory. In addition ATLAS is also sensitive to many other physics processes including QCD, b-physics, heavy ion interactions and those that could provide first evidence for super-symmetry. The current status of the LHC and the various aspects of the ATLAS detector will be discussed as well as the ability of ATLAS to observe new physics. The Australian contributions to the ATLAS project will also be described. These include: 1. Development and implementation of components of the Semi-Conductor Tracker (SCT), which provides spatial information for charged particles traversing the ATLAS inner detector. 2. Fast algorithms for simulating electromagnetic events in the calorimeter. 3. Development and application of fast reconstruction algorithms within the ATLAS software framework. 4. Analysis of Monte-Carlo data produced using simulated models of the ATLAS detector. The information provided will determine the most efficient strategies in searching for new physics once collisions at the LHC commence. 5. Advances in grid computing to handle the storage, transfer and offline processing of data amassed by LHC experiments, which totals over 2.4 P-bytes per annum. Copyright (2005) Australian Institute of Physics

  1. Supporting ATLAS

    CERN Multimedia

    maximilien brice

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator.

  2. Supporting ATLAS

    CERN Multimedia

    2003-01-01

    Eighteen feet made of stainless steel will support the barrel ATLAS detector in the cavern at Point 1. In total, the ATLAS feet system will carry approximately 6000 tons, and will give the same inclination to the detector as the LHC accelerator. The installation of the feet is scheduled to finish during January 2004 with an installation precision at the 1 mm level despite their height of 5.3 metres. The manufacture was carried out in Russia (Company Izhorskiye Zavody in St. Petersburg), as part of a Russian and JINR Dubna in-kind contribution to ATLAS. Involved in the installation is a team from IHEP-Protvino (Russia), the ATLAS technical co-ordination team at CERN, and the CERN survey team. In all, about 15 people are involved. After the feet are in place, the barrel toroid magnet and the barrel calorimeters will be installed. This will keep the ATLAS team busy for the entire year 2004.

  3. Spanish ATLAS Tier-2: facing up to LHC Run 2

    CERN Document Server

    Gonzalez de la Hoz, Santiago; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Sánchez, Javier; Sanchez Martinez, Victoria; Salt, José; Villaplana Perez, Miguel

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase of energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that it is a distributed Tier-2 composed of three sites and that its members are involved in ATLAS computing tasks within a hub of research, innovation and education.

  4. Glance Information System for ATLAS Management

    Science.gov (United States)

    Grael, F. F.; Maidantchik, C.; Évora, L. H. R. A.; Karam, K.; Moraes, L. O. F.; Cirilli, M.; Nessi, M.; Pommès, K.; ATLAS Collaboration

    2011-12-01

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, member appointments, the authors' list, the preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only by a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task, given the long lifetime of the experiment and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access to each member and system.
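    The generic retrieve/insert/update layer described above can be illustrated with a minimal sketch. The backend here is sqlite3 and the 'members' table is invented, purely to show the idea of isolating callers from the underlying database; it is not the real Glance implementation.

```python
# Minimal sketch of a generic retrieve/insert/update layer over a database,
# in the spirit of an intermediate access layer. The sqlite3 backend and the
# 'members' table are used only for illustration.
import sqlite3

class GenericStore:
    def __init__(self, connection):
        self.conn = connection

    def retrieve(self, table, **criteria):
        where = " AND ".join(f"{k}=?" for k in criteria) or "1=1"
        cur = self.conn.execute(f"SELECT * FROM {table} WHERE {where}", tuple(criteria.values()))
        return cur.fetchall()

    def insert(self, table, **values):
        cols = ", ".join(values)
        marks = ", ".join("?" for _ in values)
        self.conn.execute(f"INSERT INTO {table} ({cols}) VALUES ({marks})", tuple(values.values()))

    def update(self, table, criteria, **values):
        sets = ", ".join(f"{k}=?" for k in values)
        where = " AND ".join(f"{k}=?" for k in criteria)
        self.conn.execute(f"UPDATE {table} SET {sets} WHERE {where}",
                          tuple(values.values()) + tuple(criteria.values()))

# Example usage with an in-memory database and a hypothetical 'members' table:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE members (name TEXT, institute TEXT)")
store = GenericStore(conn)
store.insert("members", name="A. Physicist", institute="Some Institute")
print(store.retrieve("members", institute="Some Institute"))
```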

  5. Glance Information System for ATLAS Management

    International Nuclear Information System (INIS)

    The ATLAS Experiment is an international collaboration in which more than 37 countries, 172 institutes and laboratories, 2900 physicists, engineers, and computer scientists plus 700 students participate. The management of this teamwork involves several aspects such as institute contributions, employment records, member appointments, the authors' list, the preparation and publication of papers, and speaker nominations. Previously, most of the information was accessible only by a limited group, and developers had to face problems such as different terminology, diverse data modeling, heterogeneous databases and differing user needs. Moreover, the systems were not designed to handle new requirements. Maintenance has to be an easy task, given the long lifetime of the experiment and the turnover of personnel. The Glance system, a generic mechanism for accessing any database, acts as an intermediate layer isolating the user from the particularities of each database. It retrieves, inserts and updates the database independently of its technology and modeling. Relying on Glance, a group of systems was built to support ATLAS management and operation aspects: ATLAS Membership, ATLAS Appointments, ATLAS Speakers, ATLAS Analysis Follow-Up, ATLAS Conference Notes, ATLAS Thesis, ATLAS Traceability and DSS Alarms Viewer. This paper presents an overview of the Glance information framework and describes the privilege mechanism developed to grant different levels of access to each member and system.

  6. Networks in ATLAS

    CERN Document Server

    Mc Kee, Shawn Patrick; The ATLAS collaboration

    2016-01-01

    Networks have played a critical role in high-energy physics (HEP), enabling us to access and effectively utilize globally distributed resources to meet the needs of our physicists. Because of their importance in enabling our grid computing infrastructure, many physicists have taken leading roles in research and education (R&E) networking, participating in, and even convening, network-related meetings and research programs with the broader networking community worldwide. This has led to HEP benefiting from excellent global networking capabilities for little to no direct cost. However, as other science domains ramp up their need for similar networking, it becomes less clear that this situation will continue unchanged. What this means for ATLAS in particular needs to be understood. ATLAS has evolved its computing model since the LHC started, based upon its experience with using globally distributed resources. The most significant theme of those changes has been increased reliance upon, and use of, its networks....

  7. ATLAS@Home looks for CERN volunteers

    CERN Multimedia

    Rosaria Marraffino

    2014-01-01

    ATLAS@Home is a CERN volunteer computing project that runs simulated ATLAS events. As the project ramps up, the project team is looking for CERN volunteers to test the system before planning a bigger promotion for the public.   The ATLAS@home outreach website. ATLAS@Home is a large-scale research project that runs ATLAS experiment simulation software inside virtual machines hosted by volunteer computers. “People from all over the world offer up their computers’ idle time to run simulation programmes to help physicists extract information from the large amount of data collected by the detector,” explains Claire Adam Bourdarios of the ATLAS@Home project. “The ATLAS@Home project aims to extrapolate the Standard Model at a higher energy and explore what new physics may look like. Everything we’re currently running is preparation for next year's run.” ATLAS@Home became an official BOINC (Berkeley Open Infrastructure for Network ...

  8. Computer-aided evaluation as an adjunct to revised BI-RADS Atlas: improvement in positive predictive value at screening breast MRI

    International Nuclear Information System (INIS)

    To investigate whether kinetic features via magnetic resonance (MR)-computer-aided evaluation (CAE) can improve the positive predictive value (PPV) of morphological descriptors for suspicious lesions at screening breast MRI. One hundred and sixteen consecutive, suspiciously enhancing lesions detected at contralateral breast MRI screening in 116 women with newly-diagnosed breast cancers were included. Morphological descriptors according to the revised BI-RADS Atlas and kinetic features from MR-CAE were analysed. The PPV of each descriptor was analysed to identify subgroups in which PPV could be improved by the addition of MR-CAE. When biopsy recommendations were downgraded to follow-up in cases where there were both the absence of enhancement at a 50 % threshold and the absence of delayed washout, PPV increased from 0.328 (95 % CI, 0.249-0.417) to 0.500 (95 % CI, 0.387-0.613). Two ductal carcinoma in situ (DCIS) non-mass enhancement (NME) lesions were missed. Application of the downgrading criteria to foci or masses led to an increase in PPV from 0.310 (95 % CI, 0.216-0.419) to 0.437 (95 % CI, 0.331-0.547) without missing cancers. MR-CAE has the potential to improve the PPV of breast MR imaging by reducing the number of false positives. When suspicious mass lesions show neither enhancement at a 50 % threshold nor delayed washout, follow-up rather than biopsy can be considered. (orig.)
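    The downgrade criterion reported above (no enhancement at the 50 % threshold and no delayed washout, applied only to foci or masses) can be written as a simple decision rule. The sketch below uses invented field names and is only an illustration of the published criterion, not clinical software.

```python
# Decision-rule sketch of the downgrade criterion described in the abstract;
# the Lesion structure and its field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Lesion:
    lesion_type: str            # e.g. "focus", "mass", "NME"
    enhancement_at_50pct: bool  # any enhancement at the 50 % threshold (from CAE)
    delayed_washout: bool       # presence of delayed washout kinetics (from CAE)

def recommendation(lesion: Lesion) -> str:
    downgradable = lesion.lesion_type in ("focus", "mass")
    if downgradable and not lesion.enhancement_at_50pct and not lesion.delayed_washout:
        return "follow-up"      # downgrade per the criterion in the abstract
    return "biopsy"

print(recommendation(Lesion("mass", False, False)))   # -> follow-up
print(recommendation(Lesion("NME", False, False)))    # -> biopsy (NME is not downgraded)
```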

  9. Mongolian Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Climatic atlas dated 1985, in Mongolian, with introductory material also in Russian and English. One hundred eight pages in single page PDFs.

  10. Renewable Energy Atlas of the United States

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J. [Environmental Science Division]; Hlava, K. [Environmental Science Division]; Greenwood, H. [Environmental Science Division]; Carr, A. [Environmental Science Division]

    2013-12-13

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. This report explains how to add the Atlas to your computer and install the associated software. The report also includes: A description of each of the components of the Atlas; Lists of the Geographic Information System (GIS) database content and sources; and A brief introduction to the major renewable energy technologies. The Atlas includes the following: A GIS database organized as a set of Environmental Systems Research Institute (ESRI) ArcGIS Personal GeoDatabases, and ESRI ArcReader and ArcGIS project files providing an interactive map visualization and analysis interface.

  11. The Next Generation ATLAS Production System

    CERN Document Server

    Borodin, Mikhail; The ATLAS collaboration; Golubkov, Dmitry; Klimentov, Alexei; Maeno, Tadashi; Mashinistov, Ruslan; Vaniachine, Alexandre

    2015-01-01

    Data processing and simulation for the ATLAS experiment at the LHC grow continuously as more data and more use cases emerge. For data processing, the ATLAS experiment adopted the data transformation approach, in which software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, dynamically submitted by the ATLAS workload management system (PanDA/JEDI) and executed on the Grid, clouds and supercomputers. Patterns in ATLAS data transformation workflows composed of many tasks provide a scalable production system framework for template definitions of many-task workflows. The user interface and system logic of these workflows are being implemented in the Database Engine for Tasks (DEFT). Such development required modern computing technologies and approaches. We report technical details of this development: the database implementation, the server logic and the Web user interface technologies.
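    The notion of a templated many-task workflow, in which each task consumes the output of the previous one, can be pictured with a toy expansion step. The template format and step names below are invented for illustration and do not reflect the actual DEFT schema.

```python
# Toy sketch of expanding a many-task workflow template into task definitions;
# step names and the template format are invented, not the actual DEFT schema.
TEMPLATE = ["evgen", "simul", "reco", "deriv"]   # ordered transformation steps

def expand_workflow(request_id, input_dataset, template=TEMPLATE):
    tasks = []
    current_input = input_dataset
    for step in template:
        output = f"{current_input}.{step}"
        tasks.append({
            "request": request_id,
            "step": step,
            "input": current_input,
            "output": output,
        })
        current_input = output   # each task consumes the previous task's output
    return tasks

for task in expand_workflow("req_001", "mc.samplename"):
    print(task)
```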

  12. Pseudospread of the atlas: false sign of Jefferson fracture in young children

    International Nuclear Information System (INIS)

    Jefferson fractures are rare before the teenage years. Three young children examined after trauma exhibited the characteristic spread appearance of the atlas, but fractures were excluded radiographically and clinically. A retrospective study demonstrated a similar appearance, termed pseudospread, in most children aged 3 months to 4 years, including over 90% during the second year. Pseudospread results from a discrepancy between the neural growth pattern of the atlas and the somatic growth pattern of the axis. An atlas spread index is defined and a normal range presented. When an atlas fracture is suggested by apparent lateral spread of the lateral masses of the atlas, computed tomography is useful to demonstrate an intact atlas ring.

  13. Development, deployment and operations of ATLAS databases

    International Nuclear Information System (INIS)

    In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services

  14. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2014-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  15. The ATLAS Distributed Analysis System

    CERN Document Server

    Legger, F; The ATLAS collaboration; Pacheco Pages, A; Stradling, A

    2013-01-01

    In the LHC operations era, analysis of the multi-petabyte ATLAS data sample by globally distributed physicists is a challenging task. To attain the required scale the ATLAS Computing Model was designed around the concept of grid computing, realized in the Worldwide LHC Computing Grid (WLCG), the largest distributed computational resource existing in the sciences. The ATLAS experiment currently stores over 140 PB of data and runs about 140,000 concurrent jobs continuously at WLCG sites. During the first run of the LHC, the ATLAS Distributed Analysis (DA) service has operated stably and scaled as planned. More than 1600 users submitted jobs in 2012, with 2 million or more analysis jobs per week, peaking at about a million jobs per day. The system dynamically distributes popular data to expedite processing and maximally utilize resources. The reliability of the DA service is high but steadily improving; grid sites are continually validated against a set of standard tests, and a dedicated team of expert shifters ...

  16. ATLAS DQ2 Deletion Service

    CERN Document Server

    OLEYNIK, D; The ATLAS collaboration; GARONNE, V; CAMPANA, S

    2012-01-01

    The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 Deletion Service is one of the most important DDM services. This distributed service interacts with 3rd party grid middleware and the DQ2 catalogues to serve data deletion requests on the grid. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this paper special attention is paid to the technical details used to achieve the high performance of the service, accomplished without overloading either site storage, catalogues or other DQ2 components. Special attention is also paid to the deletion monitoring service that allows operators a detailed view of the working system.

  17. ATLAS DQ2 Deletion Service

    CERN Document Server

    OLEYNIK, D; The ATLAS collaboration; GARONNE, V; CAMPANA, S

    2012-01-01

    The ATLAS Distributed Data Management project DQ2 is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. It also enforces data management policies decided on by the collaboration and defined in the ATLAS computing model. The DQ2 deletion service is one of the most important DDM services. This distributed service interacts with 3rd party grid middleware and the DQ2 catalogs to serve data deletion requests on the grid. Furthermore, it also takes care of retry strategies, check-pointing transactions, load management and fault tolerance. In this paper special attention is paid to the technical details used to achieve the high performance of the service (peaking at more than 4 million files deleted per day), accomplished without overloading either site storage, catalogs or other DQ2 components. Special attention is also paid to the deletion monitoring service that allows operators a detailed view of the working system.
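    Retry strategies and check-pointing, mentioned in both records above, can be sketched generically as follows. The delete_fn callback and the JSON checkpoint file are placeholders; this is not the DQ2 deletion service code.

```python
# Generic sketch of a deletion worker with bounded retries and checkpointing;
# delete_fn and the checkpoint format are hypothetical, not the DQ2 internals.
import json

def process_deletions(files, delete_fn, checkpoint_path="deleted.json", max_retries=3):
    try:
        done = set(json.load(open(checkpoint_path)))
    except FileNotFoundError:
        done = set()
    for f in files:
        if f in done:
            continue                      # already deleted in a previous run
        for attempt in range(max_retries):
            if delete_fn(f):              # call out to storage / grid middleware
                done.add(f)
                break
        json.dump(sorted(done), open(checkpoint_path, "w"))   # checkpoint after each file
    return done

# Example with a dummy delete function that always succeeds:
print(process_deletions(["f1", "f2"], lambda f: True))
```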

  18. ATLAS DDM integration in ARC

    DEFF Research Database (Denmark)

    Behrmann, Gerd; Cameron, David; Ellert, Mattias;

    The Nordic Data Grid Facility (NDGF) consists of Grid resources running ARC middleware in Scandinavia and other countries. These resources serve many virtual organisations and contribute a large fraction of total worldwide resources for the ATLAS experiment, whose data is distributed and managed ... by the DQ2 software. Managing ATLAS data within NDGF and between NDGF and other Grids used by ATLAS (the LHC Computing Grid and the Open Science Grid) presents a unique challenge for several reasons. Firstly, the entry point for data, the Tier 1 centre, is physically distributed among heterogeneous ... resources in several countries and yet must present a single access point for all data stored within the centre. The middleware framework used in NDGF differs significantly from other Grids, specifically in the way that all data movement and registration is performed by services outside the worker node...

  19. The ATLAS Glasgow Overview Week

    CERN Multimedia

    Richard Hawkings

    2007-01-01

    The ATLAS Overview Weeks always provide a good opportunity to see the status and progress throughout the experiment, and the July week at Glasgow University was no exception. The setting, amidst the traditional buildings of one of the UK's oldest universities, provided a nice counterpoint to all the cutting-edge research and technology being discussed. And despite predictions to the contrary, the weather at these northern latitudes was actually a great improvement on the previous few weeks in Geneva. The meeting sessions comprehensively covered the whole ATLAS project, from the subdetector and TDAQ systems and their commissioning, through to offline computing, analysis and physics. As a long-time ATLAS member who remembers plenary meetings in 1991 with 30 people drawing detector layouts on a whiteboard, the hardware and installation sessions were particularly impressive - to see how these dreams have been translated into 7000 tons of reality (and with attendant cabling, supports and services, which certainly...

  20. A service-based SLA (Service Level Agreement) for the RACF (RHIC and ATLAS computing facility) at Brookhaven National Lab

    Science.gov (United States)

    Karasawa, Mizuka; Chan, Tony; Smith, Jason

    2010-04-01

    The RACF provides computing support to a broad spectrum of scientific programs at Brookhaven. The continuing growth of the facility, the diverse needs of the scientific programs and the increasingly prominent role of distributed computing require the RACF to change from a system-based to a service-based SLA with our user communities. A service-based SLA allows the RACF to coordinate more efficiently the operation, maintenance and development of the facility by mapping out a matrix of system and service dependencies and by creating a new, configurable alarm management layer that automates service alerts and notification of operations staff. This paper describes the adjustments made by the RACF to transition to a service-based SLA, including the integration of its monitoring software, alarm notification mechanism and service ticket system at the facility to make the new SLA a reality.

  1. A service-based SLA (Service Level Agreement) for the RACF (RHIC and ATLAS computing facility) at Brookhaven National Lab

    International Nuclear Information System (INIS)

    The RACF provides computing support to a broad spectrum of scientific programs at Brookhaven. The continuing growth of the facility, the diverse needs of the scientific programs and the increasingly prominent role of distributed computing require the RACF to change from a system-based to a service-based SLA with our user communities. A service-based SLA allows the RACF to coordinate more efficiently the operation, maintenance and development of the facility by mapping out a matrix of system and service dependencies and by creating a new, configurable alarm management layer that automates service alerts and notification of operations staff. This paper describes the adjustments made by the RACF to transition to a service-based SLA, including the integration of its monitoring software, alarm notification mechanism and service ticket system at the facility to make the new SLA a reality.
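    The matrix of system and service dependencies and the alarm layer described in these two records can be pictured as a simple mapping from failed systems to affected services. All system and service names below are hypothetical.

```python
# Sketch of mapping system failures to affected services via a dependency
# matrix and raising service-level alerts; all names are hypothetical.
DEPENDENCIES = {
    # service            -> systems it depends on
    "batch-submission":   ["scheduler", "shared-filesystem"],
    "interactive-login":  ["ldap", "shared-filesystem"],
    "data-transfer":      ["storage", "network"],
}

def affected_services(failed_system):
    return [svc for svc, systems in DEPENDENCIES.items() if failed_system in systems]

def raise_alerts(failed_system, notify):
    for service in affected_services(failed_system):
        notify(f"ALERT: service '{service}' degraded (system '{failed_system}' down)")

raise_alerts("shared-filesystem", print)
```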

  2. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to `transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...
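    The data-driven substep resolution described above (execute only the steps needed to turn the given input into the requested outputs) can be illustrated with a toy resolver over the RAW to ESD to AOD to ntuple chain mentioned in the abstract; the real Job Transform interface is not reproduced here.

```python
# Toy resolution of the substeps needed to turn the given input data type into
# the requested output types, following the RAW -> ESD -> AOD -> NTUP chain
# mentioned in the abstract; not the actual transform framework.
CHAIN = ["RAW", "ESD", "AOD", "NTUP"]

def substeps(input_type, output_types):
    start = CHAIN.index(input_type)
    stop = max(CHAIN.index(t) for t in output_types)
    # each substep converts one data type into the next one in the chain
    return [(CHAIN[i], CHAIN[i + 1]) for i in range(start, stop)]

print(substeps("RAW", ["AOD"]))    # -> [('RAW', 'ESD'), ('ESD', 'AOD')]
print(substeps("ESD", ["NTUP"]))   # -> [('ESD', 'AOD'), ('AOD', 'NTUP')]
```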

  3. ATLAS Job Transforms

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Maddocks, H J; Harenberg, T; Sandhoff, M; Sarrazin, B

    2013-01-01

    The need to run complex workflows for a high energy physics experiment such as ATLAS has always been present. However, as computing resources have become even more constrained, compared to the wealth of data generated by the LHC, the need to use resources efficiently and manage complex workflows within a single grid job has increased. In ATLAS, a new Job Transform framework has been developed that we describe in this paper. This framework manages the multiple execution steps needed to 'transform' one data type into another (e.g., RAW data to ESD to AOD to final ntuple) and also provides a consistent interface for the ATLAS production system. The new framework uses a data driven workflow definition which is both easy to manage and powerful. After a transform is defined, jobs are expressed simply by specifying the input data and the desired output data. The transform infrastructure then executes only the necessary substeps to produce the final data products. The global execution cost of running the job is mini...

  4. ATLAS Outreach Highlights

    CERN Document Server

    Cheatham, Susan; The ATLAS collaboration

    2016-01-01

    The ATLAS outreach team is very active, promoting particle physics to a broad range of audiences including physicists, general public, policy makers, students and teachers, and media. A selection of current outreach activities and new projects will be presented. Recent highlights include the new ATLAS public website and ATLAS Open Data, the very recent public release of 1 fb^-1 of ATLAS data.

  5. ATLAS software packaging

    Science.gov (United States)

    Rybkin, Grigory

    2012-12-01

    Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present PackDist, a package of tools developed and used to package all of this software except the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software configuration and build tool, and consists of shell and Python scripts. The packaging unit used is the CMT project. Each CMT project is packaged as several packages—platform dependent (one per platform available), source code excluding header files, other platform-independent files, documentation, and debug information packages (the last two being built optionally). Packaging can be done recursively to package all the dependencies. The whole set of packages for one software release, the distribution kit, also includes configuration packages and contains some 120 packages for one platform. Also packaged are physics analysis projects (currently 6) used by particular physics groups on top of the full release. The tools provide an installation test for the full distribution kit. Packaging is done in two formats for use with the Pacman and RPM package managers. The tools are functional on the platforms supported by ATLAS—GNU/Linux and Mac OS X. The packaged software is used for software deployment on all ATLAS computing resources, from the detector and trigger computing farms, collaboration laboratories' computing centres, grid sites, to physicist laptops and CERN VMFS, and covers the use cases of running all applications as well as software development.

  6. ATLAS Story

    CERN Multimedia

    Nordberg, Markus

    2012-01-01

    This film produced in July 2012 explains how fundamental research connects to Society and what benefits collaborative way of working can and may generate in the future, using ATLAS Collaboration as a case study. The film is intellectually inspired by the book "Collisions and Collaboration" (OUP) by Max Boisot (ed.), see: collisionsandcollaboration.com. The film is directed by Andrew Millington (OMNI Communications)

  7. Production Experience with the ATLAS Event Service

    CERN Document Server

    Benjamin, Douglas; The ATLAS collaboration

    2016-01-01

    The ATLAS Event Service (ES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources, such as spot market commercial clouds, supercomputers and volunteer computing. The Event Service architecture allows real-time delivery of fine-grained workloads to running payload applications which process dispatched events or event ranges and immediately stream the outputs to highly scalable Object Stores. Thanks to its agile and flexible architecture the ES is currently being used by grid sites for assigning low priority workloads to otherwise idle computing resources; similarly harvesting HPC resources in an efficient back-fill mode; and massively scaling out to the 50-100k concurrent core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC) and the Google Comput...
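    The fine-grained event-range dispatch and immediate streaming of outputs to object stores described above can be sketched conceptually as follows. The range size, process_fn and store_fn are placeholders, not the real Event Service API.

```python
# Conceptual sketch of event-range dispatch: split a job into fine-grained
# ranges, process each, and immediately stream the output object; the
# store_fn and range size are placeholders, not the real Event Service API.
def event_ranges(first_event, total_events, range_size=100):
    for start in range(first_event, first_event + total_events, range_size):
        yield (start, min(start + range_size, first_event + total_events) - 1)

def run_event_service_job(total_events, process_fn, store_fn):
    for lo, hi in event_ranges(0, total_events):
        output = process_fn(lo, hi)            # payload processes one event range
        store_fn(f"range_{lo}_{hi}", output)   # stream result to the object store

run_event_service_job(
    250,
    process_fn=lambda lo, hi: f"simulated events {lo}-{hi}",
    store_fn=lambda key, value: print("stored", key),
)
```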

  8. Atlas of liver imaging

    International Nuclear Information System (INIS)

    This atlas is an outcome of an IAEA co-ordinated research programme. In addition to Japan, nine other Asian countries participated in the project and 293 liver scintigrams (116 from Japanese institutions and 177 from seven Asian countries) were evaluated by physicians from the participating Asian countries. The computer analysis of the scan findings of the individual physicians was carried out and individual scores have been separately tabulated for: (a) scan abnormality; (b) space occupying lesions; (c) cirrhosis and (d) diffuse liver diseases like hepatitis. Refs, figs and tabs

  9. 14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti, visiting the SM18 area with G. De Rijk, the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International Relations F. Pauss

    CERN Multimedia

    Jean-claude Gadmer

    2011-01-01

    14th March 2011 - Australian Senator the Hon. K. Carr Minister for Innovation, Industry, Science and Research in the ATLAS Visitor Centre with Collaboration Spokesperson F. Gianotti, visiting the SM18 area with G. De Rijk, the Computing centre with Department Head F. Hemmer, signing the guest book with Director-General R. Heuer with Head of International Relations F. Pauss

  10. 23 April 2010 - Her Majesty’s Ambassador to Switzerland and Liechtenstein, United Kingdom of Great Britain and Northern Ireland, S. Gillett CMG CVO, accompanied by Beams Department Head P. Collier, visiting the ATLAS control room with Collaboration Deputy Spokesperson, University of Birmingham, D. Charlton and signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Maximilien Brice

    2010-01-01

    23 April 2010 - Her Majesty’s Ambassador to Switzerland and Liechtenstein, United Kingdom of Great Britain and Northern Ireland, S. Gillett CMG CVO, accompanied by Beams Department Head P. Collier, visiting the ATLAS control room with Collaboration Deputy Spokesperson, University of Birmingham, D. Charlton and signing the guest book with Director for Research and Scientific Computing S. Bertolucci.

  11. 28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    28th February 2011 - Turkish Minister of Foreign Affairs A. Davutoğlu signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss; meeting the CERN Turkish Community at Point 1; visiting the ATLAS control room with Former Collaboration Spokesperson P. Jenni.

  12. 30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

    CERN Multimedia

    Jean-Claude Gadmer

    2012-01-01

    30 January 2012 - Danish National Research Foundation Chairman of board K. Bock and University of Copenhagen Rector R. Hemmingsen visiting ATLAS underground experimental area, CERN Control Centre and ALICE underground experimental area, throughout accompanied by J. Dines Hansen and B. Svane Nielsen; signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci and Head of International Relations F. Pauss.

  13. 28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

    CERN Multimedia

    Gadmer, Jean-Claude

    2014-01-01

    28 March 2014 - Italian Minister of Education, University and Research S. Giannini welcomed by CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci in the ATLAS experimental cavern with Former Collaboration Spokesperson F. Gianotti. Signature of the guest book with Belgian State Secretary for the Scientific Policy P. Courard.

  14. 11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

    CERN Document Server

    Jean-Claude Gadmer

    2011-01-01

    11 July 2011 - Carleton University Ottawa, Canada Vice President (Research and International) K. Matheson in the ATLAS visitor centre with Collaboration Spokesperson F. Gianotti, accompanied by Adviser J. Ellis and signing the guest book with CERN Director for Research and Scientific Computing S. Bertolucci.

  15. Using the Hadoop/MapReduce approach for monitoring the CERN storage system and improving the ATLAS computing model

    CERN Document Server

    Russo, Stefano Alberto; Lamanna, M

    The processing of huge amounts of data, already a fundamental task for research in elementary particle physics, is becoming more and more important also for companies operating in the Information Technology (IT) industry. In this context, if conventional approaches are adopted, several problems arise, starting with the congestion of the communication channels. In the IT sector, one of the approaches designed to minimize this congestion is to exploit data locality, or in other words, to bring the computation as close as possible to where the data resides. The most common implementation of this concept is the Hadoop/MapReduce framework. In this thesis work I evaluate the usage of Hadoop/MapReduce in two areas: a standard one similar to typical IT analyses, and an innovative one related to high energy physics analyses. The first consists in monitoring the history of the storage cluster which stores the data generated by the LHC experiments, the second in the physics analysis of the latter, ...
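    The MapReduce model referred to above brings the computation to the data by expressing it as independent map and reduce steps. The following pure-Python miniature, written in the spirit of Hadoop Streaming, counts accesses per file in storage-cluster log lines; the log format is invented for illustration.

```python
# Minimal map/reduce illustration in the spirit of Hadoop Streaming: count
# accesses per file in storage-cluster log lines. The log format is invented.
from itertools import groupby

logs = [
    "2011-05-01 READ /atlas/data/file1",
    "2011-05-01 READ /atlas/data/file2",
    "2011-05-02 READ /atlas/data/file1",
]

def mapper(line):
    _, _, path = line.split()
    yield (path, 1)                      # emit one count per access

def reducer(key, values):
    return (key, sum(values))            # sum the counts for each file

mapped = sorted(kv for line in logs for kv in mapper(line))
reduced = [reducer(k, [v for _, v in group]) for k, group in groupby(mapped, key=lambda kv: kv[0])]
print(reduced)   # -> [('/atlas/data/file1', 2), ('/atlas/data/file2', 1)]
```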

  16. Advances in Service and Operations for ATLAS Data Management

    CERN Document Server

    Stewart, GA; The ATLAS collaboration

    2011-01-01

    ATLAS has recorded almost 5 PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 55 PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations to manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: - Popularity service, which measures usage of data across ATLAS. - Space monitoring and accounting at sites. - Automated blacklisting service. - Cleaning agents, which trigger deletion of unused data at sites. - Deletion agents, to reliably delete unwanted data from sites. We describe the experience of data management operation in ATLAS computing, showing how these serv...
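    The cleaning agents mentioned above, which trigger deletion of unused data at sites, can be pictured as a selection rule over replicas and their last-access times. The data structures and the 90-day window below are purely illustrative, not the DQ2 implementation.

```python
# Sketch of a cleaning-agent selection rule: mark dataset replicas for deletion
# when they have not been accessed within a given window; structures invented.
from datetime import datetime, timedelta

def select_for_cleaning(replicas, now, unused_days=90):
    """replicas: list of dicts with 'dataset', 'site', 'last_access' (datetime)."""
    cutoff = now - timedelta(days=unused_days)
    return [r for r in replicas if r["last_access"] < cutoff]

replicas = [
    {"dataset": "data11.raw", "site": "SITE_A", "last_access": datetime(2011, 1, 1)},
    {"dataset": "mc11.aod",  "site": "SITE_B", "last_access": datetime(2011, 6, 1)},
]
print(select_for_cleaning(replicas, now=datetime(2011, 7, 1)))   # -> the SITE_A replica
```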

  17. The TRIDEC Virtual Tsunami Atlas - customized value-added simulation data products for Tsunami Early Warning generated on compute clusters

    Science.gov (United States)

    Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.

    2012-04-01

    The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both for events recorded in the past and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. Thus the simulation results must be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities at risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, being a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer still remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes to many variables, including simulation end-parameters. Whenever new improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results within model iterations in little time. This is a significant improvement over linear processing on dedicated desktop machines or servers. This allows for accelerated and improved visual quality checking iterations, which in turn can provide positive feedback into the overall model improvement. An approach to set

  18. The magnetically driven imploding liner parameter space of the ATLAS capacitor bank

    CERN Document Server

    Lindemuth, I R; Faehl, R J; Reinovsky, R E

    2001-01-01

    Summary form only given, as follows. The Atlas capacitor bank (23 MJ, 30 MA) is now operational at Los Alamos. Atlas was designed primarily to magnetically drive imploding liners for use as impactors in shock and hydrodynamic experiments. We have conducted a computational "mapping" of the high-performance imploding liner parameter space accessible to Atlas. The effects of charge voltage, transmission inductance, liner thickness, liner initial radius, and liner length have been investigated. One conclusion is that Atlas is ideally suited to be a liner driver for liner-on-plasma experiments in a magnetized target fusion (MTF) context. The parameter space of possible Atlas reconfigurations has also been investigated.

  19. ATLAS Recordings

    CERN Multimedia

    Steven Goldfarb; Mitch McLachlan; Homer A. Neal

    Web Archives of ATLAS Plenary Sessions, Workshops, Meetings, and Tutorials from 2005 until this past month are available via the University of Michigan portal here. Most recent additions include the Trigger-Aware Analysis Tutorial by Monika Wielers on March 23 and the ROOT Workshop held at CERN on March 26-27. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Feedback Welcome: Our group is making arrangements now to record plenary sessions, tutorials, and other important ATLAS events for 2007. Your suggestions for potential recordings, as well as your feedback on existing archives, are always welcome. Please contact us at wlap@umich.edu. Thank you. Enjoy the Lectures!

  20. Spanish ATLAS Tier-2 facing up to Run-2 period of LHC

    CERN Document Server

    Gonzalez de la Hoz, Santiago; The ATLAS collaboration; Fassi, Farida; Fernandez Casani, Alvaro; Kaci, Mohammed; Lacort Pellicer, Victor Ruben; Montiel Gonzalez, Almudena Del Rocio; Oliver Garcia, Elena; Pacheco Pages, Andres; Salt, José; Villaplana Perez, Miguel; Sanchez Martinez, Victoria; Sánchez, Javier

    2015-01-01

    The goal of this work is to describe how the Spanish ATLAS Tier-2 is addressing the main challenges of Run-2. The considerable increase in energy and luminosity for the upcoming Run-2 with respect to Run-1 has led to a revision of the ATLAS computing model as well as of some of the main ATLAS computing tools. The adaptation to these changes will be shown, with the peculiarity that this is a distributed Tier-2 composed of three sites, and that its members are involved in ATLAS computing tasks within a hub of research, innovation and education.

  1. A Lego version of ATLAS

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    There's nothing very unusual about a small child making simple objects out of Lego. But wouldn't you be surprised to learn that one six-year old has just made a life-like model of the ATLAS detector?   Bastian with his Lego ATLAS detector. © Photo provided by Kai Nicklas, Bastian's father. It all began a month ago when the boy's father was watching a video about the construction of the ATLAS detector on the Internet. He hadn't noticed that his son was watching it over his shoulder. The small boy was fascinated by what he was seeing on the computer screen and his first reaction was to exclaim: "Wow! That's a terrific machine! I think the people who built it must be really clever." The detector must have really fired his imagination because, after asking his father a few questions, he decided to make a Lego model of it. Look at the photo and you will see how closely the model he produced resembles the actual ATLAS detector. Is the little boy in question, Bastia...

  2. ATLAS Fast Tracker Simulation Challenges

    CERN Document Server

    Adelman, Jahred; The ATLAS collaboration; Borodin, Mikhail; Chakraborty, Dhiman; García Navarro, José Enrique; Golubkov, Dmitry; Kama, Sami; Panitkin, Sergey; Smirnov, Yuri; Stewart, Graeme; Tompkins, Lauren; Vaniachine, Alexandre; Volpi, Guido

    2015-01-01

    To deal with the Big Data flood from the ATLAS detector, most events have to be rejected in the trigger system. The trigger rejection is complicated by the presence of a large number of minimum-bias events – the pileup. To limit pileup effects in the high-luminosity environment of the LHC Run-2, ATLAS relies on full tracking provided by the Fast TracKer (FTK), implemented with custom electronics. The FTK data processing pipeline has to be simulated in preparation for LHC upgrades to support electronics design and to develop trigger strategies at high luminosity. The simulation of the FTK - a highly parallelized system - has inherent performance bottlenecks on general-purpose CPUs. To take advantage of Grid computing power, the FTK simulation is integrated with Monte Carlo simulations at the Production System level, above the ATLAS workload management system PanDA. We report on ATLAS experience with FTK simulations on the Grid and next steps for accommodating the growing requirements for resources during the LHC R...

  3. Simulation of the heat transfer around the ATLAS muon chambers

    CERN Multimedia

    2005-01-01

    This 2D simulation was recently carried out on the ATLAS muon chambers by a small team of CERN engineers specialising in the numerical computation of fluid dynamics, in other words the flow of fluids and heat.

  4. Triggering events with GPUs at ATLAS

    Science.gov (United States)

    Kama, S.; Soares, J. Augusto; Baines, J.; Bauce, M.; Bold, T.; Conde Muino, P.; Emeliyanov, D.; Goncalo, R.; Messina, A.; Negrini, M.; Rinaldi, L.; Sidoti, A.; Tavares Delgado, A.; Tupputi, S.; Vaz Gil Lopes, L.

    2015-12-01

    The growing complexity of events produced in LHC collisions demands increasing computing power both for the online selection and for the offline reconstruction of events. In recent years there have been significant advances in the performance of Graphics Processing Units (GPUs), both in terms of increased compute power and of reduced power consumption, that make GPUs extremely attractive for use in a complex particle physics experiment such as ATLAS. A small-scale prototype of the full ATLAS High Level Trigger has been implemented that exploits reconstruction algorithms optimized for this new massively parallel paradigm. We discuss the integration procedure followed for this prototype and present the performance achieved and the prospects for the future.

  5. ATLAS PhD Grants 2015

    CERN Multimedia

    Marcelloni De Oliveira, Claudia

    2015-01-01

    ATLAS PhD Grants - We are excited to announce the creation of a dedicated grant scheme (thanks to a donation from Fabiola Gianotti and Peter Jenni following their award from the Fundamental Physics Prize foundation) to encourage young and high-caliber doctoral students in particle physics research (including computing for physics) and permit them to obtain world-class exposure, supervision and training within the ATLAS collaboration. This special PhD Grant is aimed at graduate students preparing a doctoral thesis in particle physics (incl. computing for physics), allowing them to spend one year at CERN followed by one year of support at their home institute.

  6. ATLAS Distributed Data Analysis: challenges and performance

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  7. ATLAS Distributed Data Analysis: performance and challenges

    CERN Document Server

    Fassi, Farida; The ATLAS collaboration

    2015-01-01

    In the LHC operations era the key goal is to analyse the results of the collisions of high-energy particles as a way of probing the fundamental forces of nature. The ATLAS experiment at the LHC at CERN is recording and simulating several tens of petabytes of data per year. The ATLAS Computing Model was designed around the concepts of Grid computing. Large data volumes from the detectors and simulations require a large number of CPUs and storage space for data processing. To cope with this challenge a global network known as the Worldwide LHC Computing Grid (WLCG) was built. This is the most sophisticated data taking and analysis system ever built. ATLAS accumulated more than 140 PB of data between 2009 and 2014. To analyse these data ATLAS developed, deployed and now operates a mature and stable distributed analysis (DA) service on the WLCG. The service is actively used: more than half a million user jobs run daily on DA resources, submitted by more than 1500 ATLAS physicists. A significant reliability of the...

  8. Class Generation for Numerical Wind Atlases

    DEFF Research Database (Denmark)

    Cutler, N.J.; Jørgensen, B.H.; Ersbøll, Bjarne Kjær;

    2006-01-01

    A new optimised clustering method is presented for generating wind classes for mesoscale modelling to produce numerical wind atlases. It is compared with the existing method of dividing the data into 12 to 16 sectors and 3 to 7 wind-speed bins, and dividing again according to the stability of the atmosphere. Wind atlases are typically produced using many years of on-site wind observations at many locations. Numerical wind atlases are the result of mesoscale model integrations based on synoptic-scale wind climates and can be produced in a number of hours of computation. 40 years of twice-daily NCEP ... by optimising the representation of the data and by automating the procedure further. The Karlsruhe Atmospheric Mesoscale Model (KAMM) is combined with the WAsP analysis to produce numerical wind atlases for two sites, Ireland and Egypt. The model results are compared with wind atlases made from measurements...
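
    A hedged sketch of the clustering idea (not the published method): represent each synoptic record by its wind components and a stability proxy, then let a standard k-means clustering pick the wind classes instead of fixed sector/speed bins. The feature choice, the stability proxy and the number of classes are assumptions for illustration.

        import numpy as np
        from sklearn.cluster import KMeans

        def wind_classes(speed, direction_deg, stability, n_classes=100):
            """speed [m/s], direction_deg [deg], stability: 1-D arrays of equal length.
            Returns a class label per record and the class centres."""
            u = speed * np.sin(np.radians(direction_deg))   # zonal component
            v = speed * np.cos(np.radians(direction_deg))   # meridional component
            features = np.column_stack([u, v, stability])
            km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(features)
            return km.labels_, km.cluster_centers_

        # Each cluster centre then defines one large-scale forcing class to be simulated
        # once with the mesoscale model and weighted by its frequency of occurrence.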

  9. Electroweak Physics with ATLAS

    OpenAIRE

    Akhundov, Arif

    2008-01-01

    The precision measurements of electroweak parameters of the Standard Model with the ATLAS detector at LHC are reviewed. An emphasis is put on the bridge connecting the ATLAS measurements with the SM analysis at LEP/SLC and the Tevatron.

  10. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    Science.gov (United States)

    Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James

    2016-01-01

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0-0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access. PMID:27285947
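
    For reference, the Dice Similarity Coefficient used throughout these comparisons is 2|A∩B| / (|A| + |B|); a minimal NumPy implementation for two binary masks of the same shape is shown below as an illustration.

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice Similarity Coefficient of two boolean arrays of identical shape."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            total = a.sum() + b.sum()
            return 1.0 if total == 0 else 2.0 * intersection / total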

  11. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation.

    Science.gov (United States)

    Liyanage, Kishan Andre; Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O'Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James

    2016-01-01

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson's disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5-0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0-0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access.

  12. Development and Implementation of a Corriedale Ovine Brain Atlas for Use in Atlas-Based Segmentation

    Science.gov (United States)

    Steward, Christopher; Moffat, Bradford Armstrong; Opie, Nicholas Lachlan; Rind, Gil Simon; John, Sam Emmanuel; Ronayne, Stephen; May, Clive Newton; O’Brien, Terence John; Milne, Marjorie Eileen; Oxley, Thomas James

    2016-01-01

    Segmentation is the process of partitioning an image into subdivisions and can be applied to medical images to isolate anatomical or pathological areas for further analysis. This process can be done manually or automated by the use of image processing computer packages. Atlas-based segmentation automates this process by the use of a pre-labelled template and a registration algorithm. We developed an ovine brain atlas that can be used as a model for neurological conditions such as Parkinson’s disease and focal epilepsy. 17 female Corriedale ovine brains were imaged in-vivo in a 1.5T (low-resolution) MRI scanner. 13 of the low-resolution images were combined using a template construction algorithm to form a low-resolution template. The template was labelled to form an atlas and tested by comparing manual with atlas-based segmentations against the remaining four low-resolution images. The comparisons were in the form of similarity metrics used in previous segmentation research. Dice Similarity Coefficients were utilised to determine the degree of overlap between eight independent, manual and atlas-based segmentations, with values ranging from 0 (no overlap) to 1 (complete overlap). For 7 of these 8 segmented areas, we achieved a Dice Similarity Coefficient of 0.5–0.8. The amygdala was difficult to segment due to its variable location and similar intensity to surrounding tissues resulting in Dice Coefficients of 0.0–0.2. We developed a low resolution ovine brain atlas with eight clinically relevant areas labelled. This brain atlas performed comparably to prior human atlases described in the literature and to intra-observer error providing an atlas that can be used to guide further research using ovine brains as a model and is hosted online for public access. PMID:27285947

  13. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit

  14. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, T; Ruan, D [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden from extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. The benefit
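
    The two-stage idea can be sketched as follows (an illustrative outline, not the authors' code): a cheap similarity score after a coarse registration trims the full collection to an augmented subset of size m, and only those m atlases receive the full deformable registration before the final fusion set of size k is chosen. All registration and scoring functions are caller-supplied placeholders.

        def two_stage_selection(target, atlases, m, k,
                                coarse_register, coarse_score,
                                full_register, full_score):
            """All *_register / *_score arguments are assumed, caller-supplied functions."""
            # Stage 1: cheap registration + preliminary relevance metric on every atlas.
            prelim = [(coarse_score(target, coarse_register(target, a)), a) for a in atlases]
            augmented = [a for _, a in sorted(prelim, key=lambda t: t[0], reverse=True)[:m]]

            # Stage 2: full-fledged registration + refined metric on the augmented subset only.
            refined = [(full_score(target, full_register(target, a)), a) for a in augmented]
            fusion_set = [a for _, a in sorted(refined, key=lambda t: t[0], reverse=True)[:k]]
            return fusion_set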

  15. Recent ATLAS Articles on WLAP

    CERN Multimedia

    J. Herr

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: Atlas Physics Workshop 6-11 June 2005 June 2005 ATLAS Week Plenary Session Click here to browse WLAP for all ATLAS lectures.

  16. Optimal number of atlases and label fusion for automatic multi-atlas-based brachial plexus contouring in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    The present study aimed to define the optimal number of atlases for automatic multi-atlas-based brachial plexus (BP) segmentation and to compare Simultaneous Truth and Performance Level Estimation (STAPLE) label fusion with Patch label fusion using the ADMIRE® software. The accuracy of the autosegmentations was measured by comparing all of the generated autosegmentations with the anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were used for automatic multi-atlas-based segmentation. To determine the optimal number of atlases, one atlas was selected as a patient and the 11 remaining atlases were registered onto this patient using a deformable image registration algorithm. Next, label fusion was performed by using every possible combination of 2 to 11 atlases, once using STAPLE and once using Patch. This procedure was repeated for every atlas as a patient. The similarity of the generated automatic BP segmentations and the gold standard segmentation was measured by calculating the average Dice similarity coefficient (DSC), Jaccard index (JI) and true positive rate (TPR) for each number of atlases. These similarity indices were compared for the different numbers of atlases using an equivalence trial, and for the two label fusion groups using an independent-sample t test. DSCs and JIs were highest when using nine atlases with both STAPLE (average DSC = 0.532; JI = 0.369) and Patch (average DSC = 0.530; JI = 0.370). When comparing both label fusion algorithms using 9 atlases for both, DSC and JI values were not significantly different. However, significantly higher TPR values were achieved in favour of STAPLE (p < 0.001). When fewer than four atlases were used, STAPLE produced significantly lower DSC, JI and TPR values than did Patch (p = 0.0048). Using 9 atlases with STAPLE label fusion resulted in the most accurate BP autosegmentations (average DSC = 0.532; JI = 0.369 and TPR = 0.760). Only when
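
    For orientation, the simplest label-fusion rule that algorithms such as STAPLE and Patch improve upon is a per-voxel majority vote over the propagated atlas labels. The NumPy sketch below illustrates that baseline only; it is not the ADMIRE implementation.

        import numpy as np

        def majority_vote(label_maps):
            """label_maps: list of non-negative integer label arrays of identical shape,
            each obtained by deforming one atlas onto the patient. Returns the per-voxel
            most frequent label (ties resolved towards the lower label index)."""
            stack = np.stack(label_maps)                         # shape (n_atlases, ...)
            n_labels = int(stack.max()) + 1
            counts = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)])
            return counts.argmax(axis=0)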

  17. The ATLAS Fast Tracker

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    The use of tracking information at the trigger level in the LHC Run II period is crucial for the trigger and data acquisition (TDAQ) system. Tracking precision is in fact important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many simultaneous collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, full reconstruction at the full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high-performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker information. Patte...

  18. Big Data Analytics Tools as Applied to ATLAS Event Data

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration

    2016-01-01

    Big Data technologies have proven to be very useful for storage, processing and visualization of derived metrics associated with ATLAS distributed computing (ADC) services. Log file data and database records, and metadata from a diversity of systems have been aggregated and indexed to create an analytics platform for ATLAS ADC operations analysis. Dashboards, wide area data access cost metrics, user analysis patterns, and resource utilization efficiency charts are produced flexibly through queries against a powerful analytics cluster. Here we explore whether these techniques and analytics ecosystem can be applied to add new modes of open, quick, and pervasive access to ATLAS event data so as to simplify access and broaden the reach of ATLAS public data to new communities of users. An ability to efficiently store, filter, search and deliver ATLAS data at the event and/or sub-event level in a widely supported format would enable or significantly simplify usage of big data, statistical and machine learning tools...
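
    As an illustration of the kind of event-level filtering such a platform would allow, here is a hypothetical PySpark query over events stored in Parquet. The file paths and the column names (nMuon, missingEt, runNumber, eventNumber) are invented for the sketch and do not correspond to any actual ATLAS data format.

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("atlas-open-event-skim").getOrCreate()

        # Hypothetical event-level table: one row per event, columns are invented.
        events = spark.read.parquet("/data/atlas/open/events.parquet")

        selected = (events
                    .filter((F.col("nMuon") >= 2) & (F.col("missingEt") > 50.0))
                    .select("runNumber", "eventNumber", "nMuon", "missingEt"))

        print(selected.count())           # number of events passing the skim
        selected.write.mode("overwrite").parquet("/data/atlas/open/skim_dimuon.parquet")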

  19. ATLAS TDAQ application gateway upgrade during LS1

    CERN Document Server

    KOROL, A; The ATLAS collaboration; BOGDANCHIKOV, A; BRASOLIN, F; CONTESCU, A C; DUBROV, S; HAFEEZ, M; LEE, C J; SCANNICCHIO, D A; TWOMEY, M; VORONKOV, A; ZAYTSEV, A

    2014-01-01

    The ATLAS Gateway service is implemented with a set of dedicated computer nodes to provide fine-grained access control between the CERN General Public Network (GPN) and the ATLAS Technical Control Network (ATCN). ATCN connects the ATLAS online farm used for ATLAS operations and data taking, including the ATLAS TDAQ (Trigger and Data Acquisition) and DCS (Detector Control System) nodes. In particular, it provides restricted access to the web services (proxy), general login sessions (via SSH and RDP protocols), NAT and mail relay from ATCN. At the operating system level the implementation is based on virtualization technologies. Here we report on the Gateway upgrade during the Long Shutdown 1 (LS1) period: it includes the transition to the latest production release of the CERN Linux distribution (SLC6), the migration to the centralized configuration management system (based on Puppet) and the redesign of the internal system architecture.

  20. EnviroAtlas - Portland, OR - Atlas Area Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Portland, OR Atlas Area. It represents the outside edge of all the block groups included in the EnviroAtlas Area....

  1. ATLAS Recordings

    CERN Multimedia

    Jeremy Herr; Homer A. Neal; Mitch McLachlan

    The University of Michigan Web Archives for the 2006 ATLAS Week Plenary Sessions, as well as the first of 2007, are now online. In addition, there is a wide variety of Software and Physics Tutorial sessions, recorded over the past couple of years, to choose from. All ATLAS-specific archives are accessible here. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. Lectures can be viewed directly over the web or downloaded locally. In addition, you will find access to a variety of general tutorials and events via the portal. Shaping Collaboration 2006: The Michigan group is happy to announce a complete set of recordings from the Shaping Collaboration conference held last December at the CICG in Geneva. The event hosted a mix of Collaborative Tool experts and LHC users, and featured presentations by the CERN Deputy Director General, Prof. Jos Engelen, the President of Internet2, and chief developers from VRVS/EVO, WLAP, and other tools...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  3. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas-based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs arising in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with a significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  4. Global Data Grid Efforts for ATLAS

    CERN Multimedia

    Gardner, R.

    2001-01-01

    Over the past two years computational data grids have emerged as a promising new technology for large scale, data-intensive computing required by the LHC experiments, as outlined by the recent "Hoffman" review panel that addressed the LHC computing challenge. The problem essentially is to seamlessly link physicists to petabyte-scale data and computing resources, distributed worldwide, and connected by high-bandwidth research networks. Several new collaborative initiatives in Europe, the United States, and Asia have formed to address the problem. These projects are of great interest to ATLAS physicists and software developers since their objective is to offer tools that can be integrated into the core ATLAS application framework for distributed event reconstruction, Monte Carlo simulation, and data analysis, making it possible for individuals and groups of physicists to share information, data, and computing resources in new ways and at scales not previously attempted. In addition, much of the distributed IT...

  5. The evolution of the Trigger and Data Acquisition System in the ATLAS experiment (ACAT2013: 15. international workshop on advanced computing and analysis techniques in physics research)

    International Nuclear Information System (INIS)

    The ATLAS experiment, aimed at recording the results of LHC proton-proton collisions, is upgrading its Trigger and Data Acquisition (TDAQ) system during the current LHC first long shutdown. The purpose of the upgrade is to add robustness and flexibility to the selection and conveyance of the physics data, simplify the maintenance of the infrastructure, exploit new technologies and, overall, make ATLAS data-taking capable of dealing with increasing event rates. The TDAQ system used to date is organised in a three-level selection scheme, including a hardware-based first-level trigger and second- and third-level triggers implemented as separate software systems distributed on separate, commodity hardware nodes. While this architecture was successfully operated well beyond the original design goals, the accumulated experience stimulated interest in exploring possible evolutions. We will also be upgrading the hardware of the TDAQ system by introducing new elements to it. For the high-level trigger, the current plan is to deploy a single homogeneous system, which merges the execution of the second and third trigger levels, still separated, on a unique hardware node. Prototyping efforts have already demonstrated many benefits of the simplified design. In this paper we report on the design and the development status of this new system

  6. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at the integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and for addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  8. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  10. Advances in Service and Operations for ATLAS Data Management

    CERN Document Server

    Stewart, G A; The ATLAS collaboration; Lassnig, M; Molfetas, A; Baristis, M; Zhang, D; Calvet, I; Beermann, T; Barreiro Megino, F; Tykhonov, A; Campana, S; Serfon, C; Oleynik, O; Petrosyan, A

    2012-01-01

    ATLAS has recorded almost 5PB of RAW data since the LHC started running at the end of 2009. Many more derived data products and complementary simulation data have also been produced by the collaboration and, in total, 70PB is currently stored in the Worldwide LHC Computing Grid by ATLAS. All of this data is managed by the ATLAS Distributed Data Management system, called Don Quixote 2 (DQ2). DQ2 has evolved rapidly to help ATLAS Computing operations manage these large quantities of data across the many grid sites at which ATLAS runs and to help ATLAS physicists get access to this data. In this paper we describe new and improved DQ2 services: the Popularity service, which measures usage of data across ATLAS; space monitoring and accounting at sites; the automated exclusion service; cleaning agents, which trigger deletion of unused data at sites; and deletion agents, to reliably delete unwanted data from sites. We...

  11. COMPUTING

    CERN Document Server

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  12. The ATLAS ARC backend to HPC

    Science.gov (United States)

    Haug, S.; Hostettler, M.; Sciacca, F. G.; Weber, M.

    2015-12-01

    The current distributed computing resources used for simulating and processing collision data collected by ATLAS and the other LHC experiments are largely based on dedicated x86 Linux clusters. Access to resources, job control and software provisioning mechanisms are quite different from the common concept of self-contained HPC applications run by particular users on specific HPC systems. We report on the development and the usage in ATLAS of an SSH backend to the Advanced Resource Connector (ARC) middleware to enable HPC-compliant access, and on the corresponding software provisioning mechanisms.
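
    The essence of such an SSH-based backend can be sketched as follows (illustrative only; the real ARC backend additionally handles data staging, proxies and job-state bookkeeping): copy a batch script to the HPC front end, submit it to the scheduler over SSH and poll for its state. The host name, remote path and the use of SLURM are assumptions for the sketch.

        import re
        import subprocess

        HPC = "user@hpc.example.org"                      # hypothetical login node

        def ssh(*cmd):
            """Run a command on the HPC front end over SSH and return its stdout."""
            return subprocess.run(["ssh", HPC, *cmd], check=True,
                                  capture_output=True, text=True).stdout

        def submit(script_name, remote_dir):
            """script_name: plain file name of a local SLURM batch script."""
            subprocess.run(["scp", script_name, f"{HPC}:{remote_dir}/"], check=True)
            out = ssh("sbatch", f"{remote_dir}/{script_name}")
            return re.search(r"\d+", out).group(0)        # SLURM job id

        def state(job_id):
            # Empty squeue output is taken to mean the job has left the queue.
            return ssh("squeue", "-h", "-o", "%T", "-j", job_id).strip() or "FINISHED"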

  13. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer - the ubiquitous portal of work and personal lives. At this point, the computer is so common we hardly notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure accessed by only a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  14. Computer tomographic imaging and anatomic correlation of the human brain: A comparative atlas of thin CT-scan sections and correlated neuro-anatomic preparations

    International Nuclear Information System (INIS)

    It is of the greatest importance to the radiologist, the neurologist and the neurosurgeon to be able to localize topographically a pathological brain process on the CT scan as precisely as possible. For that purpose, the identification of as many anatomical structures as possible on the CT scan image are necessary and indispensable. In this atlas a great number of detailed anatomical data on frontal horizontal CT scan sections, each being only 2 mm thick, are indicated, e.g. the cortical gyri, the basal ganglia, details of the white matter, extracranial muscles and blood vessels, parts of the base and the vault of the skull, etc. The very precise topographical description of the numerous CT scan images was realized by the author by confrontation of these images with the corresponding anatomical sections of the same brain specimen, performed by an original technique

  15. ATLAS Grid Data Processing: system evolution and scalability

    International Nuclear Information System (INIS)

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software and Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users providing data for physics analysis and other ATLAS main activities.

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  17. COMPUTING

    CERN Multimedia

    M. Kasemann; P. McBride; edited by M-C. Sawley with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  18. Renewable energy atlas of the United States.

    Energy Technology Data Exchange (ETDEWEB)

    Kuiper, J.A.; Hlava, K.; Greenwood, H.; Carr, A. (Environmental Science Division)

    2012-05-01

    The Renewable Energy Atlas (Atlas) of the United States is a compilation of geospatial data focused on renewable energy resources, federal land ownership, and base map reference information. It is designed for the U.S. Department of Agriculture Forest Service (USFS) and other federal land management agencies to evaluate existing and proposed renewable energy projects. Much of the content of the Atlas was compiled at Argonne National Laboratory (Argonne) to support recent and current energy-related Environmental Impact Statements and studies, including the following projects: (1) West-wide Energy Corridor Programmatic Environmental Impact Statement (PEIS) (BLM 2008); (2) Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2010); (3) Supplement to the Draft PEIS for Solar Energy Development in Six Southwestern States (DOE/BLM 2011); (4) Upper Great Plains Wind Energy PEIS (WAPA/USFWS 2012, in progress); and (5) Energy Transport Corridors: The Potential Role of Federal Lands in States Identified by the Energy Policy Act of 2005, Section 368(b) (in progress). This report explains how to add the Atlas to your computer and install the associated software; describes each of the components of the Atlas; lists the Geographic Information System (GIS) database content and sources; and provides a brief introduction to the major renewable energy technologies.

  19. All 2006 ATLAS Tutorials online

    CERN Multimedia

    Steven Goldfarb,; Mitch McLachlan,; Homer A. Neal

    The University of Michigan has completed its full agenda of Web Lecture recording for ATLAS for 2006. The archives include all three ATLAS Week Plenary Sessions, as well as a large variety of tutorials. They are accessible at this location. Viewing requires a standard web browser with RealPlayer plug-in (included in most browsers automatically) and works on any major platform. This is the first year our group has been asked to provide this complete service to the collaboration, so any and all feedback is welcome. We would especially like to know if you had any difficulties viewing the lectures, if you found the selection of material to be useful, and/or if you think there are any other specific events we ought to cover in 2007. Please send your comments to wlap@umich.edu. We look forward to bringing you a rich variety of new lectures in 2007, starting with the ATLAS Distributed Computing Tutorial on Feb 1, 2 in Edinburgh and concluding with the Higgs discovery talk (of course). Enjoy the Lec...

  20. Canadian ATLAS data center to support CERN's LHC

    CERN Multimedia

    2006-01-01

    "The biggest science experiment in history is currently underway at the world-famous CERN labs in Switzerland, and Canada is poised to play a critical role in its success. Thanks to a $10.5 million investment announced by the Canada Foundation for Innovation (CFI), an ultra-sophisticated computing facility -- the ATLAS Data Center -- will be created to support the ATLAS project at CERN's Large Hadron Collider (LHC)." (1 page)

  1. Enhancing atlas based segmentation with multiclass linear classifiers

    Energy Technology Data Exchange (ETDEWEB)

    Sdika, Michaël, E-mail: michael.sdika@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR 5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Villeurbanne 69300 (France)

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality to state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.
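
    A hedged sketch of the per-voxel classifier idea: after the training images are registered to the atlas space, train one small multiclass linear classifier per voxel on a local feature (here simply the intensities of a 3x3x3 neighbourhood) and apply it to a registered target image. The feature choice, the scikit-learn classifier and the array conventions below are assumptions for illustration, not the method described in the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def neighbourhood(img, z, y, x):
            # 3x3x3 intensity patch around a voxel, flattened (images assumed padded).
            return img[z-1:z+2, y-1:y+2, x-1:x+2].ravel()

        def train_voxel_classifiers(train_images, train_labels, voxels):
            """train_images/train_labels: lists of 3-D arrays already registered to atlas
            space. voxels: list of (z, y, x) positions. Returns one classifier per voxel,
            or a constant label where the training set agrees everywhere."""
            classifiers = {}
            for (z, y, x) in voxels:
                X = np.array([neighbourhood(img, z, y, x) for img in train_images])
                y_lab = np.array([lab[z, y, x] for lab in train_labels])
                if len(np.unique(y_lab)) > 1:
                    classifiers[(z, y, x)] = LogisticRegression(max_iter=200).fit(X, y_lab)
                else:
                    classifiers[(z, y, x)] = int(y_lab[0])
            return classifiers

        def segment(target_image, classifiers):
            out = {}
            for (z, y, x), clf in classifiers.items():
                if isinstance(clf, int):
                    out[(z, y, x)] = clf
                else:
                    feat = neighbourhood(target_image, z, y, x).reshape(1, -1)
                    out[(z, y, x)] = int(clf.predict(feat)[0])
            return out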

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  4. COMPUTING

    CERN Document Server

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  6. COMPUTING

    CERN Document Server

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  7. COMPUTING

    CERN Document Server

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  8. The Irish Wind Atlas

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R. [Univ. College Dublin, Dept. of Electronic and Electrical Engineering, Dublin (Ireland); Landberg, L. [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)

    1999-03-01

    The development work on the Irish Wind Atlas is nearing completion. The Irish Wind Atlas is an updated and improved version of the Irish section of the European Wind Atlas. A map of the Irish wind resource based on a WAsP analysis of the measured data and station descriptions of 27 measuring stations is presented. The results of previously presented WAsP/KAMM runs show good agreement with these results. (au)

  9. Quantification of Tc-99m-ethyl cysteinate dimer brain single photon emission computed tomography images using statistical probabilistic brain atlas in depressive end-stage renal disease patients: Correlation with disease severity and symptom factors

    Institute of Scientific and Technical Information of China (English)

    Heeyoung Kim; In Joo Kim; Seong-Jang Kim; Sang Heon Song; Kyoungjune Pak; Keunyoung Kim

    2012-01-01

    This study adapted a statistical probabilistic anatomical map of the brain for single photon emission computed tomography images of depressive end-stage renal disease patients. This research aimed to investigate the relationship between symptom clusters, disease severity, and cerebral blood flow. Twenty-seven patients (16 males, 11 females) with stages 4 and 5 end-stage renal disease were enrolled, along with 25 healthy controls. All patients underwent depressive mood assessment and brain single photon emission computed tomography. The statistical probabilistic anatomical map images were used to calculate the brain single photon emission computed tomography counts. Asymmetric index was acquired and Pearson correlation analysis was performed to analyze the correlation between symptom factors, severity, and regional cerebral blood flow. The depression factors of the Hamilton Depression Rating Scale showed a negative correlation with cerebral blood flow in the left amygdala. The insomnia factor showed negative correlations with cerebral blood flow in the left amygdala, right superior frontal gyrus, right middle frontal gyrus, and left middle frontal gyrus. The anxiety factor showed a positive correlation with cerebral glucose metabolism in the cerebellar vermis and a negative correlation with cerebral glucose metabolism in the left globus pallidus, right inferior frontal gyrus, both temporal poles, and left parahippocampus. The overall depression severity (total scores of Hamilton Depression Rating Scale) was negatively correlated with the statistical probabilistic anatomical map results in the left amygdala and right inferior frontal gyrus. In conclusion, our results demonstrated that the disease severity and extent of cerebral blood flow quantified by a probabilistic brain atlas were related to various brain areas in terms of the overall severity and symptom factors in end-stage renal disease patients.
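
    The asymmetry index and the Pearson correlation analysis mentioned above can be sketched as follows; the asymmetry-index formula shown is one common convention and the numbers are purely illustrative, not taken from the study.

        import numpy as np
        from scipy.stats import pearsonr

        def asymmetry_index(left_counts, right_counts):
            """A common definition of the asymmetry index between paired ROIs, in percent."""
            return 200.0 * (left_counts - right_counts) / (left_counts + right_counts)

        # Correlate a symptom-factor score with regional SPECT counts across patients
        # (one value per patient; the numbers are illustrative only).
        factor_scores = np.array([10.0, 14.0, 7.0, 21.0, 9.0])
        regional_counts = np.array([52.1, 47.3, 55.0, 41.8, 53.6])
        r, p = pearsonr(factor_scores, regional_counts)
        print(f"Pearson r = {r:.2f}, p = {p:.3f}")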

  10. 11 March 2009 - Italian Minister of Education, University and Research M. Gelmini, visiting ATLAS and CMS underground experimental areas and LHC tunnel with Director for Research and Scientific Computing S. Bertolucci. Signature of the guest book with CERN Director-General R. Heuer and S. Bertolucci at CMS Point 5.

    CERN Multimedia

    Maximilien Brice

    2009-01-01

    Members of the Ministerial delegation: Cons. Amb. Sebastiano FULCI, Diplomatic Adviser; Dott.ssa Elisa GREGORINI, Private Secretary to the Minister; Dott. Massimo ZENNARO, Head of Press Relations; Prof. Roberto PETRONZIO, President of INFN (Istituto Nazionale di Fisica Nucleare); Dott. Luciano CRISCUOLI, Director General for Research, MIUR; Dott. Andrea MARINONI, Scientific Adviser to the Minister. CERN delegation present throughout the programme: Prof. Sergio Bertolucci, Director for Research and Scientific Computing; Prof. Fabiola Gianotti, ATLAS Collaboration Spokesperson; Prof. Paolo Giubellino, ALICE Deputy Spokesperson, Universita & INFN, Torino; Prof. Guido Tonelli, CMS Collaboration Deputy Spokesperson, INFN Pisa; Dr Monica Pepe-Altarelli, LHCb Collaboration CERN Team Leader. Guests in the ATLAS exhibition area: Dr Marcello Givoletti, President of CAEN; Dr Davide Malacalza, President of ASG Ansaldo Superconductors; and users: Prof. Clara Matteuzzi, LHCb Collaboration, Universita' d...

  11. A unified framework for cross-modality multi-atlas segmentation of brain MRI

    DEFF Research Database (Denmark)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-01-01

    Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented....... These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when...... the atlases and target images are obtained via different sensor types or imaging protocols.In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely...
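
    For reference, the baseline label fusion step that such schemes build on can be written as a plain per-voxel majority vote over atlas labels already propagated to the target space. The snippet below is a generic illustration of that baseline, not the generative model proposed in this work.

        import numpy as np

        def majority_vote_fusion(propagated_labels):
            """Fuse integer label maps that have already been propagated to the
            target space (one array per atlas, all with the same shape)."""
            stack = np.stack(propagated_labels, axis=0)            # (n_atlases, ...)
            n_labels = int(stack.max()) + 1
            # Per-voxel vote count for each label, then pick the most frequent one.
            votes = np.stack([(stack == lab).sum(axis=0) for lab in range(n_labels)], axis=0)
            return votes.argmax(axis=0)

        # Toy example with three 2x2 "atlases".
        a = np.array([[0, 1], [1, 2]])
        b = np.array([[0, 1], [2, 2]])
        c = np.array([[0, 0], [1, 2]])
        print(majority_vote_fusion([a, b, c]))                     # [[0 1] [1 2]]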

  12. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S

    2005-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: ATLAS Software Week Plenary, 6-10 December 2004; North American ATLAS Physics Workshop (Tucson), 20-21 December 2004 (17 talks); Physics Analysis Tools Tutorial (Tucson), 19 December 2004; Full Chain Tutorial, 21 September 2004; ATLAS Plenary Sessions, 17-18 February 2005 (17 talks). Coming soon: ATLAS Tutorial on Electroweak Physics, 14 Feb. 2005; Software Workshop, 21-22 February 2005. Click here to browse WLAP for all ATLAS lectures.

  13. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  14. COMPUTING

    CERN Document Server

    I. Fisk

    2012-01-01

    Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, hopefully only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  15. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been at a lower level as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing were more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  17. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  18. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  20. Iberian ATLAS Cloud response during the first LHC collisions

    International Nuclear Information System (INIS)

    The computing model of the ATLAS experiment at the LHC (Large Hadron Collider) is based on a tiered hierarchy that ranges from Tier0 (CERN) down to end-user's own resources (Tier3). According to the same computing model, the role of the Tier2s is to provide computing resources for event simulation processing and distributed data analysis. Tier3 centers, on the other hand, are the responsibility of individual institutions to define, fund, deploy and support. In this contribution we report on the operations of the ATLAS Iberian Cloud centers facing data taking and we describe some of the Tier3 facilities currently deployed at the Cloud.

  1. Iberian ATLAS Cloud response during the first LHC collisions

    CERN Document Server

    Villaplana, M; The ATLAS collaboration; Borges, G; Borrego, C; Carvalho, J; David, M; Espinal, X; Fernández, A; Gomes, J; González de la Hoz, S; Kaci, M; Lamas, A; Nadal, J; Oliveira, M; Oliver, E; Osuna, C; Pacheco, A; Pardo, JJ; del Peso, J; Salt, J; Sánchez, J; Wolters, H

    2011-01-01

    The computing model of the ATLAS experiment at the LHC (Large Hadron Collider) is based on a tiered hierarchy that ranges from Tier0 (CERN) down to end-user's own resources (Tier3). According to the same computing model, the role of the Tier2s is to provide computing resources for event simulation processing and distributed data analysis. Tier3 centers, on the other hand, are the responsibility of individual institutions to define, fund, deploy and support. In this contribution we report on the operations of the ATLAS Iberian Cloud centers facing data taking and we describe some of the Tier3 facilities currently deployed at the Cloud.

  2. Implementation of the ATLAS trigger within the ATLAS Multi-Threaded Software Framework AthenaMT

    CERN Document Server

    Wynne, Benjamin; The ATLAS collaboration

    2016-01-01

    We present an implementation of the ATLAS High Level Trigger that provides parallel execution of trigger algorithms within the ATLAS multi-threaded software framework, AthenaMT. This development will enable the ATLAS High Level Trigger to meet future challenges due to the evolution of computing hardware and upgrades of the Large Hadron Collider, LHC, and ATLAS Detector. During the LHC data-taking period starting in 2021, luminosity will reach up to three times the original design value. Luminosity will increase further, to up to 7.5 times the design value, in 2026 following LHC and ATLAS upgrades. This includes an upgrade of the ATLAS trigger architecture that will result in an increase in the High Level Trigger input rate by a factor of 4 to 10 compared to the current maximum rate of 100 kHz. The current ATLAS multiprocess framework, AthenaMP, manages a number of processes that process events independently, executing algorithms sequentially in each process. AthenaMT will provide a fully multi-threaded env...

  3. Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

    CERN Document Server

    Lopez-Perez, Juan Antonio; Salt, Jose; Ros, Eduardo

    2008-01-01

    The Standard Model of particle physics describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. However, it presents several problems and some questions remain unanswered so it cannot be considered a complete theory of fundamental interactions. Many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions theories. In the context of some models with Extra Dimensions of size about $1\,\mathrm{TeV}^{-1}$, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein excitations are expected, with the same properties as SM gauge bosons but more massive. In this work, three hadronic decay modes of some of such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN. These hadronic modes are more difficult to detect than the leptonic ones, but they should allow a measurement of the cou...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. GlideInWMS and its components are now also deployed at CERN, adding to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  5. Evolution of the ATLAS Nightly Build System

    Science.gov (United States)

    Undrus, A.

    2012-12-01

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. For over 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  6. Evolution of the ATLAS Nightly Build System

    International Nuclear Information System (INIS)

    The ATLAS Nightly Build System is a major component in the ATLAS collaborative software organization, validation, and code approval scheme. For over 10 years of development it has evolved into a factory for automatic release production and grid distribution. The 50 multi-platform branches of ATLAS releases provide vast opportunities for testing new packages, verification of patches to existing software, and migration to new platforms and compilers for ATLAS code that currently contains 2200 packages with 4 million C++ and 1.4 million python scripting lines written by about 1000 developers. Recent development was focused on the integration of ATLAS Nightly Build and Installation systems. The nightly releases are distributed and validated and some are transformed into stable releases used for data processing worldwide. The ATLAS Nightly System is managed by the NICOS control tool on a computing farm with 50 powerful multiprocessor nodes. NICOS provides the fully automated framework for the release builds, testing, and creation of distribution kits. The ATN testing framework of the Nightly System runs unit and integration tests in parallel suites, fully utilizing the resources of multi-core machines, and provides the first results even before compilations complete. The NICOS error detection system is based on several techniques and classifies the compilation and test errors according to their severity. It is periodically tuned to place greater emphasis on certain software defects by highlighting the problems on NICOS web pages and sending automatic e-mail notifications to responsible developers. These and other recent developments will be presented and future plans will be described.

  7. Distributed analysis in ATLAS using GANGA

    Science.gov (United States)

    Elmsheuser, Johannes; Brochu, Frederic; Cowan, Greig; Egede, Ulrik; Gaidioz, Benjamin; Lee, Hurng-Chun; Maier, Andrew; Móscicki, Jakub; Pajchel, Katarina; Reece, Will; Samset, Bjorn; Slater, Mark; Soroko, Alexander; Vanderster, Daniel; Williams, Michael

    2010-04-01

    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics to be addressed and realized before the start of LHC data taking. The needs to manage the resources are very high. In every experiment up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to assure that all users can use the Grid without expertise in Grid technology. These tools enlarge the number of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates these kind of tools. GANGA provides a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We will be reporting on the plug-ins and our experiences of distributed data analysis using GANGA within the ATLAS experiment. Support for all Grids presently used by ATLAS, namely the LCG/EGEE, NDGF/NorduGrid, and OSG/PanDA is provided. The integration and interaction with the ATLAS data management system DQ2 into GANGA is a key functionality. An intelligent job brokering is set up by using the job splitting mechanism together with data-set and file location knowledge. The brokering is aided by an automated system that regularly processes test analysis jobs at all ATLAS DQ2 supported sites. Large numbers of analysis jobs can be sent to the locations of data following the ATLAS computing model. GANGA supports amongst other things tasks of user analysis with reconstructed data and small scale production of Monte Carlo data.
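
    The flavour of the GANGA interface described above can be conveyed with a minimal job definition. The snippet assumes an interactive Ganga session with its core plug-ins loaded; an ATLAS analysis would use experiment-specific application and backend plug-ins instead of the plain executable and local backend shown here.

        # Inside an interactive Ganga session; Job, Executable and Local are
        # provided by Ganga's core plug-ins.
        j = Job()
        j.name = 'hello-ganga'
        j.application = Executable(exe='/bin/echo', args=['hello from Ganga'])
        j.backend = Local()      # a Grid backend plug-in would be used for remote running
        j.submit()

    The same job object could be reconfigured to a Grid backend without changing the application definition, which is the switching behaviour the abstract refers to.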

  8. Morphometric Atlas Selection for Automatic Brachial Plexus Segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Van de Velde, Joris, E-mail: joris.vandevelde@ugent.be [Department of Anatomy, Ghent University, Ghent (Belgium); Department of Radiotherapy, Ghent University, Ghent (Belgium); Wouters, Johan [Department of Anatomy, Ghent University, Ghent (Belgium); Vercauteren, Tom; De Gersem, Werner; Duprez, Fréderic; De Neve, Wilfried [Department of Radiotherapy, Ghent University, Ghent (Belgium); Van Hoof, Tom [Department of Anatomy, Ghent University, Ghent (Belgium)

    2015-07-01

    Purpose: The purpose of this study was to determine the effects of atlas selection based on different morphometric parameters, on the accuracy of automatic brachial plexus (BP) segmentation for radiation therapy planning. The segmentation accuracy was measured by comparing all of the generated automatic segmentations with anatomically validated gold standard atlases developed using cadavers. Methods and Materials: Twelve cadaver computed tomography (CT) atlases (3 males, 9 females; mean age: 73 years) were included in the study. One atlas was selected to serve as a patient, and the other 11 atlases were registered separately onto this “patient” using deformable image registration. This procedure was repeated for every atlas as a patient. Next, the Dice and Jaccard similarity indices and inclusion index were calculated for every registered BP with the original gold standard BP. In parallel, differences in several morphometric parameters that may influence the BP segmentation accuracy were measured for the different atlases. Specific brachial plexus-related CT-visible bony points were used to define the morphometric parameters. Subsequently, correlations between the similarity indices and morphometric parameters were calculated. Results: A clear negative correlation between difference in protraction-retraction distance and the similarity indices was observed (mean Pearson correlation coefficient = −0.546). All of the other investigated Pearson correlation coefficients were weak. Conclusions: Differences in the shoulder protraction-retraction position between the atlas and the patient during planning CT influence the BP autosegmentation accuracy. A greater difference in the protraction-retraction distance between the atlas and the patient reduces the accuracy of the BP automatic segmentation result.
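
    The Dice and Jaccard similarity indices used to compare each registered brachial plexus with the gold standard are simple overlap measures between binary masks; a minimal sketch (with illustrative array names) is given below.

        import numpy as np

        def dice_jaccard(mask_a, mask_b):
            """Dice and Jaccard overlap between two binary segmentation masks."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            intersection = np.logical_and(a, b).sum()
            union = np.logical_or(a, b).sum()
            if union == 0:               # both masks empty: define the overlap as perfect
                return 1.0, 1.0
            dice = 2.0 * intersection / (a.sum() + b.sum())
            jaccard = intersection / union
            return dice, jaccard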

  9. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Moles-Valls, R

    2008-01-01

    The ATLAS experiment is equipped with a tracking system for charged particles built on two technologies: silicon and drift tube based detectors. These kinds of detectors compose the ATLAS Inner Detector (ID). The alignment of the ATLAS ID tracking system requires the determination of almost 36000 degrees of freedom. From the tracking point of view, the alignment parameters should be known to a few microns precision. This permits to attain optimal measurements of the parameters of the charged particles trajectories, thus enabling ATLAS to achieve its physics goals. The implementation of the alignment software, its framework and the data flow will be discussed. Special attention will be paid to the recent challenges where large scale computing simulation of the ATLAS detector has been performed, mimicking the ATLAS operation, which is going to be very important for the LHC startup scenario. The alignment result for several challenges (real cosmic ray data taking and computing system commissioning) will be...

  10. Reliability Engineering for ATLAS Petascale Data Processing on the Grid

    CERN Document Server

    Golubkov, D V; The ATLAS collaboration; Vaniachine, A V

    2012-01-01

    The ATLAS detector is in its third year of continuous LHC running taking data for physics analysis. A starting point for ATLAS physics analysis is reconstruction of the raw data. First-pass processing takes place shortly after data taking, followed later by reprocessing of the raw data with updated software and calibrations to improve the quality of the reconstructed data for physics analysis. Data reprocessing involves a significant commitment of computing resources and is conducted on the Grid. The reconstruction of one petabyte of ATLAS data with 1B collision events from the LHC takes about three million core-hours. Petascale data processing on the Grid involves millions of data processing jobs. At such scales, the reprocessing must handle a continuous stream of failures. Automatic job resubmission recovers transient failures at the cost of CPU time used by the failed jobs. Orchestrating ATLAS data processing applications to ensure efficient usage of tens of thousands of CPU-cores, reliability engineering ...
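
    The automatic resubmission strategy described above, recovering transient failures at the cost of the CPU time consumed by the failed attempts, can be pictured as a simple retry loop. The submission callable and the failure classification below are placeholders, not the actual PanDA or Grid interfaces.

        import time

        TRANSIENT = {"worker-node-crash", "storage-timeout", "network-error"}

        def run_with_resubmission(submit_job, max_attempts=3, backoff_s=60):
            """Resubmit a job while its failures look transient.

            submit_job is a placeholder callable returning (status, error_kind);
            a real system would submit and monitor a Grid job here."""
            wasted_attempts = 0
            for attempt in range(1, max_attempts + 1):
                status, error_kind = submit_job()
                if status == "done":
                    return status, wasted_attempts
                wasted_attempts += 1                # CPU time of the failed attempt is lost
                if error_kind not in TRANSIENT:
                    break                           # permanent failure: stop and flag for experts
                time.sleep(backoff_s)               # transient failure: wait, then resubmit
            return "failed", wasted_attempts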

  11. The last ATLAS overview week now available on Web Lectures

    CERN Multimedia

    Jeremy Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the lectures and send us a note at wlap@umich.edu to tell us what you think. The newly available WLAP items relating to ATLAS are the following: ATLAS Week Plenary, CERN, 2-3 October 2006 All previous WLAP lectures are also available on the web.

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  13. ATLAS brochure (Norwegian version)

    CERN Multimedia

    Lefevre, C

    2009-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  14. The ATLAS tile calorimeter

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    Louis Rose-Dulcina, a technician from the ATLAS collaboration, works on the ATLAS tile calorimeter. Special manufacturing techniques were developed to mass produce the thousands of elements in this detector. Tile detectors are made in a sandwich-like structure where these scintillator tiles are placed between metal sheets.

  15. The ATLAS pixel detector

    OpenAIRE

    Cristinziani, M.

    2007-01-01

    After a ten-year planning and construction phase, the ATLAS pixel detector is nearing its completion and is scheduled to be integrated into the ATLAS detector to take data with the first LHC collisions in 2007. An overview of the construction is presented with particular emphasis on some of the major and most recent problems encountered and solved.

  16. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    La Givrine near St Cergue: cross-country skiing and fondue at Basse Ruche with M Nordberg, P Jenni, M Nessi, F Gianotti and Co. ATLAS Management fondue dinner, reviewing the state of play of the experiment. Many fun scenes from cross-country skiing; after 41 minutes of the film the fondue dinner starts in a nice chalet, with many people working on the ATLAS experiment.

  17. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    Budker Nuclear Physics Institute, Novosibirsk Sequence 1 Shots of aircraft factory where machining for ATLAS is done Shots of aircraft Work on components for ATLAS big wheel Discussions between Tikhonov and Nordberg in workshop Sequence 2 Shots of downtown Novosibirsk, including little church which is mid-point of Russian Federation Sequence 3 Interview of Yuri Tikhonov by Andrew Millington

  18. ATLAS Colouring Book

    CERN Multimedia

    Anthony, Katarina

    2016-01-01

    The ATLAS Experiment Colouring Book is a free-to-download educational book, ideal for kids aged 5-9. It aims to introduce children to the field of High-Energy Physics, as well as the work being carried out by the ATLAS Collaboration.

  19. ATLAS people can run!

    CERN Multimedia

    Claudia Marcelloni de Oliveira; Pauline Gagnon

    It must be all the training we are getting every day, running around trying to get everything ready for the start of the LHC next year. This year, the ATLAS runners were in fine form and came in force. Nine ATLAS teams signed up for the 37th Annual CERN Relay Race with six runners per team. Under a blasting sun on Wednesday 23rd May 2007, each team covered the distances of 1000m, 800m, 800m, 500m, 500m and 300m taking the runners around the whole Meyrin site, hills included. A small reception took place in the ATLAS secretariat a week later to award the ATLAS Cup to the best ATLAS team. For the details on this complex calculation which takes into account the age of each runner, their gender and the color of their shoes, see the July 2006 issue of ATLAS e-news. The ATLAS Running Athena Team, the only all-women team enrolled this year, won the much coveted ATLAS Cup for the second year in a row. In fact, they are so good that Peter Schmid and Patrick Fassnacht are wondering about reducing the women's bonus in...

  20. ATLAS-Hadronic Calorimeter

    CERN Multimedia

    2003-01-01

    Hall 180: work on the Hadronic Calorimeter. The ATLAS hadronic tile calorimeter. The Tile Calorimeter, which constitutes the central section of the ATLAS hadronic calorimeter, is a non-compensating sampling device made of iron and scintillating tiles. (IEEE Trans. Nucl. Sci. 53 (2006) 1275-81)

  1. A Slice of ATLAS

    CERN Multimedia

    2004-01-01

    An entire section of the ATLAS detector is being assembled at Prévessin. Since May the components have been tested using a beam from the SPS, giving the ATLAS team valuable experience of operating the detector as well as an opportunity to debug the system.

  2. ATLAS Brochure (english version)

    CERN Multimedia

    2004-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  3. ATLAS brochure (German version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  4. ATLAS Brochure (English version)

    CERN Multimedia

    Lefevre, Christiane

    2011-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  5. ATLAS brochure (Danish version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  6. ATLAS brochure (Italian version)

    CERN Multimedia

    Lefevre, C

    2010-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  7. ATLAS brochure (French version)

    CERN Multimedia

    Lefevre, C

    2012-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  8. ATLAS brochure (Catalan version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  9. ATLAS Brochure (german version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  10. ATLAS brochure (Polish version)

    CERN Multimedia

    Lefevre, C

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  11. ATLAS Brochure (english version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  12. ATLAS Brochure (french version)

    CERN Multimedia

    Marcastel, F

    2007-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  13. ATLAS rewards industry

    CERN Multimedia

    Maximilien Brice

    2006-01-01

    For contributing vital pieces to the ATLAS puzzle, three companies were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Picture 30: representatives of the three award-winning companies after the ceremony

  14. ATLAS Thesis Awards 2015

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Winners of the ATLAS Thesis Award were presented with certificates and glass cubes during a ceremony on Thursday 25 February. The winners also presented their work in front of members of the ATLAS Collaboration. Winners: Javier Montejo Berlingen, Barcelona (Spain), Ruth Pöttgen, Mainz (Germany), Nils Ruthmann, Freiburg (Germany), and Steven Schramm, Toronto (Canada).

  15. ATLAS Visitors Centre

    CERN Multimedia

    claudia Marcelloni

    2009-01-01

    ATLAS Visitors Centre has opened its shiny new doors to the public. Officially launched on Monday February 23rd, 2009, the permanent exhibition at Point 1 was conceived as a tour resource for ATLAS guides, and as a way to preserve the public’s opportunity to get a close-up look at the experiment in action when the cavern is sealed.

  16. ATLAS brochure (Spanish version)

    CERN Multimedia

    Lefevre, C

    2008-01-01

    ATLAS is the largest detector at the LHC, the most powerful particle accelerator in the world, which will start up in 2008. ATLAS is a multi-purpose detector, designed to throw light on fundamental questions such as the origin of mass and the nature of the Universe's dark matter.

  17. Migration of ATLAS PanDA to CERN

    Science.gov (United States)

    Stewart, Graeme Andrew; Klimentov, Alexei; Koblitz, Birger; Lamanna, Massimo; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; Emanuel De Castro Faria Salgado, Pedro; Wenaus, Torre

    2010-04-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  18. Dear ATLAS colleagues,

    CERN Multimedia

    PH Department

    2008-01-01

    We are collecting old pairs of glasses to take out to Mali, where they can be re-used by people there. The price for a pair of glasses can often exceed 3 months salary, so they are prohibitively expensive for many people. If you have any old spectacles you can donate, please put them in the special box in the ATLAS secretariat, bldg.40-4-D01 before the Christmas closure on 19 December so we can take them with us when we leave for Africa at the end of the month. (more details in ATLAS e-news edition of 29 September 2008: http://atlas-service-enews.web.cern.ch/atlas-service-enews/news/news_mali.php) many thanks! Katharine Leney co-driver of the ATLAS car on the Charity Run to Mali

  19. ATLAS Virtual Visits

    CERN Document Server

    Goldfarb, Steven; The ATLAS collaboration

    2015-01-01

    ATLAS Virtual Visits is a project initiated in 2011 for the Education & Outreach program of the ATLAS Experiment at CERN. Its goal is to promote public appreciation of the LHC physics program and particle physics, in general, through direct dialogue between ATLAS physicists and remote audiences. A Virtual Visit is an IP-based videoconference, coupled with a public webcast and video recording, between ATLAS physicists and remote locations around the world, that typically include high school or university classrooms, Masterclasses, science fairs, or other special events, usually hosted by collaboration members. Over the past two years, more than 10,000 people, from all of the world’s continents, have actively participated in ATLAS Virtual Visits, with many more enjoying the experience from the publicly available webcasts and recordings. We present an overview of our experience and discuss potential development for the future.

  20. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    Mortensen, Niels Gylling; Said Said, Usama; Badger, Jake

    2006-01-01

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricity-producing wind turbine installations. The regional wind...... climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about...... 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  1. Wind Atlas for Egypt

    DEFF Research Database (Denmark)

    The results of a comprehensive, 8-year wind resource assessment programme in Egypt are presented. The objective has been to provide reliable and accurate wind atlas data sets for evaluating the potential wind power output from large electricity-producing wind turbine installations. The regional wind...... climates of Egypt have been determined by two independent methods: a traditional wind atlas based on observations from more than 30 stations all over Egypt, and a numerical wind atlas based on long-term reanalysis data and a mesoscale model (KAMM). The mean absolute error comparing the two methods is about...... 10% for two large-scale KAMM domains covering all of Egypt, and typically about 5% for several smaller-scale regional domains. The numerical wind atlas covers all of Egypt, whereas the meteorological stations are concentrated in six regions. The Wind Atlas for Egypt represents a significant step...

  2. ATLAS' major cooling project

    CERN Multimedia

    2005-01-01

    In 2005, a considerable effort was put into commissioning the various units of ATLAS' complex cryogenic system. This is in preparation for the imminent cooling of some of the largest components of the detector in their final underground configuration. The liquid helium and nitrogen ATLAS refrigerators in USA 15. Cryogenics plays a vital role in operating massive detectors such as ATLAS. In many ways the liquefied argon, nitrogen and helium are the life-blood of the detector. ATLAS could not function without cryogens that will be constantly pumped via proximity systems to the superconducting magnets and subdetectors. In recent weeks compressors at the surface and underground refrigerators, dewars, pumps, linkages and all manner of other components related to the cryogenic system have been tested and commissioned. Fifty metres underground, the helium and nitrogen refrigerators, installed inside the service cavern, are an important part of the ATLAS cryogenic system. Two independent helium refrigerators ...

  3. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance under high-occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  4. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors' performance under high-occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  5. ATLAS Forward Detectors and Physics

    CERN Document Server

    Soni, N

    2010-01-01

    In this communication I describe the ATLAS forward physics program and the detectors, LUCID, ZDC and ALFA that have been designed to meet this experimental challenge. In addition to their primary role in the determination of ATLAS luminosity these detectors - in conjunction with the main ATLAS detector - will be used to study soft QCD and diffractive physics in the initial low luminosity phase of ATLAS running. Finally, I will briefly describe the ATLAS Forward Proton (AFP) project that currently represents the future of the ATLAS forward physics program.

  6. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2015-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...
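
    The idea of a task-level workflow built from chained processing steps can be sketched schematically; the step names follow the Monte Carlo chain listed above, but the classes and their behaviour are purely illustrative and are not the ProdSys2 interfaces.

        from dataclasses import dataclass, field

        @dataclass
        class Step:
            """One processing step of a workflow; a real step would submit many Grid jobs."""
            name: str

            def run(self, dataset):
                # Here each step just tags the dataset name with the step it applied.
                return f"{dataset}.{self.name}"

        @dataclass
        class Workflow:
            steps: list = field(default_factory=list)

            def run(self, dataset):
                for step in self.steps:
                    dataset = step.run(dataset)     # each step consumes the previous output
                return dataset

        mc_chain = Workflow([Step("evgen"), Step("simul"), Step("digi"),
                             Step("trigger"), Step("recon"), Step("ntuple")])
        print(mc_chain.run("mc.sample"))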

  7. Multilevel Workflow System in the ATLAS Experiment

    CERN Document Server

    Borodin, M; The ATLAS collaboration; De, K; Golubkov, D; Klimentov, A; Maeno, T; Vaniachine, A

    2014-01-01

    The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. In order to manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard-processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager - ProdSys2 - generates actual workflow tasks and their jobs...

  8. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    Science.gov (United States)

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance as state-of-the-art multi-atlas segmentation algorithms without using non-local information.
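
    The MLF recipe above (project the target into a low-dimensional space, pick locally appropriate training examples, and fuse their AdaBoost learners to refine a weak initial segmentation) can be sketched in a few lines of Python. This is a deliberately simplified toy on synthetic one-dimensional "images" using scikit-learn; the feature construction, neighbour selection and fusion rule are assumptions made for illustration, not the authors' implementation.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)

        # Toy training data: flattened "images" and their multi-atlas labels (binary here, by assumption).
        n_train, n_vox = 50, 256
        train_imgs = rng.normal(size=(n_train, n_vox))
        train_labels = (train_imgs > 0).astype(int)          # stand-in for multi-atlas segmentations

        # (1) Low-dimensional representation used to pick locally appropriate example images.
        pca = PCA(n_components=5).fit(train_imgs)
        train_codes = pca.transform(train_imgs)

        # (2) One AdaBoost learner per training image, mapping intensity plus a weak
        #     initial segmentation (a simple threshold) to the multi-atlas label.
        def features(img):
            weak = (img > img.mean()).astype(float)          # weak initial segmentation
            return np.column_stack([img, weak])

        learners = [AdaBoostClassifier(n_estimators=10).fit(features(img), lab)
                    for img, lab in zip(train_imgs, train_labels)]

        # To segment a new target: project, select the k nearest training images, fuse learner votes.
        def mlf_segment(target, k=5):
            code = pca.transform(target[None, :])
            nearest = np.argsort(np.linalg.norm(train_codes - code, axis=1))[:k]
            votes = np.mean([learners[i].predict(features(target)) for i in nearest], axis=0)
            return (votes >= 0.5).astype(int)

        print(mlf_segment(rng.normal(size=n_vox))[:20])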

  9. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    Science.gov (United States)

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance as state-of-the-art multi-atlas segmentation algorithms without using non-local information. PMID:26363845

  10. ATLAS Data Challenge 1

    CERN Document Server

    DC1 TaskForce

    2003-01-01

    The ATLAS Collaboration at CERN is preparing for the data taking and analysis at LHC that will start in 2007. Therefore, in 2002 a series of Data Challenges (DC's) was started whose goals are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. A major feature of the first Data Challenge (DC1) was the preparation and the deployment of the software required for the production of large event samples for the High Level Trigger and Physics communities, and the production of those large data samples as a worldwide distributed activity. It should be noted that it was not an option to "run everything at CERN" even if we had wanted to; the resources were not available at CERN to carry out the production on a reasonable time-scale. We were therefore faced with the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, the benefits o...

  11. EnviroAtlas - Memphis, TN - EnviroAtlas Community Boundary

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset shows the boundary of the Memphis, TN EnviroAtlas Community. It represents the outside edge of all the block groups included in the...

  12. Ceremony for ATLAS cavern

    CERN Multimedia

    2003-01-01

    Wednesday 4 June will be a special day for CERN. The President of the Swiss Confederation, Pascal Couchepin, will officially inaugurate the huge ATLAS cavern now that the civil engineering works have ended. The inauguration ceremony will be held in the ATLAS surface building, with speeches by Pascal Couchepin and CERN, ATLAS and civil engineering personalities. This ceremony will be Webcast live. To access the Webcast on 4 June at 18h00 go to CERN Intranet home page or the following address : http://webcast.cern.ch/live.php

  13. ATLAS Inner Detector Alignment

    CERN Document Server

    Bocci, A

    2008-01-01

    The ATLAS experiment is a multi-purpose particle detector that will study high-energy particle collisions produced by the Large Hadron Collider at CERN. In order to achieve its physics goals, ATLAS tracking requires that the positions of the silicon detector elements be known to a precision better than 10 μm. Several track-based alignment algorithms have been developed for the Inner Detector. An extensive validation has been performed with simulated events and real data from ATLAS. Results from this validation are reported in this paper.

  14. ATLAS data sonification : a new interface for musical expression

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This poster tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed. It is a partnership between the ATLAS collaboration and the MIT multimedia lab.

  15. ATLAS Event - First Splash of Particles in ATLAS

    CERN Multimedia

    ATLAS Outreach

    2008-01-01

    A simulated event. September 10, 2008 - The ATLAS detector lit up as a flood of particles traversed the detector when the beam was occasionally directed at a target near ATLAS. This allowed ATLAS physicists to study how well the various components of the detector were functioning in preparation for the forthcoming collisions. The first ATLAS data recorded on September 10, 2008 is seen here. Running time 24 seconds

  16. Recent ATLAS Articles on WLAP

    CERN Multimedia

    Goldfarb, S.

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project is a system for the archiving and publishing of multimedia presentations, using the Web as medium. We list here newly available WLAP items relating to ATLAS: June ATLAS Plenary Meeting Tutorial on Physics EDM and Tools (June) Freiburg Overview Week Ketevi Assamagan's Tutorial on Analysis Tools Click here to browse WLAP for all ATLAS lectures.

  17. Recent results from ATLAS experiment

    CERN Document Server

    Smirnov, Sergei; The ATLAS collaboration

    2016-01-01

    The second LHC run started in 2015 with a pp centre-of-mass collision energy of 13 TeV, and ATLAS had taken more than 20 fb-1 of data at the new energy by summer 2016. In this talk, an overview is given of the ATLAS data taking and the improvements made to the ATLAS experiment during the two-year shutdown in 2013/2014. Selected new results from recent ATLAS data analyses are also presented.

  18. Benefits and performance of ATLAS approaches to utilizing opportunistic resources

    CERN Document Server

    Filip\\v{c}i\\v{c}, Andrej; The ATLAS collaboration

    2016-01-01

    ATLAS has been extensively exploring possibilities of using computing resources extending beyond conventional grid sites in the WLCG fabric to deliver as many computing cycles as possible and thereby enhance the significance of the Monte-Carlo samples to deliver better physics results. The difficulties of using such opportunistic resources come from architectural differences such as unavailability of grid services, the absence of network connectivity on worker nodes or inability to use standard authorization protocols. Nevertheless, ATLAS has been extremely successful in running production payloads on a variety of sites, thanks largely to the job execution workflow design in which the job assignment, input data provisioning and execution steps are clearly separated and can be offloaded to custom services. To transparently include the opportunistic sites in the ATLAS central production system, several models with supporting services have been developed to mimic the functionality of a full WLCG site. Some are e...

  19. Grid site testing for ATLAS with HammerCloud

    International Nuclear Information System (INIS)

    With the exponential growth of LHC (Large Hadron Collider) data in 2012, distributed computing has become the established way to analyze collider data. The ATLAS grid infrastructure includes more than 130 sites worldwide, ranging from large national computing centers to smaller university clusters. HammerCloud was previously introduced with the goals of enabling virtual organisations (VO) and site-administrators to run validation tests of the site and software infrastructure in an automated or on-demand manner. The HammerCloud infrastructure has been constantly improved to support the addition of new test workflows. These new workflows comprise e.g. tests of the ATLAS nightly build system, ATLAS Monte Carlo production system, XRootD federation (FAX) and new site stress test workflows. We report on the development, optimization and results of the various components in the HammerCloud framework.
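
    The basic HammerCloud idea, submitting a standard validation workflow to each site and excluding sites whose tests fail, can be sketched as below. The test function, the number of test jobs and the exclusion threshold are illustrative assumptions, not the actual HammerCloud policy.

        import random
        from typing import Callable, Dict, List

        def validate_sites(sites: List[str],
                           run_test: Callable[[str], bool],
                           n_tests: int = 5,
                           min_success_rate: float = 0.8) -> Dict[str, str]:
            """Run the same test workflow n_tests times per site and decide its status."""
            status = {}
            for site in sites:
                successes = sum(run_test(site) for _ in range(n_tests))
                status[site] = "online" if successes / n_tests >= min_success_rate else "excluded"
            return status

        # Illustrative stand-in for a real test job (e.g. a short analysis payload).
        def fake_test(site: str) -> bool:
            failure_prob = {"SITE_A": 0.05, "SITE_B": 0.6}.get(site, 0.1)
            return random.random() > failure_prob

        random.seed(42)
        print(validate_sites(["SITE_A", "SITE_B", "SITE_C"], fake_test))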

  20. Evolution of User Analysis on the Grid in ATLAS

    CERN Document Server

    Legger, Federica; The ATLAS collaboration

    2016-01-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and the capability of the system to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Based on the experience from the first run of the LHC, substantial improvements to the ATLAS computing system have been made to optimize both production and analysis workflows. These include the re-design of the production and data management systems, a new analysis data format and event model, and the development of common reduction and analysis frameworks. The impact of such changes on the distributed analysis system is evaluated. More than 100 mill...

  1. A digital rat atlas of sectional anatomy

    Science.gov (United States)

    Yu, Li; Liu, Qian; Bai, Xueling; Liao, Yinping; Luo, Qingming; Gong, Hui

    2006-09-01

    This paper describes a digital rat atlas of sectional anatomy made by milling. Two healthy Sprague-Dawley (SD) rats weighing 160-180 g were used for the generation of this atlas. The rats were depilated completely and then euthanized by CO2. One was prepared via vascular perfusion; the other was directly frozen at -85 °C for over 24 hours. After that, the frozen specimens were transferred into iron molds for embedding. A 3% gelatin solution colored blue was used to fill the molds, which were then frozen at -85 °C for one or two days. The frozen specimen blocks were subsequently sectioned on the cryosection-milling machine in a plane oriented approximately transverse to the long axis of the body. The surface of each specimen block was imaged by a scanner and digitized into a 4,600 x 2,580 x 24 bit array on a computer. Finally, 9,475 sectional images (arterial vessels not perfused) and 1,646 sectional images (arterial vessels perfused) were captured, bringing the volume of the digital atlas to 369.35 Gbyte. This digital atlas covers the whole rat, and the rat arterial vessels are also presented. We have reconstructed this atlas. The information from both the two-dimensional (2-D) images of serial sections and the three-dimensional (3-D) surface model shows that the digital rat atlas we constructed is of high quality. This work lays the foundation for deeper study of the digital rat.

  2. ATLAS TV PROJECT

    CERN Multimedia

    OMNI communication

    2006-01-01

    CERN, Building 40. Interview with theorist Mr. Philip Hinchliffe (Berkeley) as well as an interview with his wife Mrs. Hinchliffe, who is also Physics Department head at Berkeley. They are both working on the ATLAS Experiment.

  3. California Ocean Uses Atlas

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset is a result of the California Ocean Uses Atlas Project: a collaboration between NOAA's National Marine Protected Areas Center and Marine Conservation...

  4. Lunar Sample Atlas

    Data.gov (United States)

    National Aeronautics and Space Administration — The Lunar Sample Atlas provides pictures of the Apollo samples taken in the Lunar Sample Laboratory, full-color views of the samples in microscopic thin-sections,...

  5. The Latest from ATLAS

    CERN Multimedia

    2009-01-01

    Since November 2008, ATLAS has undertaken detailed maintenance, consolidation and repair work on the detector (see Bulletin of 20 July 2009). Today, the fraction of the detector that is operational has increased compared to last year: less than 1% of dead channels for most of the sub-systems. "We are going to start taking data this year with a detector which is even more efficient than it was last year," agrees ATLAS Spokesperson, Fabiola Gianotti. By mid-September the detector was fully closed again, and the cavern sealed. The magnet system has been operated at nominal current for extensive periods over recent months. Once the cavern was sealed, ATLAS began two weeks of combined running. Right now, subsystems are joining the run incrementally until the point where the whole detector is integrated and running as one. In the words of ATLAS Technical Coordinator, Marzio Nessi: "Now we really start physics." In parallel, the analysis ...

  6. PeptideAtlas

    Data.gov (United States)

    U.S. Department of Health & Human Services — PeptideAtlas is a multi-organism, publicly accessible compendium of peptides identified in a large set of tandem mass spectrometry proteomics experiments. Mass...

  7. ATLAS Cavern baseplate

    CERN Multimedia

    It-UDS-Audiovisual Services

    2002-01-01

    This video shows the incredible amount of iron used for the ATLAS cavern. Please look at the related links and videos concerning the civil engineering, where you can see the cavern excavation work in even more detail.

  8. Printed circuit for ATLAS

    CERN Multimedia

    Laurent Guiraud

    1999-01-01

    A printed circuit board made by scientists in the ATLAS collaboration for the transition radiation tracker (TRT). This will read data produced when a high-energy particle crosses the boundary between two materials with different electrical properties.

  9. ATLAS DAQ Configuration Databases

    Institute of Scientific and Technical Information of China (English)

    I. Alexandrov; A. Amorim; et al.

    2001-01-01

    The configuration databases are an important part of the Trigger/DAQ system of the future ATLAS experiment. This paper describes their current status, giving details of the architecture, implementation, test results and plans for future work.

  10. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    ATLAS Physics Workshop at the University of Roma Tre, held from Monday 06 June 2005 to Saturday 11 June 2005. Experts setting up the workshop; posters; people milling; shots of Peter Jenni's introduction; many audience shots; sequences from various talks.

  11. General Dynamics Atlas family

    Science.gov (United States)

    Oates, James

    Developments concerning the Atlas family of launch vehicles over the last three or four years are summarized. Attention is given to the center of gravity, load factors, acoustics, pyroshock, low-frequency sinusoidal vibration, and high-frequency random vibration.

  12. ATLAS FTK: Fast Track Trigger

    CERN Document Server

    Volpi, Guido; The ATLAS collaboration

    2015-01-01

    An overview of the ATLAS Fast Tracker processor is presented, reporting the design of the system, its expected performance, and the integration status. The next LHC runs, with a significant increase in instantaneous luminosity, will pose a big challenge to the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events, despite the increase in multiple p-p collisions per bunch crossing (pile-up). In order to increase the use of tracks within the High Level Trigger (HLT), the ATLAS experiment has planned the installation of a hardware processor dedicated to tracking: the Fast TracKer (FTK) processor. The FTK is designed to perform full scan track reconstruction at every Level-1 accept. To achieve this goal, the FTK uses a fully parallel architecture, with algorithms designed to exploit the computing power of custom VLSI chips, the Associative Memory, as well as modern FPGAs. The FT...

  13. ATLAS Overview Week at Brookhaven

    CERN Multimedia

    Pilcher, J

    Over 200 ATLAS participants gathered at Brookhaven National Laboratory during the first week of June for our annual overview week. Some system communities arrived early and held meetings on Saturday and Sunday, and the detector interface group (DIG) and Technical Coordination also took advantage of the time to discuss issues of interest for all detector systems. Sunday was also marked by a workshop on the possibilities for heavy ion physics with ATLAS. Beginning on Monday, and for the rest of the week, sessions were held in common in the well equipped Berkner Hall auditorium complex. Laptop computers became the norm for presentations and a wireless network kept laptop owners well connected. Most lunches and dinners were held on the lawn outside Berkner Hall. The weather was very cooperative and it was an extremely pleasant setting. This picture shows most of the participants from a view on the roof of Berkner Hall. Technical Coordination and Integration issues started the reports on Monday and became a...

  14. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    1999-01-01

    Different phases of realisation at Point 1, the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are ever so busy finishing the different infrastructures for ATLAS. Real underground video. The film has the original working sound.

  15. Budker INP in ATLAS

    CERN Multimedia

    2001-01-01

    The Novosibirsk group has proposed a new design for the ATLAS liquid argon electromagnetic end-cap calorimeter with a constant thickness of absorber plates. This design has significant advantages compared to the one in the Technical Proposal and it has been accepted by the ATLAS Collaboration. The Novosibirsk group is responsible for the fabrication of the precision aluminium structure for the e.m. end-cap calorimeter.

  16. ATLAS physics results

    CERN Document Server

    AUTHOR|(CDS)2074312

    2015-01-01

    The ATLAS experiment at the Large Hadron Collider at CERN has been successfully taking data since the end of 2009 in proton-proton collisions at centre-of-mass energies of 7 and 8 TeV, and in heavy ion collisions. In these lectures, some of the most recent ATLAS results will be given on Standard Model measurements, the discovery of the Higgs boson, searches for supersymmetry and exotics and on heavy-ion results.

  17. ATLAS Transition Radiation Tracker

    CERN Multimedia

    ATLAS Outreach

    2006-01-01

    This colorful 3D animation is an excerpt from the film "ATLAS-Episode II, The Particles Strike Back", shot with a bug's-eye view of the inside of the detector. The viewer is taken on a tour of the inner workings of the transition radiation tracker within the ATLAS detector. Subjects covered include what the tracker is used to measure, its structure, what happens when particles pass through the tracker, and how it distinguishes between different types of particles within it.

  18. The ATLAS electromagnetic calorimeter

    CERN Document Server

    Maximilien Brice

    2003-01-01

    Michel Mathieu, a technician for the ATLAS collaboration, is cabling the ATLAS electromagnetic calorimeter's first end-cap, before insertion into its cryostat. Millions of wires are connected to the electromagnetic calorimeter on this end-cap that must be carefully fed out from the detector so that data can be read out. Every element on the detector will be attached to one of these wires so that a full digital map of the end-cap can be recreated.

  19. ATLAS Jet Energy Scale

    OpenAIRE

    Schouten, D.; Tanasijczuk, A.; Vetterli, M. (Department of Physics, Simon Fraser University, Burnaby, BC, Canada); for the ATLAS Collaboration

    2012-01-01

    Jets originating from the fragmentation of quarks and gluons are the most common, and most complicated, final-state objects produced at hadron colliders. A precise knowledge of their energy calibration is therefore of great importance for experiments at the Large Hadron Collider at CERN, yet it is very difficult to ascertain. We present in-situ techniques and results for the jet energy scale at ATLAS using recent collision data. ATLAS has demonstrated an understanding of the necessary jet energy cor...

  20. 10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

    CERN Multimedia

    Jean-Claude Gadmer

    2013-01-01

    10 September 2013 - Italian Minister for Economic Development F. Zanonato visiting the ATLAS cavern with Collaboration Spokesperson D. Charlton and Italian scientists F. Gianotti and A. Di Ciaccio; signing the guest book with CERN Director-General R. Heuer and Director for Research and Scientific Computing S. Bertolucci; in the LHC tunnel with S. Bertolucci, Technology Deputy Department Head L. Rossi and Engineering Department Head R. Saban; visiting CMS cavern with Scientists G. Rolandi and P. Checchia.

  1. 18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

    CERN Multimedia

    Samuel Morier-Genoud

    2012-01-01

    18 December 2012 -Portuguese President of FCT M. Seabra visiting the Computing Centre with IT Department Head F. Hemmer, ATLAS experimental area with Collaboration Spokesperson F. Gianotti and A. Henriques Correia, in the LHC tunnel at Point 2 and CMS experimental area with Deputy Spokesperson J. Varela, signing an administrative agreement with Director-General R. Heuer; LIP President J. M. Gago and Delegate to CERN Council G. Barreia present.

  2. An ATLAS Virtual Visit connects physicists at the Town Square of Cracow and physicists of the LHC Experiment in the ATLAS control room; special participation of CERN's General Director, Rolf Heuer and the Director for Research and Scientific Computing, Sergio Bertolucci.

    CERN Multimedia

    2012-01-01

    The 12th Festival of Science, "Theory-knowledge-experience...", will be located on the traditional Main Square, which is visited by thousands of citizens and tourists. The Institute of Nuclear Physics, as usual, participates in this annual event. Our visitors will learn the secrets of the CERN experiments at the Large Hadron Collider - ATLAS, LHCb, ALICE, CMS - and find out more about the Higgs particle, antimatter and the quark-gluon plasma, guided by our scientists and PhD students. One of the attractions will be an ATLAS Control Room Virtual Visit. Visitors will have the opportunity to see how ATLAS is controlled and operated to collect its exciting data and to ask questions of the scientists and engineers involved in the LHC programme at CERN. The Institute of Nuclear Physics has also prepared several interactive demonstrations of Atomic Force Microscopy, Magnetic Resonance, Hadron Therapy and Crystal Physics.

  3. ATLAS data sonification: a new interface for musical expression and public interaction

    CERN Document Server

    Hill, Ewan; The ATLAS collaboration

    2016-01-01

    The goal of this project is to transform ATLAS data into sound and explore how ATLAS audio can be a source of inspiration and education for musicians and for the general public. Real-time ATLAS data is sonified and streamed as music on a dedicated website. Listeners may be motivated to learn more about the ATLAS experiment and composers have the opportunity to explore the physics in the collision data through a new medium. The ATLAS collaboration has shared its expertise and access to the live data stream from which the live event displays are generated. This talk tells the story of a long journey from the hallways of CERN where the project collaboration began to the halls of the Montreux Jazz Festival where harmonies were performed. The mapping of the data to sound will be outlined and interactions with musicians and contributions to conferences dedicated to human-computer interaction will also be discussed.

  4. ATLAS Facility Description Report

    International Nuclear Information System (INIS)

    A thermal-hydraulic integral effect test facility, ATLAS (Advanced Thermal-hydraulic Test Loop for Accident Simulation), has been constructed at KAERI (Korea Atomic Energy Research Institute). The ATLAS has the same two-loop features as the APR1400 and is designed according to the well-known scaling method suggested by Ishii and Kataoka to simulate the various test scenarios as realistically as possible. It is a half-height and 1/288-volume scaled test facility with respect to the APR1400. The fluid system of the ATLAS consists of a primary system, a secondary system, a safety injection system, a break simulating system, a containment simulating system, and auxiliary systems. The primary system includes a reactor vessel, two hot legs, four cold legs, a pressurizer, four reactor coolant pumps, and two steam generators. The secondary system of the ATLAS is simplified to be of a circulating loop-type. Most of the safety injection features of the APR1400 and the OPR1000 are incorporated into the safety injection system of the ATLAS. In the ATLAS test facility, about 1300 instruments are installed to precisely investigate the thermal-hydraulic behavior in simulations of the various test scenarios. This report describes the scaling methodology, the geometric data of the individual components, and the specifications and locations of the instrumentation in detail.

  5. IT Infrastructure Design and Implementation Considerations for the ATLAS TDAQ System

    CERN Document Server

    Dobson, M; The ATLAS collaboration; Caramarcu, C; Dumitru, I; Valsan, L; Darlea, G L; Bujor, F; Bogdanchikov, A G; Korol, A A; Zaytsev, A S; Ballestrero, S

    2013-01-01

    This paper gives a thorough overview of the activities of the ATLAS TDAQ SysAdmin group, which administers the TDAQ computing environment supporting the Front End detector hardware, Data Flow, Event Filter and other subsystems of the ATLAS detector operating on the LHC accelerator at CERN. The current installation consists of approximately 1500 netbooted nodes managed by more than 60 dedicated servers, a high-performance centralized storage system, about 50 multi-screen user interface systems installed in the control rooms, and various hardware and critical service monitoring machines. In the final configuration, the online computer farm will be capable of hosting tens of thousands of applications running simultaneously. The ATLAS TDAQ computing environment now serves more than 3000 users subdivided into approximately 300 categories corresponding to their roles in the system. The access and role management system is custom built on top of an LDAP schema. The engineering infrastructure of the ATLAS ...

  6. 29 March 2011 - Ninth President of Israel S.Peres welcomed by CERN Director-General R. Heuer who introduces Council President M. Spiro, Director for Accelerators and Technology S. Myers, Head of International Relations F. Pauss, Physics Department Head P. Bloch, Technology Department Head F. Bordry, Human Resources Department Head A.-S. Catherin, Beams Department Head P. Collier, Information Technology Department Head F. Hemmer, Adviser for Israel J. Ellis, Legal Counsel E. Gröniger-Voss, ATLAS Collaboration Spokesperson F. Gianotti, Former ATLAS Collaboration Spokesperson P. Jenni, Weizmann Institute G. Mikenberg, CERN VIP and Protocol Officer W. Korda.

    CERN Multimedia

    Maximilien Brice

    2011-01-01

    During his visit he toured the ATLAS underground experimental area with Giora Mikenberg of the ATLAS collaboration, Weizmann Institute of Sciences and Israeli industrial liaison office, Rolf Heuer, CERN’s director-general, and Fabiola Gianotti, ATLAS spokesperson. The president also visited the CERN computing centre and met Israeli scientists working at CERN.

  7. 24 October 2014 - President of the Republic of Ecuador R. Correa Delgado signing the guest book with Vice President L. Moreno and Director for Research and Scientific Computing S. Bertolucci.

    CERN Multimedia

    Guillaume, Jeanneret

    2014-01-01

    Visiting the ATLAS experimental cavern with Collaboration Spokesperson D. Charlton and ATLAS User F. Monticelli; throughout accompanied by Adviser for Ecuador J. Salicio Diez and Director for Research and Scientific Computing S. Bertolucci.

  8. Recently Published Lectures and Tutorials for ATLAS

    CERN Multimedia

    Goldfarb, S.

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, WLAP, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project. A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please e...

  9. Recently Published Lectures and Tutorials for ATLAS

    CERN Document Server

    J. Herr

    2006-01-01

    As reported in the September 2004 ATLAS eNews, the Web Lecture Archive Project, a collaboration between the University of Michigan and CERN, has developed a synchronized system for recording and publishing educational multimedia presentations, using the Web as medium. The current system, including future developments for the project and the field in general, was recently presented at the CHEP 2006 conference in Mumbai, India. The relevant presentations and papers can be found here: The Web Lecture Archive Project A Web Lecture Capture System with Robotic Speaker Tracking This year, the University of Michigan team has been asked to record and publish all ATLAS Plenary sessions, as well as a large number of Physics and Computing tutorials. A significant amount of this material has already been published and can be accessed via the links below. All lectures can be viewed on any major platform with any common internet browser, either via streaming or local download (for limited bandwidth). Please enjoy the l...

  10. The ATLAS Distributed Data Management System & Databases

    CERN Document Server

    Garonne, V; The ATLAS collaboration; Barisits, M; Beermann, T; Vigne, R; Serfon, C

    2013-01-01

    The ATLAS Distributed Data Management (DDM) System is responsible for the global management of petabytes of high energy physics data. The current system, DQ2, has a critical dependency on Relational Database Management Systems (RDBMS), like Oracle. RDBMS are well-suited to enforcing data integrity in online transaction processing applications; however, concerns have been raised about the scalability of their data warehouse-like workloads. In particular, analysis of archived data or aggregation of transactional data for summary purposes is problematic. Therefore, we have evaluated new approaches to handle vast amounts of data. We have investigated a class of database technologies commonly referred to as NoSQL databases. This includes distributed filesystems, like HDFS, that support parallel execution of computational tasks on distributed data, as well as schema-less approaches via key-value stores, like HBase. In this talk we will describe our use cases in ATLAS, share our experiences with various databases used ...

  11. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A

    2013-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  12. The ATLAS Data Management Software Engineering Process

    CERN Document Server

    Lassnig, M; The ATLAS collaboration; Stewart, G A; Barisits, M; Beermann, T; Vigne, R; Serfon, C; Goossens, L; Nairz, A; Molfetas, A

    2014-01-01

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also hi...

  13. EnviroAtlas - Metrics for Austin, TX

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://enviroatlas.epa.gov/EnviroAtlas). The layers in this...

  14. ATLAS Offline Data Quality Monitoring

    CERN Document Server

    Adelman, J; Boelaert, N; D'Onofrio, M; Frost, J A; Guyot, C; Hauschild, M; Hoecker, A; Leney, K J C; Lytken, E; Martinez-Perez, M; Masik, J; Nairz, A M; Onyisi, P U E; Roe, S; Schatzel, S; Schaetzel, S; Wilson, M G

    2010-01-01

    The ATLAS experiment at the Large Hadron Collider reads out 100 million electronic channels at a rate of 200 Hz. Before the data are shipped to storage and analysis centres across the world, they have to be checked to be free from irregularities which would render them scientifically useless. Data quality offline monitoring provides prompt feedback from full first-pass event reconstruction at the Tier-0 computing centre and can unveil problems in the detector hardware and in the data processing chain. Detector information and reconstructed proton-proton collision event characteristics are distilled into a few key histograms and numbers which are automatically compared with a reference. The results of the comparisons are saved as status flags in a database and are published together with the histograms on a web server. They are inspected by a 24/7 shift crew who can notify on-call experts in case of problems and in extreme cases signal a data-taking abort.
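
    The automatic comparison of a monitoring histogram with its reference can be illustrated with a simple reduced chi-square test that returns a traffic-light status flag. The test, the thresholds and the flag names below are illustrative assumptions, not the actual ATLAS data quality framework.

        import numpy as np

        def dq_flag(observed, reference, warn=1.5, error=3.0):
            """Compare a monitoring histogram with its reference and return a status flag.

            observed, reference: bin contents (same binning assumed).
            The reduced chi-square thresholds are illustrative, not ATLAS defaults.
            """
            obs = np.asarray(observed, dtype=float)
            ref = np.asarray(reference, dtype=float)
            # Normalise to the same number of entries so shapes are compared, not statistics.
            ref = ref * (obs.sum() / ref.sum())
            variance = np.where(obs + ref > 0, obs + ref, 1.0)
            chi2_ndf = np.sum((obs - ref) ** 2 / variance) / len(obs)
            if chi2_ndf < warn:
                return "GREEN"
            return "YELLOW" if chi2_ndf < error else "RED"

        # Hypothetical usage: the flag would be stored in a database and shown on the web display.
        reference = np.array([100, 200, 400, 200, 100])
        print(dq_flag([ 98, 205, 390, 210, 102], reference))   # GREEN: consistent with reference
        print(dq_flag([ 10, 400, 100, 500,  40], reference))   # RED: shape change flagged for the shifter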

  15. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)

    2015-07-15

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
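
    The ranking criterion can be illustrated by estimating the conditional entropy of the target intensities given a propagated atlas labelling, and keeping the atlases with the lowest values for label fusion. The sketch below uses synthetic data and a coarse histogram-based entropy estimate; it is an illustration of the criterion, not the authors' implementation.

        import numpy as np

        def conditional_entropy(target, labels, n_bins=32):
            """H(target | labels): entropy of target intensities given the propagated atlas labels."""
            t = np.digitize(target.ravel(), np.linspace(target.min(), target.max(), n_bins))
            lab = labels.ravel()
            joint, _, _ = np.histogram2d(t, lab, bins=(n_bins + 2, len(np.unique(lab))))
            p_joint = joint / joint.sum()
            p_label = p_joint.sum(axis=0, keepdims=True)
            with np.errstate(divide="ignore", invalid="ignore"):
                return -np.nansum(p_joint * np.log(p_joint / p_label))

        # Rank a set of (already registered) atlases and keep the best few for label fusion.
        rng = np.random.default_rng(1)
        target = rng.normal(size=(64, 64))
        atlas_labels = [rng.integers(0, 4, size=(64, 64)) for _ in range(10)]
        ranking = sorted(range(len(atlas_labels)),
                         key=lambda i: conditional_entropy(target, atlas_labels[i]))
        selected = ranking[:3]   # atlases passed on to (joint) label fusion
        print(selected)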

  16. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    International Nuclear Information System (INIS)

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance are limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and sizes of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases were selected for label fusion, according to the authors’ proposed atlas ranking criterion which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve

  17. Multiple brain atlas database and atlas-based neuroimaging system.

    Science.gov (United States)

    Nowinski, W L; Fang, A; Nguyen, B T; Raphel, J K; Jagannathan, L; Raghavan, R; Bryan, R N; Miller, G A

    1997-01-01

    For the purpose of developing multiple, complementary, fully labeled electronic brain atlases and an atlas-based neuroimaging system for analysis, quantification, and real-time manipulation of cerebral structures in two and three dimensions, we have digitized, enhanced, segmented, and labeled the following print brain atlases: Co-Planar Stereotaxic Atlas of the Human Brain by Talairach and Tournoux, Atlas for Stereotaxy of the Human Brain by Schaltenbrand and Wahren, Referentially Oriented Cerebral MRI Anatomy by Talairach and Tournoux, and Atlas of the Cerebral Sulci by Ono, Kubik, and Abernathey. Three-dimensional extensions of these atlases have been developed as well. All two- and three-dimensional atlases are mutually preregistered and may be interactively registered with an actual patient's data. An atlas-based neuroimaging system has been developed that provides support for reformatting, registration, visualization, navigation, image processing, and quantification of clinical data. The anatomical index contains about 1,000 structures and over 400 sulcal patterns. Several new applications of the brain atlas database also have been developed, supported by various technologies such as virtual reality, the Internet, and electronic publishing. Fusion of information from multiple atlases assists the user in comprehensively understanding brain structures and identifying and quantifying anatomical regions in clinical data. The multiple brain atlas database and atlas-based neuroimaging system have substantial potential impact in stereotactic neurosurgery and radiotherapy by assisting in visualization and real-time manipulation in three dimensions of anatomical structures, in quantitative neuroradiology by allowing interactive analysis of clinical data, in three-dimensional neuroeducation, and in brain function studies. PMID:9148878

  18. ATLAS: Exceeding all expectations

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    “One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS.   The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...

  19. OCCIPITALIZATION OF ATLAS

    Directory of Open Access Journals (Sweden)

    Sween Walia

    2014-12-01

    Occipitalization of the atlas is an osseous anomaly of the craniovertebral junction which occurs at the base of the skull in the region of the foramen magnum. Knowledge of such a fusion is important because skeletal abnormalities at the craniocervical junction may result in sudden death. During bone cleaning procedures and routine undergraduate osteology teaching, three skulls with occipitalization of the atlas were encountered in the department of Anatomy at MMIMSR, Mullana, India. In one skull, both the anterior and posterior arches were completely fused with the occipital bone, while the transverse process on the right side was not fused and the left transverse process was fused with the occipital bone. In the second skull, both the anterior and posterior arches were completely fused whereas the transverse processes on both sides were not fused. In the third skull, partial and asymmetrical occipitalization of the atlas vertebra with the occipital bone was found, with a bifid posterior arch of the atlas at the level of the posterior tubercle; the anterior arch was completely fused with the basilar part of the occipital bone but both transverse processes were not fused. A reduced diameter of the foramen magnum due to atlanto-occipital fusion might cause neurological complications due to compression of the spinal cord or medulla oblongata, the vertebral vessels and the 1st cervical nerve; thus, knowledge of occipitalization of the atlas may be of substantial importance to orthopaedicians, neurosurgeons, physicians and radiologists dealing with abnormalities of the cervical spine.

  20. ATLAS Review Office

    CERN Multimedia

    Szeless, B

    The ATLAS internal reviews, be they the mandatory Production Readiness Reviews, the newly installed Production Advancement Reviews, or the increasingly requested Design Reviews, have become a part of our ATLAS culture over the past years. The Activity Systems Status Overviews are, for the time being, a one-time event and should be held for each system as soon as possible to have some meaning. There seems to be a consensus that the reviews have become a useful project tool for the ATLAS management, but even more so for the sub-systems themselves, making achievements as well as possible shortcomings visible. One other recognized byproduct is the increasing cross-talk between the systems, a very important ingredient for all the systems to profit from the large collective knowledge available in ATLAS. In the last two months, the first two PARs were organized for the MDT End Caps and the TRT Barrel Modules, both part of the US contribution to the ATLAS Project. Furthermore several different design...

  1. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm, which consists of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information, which is constantly updated with update intervals varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it is updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
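
    The publish/subscribe behaviour described above (update a named information item, serve it on request, and push a notification to its subscribers within milliseconds) can be pictured with the minimal in-process sketch below. The class and method names are illustrative assumptions and do not represent the actual IS API.

        from collections import defaultdict
        from typing import Any, Callable, Dict, List

        class InfoService:
            """Toy in-process information exchange: named items, on-demand reads, push notification."""

            def __init__(self) -> None:
                self._items: Dict[str, Any] = {}
                self._subscribers: Dict[str, List[Callable[[str, Any], None]]] = defaultdict(list)

            def publish(self, name: str, value: Any) -> None:
                # Store/update the item and notify every subscriber of that item.
                self._items[name] = value
                for callback in self._subscribers[name]:
                    callback(name, value)

            def read(self, name: str) -> Any:
                # On-request access to the latest value of an item.
                return self._items[name]

            def subscribe(self, name: str, callback: Callable[[str, Any], None]) -> None:
                self._subscribers[name].append(callback)

        # Illustrative use: a monitoring display subscribes to a counter published by the farm.
        is_server = InfoService()
        is_server.subscribe("HLT.rejected_events", lambda n, v: print(f"update: {n} = {v}"))
        is_server.publish("HLT.rejected_events", 12345)
        print(is_server.read("HLT.rejected_events"))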

  2. High-Performance Scalable Information Service for the ATLAS Experiment

    Science.gov (United States)

    Kolos, S.; Boutsioukis, G.; Hauser, R.

    2012-12-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm, which consists of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information, which is constantly updated with update intervals varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it is updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring, analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information

  3. Congenital bipartite atlas with hypodactyly in a dog: clinical, radiographic and CT findings.

    Science.gov (United States)

    Wrzosek, M; Płonek, M; Zeira, O; Bieżyński, J; Kinda, W; Guziński, M

    2014-07-01

    A three-year-old Border collie was diagnosed with a bipartite atlas and bilateral forelimb hypodactyly. The dog showed signs of acute, non-progressive neck pain, general stiffness and right thoracic limb non-weight-bearing lameness. Computed tomography imaging revealed a bipartite atlas with abaxial vertical bone proliferation, which was the cause of the clinical signs. In addition, bilateral hypodactyly of the second and fifth digits was incidentally found. This report suggests that hypodactyly may be associated with atlas malformations. PMID:24635705

  4. ATLAS EventIndex monitoring system using Kibana analytics and visualization platform

    CERN Document Server

    Barberis, Dario; The ATLAS collaboration; Prokoshin, Fedor; Gallas, Elizabeth; Favareto, Andrea; Hrivnac, Julius; Sanchez, Javier; Fernandez Casani, Alvaro; Gonzalez de la Hoz, Santiago; Garcia Montoro, Carlos; Salt, Jose; Malon, David; Toebbicke, Rainer; Yuan, Ruijun

    2016-01-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.

  5. Status and Evolution of ATLAS Workload Management System PanDA

    CERN Document Server

    De, K; The ATLAS collaboration

    2012-01-01

    The ATLAS experiment at the LHC uses a sophisticated workload management system, PanDA, to provide access for thousands of physicists to distributed computing resources of unprecedented scale. This system has proved to be robust and scalable during three years of LHC operations. We describe the design and performance of PanDA in ATLAS. The features which make PanDA successful in ATLAS could be applicable to other exabyte-scale scientific projects. We describe plans to evolve PanDA towards a general workload management system for the new Big Data initiative announced by the US government. Other planned future improvements to PanDA are also described.

  6. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  7. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    CERN Document Server

    Chapman, J; Duehrssen, M; Elsing, M; Froidevaux, D; Harrington, R; Jansky, R; Langenberg, R; Mandrysch, R; Marshall, Z; Ritsch, E; Salzburger, A

    2014-01-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during run I relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for run II, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  8. Concepts and Plans towards fast large scale Monte Carlo production for the ATLAS Experiment

    Science.gov (United States)

    Ritsch, E.; Atlas Collaboration

    2014-06-01

    The huge success of the physics program of the ATLAS experiment at the Large Hadron Collider (LHC) during Run 1 relies upon a great number of simulated Monte Carlo events. This Monte Carlo production takes the biggest part of the computing resources being in use by ATLAS as of now. In this document we describe the plans to overcome the computing resource limitations for large scale Monte Carlo production in the ATLAS Experiment for Run 2, and beyond. A number of fast detector simulation, digitization and reconstruction techniques are being discussed, based upon a new flexible detector simulation framework. To optimally benefit from these developments, a redesigned ATLAS MC production chain is presented at the end of this document.

  9. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2013-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...
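
    The search order described above (site-local storage first, then the regional redirector, then globally distributed resources) can be pictured with the short sketch below. The redirector hostnames and the open_file() helper are hypothetical stand-ins, not real ATLAS endpoints or the actual federation client logic.

    from typing import Optional

    REDIRECTORS = [
        "root://local-se.example.org/",             # site-local storage first
        "root://regional-redirector.example.org/",  # then the region
        "root://global-redirector.example.org/",    # finally the global federation
    ]

    def open_file(url: str) -> Optional[str]:
        # Stand-in for a real XRootD open: succeed only at the global level so
        # that the fallback order is visible when the example runs.
        return url if url.startswith("root://global") else None

    def federated_open(lfn: str) -> str:
        for redirector in REDIRECTORS:
            url = redirector + lfn.lstrip("/")
            handle = open_file(url)
            if handle is not None:
                return handle
        raise IOError(f"{lfn}: not found at any federation level")

    print(federated_open("/atlas/somedataset/file.root"))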

  10. Data Federation Strategies for ATLAS using XRootD

    CERN Document Server

    Gardner, R; The ATLAS collaboration; Duckeck, G; Elmsheuser, J; Hanushevski, A; Hönig, F; Iven, J; Legger, F; Vukotic, I; Yang, W

    2014-01-01

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances come integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks and a dedicated set of tools provides high granularity information on its current and historical usage. We use a federation topology designed to search from the site's local storage outward to its region and to globally distributed storage resources. We describe programmatic testing of various federation access modes including direct access over the w...

  11. Spinal canal stenosis at the level of Atlas

    Directory of Open Access Journals (Sweden)

    Suchanda Bhattacharjee

    2011-01-01

    We report here a rare case of high cervical stenosis at the level of the atlas, presenting with progressively deteriorating quadriparesis and respiratory distress. A 10-year-old boy presented with the above symptoms of one-year duration, with a preceding history of trivial trauma prior to the onset of such symptoms. Cervical spine MRI revealed a significant stenosis at the level of the atlas from the posterior side, with a syrinx extending above and below. High-resolution computed tomography of the above level yielded an ill-defined osseous bar compressing the canal at the level of the C1 posterior arch, which appeared bifid in the midline. The patient was immediately taken up for surgery in view of his respiratory complaints. The child showed an excellent recovery after excision of the posterior arch of the atlas and removal of the compressing osseous structure.

  12. Big Data processing experience in the ATLAS experiment

    CERN Document Server

    Vaniachine, A; The ATLAS collaboration

    2014-01-01

    To improve the data quality for physics analysis, the ATLAS collaboration completed three major data reprocessing campaigns on the Grid during 2010-2012, with up to 2 PB of data being reprocessed every year. The Worldwide LHC Computing Grid provided petabytes of disk storage and tens of thousands of job slots for a faster throughput. High throughput is critical for timely completion of the reprocessing campaigns conducted in preparation for major physics conferences. In the 2011 reprocessing the throughput doubled in comparison to the 2010 reprocessing campaign. To deliver new physics results for the 2013 Moriond Conference, ATLAS reprocessed twice as much data in November 2012 within the same time period as in the 2011 reprocessing, even though, due to increased LHC pileup, the 2012 pp events required twice as much time to reconstruct as 2011 events. For a faster throughput, the number of jobs running concurrently exceeded 33k during the ATLAS reprocessing campaign in November 2012. For comparison the daily average number of runni...

  13. Two ATLAS suppliers honoured

    CERN Multimedia

    2007-01-01

    The ATLAS experiment has recognised the outstanding contribution of two firms to the pixel detector. Recipients of the supplier award with Peter Jenni, ATLAS spokesperson, and Maximilian Metzger, CERN Secretary-General. At a ceremony held at CERN on 28 November, the ATLAS collaboration presented awards to two of its suppliers that had produced sensor wafers for the pixel detector. The CiS Institut für Mikrosensorik of Erfurt in Germany has supplied 655 sensor wafers containing a total of 1652 sensor tiles and the firm ON Semiconductor has supplied 515 sensor wafers (1177 sensor tiles) from its foundry at Roznov in the Czech Republic. Both firms have successfully met the very demanding requirements. ATLAS’s huge pixel detector is very complicated, requiring expertise in highly specialised integrated microelectronics and precision mechanics. Pixel detector project leader Kevin Einsweiler admits that when the project was first propo...

  14. ATLAS rewards industry

    CERN Multimedia

    2006-01-01

    Showing excellence in mechanics, electronics and cryogenics, three industries are honoured for their contributions to the ATLAS experiment. Representatives of the three award-winning companies after the ceremony. For contributing vital pieces to the ATLAS puzzle, three industries were recognized on Friday 5 May during a supplier awards ceremony. After a welcome and overview of the ATLAS experiment by spokesperson Peter Jenni, CERN Secretary-General Maximilian Metzger stressed the importance of industry to CERN's scientific goals. Close interaction with CERN was a key factor in the selection of each rewarded company, in addition to the high-quality products they delivered to the experiment. Alu Menziken Industrie AG, of Switzerland, was honoured for the production of 380,000 aluminium tubes for the Monitored Drift Tube Chambers (MDT). As Giora Mikenberg, the Muon System Project Leader stressed, the aluminium tubes were delivered on time with an extraordinary quality and precision. Between October 2000 and Jan...

  15. ATLAS TDAQ System Administration:

    CERN Document Server

    Lee, Christopher Jon; The ATLAS collaboration; Bogdanchikov, Alexander; Ballestrero, Sergio; Contescu, Alexandru Cristian; Dubrov, Sergei; Fazio, Daniel; Korol, Aleksandr; Scannicchio, Diana; Twomey, Matthew Shaun; Voronkov, Artem

    2015-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is responsible for the online processing of live data, streaming from the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~3000 servers, processing the data readout from ~100 million detector channels through multiple trigger levels. During the two years of the first Long Shutdown (LS1) there has been a tremendous amount of work done by the ATLAS TDAQ System Administrators, implementing numerous new software applications, upgrading the OS and the hardware, changing some design philosophies and exploiting the High Level Trigger farm for different purposes. During data taking only critical security updates are applied and broken hardware is replaced to ensure a stable operational environment. The LS1 provided an excellent opportunity to look into new technologies and applications that would help to improve and streamline the daily tasks of not only the System Administrators, but also of the scientists who wil...

  16. Software releases management for TDAQ system in ATLAS experiment

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Hauser, R; Soloviev, I

    2010-01-01

    ATLAS is a general-purpose experiment in high-energy physics at the Large Hadron Collider at CERN. The ATLAS Trigger and Data Acquisition (TDAQ) system is a distributed computing system which is responsible for transferring and filtering the physics data from the experiment to mass storage. TDAQ software has been developed since 1998 by a team of a few dozen developers. It is used for the integration of all ATLAS subsystems participating in data-taking, providing the framework and APIs for building the software pieces of the TDAQ system. It is currently composed of more than 200 software packages which are available to ATLAS users in the form of regular software releases. The software is available for development on a shared filesystem and on test beds, and it is deployed to the ATLAS pit where it is used for data-taking. The paper describes the working model, the policies and the tools which are used by software developers and software librarians in order to develop, release, deploy and maintain the TDAQ software for the long period of development, commissioning and runnin...

  17. Analysis facility infrastructure (Tier-3) for ATLAS experiment

    CERN Document Server

    González de la Hoza, S; Ros, E; Sánchez, J; Amorós, G; Fassi, F; Fernández, A; Kaci, M; Lamas, A; Salt, J

    2008-01-01

    In the ATLAS computing model the tiered hierarchy ranged from the Tier-0 (CERN) down to desktops or workstations (Tier-3). The focus on defining the roles of each tiered component has evolved with the initial emphasis on the Tier-0 and Tier-1 definition and roles. The various LHC (Large Hadron Collider) projects, including ATLAS, then evolved the tiered hierarchy to include Tier-2’s (Regional centers) as part of their projects. Tier-3 centres, on the other hand, have been defined as whatever an institution could construct to support their Physics goals using institutional and otherwise leveraged resources and therefore have not been considered to be part of the official ATLAS computing resources. However, Tier-3 centres are going to exist and will have implications on how the computing model should support ATLAS physicists. Tier-3 users will want to access LHC data and simulations and will want to enable their resources to support their analysis and simulation work. This document will define how IFIC (Insti...

  18. ATLAS copies its first PetaByte out of CERN

    CERN Multimedia

    M. Branco; P. Salgado; L. Goossens; A. Nairz

    2006-01-01

    On 6th August ATLAS reached a major milestone for its Distributed Data Management project - copying its first PetaByte (10^15 Bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, where simulated fake data is used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers as would happen during the real data taking. The expected rate of data output from CERN when the detector is running at full trigger rate is 780 MB/s shared among 10 external Tier-1 sites(*), amounting to around 8 PetaBytes per year. The idea of the exercise was to try to reach this data rate and sustain it for as long as possible. The exercise was run as part of the LCG's Service Challenges and allowed ATLAS to test successfully the integration of ATLAS software with the LCG middleware services that are used for low level cataloging and the actual data movement. When ATLAS is produ...

  19. Analysis of empty ATLAS pilot jobs

    CERN Document Server

    Love, Peter; The ATLAS collaboration

    2016-01-01

    The pilot model used by the ATLAS production system has been in use for many years. The model has proven to be a success with many advantages over push models. However one of the negative side-effects of using a pilot model is the presence of 'empty pilots' running on sites, which consume a small amount of walltime without running a useful payload job. The impact on a site can be significant, with previous studies showing a total of 0.5% of walltime used with no benefit to either the site or to ATLAS. Another impact is the number of empty pilots being processed by a site's Compute Element and batch system, which can be 5% of the total number of pilots being handled. In this paper we review the latest statistics using both ATLAS and site data and highlight edge cases where empty pilots dominate. We also study the effect of tuning the pilot factories to reduce the number of empty pilots.

  20. ATLAS Point-1 System Administration Group

    CERN Multimedia

    Marc Dobson

    2007-01-01

    Hello, my name is Joe Blog and I am about to go on shift at ATLAS. When I enter the control room shown below with my CERN ID card, I go to the subsystem desk for which I am responsible. This is the first shift of the run period and there is a login window displayed on the screens. I just need to hit return and the control room desktop is started. Before I can do anything I must give my credentials in the shifter window which is then synchronised with the shift plan. After that I have access to all the allowed commands and can start preparing for the run. In order not to forget any steps I consult the documentation on how to prepare for a run on the Point-1 web. I can also check what the general status is for the ATLAS online computing farm, the sub-detectors and the LHC by using the utilities provided. ATLAS Control Room. The situation described is made up but the conditions are real. But the control room that the shifters and general public see is only the tip of the iceberg. Behind these tools lie the...

  1. Expected Performance of the ATLAS Experiment - Detector, Trigger and Physics

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G.; Abat, E.; Abbott, B.; Abdallah, J.; Abdelalim, A.A.; Abdesselam, A.; Abdinov, O.; Abi, B.; Abolins, M.; Abramowicz, H.; Acharya, Bobby Samir; Adams, D.L.; Addy, T.N.; Adorisio, C.; Adragna, P.; Adye, T.; Aguilar-Saavedra, J.A.; Aharrouche, M.; Ahlen, S.P.; Ahles, F.; Ahmad, A.; /SUNY, Albany /Alberta U. /Ankara U. /Annecy, LAPP /Argonne /Arizona U. /Texas U., Arlington /Athens U. /Natl. Tech. U., Athens /Baku, Inst. Phys. /Barcelona, IFAE /Belgrade U. /VINCA Inst. Nucl. Sci., Belgrade /Bergen U. /LBL, Berkeley /Humboldt U., Berlin /Bern U., LHEP /Birmingham U. /Bogazici U. /INFN, Bologna /Bologna U.

    2011-11-28

    The Large Hadron Collider (LHC) at CERN promises a major step forward in the understanding of the fundamental nature of matter. The ATLAS experiment is a general-purpose detector for the LHC, whose design was guided by the need to accommodate the wide spectrum of possible physics signatures. The major remit of the ATLAS experiment is the exploration of the TeV mass scale where groundbreaking discoveries are expected. In the focus are the investigation of the electroweak symmetry breaking and linked to this the search for the Higgs boson as well as the search for Physics beyond the Standard Model. In this report a detailed examination of the expected performance of the ATLAS detector is provided, with a major aim being to investigate the experimental sensitivity to a wide range of measurements and potential observations of new physical processes. An earlier summary of the expected capabilities of ATLAS was compiled in 1999 [1]. A survey of physics capabilities of the CMS detector was published in [2]. The design of the ATLAS detector has now been finalised, and its construction and installation have been completed [3]. An extensive test-beam programme was undertaken. Furthermore, the simulation and reconstruction software code and frameworks have been completely rewritten. Revisions incorporated reflect improved detector modelling as well as major technical changes to the software technology. Greatly improved understanding of calibration and alignment techniques, and their practical impact on performance, is now in place. The studies reported here are based on full simulations of the ATLAS detector response. A variety of event generators were employed. The simulation and reconstruction of these large event samples thus provided an important operational test of the new ATLAS software system. In addition, the processing was distributed world-wide over the ATLAS Grid facilities and hence provided an important test of the ATLAS computing system - this is the origin of

  2. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    CAMERA ON TOROID The ATLAS barrel toroid system consists of eight coils, each of axial length 25.3 m, assembled radially and symmetrically around the beam axis. The coils are of a flat racetrack type with two double-pancake windings made of 20.5 kA aluminium-stabilized niobium-titanium superconductor. The video is about the slow lowering of the toroid down to the cavern of ATLAS. It is very demanding task. The camera is placed on top of the toroid.

  3. ATLAS forward physics program

    CERN Document Server

    HELLER, M; The ATLAS collaboration

    2010-01-01

    The variety of forward detectors installed in the vicinity of the ATLAS experiment makes it possible to cover a wide range of forward physics topics. They provide good information about rapidity gaps, and the installation of very forward detectors (ALFA and AFP) will allow the leading proton(s) remaining from the different processes studied to be tagged. Most of the studies have to be done at low luminosity to avoid pile-up, but the AFP project offers a really exciting future for the ATLAS forward physics program. We also present how these forward detectors can be used to measure the relative and absolute luminosity.

  4. ATLAS fast physics monitoring

    Indian Academy of Sciences (India)

    Karsten Köneke; on behalf of the ATLAS Collaboration

    2012-11-01

    The ATLAS experiment at the Large Hadron Collider has been recording data from proton–proton collisions at a centre-of-mass energy of 7 TeV since the spring of 2010. The integrated luminosity has grown nearly exponentially since then and continues to rise fast. The ATLAS Collaboration has set up a framework to automatically process the rapidly growing dataset and produce performance and physics plots for the most interesting analyses. The system is designed to give fast feedback. The histograms are produced within hours of data reconstruction (2–3 days after data taking). Hints of potentially interesting physics signals obtained this way are followed up by physics groups.

  5. The Herschel ATLAS

    CERN Document Server

    Eales, S; Clements, D; Cooray, A R; De Zotti, G; Dye, S; Ivison, R; Jarvis, M; Lagache, G; Maddox, S; Negrello, M; Serjeant, S; Thompson, M A; Van Kampen, E; Amblard, A; Andreani, P; Baes, M; Beelen, A; Bendo, G J; Benford, D; Bertoldi, F; Bock, J; Bonfield, D; Boselli, A; Bridge, C; Buat, V; Burgarella, D; Carlberg, R; Cava, A; Chanial, P; Charlot, S; Christopher, N; Coles, P; Cortese, L; Dariush, A; Da Cunha, E; Dalton, G; Danese, L; Dannerbauer, H; Driver, S; Dunlop, J; Fan, L; Farrah, D; Frayer, D; Frenk, C; Geach, J; Gardner, J; Gomez, H; Gonzalez-Nuevo, J; Gonzalez-Solares, E; Griffin, M; Hardcastle, M; Hatziminaoglou, E; Herranz, D; Hughes, D; Ibar, E; Jeong, Woong-Seob; Lacey, C; Lapi, A; Lee, M; Leeuw, L; Liske, J; Lopez-Caniego, M; Müller, T; Nandra, K; Panuzzo, P; Papageorgiou, A; Patanchon, G; Peacock, J; Pearson, C; Phillipps, S; Pohlen, M; Popescu, C; Rawlings, S; Rigby, E; Rigopoulou, M; Rodighiero, G; Sansom, A; Schulz, B; Scott, D; Smith, D J B; Sibthorpe, B; Smail, I; Stevens, J; Sutherland, W; Takeuchi, T; Tedds, J; Temi, P; Tuffs, R; Trichas, M; Vaccari, M; Valtchanov, I; Van der Werf, P; Verma, A; Vieria, J; Vlahakis, C; White, Glenn J

    2009-01-01

    The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 510 square degrees of the extragalactic sky, four times larger than all the other Herschel surveys combined, in five far-infrared and submillimetre bands. We describe the survey, the complementary multi-wavelength datasets that will be combined with the Herschel data, and the six major science programmes we are undertaking. Using new models based on a previous submillimetre survey of galaxies, we present predictions of the properties of the ATLAS sources in other wavebands.

  6. The Herschel ATLAS

    Science.gov (United States)

    Eales, S.; Dunne, L.; Clements, D.; Cooray, A.; De Zotti, G.; Dye, S.; Ivison, R.; Jarvis, M.; Lagache, G.; Maddox, S.; Negrello, M.; Serjeant, S.; Thompson, M. A.; Van Kampen, E.; Amblard, A.; Andreani, P.; Baes, M.; Beelen, A.; Bendo, G. J.; Bertoldi, F.; Benford, D.; Bock, J.

    2010-01-01

    The Herschel ATLAS is the largest open-time key project that will be carried out on the Herschel Space Observatory. It will survey 570 sq deg of the extragalactic sky, 4 times larger than all the other Herschel extragalactic surveys combined, in five far-infrared and submillimeter bands. We describe the survey, the complementary multiwavelength data sets that will be combined with the Herschel data, and the six major science programs we are undertaking. Using new models based on a previous submillimeter survey of galaxies, we present predictions of the properties of the ATLAS sources in other wave bands.

  7. ATLAS Jet Energy Scale

    CERN Document Server

    Schouten, D; Vetterli, M

    2012-01-01

    Jets originating from the fragmentation of quarks and gluons are the most common, and complicated, final state objects produced at hadron colliders. A precise knowledge of their energy calibration is therefore of great importance for experiments at the Large Hadron Collider at CERN, but it is very difficult to ascertain. We present in-situ techniques and results for the jet energy scale at ATLAS using recent collision data. ATLAS has demonstrated an understanding of the necessary jet energy corrections to within approximately 4% in the central region of the calorimeter.

  8. ATLAS/CMS Upgrades

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00370685; The ATLAS collaboration

    2016-01-01

    Precision studies of the Standard Model (SM) and searches for physics beyond the SM are ongoing at the ATLAS and CMS experiments at the Large Hadron Collider (LHC). A luminosity upgrade of the LHC is planned, which presents a significant challenge for the experiments. In this report, the plans for the ATLAS and CMS upgrades are introduced. Physics prospects for selected topics, including Higgs coupling measurements, Bs,d -> mumu decays, and top quark decays through flavor-changing neutral currents, are also shown.

  9. Statistical atlas based extrapolation of CT data

    Science.gov (United States)

    Chintalapani, Gouthami; Murphy, Ryan; Armiger, Robert S.; Lepisto, Jyri; Otake, Yoshito; Sugano, Nobuhiko; Taylor, Russell H.; Armand, Mehran

    2010-02-01

    We present a framework to estimate the missing anatomical details from a partial CT scan with the help of statistical shape models. The motivating application is periacetabular osteotomy (PAO), a technique for treating developmental hip dysplasia, an abnormal condition of the hip socket that, if untreated, may lead to osteoarthritis. The common goals of PAO are to reduce pain, joint subluxation and improve contact pressure distribution by increasing the coverage of the femoral head by the hip socket. While current diagnosis and planning is based on radiological measurements, because of significant structural variations in dysplastic hips, a computer-assisted geometrical and biomechanical planning based on CT data is desirable to help the surgeon achieve optimal joint realignments. Most of the patients undergoing PAO are young females, hence it is usually desirable to minimize the radiation dose by scanning only the joint portion of the hip anatomy. These partial scans, however, do not provide enough information for biomechanical analysis due to missing iliac region. A statistical shape model of full pelvis anatomy is constructed from a database of CT scans. The partial volume is first aligned with the statistical atlas using an iterative affine registration, followed by a deformable registration step and the missing information is inferred from the atlas. The atlas inferences are further enhanced by the use of X-ray images of the patient, which are very common in an osteotomy procedure. The proposed method is validated with a leave-one-out analysis method. Osteotomy cuts are simulated and the effect of atlas predicted models on the actual procedure is evaluated.
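
    The sketch below illustrates, with synthetic numpy data, the general idea of inferring missing anatomy from a statistical shape model: principal modes are learned from complete training shapes, the mode coefficients are then fitted to the observed (partial) coordinates, and the fitted model fills in the unscanned region. It is a toy stand-in under these stated assumptions, not the registration-based pipeline of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_points = 40, 300                    # training shapes, points per shape
    shapes = rng.normal(size=(n_train, n_points))  # rows = flattened shape vectors (stand-in data)

    mean = shapes.mean(axis=0)
    # Principal modes of variation via SVD of the centred training matrix.
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    modes = vt[:5]                                 # keep the 5 strongest modes

    # Suppose only the first 180 coordinates were scanned (the "partial CT").
    observed = np.arange(180)
    partial = shapes[0, observed] + rng.normal(scale=0.05, size=observed.size)

    # Least-squares fit of the mode coefficients using only the observed coordinates.
    coeffs, *_ = np.linalg.lstsq(modes[:, observed].T, partial - mean[observed], rcond=None)

    # Reconstruct the full shape, including the unscanned region.
    estimate = mean + coeffs @ modes
    print("RMS error on the missing part:",
          np.sqrt(np.mean((estimate[180:] - shapes[0, 180:]) ** 2)))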

  10. Application of Grid technologies and search for exotics physics with the ATLAS experiment at the LHC

    CERN Document Server

    March, Luis; Ros, Eduardo

    The work presented in this thesis has been performed within the ATLAS (A Toroidal LHC ApparatuS) collaboration. Two subjects have been investigated. One subject is the Computing System Commissioning (CSC) production using an instance of the Production System (ProdSys), called Lexor, and the test of the ATLAS Distributed Analysis (ADA) using ProdSys. The other subject is the simulation and subsequent analysis of processes involving new particles predicted by the Little Higgs model within the ATLAS detector. An introduction to the Standard Model (SM), the Large Hadron Collider (LHC) and the ATLAS experiment, software and computing is given in chapter 1. The problems of the SM are discussed and some proposed solutions are reviewed. The SM introduction is followed by an overview of LHC and ATLAS. The main ATLAS subsystems are described and the ATLAS software and computing model is discussed. Many physics processes within and beyond the Standard Model involve b-quark decays. New heavy particles, expected in mo...

  11. 17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

    CERN Multimedia

    Mona Schweizer

    2008-01-01

    17 April 2008 - Head of Internal Audit Network meeting visiting the ATLAS experimental area with CERN ATLAS Team Leader P. Fassnacht, ATLAS Technical Coordinator M. Nessi and ATLAS Resources Manager M. Nordberg.

  12. ATLAS Civil Engineering Point 1

    CERN Multimedia

    Jean-Claude Vialis

    2000-01-01

    Different phases of realisation at Point 1: the zone of the ATLAS experiment. The ATLAS experimental area is located at Point 1, just across the main CERN entrance, in the commune of Meyrin. There, people are very busy finishing the different infrastructures for ATLAS. Real underground video. When passing through the walls, the ongoing work can be heard and seen. The film has the original working sound.

  13. Taking ATLAS to new heights

    CERN Multimedia

    Abha Eli Phoboo, ATLAS experiment

    2013-01-01

    Earlier this month, 51 members of the ATLAS collaboration trekked up to the highest peak in the Atlas Mountains, Mt. Toubkal (4,167m), in North Africa.    The physicists were in Marrakech, Morocco, attending the ATLAS Overview Week (7 - 11 October), which was held for the first time on the African continent. Around 300 members of the collaboration met to discuss the status of the LS1 upgrades and plans for the next run of the LHC. Besides the trek, 42 ATLAS members explored the Saharan sand dunes of Morocco on camels.  Photos courtesy of Patrick Jussel.

  14. Searches for beyond the Standard Model physics with boosted topologies in the ATLAS experiment using the Grid-based Tier-3 facility at IFIC-Valencia

    CERN Document Server

    Villaplana Pérez, Miguel; Vos, Marcel

    Both the LHC and ATLAS have been performing well beyond expectation since the start of the data taking by the end of 2009. Since then, several thousands of millions of collision events have been recorded by the ATLAS experiment. With a data taking efficiency higher than 95% and more than 99% of its channels working, ATLAS supplies data with an unmatched quality. In order to analyse the data, the ATLAS Collaboration has designed a distributed computing model based on GRID technologies. The ATLAS computing model and its evolution since the start of the LHC is discussed in section 3.1. The ATLAS computing model groups the different types of computing centers of the ATLAS Collaboration in a tiered hierarchy that ranges from Tier-0 at CERN, down to the 11 Tier-1 centers and the nearly 80 Tier-2 centres distributed world wide. The Spanish Tier-2 activities during the first years of data taking are described in section 3.2. Tier-3 are institution-level non-ATLAS funded or controlled centres that participate presuma...

  15. Atlas of NATO.

    Science.gov (United States)

    Young, Harry F.

    This atlas provides basic information about the North Atlantic Treaty Organization (NATO). Formed in response to growing concern for the security of Western Europe after World War II, NATO is a vehicle for Western efforts to reduce East-West tensions and the level of armaments. NATO promotes political and economic collaboration as well as military…

  16. Higgs searches with ATLAS

    CERN Document Server

    Price, J D; The ATLAS collaboration

    2013-01-01

    Summary of the ATLAS analyses of the rarer SM Higgs decay channels, and of the limits on the SM Higgs invisible decay width. The analyses included are VH->Vbb, H->tautau, VH->VWW, H->Zy, H->mumu, ttH->ttyy and ZH->ll+inv.

  17. HWW in ATLAS

    CERN Document Server

    Rados, Pere; The ATLAS collaboration

    2016-01-01

    The H-->WW channel plays an important role in Higgs boson property measurements, searches for rare decay modes, and searches for possible extended Higgs sectors. In this talk the latest H-->WW results from ATLAS will be briefly summarised.

  18. ATLAS Experiment Brochure

    CERN Multimedia

    Goldfarb, Steven

    2016-01-01

    ATLAS is one of the four major experiments at the Large Hadron Collider at CERN. It is a general-purpose particle physics experiment run by an international collaboration, and is designed to exploit the full discovery potential and the huge range of physics opportunities that the LHC provides.

  19. Exotic searches at ATLAS

    CERN Document Server

    Turra, Ruggero; The ATLAS collaboration

    2016-01-01

    The ATLAS detector has collected 3.2 fb^-1 of proton-proton collisions at a centre-of-mass energy of 13 TeV during the 2015 LHC run. A selected review of recent results is presented in the context of direct searches for physics beyond the Standard Model, excluding SUSY and BSM Higgs searches.

  20. ATLAS starts moving in

    CERN Multimedia

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1 March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day.

  1. A thermosiphon for ATLAS

    CERN Multimedia

    Rosaria Marraffino

    2013-01-01

    A new thermosiphon cooling system, designed for the ATLAS silicon detectors by CERN’s EN-CV team in collaboration with the experiment, will replace the current system in the next LHC run in 2015. Using the basic properties of density difference and making gravity do the hard work, the thermosiphon promises to be a very reliable solution that will ensure the long-term stability of the whole system.   Former compressor-based cooling system of the ATLAS inner detectors. The system is currently being replaced by the innovative thermosiphon. (Photo courtesy of Olivier Crespo-Lopez). Reliability is the major issue for the present cooling system of the ATLAS silicon detectors. The system was designed 13 years ago using a compressor-based cooling cycle. “The current cooling system uses oil-free compressors to avoid fluid pollution in the delicate parts of the silicon detectors,” says Michele Battistin, EN-CV-PJ section leader and project leader of the ATLAS thermosiphon....

  2. ATLAS solenoid operates underground

    CERN Multimedia

    2006-01-01

    A new phase for the ATLAS collaboration started with the first operation of a completed sub-system: the Central Solenoid. Teams monitoring the cooling and powering of the ATLAS solenoid in the control room. The solenoid was cooled down to 4.5 K from 17 to 23 May. The first current was established the same evening that the solenoid became cold and superconductive. 'This makes the ATLAS Central Solenoid the very first cold and superconducting magnet to be operated in the LHC underground areas!', said Takahiko Kondo, professor at KEK. Though the current was limited to 1 kA, the cool-down and powering of the solenoid was a major milestone for all of the control, cryogenic, power and vacuum systems-a milestone reached by the hard work and many long evenings invested by various teams from ATLAS, all of CERN's departments and several large and small companies. Since the Central Solenoid and the barrel liquid argon (LAr) calorimeter share the same cryostat vacuum vessel, this achievement was only possible in perfe...

  3. Prototype ATLAS straw tracker

    CERN Multimedia

    Laurent Guiraud

    1998-01-01

    This is an early prototype of the straw tracking device for the ATLAS detector at CERN. This detector will be part of the LHC project, scheduled to start operation in 2008. The straw tracker will consist of thousands of gas-filled straws, each containing a wire, allowing the tracks of particles to be followed.

  4. ATLAS "Splash event" 2008

    CERN Multimedia

    ATLAS, Experiment

    2014-01-01

    "Splash events": As the LHC was being tuned up on 10 September 2008, beam was initially directed at beam collimators just outside the detector, so that a splash of particles would fill much of the detector allowing ATLAS experimenters to prepare the detector for actual running.

  5. Prime wires for ATLAS

    CERN Multimedia

    2003-01-01

    In an award ceremony on 3 September, ATLAS honoured the French company Axon Cable for its special coaxial cables, which were purpose-built for the Liquid Argon calorimeter modules. Working for CERN since the 1970s, Axon' Cable received the ATLAS supplier award last week for its contribution to the liquid argon calorimeter cables of ATLAS (LAL/Orsay, France and University of Victoria, Canada), started in 1996. Its two sets of minicoaxial cables, called harnesses "A" and "B", are designed to function in the harsh conditions in the liquid argon (at 90 Kelvin or -183°C) and under extreme radiation (up to several Mrads). The cables are mainly used for the readout of the calorimeters, and are connected to the outside world by 114 signal feedthroughs with 1920 channels each. The signal from the detectors is transmitted directly without any amplification, which imposes tight restrictions on the impedance and on the signal propagation time of the cables. Peter Jenni, ATLAS spokesperson, gives the award for best s...

  6. Bayesian Parameter Estimation and Segmentation in the Multi-Atlas Random Orbit Model.

    Directory of Open Access Journals (Sweden)

    Xiaoying Tang

    This paper examines the multiple atlas random diffeomorphic orbit model in Computational Anatomy (CA) for parameter estimation and segmentation of subcortical and ventricular neuroanatomy in magnetic resonance imagery. We assume that there exist multiple magnetic resonance image (MRI) atlases, each atlas containing a collection of locally-defined charts in the brain generated via manual delineation of the structures of interest. We focus on maximum a posteriori estimation of high dimensional segmentations of MR within the class of generative models representing the observed MRI as a conditionally Gaussian random field, conditioned on the atlas charts and the diffeomorphic change of coordinates of each chart that generates it. The charts and their diffeomorphic correspondences are unknown and viewed as latent or hidden variables. We demonstrate that the expectation-maximization (EM) algorithm arises naturally, yielding the likelihood-fusion equation which the a posteriori estimator of the segmentation labels maximizes. The likelihoods being fused are modeled as conditionally Gaussian random fields with mean fields a function of each atlas chart under its diffeomorphic change of coordinates onto the target. The conditional mean in the EM algorithm specifies the convex weights with which the chart-specific likelihoods are fused. The multiple atlases with the associated convex weights imply that the posterior distribution is a multi-modal representation of the measured MRI. Segmentation results for subcortical and ventricular structures of subjects, within populations of demented subjects, are demonstrated, including the use of multiple atlases across multiple diseased groups.
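
    In the same spirit as the likelihood-fusion step described above, the following numpy sketch fuses labels from several already-registered atlases using per-voxel convex weights derived from a Gaussian likelihood of the target intensity. The data, shapes and noise level are synthetic stand-ins, not the paper's actual EM implementation.

    import numpy as np

    rng = np.random.default_rng(1)
    n_atlases, n_voxels, sigma = 4, 1000, 10.0

    atlas_labels = rng.integers(0, 3, size=(n_atlases, n_voxels))      # 3 candidate structures
    atlas_intensity = rng.normal(100, 20, size=(n_atlases, n_voxels))  # predicted intensities
    target = atlas_intensity[0] + rng.normal(0, sigma, size=n_voxels)  # observed target image

    # Gaussian log-likelihood of the target under each atlas, voxel by voxel.
    loglik = -0.5 * ((target - atlas_intensity) / sigma) ** 2
    weights = np.exp(loglik - loglik.max(axis=0))  # numerically stabilised
    weights /= weights.sum(axis=0)                 # convex: sums to 1 at every voxel

    # Fuse: accumulate the weight behind each candidate label and take the argmax.
    votes = np.zeros((3, n_voxels))
    for a in range(n_atlases):
        np.add.at(votes, (atlas_labels[a], np.arange(n_voxels)), weights[a])
    fused = votes.argmax(axis=0)

    print("agreement with atlas 0:", (fused == atlas_labels[0]).mean())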

  7. Hippocampal unified multi-atlas network (HUMAN): protocol and scale validation of a novel segmentation tool

    Science.gov (United States)

    Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative

    2015-11-01

    In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
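
    The Dice scores quoted above measure the overlap between an automated segmentation and a reference one; a minimal numpy implementation for binary masks is sketched below.

    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        # Dice = 2*|A intersect B| / (|A| + |B|) for boolean masks a and b.
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    auto = np.zeros((10, 10), dtype=bool)
    auto[2:7, 2:7] = True   # toy automated segmentation
    ref = np.zeros((10, 10), dtype=bool)
    ref[3:8, 3:8] = True    # toy reference segmentation
    print(round(dice(auto, ref), 3))  # 0.64 for these toy masks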

  8. PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC

    Directory of Open Access Journals (Sweden)

    Megino Fernando Barreiro

    2016-01-01

    The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of a heterogeneous computing infrastructure to seamlessly integrate the computational resources and give the users the feeling of a unique system. Since its origins, PanDA has evolved together with upcoming computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It is currently running steadily up to 200 thousand simultaneous cores (limited by the available resources for ATLAS), up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.

  9. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica; Sciacca, Francesco Giovanni; Mancinelli, Valentina

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS and CMS experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid, therefore it is essential to provide a detailed and well organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionality has been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This paper summarizes the different developments and optimiz...
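
    A toy version of the auto-exclusion logic mentioned above is sketched below: a site is blacklisted when its recent functional tests fail too often. The threshold, the minimum number of tests and the data layout are illustrative assumptions, not HammerCloud's actual policy.

    from dataclasses import dataclass
    from typing import Dict, List, Set

    @dataclass
    class TestResult:
        site: str
        ok: bool

    def sites_to_blacklist(results: List[TestResult],
                           max_failure_rate: float = 0.5,
                           min_tests: int = 5) -> Set[str]:
        per_site: Dict[str, List[bool]] = {}
        for r in results:
            per_site.setdefault(r.site, []).append(r.ok)
        blacklisted = set()
        for site, outcomes in per_site.items():
            failures = outcomes.count(False)
            if len(outcomes) >= min_tests and failures / len(outcomes) > max_failure_rate:
                blacklisted.add(site)
        return blacklisted

    results = [TestResult("SITE_A", ok) for ok in [True, False, False, False, False, True]]
    results += [TestResult("SITE_B", True) for _ in range(6)]
    print(sites_to_blacklist(results))  # {'SITE_A'}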

  10. Improved ATLAS HammerCloud Monitoring for local Site Administration

    CERN Document Server

    Boehler, Michael; The ATLAS collaboration; Hoenig, Friedrich; Legger, Federica

    2015-01-01

    Every day hundreds of tests are run on the Worldwide LHC Computing Grid for the ATLAS, CMS, and LHCb experiments in order to evaluate the performance and reliability of the different computing sites. All this activity is steered, controlled, and monitored by the HammerCloud testing infrastructure. Sites with failing functionality tests are auto-excluded from the ATLAS computing grid, therefore it is essential to provide a detailed and well organized web interface for the local site administrators such that they can easily spot and promptly solve site issues. Additional functionalities have been developed to extract and visualize the most relevant information. The site administrators can now be pointed easily to major site issues which lead to site blacklisting as well as possible minor issues that are usually not conspicuous enough to warrant the blacklisting of a specific site, but can still cause undesired effects such as a non-negligible job failure rate. This contribution summarizes the different developm...

  11. Functional testing of the ATLAS distributed analysis resources with Ganga

    International Nuclear Information System (INIS)

    The ATLAS computing model is based on the GRID paradigm, which entails a high degree of decentralisation and sharing of computer resources. For such a large system to be efficient, regular checks on the performance of the computing facilities involved are desirable. We present the recent developments of a tool, the ATLAS Gangarobot, designed to perform regular tests of all sites by running arbitrary user applications with varied configurations at predefined time intervals. The Gangarobot uses Ganga, a front-end for job definition and management, for configuring and running the test applications on the various GRID sites. The test results can be used to dynamically blacklist sites that are temporarily unsuited to run analysis jobs, therefore providing on the one hand a way to quickly spot site problems, and on the other hand allowing for an effective distribution of the workload on the available resources.

  12. ATLAS Data Challenges - A Collaborative Worldwide Activity

    CERN Multimedia

    Poulard, G

    The goals of the ATLAS Data Challenges (DC) are the validation of the Computing Model, of the complete software suite, of the data model, and to ensure the correctness of the technical choices to be made. It is understood that these Data Challenges should be of increasing complexity and that their results will be used as input for a Computing TDR and for preparing an MoU in due time. A major feature of the current computing activities (DC1) in ATLAS is the preparation and deployment of the software required for the production of large event samples for the High Level Trigger (HLT) and physics communities, and the actual production of those samples. It should be noted that it is not an option to "run everything at CERN" even if we wanted to; the resources are not available at CERN to carry out the production on a reasonable time-scale. We have therefore had to face the great challenge of organising and then carrying out this large-scale production at a significant number of sites around the world. However, th...

  13. The Hatfield Lunar Atlas Digitally Re-Mastered Edition

    CERN Document Server

    Cook, Anthony Charles

    2012-01-01

    The Hatfield Lunar Atlas has become an amateur lunar observer's bible since it was first published in 1968. A major update of the atlas was made in 1998, using the same wonderful photographs that Commander Henry Hatfield made with his purpose-built 12-inch (300 mm) telescope, but bringing the lunar nomenclature up to date and changing the units from Imperial to S.I. metric. However, with modern telescope optics, digital imaging equipment and computer enhancement new pictures can easily surpass what was achieved with Henry Hatfield's 12-inch telescope and a film camera. This limits the usefulness of the original atlas to visual observing or imaging with rather small amateur telescopes. The new, digitally re-mastered edition vastly improves the clarity and definition of the original photographs - significantly beyond the resolution limits of the photographic grains present in earlier atlas versions - while preserving the layout and style of the original publications. This has been achieved by merging computer-v...

  14. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data the Information Service (IS) facility has been developed in the scope of the ATLAS TDAQ project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated, with update intervals varying from one second to a few tens of seconds. IS ...

  15. ATLAS Off-Grid sites (Tier-3) monitoring

    CERN Document Server

    Petrosyan, A S; The ATLAS collaboration

    2012-01-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data every year. The ATLAS Computing model embraces the Grid paradigm and originally included three levels of computing centers to be able to handle such a large volume of data. The ATLAS Distributed Computing activities have so far concentrated on the “central” part of the computing system of the experiment, namely the first 3 tiers (the CERN Tier-0, the 10 Tier-1 centers and about 50 Tier-2s). This is a coherent system to perform data processing and management on a global scale, hosting (re)processing and simulation activities down to group and user analysis. With the formation of small computing centers, usually based at universities, the model was expanded to include them as Tier-3 sites. Tier-3 centers consist of non-pledged resources, mostly dedicated to data analysis by geographically close or local scientific groups. The experiment supplies all necessary software to operate typical Grid-site, ...

  16. Computing News

    CERN Multimedia

    McCubbin, N

    2001-01-01

    We are still five years from the first LHC data, so we have plenty of time to get the computing into shape, don't we? Well, yes and no: there is time, but there's an awful lot to do! The recently-completed CERN Review of LHC Computing gives the flavour of the LHC computing challenge. The hardware scale for each of the LHC experiments is millions of 'SpecInt95' (SI95) units of cpu power and tens of PetaBytes of data storage. PCs today are about 20-30 SI95, and expected to be about 100 SI95 by 2005, so it's a lot of PCs. This hardware will be distributed across several 'Regional Centres' of various sizes, connected by high-speed networks. How to realise this in an orderly and timely fashion is now being discussed in earnest by CERN, Funding Agencies, and the LHC experiments. Mixed in with this is, of course, the GRID concept...but that's a topic for another day! Of course hardware, networks and the GRID constitute just one part of the computing. Most of the ATLAS effort is spent on software development. What we ...

  17. A unified framework for cross-modality multi-atlas segmentation of brain MRI.

    Science.gov (United States)

    Eugenio Iglesias, Juan; Rory Sabuncu, Mert; Van Leemput, Koen

    2013-12-01

    Multi-atlas label fusion is a powerful image segmentation strategy that is becoming increasingly popular in medical imaging. A standard label fusion algorithm relies on independently computed pairwise registrations between individual atlases and the (target) image to be segmented. These registrations are then used to propagate the atlas labels to the target space and fuse them into a single final segmentation. Such label fusion schemes commonly rely on the similarity between intensity values of the atlases and target scan, which is often problematic in medical imaging - in particular, when the atlases and target images are obtained via different sensor types or imaging protocols. In this paper, we present a generative probabilistic model that yields an algorithm for solving the atlas-to-target registrations and label fusion steps simultaneously. The proposed model does not directly rely on the similarity of image intensities. Instead, it exploits the consistency of voxel intensities within the target scan to drive the registration and label fusion, hence the atlases and target image can be of different modalities. Furthermore, the framework models the joint warp of all the atlases, introducing interdependence between the registrations. We use variational expectation maximization and the Demons registration framework in order to efficiently identify the most probable segmentation and registrations. We use two sets of experiments to illustrate the approach, where proton density (PD) MRI atlases are used to segment T1-weighted brain scans and vice versa. Our results clearly demonstrate the accuracy gain due to exploiting within-target intensity consistency and integrating registration into label fusion. PMID:24001931

  18. Improving atlas methodology

    Science.gov (United States)

    Robbins, C.S.; Dowell, B.A.; O'Brien, J.

    1987-01-01

    We are studying a sample of Maryland (2 %) and New Hampshire (4 %) Atlas blocks and a small sample in Maine. These three States used different sampling methods and block sizes. We compare sampling techniques, roadside with off-road coverage, our coverage with that of the volunteers, and different methods of quantifying Atlas results. The 7 1/2' (12-km) blocks used in the Maine Atlas are satisfactory for coarse mapping, but are too large to enable changes to be detected in the future. Most states are subdividing the standard 7 1/2' maps into six 5-km blocks. The random 1/6 sample of 5-km blocks used in New Hampshire, Vermont (published 1985), and many other states has the advantage of permitting detection of some changes in the future, but the disadvantage of leaving important habitats unsampled. The Maryland system of atlasing all 1,200 5-km blocks and covering one out of each six by quarterblocks (2 1/2-km) is far superior if enough observers can be found. A good compromise, not yet attempted, would be to Atlas a 1/6 random sample of 5-km blocks and also one other carefully selected (non-random) block on the same 7 1/2' map--the block that would include the best sample of habitats or elevations not in the random block. In our sample the second block raised the percentage of birds found from 86% of the birds recorded in the 7 1/2' quadrangle to 93%. It was helpful to list the expected species in each block and to revise this list annually. We estimate that 90-100 species could be found with intensive effort in most Maryland blocks; perhaps 95-105 in New Hampshire. It was also helpful to know which species were under-sampled so we could make a special effort to search for these. A total of 75 species per block (or 75% of the expected species in blocks with very restricted habitat diversity) is considered a practical and adequate goal in these States. When fewer than 60 species are found per block, a high proportion of the rarer species are missed, as well as some of

  19. Parcellation of the Healthy Neonatal Brain into 107 Regions Using Atlas Propagation through Intermediate Time Points in Childhood

    Science.gov (United States)

    Blesa, Manuel; Serag, Ahmed; Wilkinson, Alastair G.; Anblagan, Devasuda; Telford, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Macnaught, Gillian; Semple, Scott I.; Bastin, Mark E.; Boardman, James P.

    2016-01-01

    Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2–41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization (SyGN) method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modeling brain growth during development. PMID:27242423
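
    A hypothetical sketch of the step-wise propagation idea using SimpleITK: adult labels are warped onto the neonatal grid through an intermediate childhood template. The file names and transform files are placeholders and this is not the ENA33 pipeline itself; nearest-neighbour resampling is used so that integer region labels are preserved.

      import SimpleITK as sitk

      # Placeholder inputs. Each transform is the resampling transform from the
      # corresponding registration step: it maps points of that step's output
      # grid back into the space of the image being resampled, which is the
      # convention sitk.Resample expects.
      adult_labels   = sitk.ReadImage("adult_parcellation.nii.gz")
      child_template = sitk.ReadImage("childhood_template.nii.gz")
      neonatal_ref   = sitk.ReadImage("neonatal_template.nii.gz")
      t_child_to_adult   = sitk.ReadTransform("child_to_adult.tfm")
      t_neonate_to_child = sitk.ReadTransform("neonate_to_child.tfm")

      # Step 1: warp the adult parcellation onto the childhood template grid.
      labels_child = sitk.Resample(adult_labels, child_template, t_child_to_adult,
                                   sitk.sitkNearestNeighbor, 0,
                                   adult_labels.GetPixelID())

      # Step 2: warp the result onto the neonatal template grid.
      # (A production pipeline would compose the warps rather than resample twice.)
      labels_neonate = sitk.Resample(labels_child, neonatal_ref, t_neonate_to_child,
                                     sitk.sitkNearestNeighbor, 0,
                                     adult_labels.GetPixelID())
      sitk.WriteImage(labels_neonate, "labels_in_neonatal_space.nii.gz")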

  20. Parcellation of the healthy neonatal brain into 107 regions using atlas propagation through intermediate time points in childhood

    Directory of Open Access Journals (Sweden)

    Manuel eBlesa Cabez

    2016-05-01

    Full Text Available Neuroimage analysis pipelines rely on parcellated atlases generated from healthy individuals to provide anatomic context to structural and diffusion MRI data. Atlases constructed using adult data introduce bias into studies of early brain development. We aimed to create a neonatal brain atlas of healthy subjects that can be applied to multi-modal MRI data. Structural and diffusion 3T MRI scans were acquired soon after birth from 33 typically developing neonates born at term (mean postmenstrual age at birth 39+5 weeks, range 37+2-41+6). An adult brain atlas (SRI24/TZO) was propagated to the neonatal data using temporal registration via childhood templates with dense temporal samples (NIH Pediatric Database), with the final atlas (Edinburgh Neonatal Atlas, ENA33) constructed using the Symmetric Group Normalization method. After this step, the computed final transformations were applied to T2-weighted data, and fractional anisotropy, mean diffusivity, and tissue segmentations to provide a multi-modal atlas with 107 anatomical regions; a symmetric version was also created to facilitate studies of laterality. Volumes of each region of interest were measured to provide reference data from normal subjects. Because this atlas is generated from step-wise propagation of adult labels through intermediate time points in childhood, it may serve as a useful starting point for modelling brain growth during development.

  1. Advanced Technology Lifecycle Analysis System (ATLAS)

    Science.gov (United States)

    O'Neil, Daniel A.; Mankins, John C.

    2004-01-01

    Developing credible mass and cost estimates for space exploration and development architectures requires multidisciplinary analysis based on physics calculations and parametric estimates derived from historical systems. Within the National Aeronautics and Space Administration (NASA), concurrent engineering environment (CEE) activities integrate discipline-oriented analysis tools through a computer network and accumulate the results of a multidisciplinary analysis team via a centralized database or spreadsheet. Each minute of a design and analysis study within a concurrent engineering environment is expensive due to the size of the team and supporting equipment. The Advanced Technology Lifecycle Analysis System (ATLAS) reduces the cost of architecture analysis by capturing the knowledge of discipline experts in system-oriented spreadsheet models. A framework with a user interface presents a library of system models to an architecture analyst. The analyst selects models of launchers, in-space transportation systems, and excursion vehicles, as well as space and surface infrastructure such as propellant depots, habitats, and solar power satellites. After assembling the architecture from the selected models, the analyst can create a campaign comprised of missions spanning several years. The ATLAS controller passes analyst-specified parameters to the models and data among the models. An integrator workbook calls a history-based parametric cost model to determine the costs. Also, the integrator estimates the flight rates, launched masses, and architecture benefits over the years of the campaign. An accumulator workbook presents the analytical results in a series of bar graphs. In no way does ATLAS compete with a CEE; instead, ATLAS complements a CEE by ensuring that the time of the experts is well spent. Using ATLAS, an architecture analyst can perform technology sensitivity analysis, study many scenarios, and see the impact of design decisions. When the analyst is

  2. Experience commissioning the ATLAS distributed data management system on top of the WLCG service

    CERN Document Server

    Campana, S

    2010-01-01

    The ATLAS experiment at CERN developed an automated system for distribution of simulated and detector data. Such a system, which partially consists of various ATLAS-specific services, relies strongly on the WLCG infrastructure at the level of middleware components, service deployment and operations. Because of the complexity of the system and its highly distributed nature, a dedicated effort was put in place to deliver a reliable service for ATLAS data distribution, offering the necessary performance and high availability and accommodating the main use cases. This contribution will describe the various challenges and activities carried out in 2008 for the commissioning of the system, together with the experience distributing simulated data and detector data. The main commissioning activity was concentrated in two Combined Computing Resource Challenges, in February and May 2008, where it was demonstrated that the WLCG service and the ATLAS system could sustain the peak load of data transfer according to the co...

  3. ATLAS: civil engineering Point 1

    CERN Multimedia

    2000-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are busy finishing the various infrastructure for ATLAS. Real underground video. A nice view from the surface down to the cavern from the pit side - all the big machines look very small. The film has its original working sound.

  4. The ATLAS Forward Physics Program

    OpenAIRE

    Royon, Christophe

    2010-01-01

    We describe the ATLAS Forward Physics Program at low luminosity using the rapidity gap method and a dedicated detector called ALFA to tag the protons. We also describe the physics topics of the ATLAS Forward Physics Project at high instantaneous luminosity.

  5. ATLAS recognises its best suppliers

    CERN Multimedia

    2002-01-01

    The ATLAS Collaboration has recently rewarded two of its suppliers in the construction of very major detector components, fabricated in Japan. The ATLAS Supplier Award in recognition of excellent supplier performance has just been attributed to Kawasaki Heavy Industries, while Toshiba Corporation received the award two months ago at their headquarters in Japan.

  6. Lowering the first ATLAS toroid

    CERN Multimedia

    Maximilien Brice

    2004-01-01

    The ATLAS detector on the LHC at CERN will consist of eight toroid magnets, the first of which was lowered into the cavern in these images on 26 October 2004. The coils are supported on platforms where they will be attached to form a giant torus. The platforms will hold about 300 tonnes of ATLAS' muon chambers and will envelop the inner detectors.

  7. ATLAS end-cap detector

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    Three scientists from the Institute of Nuclear Physics at Novosibirsk with one of the end-caps of the ATLAS detector. The end-caps will be used to detect particles produced in the proton-proton collisions at the heart of the ATLAS experiment that are travelling close to the axis of the two beams.

  8. Brain templates and atlases.

    Science.gov (United States)

    Evans, Alan C; Janke, Andrew L; Collins, D Louis; Baillet, Sylvain

    2012-08-15

    The core concept within the field of brain mapping is the use of a standardized, or "stereotaxic", 3D coordinate frame for data analysis and reporting of findings from neuroimaging experiments. This simple construct allows brain researchers to combine data from many subjects such that group-averaged signals, be they structural or functional, can be detected above the background noise that would swamp subtle signals from any single subject. Where the signal is robust enough to be detected in individuals, it allows for the exploration of inter-individual variance in the location of that signal. From a larger perspective, it provides a powerful medium for comparison and/or combination of brain mapping findings from different imaging modalities and laboratories around the world. Finally, it provides a framework for the creation of large-scale neuroimaging databases or "atlases" that capture the population mean and variance in anatomical or physiological metrics as a function of age or disease. However, while the above benefits are not in question at first order, there are a number of conceptual and practical challenges that introduce second-order incompatibilities among experimental data. Stereotaxic mapping requires two basic components: (i) the specification of the 3D stereotaxic coordinate space, and (ii) a mapping function that transforms a 3D brain image from "native" space, i.e. the coordinate frame of the scanner at data acquisition, to that stereotaxic space. The first component is usually expressed by the choice of a representative 3D MR image that serves as target "template" or atlas. The native image is re-sampled from native to stereotaxic space under the mapping function that may have few or many degrees of freedom, depending upon the experimental design. The optimal choice of atlas template and mapping function depend upon considerations of age, gender, hemispheric asymmetry, anatomical correspondence, spatial normalization methodology and disease
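
    As a small illustration of the two components described above, the sketch below (assuming nibabel, a placeholder file name, and a made-up native-to-template affine standing in for a real registration result) maps a voxel index from native scanner space to world coordinates and then into a template's stereotaxic space.

      import numpy as np
      import nibabel as nib
      from nibabel.affines import apply_affine

      # Load a "native" scan; img.affine maps voxel indices (i, j, k)
      # to scanner/world coordinates in millimetres.
      img = nib.load("subject_T1w.nii.gz")        # placeholder file name
      voxel_ijk = np.array([60, 72, 45])
      native_mm = apply_affine(img.affine, voxel_ijk)

      # A native-to-template transform would normally come from a registration
      # tool; here it is an arbitrary illustrative 4x4 affine matrix.
      native_to_template = np.array([[ 1.02,  0.01, 0.00, -1.5],
                                     [-0.01,  0.98, 0.03,  2.0],
                                     [ 0.00, -0.02, 1.01, -4.0],
                                     [ 0.00,  0.00, 0.00,  1.0]])
      template_mm = apply_affine(native_to_template, native_mm)
      print("native (mm):", native_mm, "-> template (mm):", template_mm)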

  9. ATLAS Award for Difficult Task

    CERN Multimedia

    2004-01-01

    Two Russian companies were honoured with an ATLAS Award, for supply of the ATLAS Inner Detector barrel support structure elements, last week. On 23 March the Russian company ORPE Technologiya and its subcontractor, RSP Khrunitchev, were jointly presented with an ATLAS Supplier Award. Since 1998, ORPE Technologiya has been actively involved in the development of the carbon-fibre reinforced plastic elements of the ATLAS Inner Detector barrel support structure. After three years of joint research and development, CERN and ORPE Technologiya launched the manufacturing contract. It had a tight delivery schedule and very demanding specifications in terms of mechanical tolerance and stability. The contract was successfully completed with the arrival of the last element of the structure at CERN on 8 January 2004. The delivery of this key component of the Inner Detector deserves an ATLAS Award given the difficulty of manufacturing the end-frames, which very few companies in the world would have been able to do at an ...

  10. ATLAS BigPanDA Monitoring and Its Evolution

    CERN Document Server

    Wenaus, Torre; The ATLAS collaboration; Korchuganova, Tatiana

    2016-01-01

    BigPanDA is the latest generation of the monitoring system for the Production and Distributed Analysis (PanDA) system. The BigPanDA monitor is a core component of PanDA and also serves the monitoring needs of the new ATLAS Production System Prodsys-2. BigPanDA has been developed to serve the growing computation needs of the ATLAS Experiment and the wider applications of PanDA beyond ATLAS. Through a system-wide job database, the BigPanDA monitor provides a comprehensive and coherent view of the tasks and jobs executed by the system, from high level summaries to detailed drill-down job diagnostics. The system has been in production and has remained in continuous development since mid 2014, today effectively managing more than 2 million jobs per day distributed over 150 computing centers worldwide. BigPanDA also delivers web-based analytics and system state views to groups of users including distributed computing systems operators, shifters, physicist end-users, computing managers and accounting services. Provi...

  11. Visits to Tier-1 Computing Centres

    CERN Multimedia

    Dario Barberis

    At the beginning of 2007 it became clear that an enhanced level of communication is needed between the ATLAS computing organisation and the Tier-1 centres. Most usual meetings are ATLAS-centric and cannot address the issues of each Tier-1; therefore we decided to organise a series of visits to the Tier-1 centres and focus on site issues. For us, ATLAS computing management, it is most useful to realize how each Tier-1 centre is organised, and its relation to the associated Tier-2s; indeed their presence at these visits is also very useful. We hope it is also useful for sites... at least, we are told so! The usual participation includes, from the ATLAS side: computing management, operations, data placement, resources, accounting and database deployment coordinators; and from the Tier-1 side: computer centre management, system managers, Grid infrastructure people, network, storage and database experts, local ATLAS liaison people and representatives of the associated Tier-2s. Visiting Tier-1 centres (1-4). ...

  12. A digital framework to build, visualize and analyze a gene expression atlas with cellular resolution in zebrafish early embryogenesis.

    Directory of Open Access Journals (Sweden)

    Carlos Castro-González

    2014-06-01

    Full Text Available A gene expression atlas is an essential resource to quantify and understand the multiscale processes of embryogenesis in time and space. The automated reconstruction of a prototypic 4D atlas for vertebrate early embryos, using multicolor fluorescence in situ hybridization with nuclear counterstain, requires dedicated computational strategies. To this goal, we designed an original methodological framework implemented in a software tool called Match-IT. With only minimal human supervision, our system is able to gather gene expression patterns observed in different analyzed embryos with phenotypic variability and map them onto a series of common 3D templates over time, creating a 4D atlas. This framework was used to construct an atlas composed of 6 gene expression templates from a cohort of zebrafish early embryos spanning 6 developmental stages from 4 to 6.3 hpf (hours post fertilization). They included 53 specimens, 181,415 detected cell nuclei and the segmentation of 98 gene expression patterns observed in 3D for 9 different genes. In addition, an interactive visualization software, Atlas-IT, was developed to inspect, supervise and analyze the atlas. Match-IT and Atlas-IT, including user manuals, representative datasets and video tutorials, are publicly and freely available online. We also propose computational methods and tools for the quantitative assessment of the gene expression templates at the cellular scale, with the identification, visualization and analysis of coexpression patterns, synexpression groups and their dynamics through developmental stages.

  13. The ATLAS Forward Calorimeter

    Science.gov (United States)

    Artamonov, A.; Bailey, D.; Belanger, G.; Cadabeschi, M.; Chen, T.-Y.; Epshteyn, V.; Gorbounov, P.; Joo, K. K.; Khakzad, M.; Khovanskiy, V.; Krieger, P.; Loch, P.; Mayer, J.; Neuheimer, E.; Oakham, F. G.; O'Neill, M.; Orr, R. S.; Qi, M.; Rutherfoord, J.; Savine, A.; Schram, M.; Shatalov, P.; Shaver, L.; Shupe, M.; Stairs, G.; Strickland, V.; Tompkins, D.; Tsukerman, I.; Vincent, K.

    2008-02-01

    Forward calorimeters, located near the incident beams, complete the nearly 4π coverage for high pT particles resulting from proton-proton collisions in the ATLAS detector at the Large Hadron Collider at CERN. Both the technology and the deployment of the forward calorimeters in ATLAS are novel. The liquid argon rod/tube electrode structure for the forward calorimeters was invented specifically for applications in high rate environments. The placement of the forward calorimeters adjacent to the other calorimeters relatively close to the interaction point provides several advantages including nearly seamless calorimetry and natural shielding for the muon system. The forward calorimeter performance requirements are driven by events with missing ET and tagging jets.

  14. The ATLAS ROBIN

    Energy Technology Data Exchange (ETDEWEB)

    Cranfield, R; Crone, G [University College London, London (United Kingdom); Francis, D; Gorini, B; Joos, M; Petersen, J; Tremblet, L; Unel, G [CERN, Geneva (Switzerland); Green, B; Misiejuk, A; Strong, J; Teixeira-Dias, P [Royal Holloway University of London, London (United Kingdom); Kieft, G; Vermeulen, J [FOM - Institute SAF and University of Amsterdam/Nikhef, Amsterdam (Netherlands); Kugel, A; Mueller, M; Yu, M [University of Mannheim, Mannheim (Germany); Perera, V; Wickens, F [Rutherford Appleton Laboratory, Didcot (United Kingdom)], E-mail: kugel@ti.uni-mannheim.de

    2008-01-15

    The ATLAS readout subsystem is the main interface between approximately 1600 detector front-end readout links and the higher-level trigger farms. To handle the high event rate (up to 100 kHz) and bandwidth (up to 160 MB/s per link) the readout PCs are equipped with four ROBIN (readout buffer input) cards. Each ROBIN attaches to three optical links, provides local event buffering for approximately 300 ms and communicates with the higher-level trigger system for data and delete requests. According to the ATLAS baseline architecture this communication runs via the PCI bus of the host PC. In addition, each ROBIN provides a private Gigabit Ethernet port which can be used for the same purpose. Operational monitoring is performed via PCI. This paper presents a summary of the ROBIN hardware and software together with measurement results obtained from various test setups.

  15. Electroweak Physics at ATLAS

    CERN Document Server

    Conti, G; The ATLAS collaboration

    2013-01-01

    Various electroweak measurements have already been performed at the ATLAS experiment since the start of the Large Hadron Collider at CERN. A review of the latest results in $W/Z$ and diboson physics will be given here. The $W/Z$ physics results include the measurement of the high-mass Drell-Yan di-lepton production cross section, the $Wb(b)$ production cross section and the study of the transverse momentum of $Z/\\gamma^*$. The latest $WW$, $WZ$, $ZZ$, $W\\gamma$ and $Z\\gamma$ production cross sections will be summarized, including updated $WW$ and $ZZ$ results. In particular, the $ZZ^*$ channel has been added. The ATLAS diboson results are also used to set limits on charged triple gauge couplings ($WWZ$, $WW\\gamma$) and on neutral triple gauge couplings ($Z\\gamma\\gamma$, $ZZ\\gamma$, $ZZZ$).

  16. ATLAS software packaging

    CERN Document Server

    Rybkin, G

    2012-01-01

    Software packaging is an indispensable part of the build process and a prerequisite for deployment. The full ATLAS software stack consists of TDAQ, HLT, and Offline software. These software groups depend on some 80 external software packages. We present the tools, in the package PackDist, developed and used to package all this software except for the TDAQ project. PackDist is based on and driven by CMT, the ATLAS software configuration and build tool, and consists of shell and Python scripts. The packaging unit used is the CMT project. Each CMT project is packaged as several packages - platform-dependent (one per available platform), source code excluding header files, other platform-independent files, documentation, and debug information packages (the last two being built optionally). Packaging can be done recursively to package all the dependencies. The whole set of packages for one software release, the distribution kit, also includes configuration packages and contains some 120 packages for one platform. Also packaged are physics analysis pro...

  17. Electron isolation at ATLAS

    International Nuclear Information System (INIS)

    The ATLAS experiment at the Large Hadron Collider (LHC) will face the challenge of efficiently selecting interesting candidate events in pp collisions at 14 TeV centre-of-mass energy, whilst rejecting the enormous number of background events. Many of these interesting candidate events have isolated leptons in the final state, for example events with a gauge boson or SUSY. On top of the standard ATLAS electron identification, an isolation criterion has been developed using a likelihood as a multivariate approach with several discriminating variables. The likelihood is constructed by selecting electrons from Z decays for the signal and electrons from b-quark jets for the background. Results for the example of associated Higgs boson production with top quarks and subsequent decay into a pair of W bosons are presented. In addition, first results of a likelihood to discriminate against jets are given and a possible extension to muons is discussed.
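
    An illustrative sketch (not the ATLAS implementation) of how such a likelihood discriminant can be formed from binned signal and background distributions of isolation variables, assuming the variables are treated as independent; all variable names and toy distributions are invented.

      import numpy as np

      def make_pdf(values, bins):
          """Normalised histogram used as a 1D probability density estimate."""
          hist, edges = np.histogram(values, bins=bins, density=True)
          return hist, edges

      def likelihood_ratio(x, sig_pdfs, bkg_pdfs, eps=1e-12):
          """L_S / (L_S + L_B) for one candidate, with x a vector of
          isolation variables and per-variable (hist, edges) PDFs."""
          ls, lb = 1.0, 1.0
          for xi, (hs, es), (hb, eb) in zip(x, sig_pdfs, bkg_pdfs):
              ls *= hs[np.clip(np.digitize(xi, es) - 1, 0, len(hs) - 1)] + eps
              lb *= hb[np.clip(np.digitize(xi, eb) - 1, 0, len(hb) - 1)] + eps
          return ls / (ls + lb)

      # Toy example: one isolation variable, with signal peaking at low values.
      rng = np.random.default_rng(0)
      sig = rng.exponential(0.05, 10000)   # isolation of signal electrons (toy)
      bkg = rng.uniform(0.0, 1.0, 10000)   # background electrons from b-jets (toy)
      bins = np.linspace(0, 1, 41)
      sig_pdfs = [make_pdf(sig, bins)]
      bkg_pdfs = [make_pdf(bkg, bins)]
      print(likelihood_ratio([0.02], sig_pdfs, bkg_pdfs))  # close to 1 (signal-like)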

  18. Jet substructure in ATLAS

    CERN Document Server

    Miller, David W

    2011-01-01

    Measurements are presented of the jet invariant mass and substructure in proton-proton collisions at $\\sqrt{s} = 7$ TeV with the ATLAS detector using an integrated luminosity of 37 pb$^{-1}$. These results exercise the tools for distinguishing the signatures of new boosted massive particles in the hadronic final state. Two "fat" jet algorithms are used, along with the filtering jet grooming technique that was pioneered in ATLAS. New jet substructure observables are compared for the first time to data at the LHC. Finally, a sample of candidate boosted top quark events collected in the 2010 data is analyzed in detail for the jet substructure properties of hadronic "top-jets" in the final state. These measurements demonstrate not only our excellent understanding of QCD in a new energy regime but open the path to using complex jet substructure observables in the search for new physics.

  19. Overview of the ATLAS Fast Tracker Project

    CERN Document Server

    Ancu, Lucian Stefan; The ATLAS collaboration

    2016-01-01

    The next LHC runs, with a significant increase in instantaneous luminosity, will provide a big challenge for the trigger and data acquisition systems of all the experiments. An intensive use of the tracking information at the trigger level will be important to keep high efficiency for interesting events despite the increase in multiple collisions per bunch crossing. In order to increase the use of tracks within the High Level Trigger, the ATLAS experiment planned the installation of a hardware processor dedicated to tracking: the Fast TracKer processor. The Fast Tracker is designed to perform full scan track reconstruction of every event accepted by the ATLAS first level hardware trigger. To achieve this goal the system uses a parallel architecture, with algorithms designed to exploit the computing power of custom Associative Memory chips, and modern field programmable gate arrays. The processor will provide computing power to reconstruct tracks with transverse momentum greater than 1 GeV in the whol...

  20. SUSY Searches in ATLAS

    CERN Document Server

    Zhuang, Xuai; The ATLAS collaboration

    2016-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles, with focus on those obtained using proton-proton collisions at a centre of mass energy of 13 TeV using 2015+2016 data. The searches with final states including jets, missing transverse momentum, light leptons will be presented.

  1. ATLAS support rails

    CERN Multimedia

    Maximilien Brice

    2003-01-01

    These supports will hold the 7000 tonne ATLAS detector in its cavern at the LHC. The huge toroid will be assembled from eight coils that will house some of the muon chambers. Supported within the toroid will be the inner detector, containing tracking devices, as well as devices to measure the energies of the particles produced in the 14 TeV proton-proton collisions at the LHC.

  2. Topographical atlas sheets

    Science.gov (United States)

    Wheeler, George Montague

    1876-01-01

    The following topographical atlas sheets, accompanying Appendix J.J. of the Annual Report of the Chief of Engineers, U.S. Army-being Annual Report upon U. S. Geographical Surveys-have been published during the fiscal year ending June 30, 1876, and are a portion of the series projected to embrace the territory of the United States lying west of the 100th meridian.

  3. Overview of ATLAS results

    CERN Document Server

    Grabowska-Bold, Iwona; The ATLAS collaboration

    2016-01-01

    The ATLAS experiment at the Large Hadron Collider has undertaken a broad physics program to probe and characterize the hot nuclear matter created in relativistic lead-lead collisions. This talk presents recent results based on Run 2 data on the production of jets, electroweak bosons and quarkonium, electromagnetic processes in ultra-peripheral collisions, and bulk particle collectivity from PbPb, pPb and pp collisions.

  4. ATLAS/CMS Upgrades

    CERN Document Server

    Horii, Yasuyuki; The ATLAS collaboration

    2016-01-01

    Precise Higgs measurements and new physics searches are planned at the LHC (HL-LHC) with an integrated luminosity of 300 fb^{-1} (3000 fb^{-1}). The increased peak luminosity provides a significant challenge for the experiments. In this presentation, the plans for the ATLAS and CMS upgrades are introduced. Physics prospects for some topics related to ‘flavour’, e.g. Higgs couplings, B_{s, d}->mumu, and FCNC top decays, are also shown.

  5. Hybrid Atlas Models

    CERN Document Server

    Ichiba, Tomoyuki; Banner, Adrian; Karatzas, Ioannis; Fernholz, Robert

    2009-01-01

    We study Atlas-type models of equity markets with local characteristics that depend on both name and rank, and in ways that induce a stability of the capital distribution. Ergodic properties and rankings of processes are examined with reference to the theory of reflected Brownian motions in polyhedral domains. In the context of such models, we discuss properties of various investment strategies, including the so-called growth-optimal and universal portfolios.

  6. L'esperimento ATLAS

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  7. El experimento ATLAS

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  8. The ATLAS Experiment Movie

    CERN Multimedia

    ATLAS Outreach Committee

    2000-01-01

    This award winning film gives a glimpse behind the scenes of building the ATLAS detector. This film asks: Why are so many physicists anxious to build this apparatus? Will they be able to answer fundamental questions such as: Where does mass come from? Why does the Universe have so little antimatter? Are there extra dimensions of space that are hidden from our view? Is there an underlying theory to find? Major surprises are likely in this unknown part of physics.

  9. The Genome Atlas Resource

    OpenAIRE

    Azam Qureshi, Matloob; Rotenberg, Eva; Stærfeldt, Hans Henrik; Hansson, Lena; Ussery, David

    2010-01-01

    The Genome Atlas is a resource for addressing the challenges of synchronising prokaryotic genomic sequence data from multiple public repositories. This resource can integrate bioinformatic analyses in various data formats and of varying quality. Existing open source tools have been used together with scripts and algorithms developed in a variety of programming languages at the Centre for Biological Sequence Analysis in order to create a three-tier software application for genome analysis. The r...

  10. ATLAS overview week highlights

    CERN Multimedia

    D. Froidevaux

    2005-01-01

    A warm and early October afternoon saw the beginning of the 2005 ATLAS overview week, which took place Rue de La Montagne Sainte-Geneviève in the heart of the Quartier Latin in Paris. All visitors had been warned many times by the ATLAS management and the organisers that the premises would be the subject of strict security clearance because of the "plan Vigipirate", which remains at some level of alert in all public buildings across France. The public building in question is now part of the Ministère de La Recherche, but used to host one of the so-called French "Grandes Ecoles", called l'Ecole Polytechnique (in France there is only one Ecole Polytechnique, whereas there are two in Switzerland) until the end of the seventies, a little while after it opened its doors also to women. In fact, the setting chosen for this ATLAS overview week by our hosts from LPNHE Paris has turned out to be ideal and the security was never an ordeal. For those seeing Paris for the first time, there we...

  11. ATLAS Detector Upgrade Prospects

    CERN Document Server

    Dobre, Monica; The ATLAS collaboration

    2016-01-01

    After the successful operation at center-of-mass energies of 7 and 8 TeV in 2010-2012, the LHC was ramped up and successfully took data at a center-of-mass energy of 13 TeV in 2015. Meanwhile, plans are actively advancing for a series of upgrades of the accelerator, culminating roughly ten years from now in the high-luminosity LHC (HL-LHC) project, delivering of the order of five times the LHC nominal instantaneous luminosity along with luminosity leveling. The ultimate goal is to extend the dataset from the few hundred fb⁻¹ expected for LHC running to 3000 fb⁻¹ by around 2035 for ATLAS and CMS. The challenge of coping with the HL-LHC instantaneous and integrated luminosity, along with the associated radiation levels, requires further major changes to the ATLAS detector. The designs are developing rapidly for a new all-silicon tracker, significant upgrades of the calorimeter and muon systems, as well as improved triggers and data acquisition. ATLAS is also examining potential benefits of extens...

  12. Clean tracks for ATLAS

    CERN Multimedia

    2006-01-01

    First cosmic ray tracks in the integrated ATLAS barrel SCT and TRT tracking detectors. A snap-shot of a cosmic ray event seen in the different layers of both the SCT and TRT detectors. The ATLAS Inner Detector Integration Team celebrated a major success recently, when clean tracks of cosmic rays were detected in the completed semiconductor tracker (SCT) and transition radiation tracker (TRT) barrels. These tracking tests come just months after the successful insertion of the SCT into the TRT (See Bulletin 09/2006). The cosmic ray test is important for the experiment because, after 15 years of hard work, it is the last test performed on the fully assembled barrel before lowering it into the ATLAS cavern. The two trackers work together to provide millions of channels so that particles' tracks can be identified and measured with great accuracy. According to the team, the preliminary results were very encouraging. After first checks of noise levels in the final detectors, a critical goal was to study their re...

  13. Experience with CORBA communication middleware in the ATLAS DAQ.

    CERN Document Server

    Kolos, S; Amorim, A; Badescu, E; Burckhart-Chromek, Doris; Caprini, M; Dobson, M; Fiuza de Barrosb, N; Flammerd, J; Jones, R; Kazarov, A; Klose, D; Korobov, S; Kotov, V; Liko, D; Mapelli, L; Mineev, M; Pedro, L; Ryabov, Yu; Soloviev, I; Computing In High Energy Physics

    2005-01-01

    As modern High Energy Physics (HEP) experiments require more distributed computing power to fulfill their demands, the need for efficient distributed online services for control, configuration and monitoring in such experiments becomes increasingly important. This paper describes the experience of using standard Common Object Request Broker Architecture (CORBA) middleware for providing high-performance and scalable software, which will be used for the online control, configuration and monitoring in the ATLAS Data Acquisition (DAQ) system. It also recounts the experience gained from using several CORBA implementations together and from replacing one CORBA broker with another. Finally the paper presents the results of large-scale tests demonstrating the performance and scalability of the ATLAS DAQ online services. These results show that standard CORBA is truly appropriate for highly efficient online distributed computing in the HEP experiments area.

  14. Trigger Menu-aware Monitoring for the ATLAS experiment

    CERN Document Server

    Hoad, Xanthe; The ATLAS collaboration

    2016-01-01

    Changes in the trigger menu (the online algorithmic event selection of the ATLAS experiment at the LHC) made in response to luminosity and detector changes are followed by adjustments in its monitoring system. This is done to ensure that the collected data are useful and can be properly reconstructed at Tier-0, the first level of the computing grid. During Run 1, ATLAS deployed monitoring updates with the installation of new software releases at Tier-0. This created unnecessary overhead for developers and operators, and unavoidably led to different releases for the data-taking and the monitoring setup. We present a "trigger menu-aware" monitoring system designed for the ATLAS Run 2 data-taking. The new monitoring system aims to simplify the ATLAS operational workflows, and allows for easy and flexible monitoring configuration changes at the Tier-0 site via an Oracle DB interface. We present the design and the implementation of the menu-aware monitoring, along with lessons from the operational experience of the ne...

  15. Data federation strategies for ATLAS using XRootD

    International Nuclear Information System (INIS)

    In the past year the ATLAS Collaboration accelerated its program to federate data storage resources using an architecture based on XRootD with its attendant redirection and storage integration services. The main goal of the federation is an improvement in the data access experience for the end user while allowing more efficient and intelligent use of computing resources. Along with these advances comes integration with existing ATLAS production services (PanDA and its pilot services) and data management services (DQ2, and in the next generation, Rucio). Functional testing of the federation has been integrated into the standard ATLAS and WLCG monitoring frameworks, and a dedicated set of tools provides high-granularity information on its current and historical usage. We use a federation topology designed to search from a site's local storage outward to its region and then to globally distributed storage resources. We describe programmatic testing of various federation access modes, including direct access over the wide area network and staging of remote data files to local disk. To support job-brokering decisions, a time-dependent cost-of-data-access matrix is constructed, taking into account network performance and key site performance factors. The system's response to production-scale physics analysis workloads, either from individual end-users or ATLAS analysis services, is discussed.
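
    A hypothetical sketch of the kind of cost-of-data-access estimate mentioned above; the site names, throughput numbers and the weighting of transfer time against site load are invented for illustration and are not the ATLAS formula.

      import numpy as np

      # Measured average throughput (MB/s) from each storage source to each
      # compute site, e.g. from federation functional tests (illustrative numbers).
      sources = ["SITE_A_XRD", "SITE_B_XRD", "SITE_C_XRD"]
      sites   = ["SITE_A", "SITE_B"]
      throughput = np.array([[120.0,  15.0],
                             [ 20.0, 110.0],
                             [ 35.0,  40.0]])   # rows: sources, cols: sites

      def access_cost(file_size_mb, site_load, penalty=0.5):
          """Time-like cost per file: transfer time inflated by a load penalty."""
          transfer_time = file_size_mb / throughput          # seconds, per pair
          return transfer_time * (1.0 + penalty * site_load[np.newaxis, :])

      site_load = np.array([0.2, 0.8])          # fractional busy-ness of each site
      cost = access_cost(2000.0, site_load)     # a 2 GB input file
      best_source = np.argmin(cost, axis=0)
      for j, site in enumerate(sites):
          print(site, "should read from", sources[best_source[j]])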

  16. Improving vertebra segmentation through joint vertebra-rib atlases

    Science.gov (United States)

    Wang, Yinong; Yao, Jianhua; Roth, Holger R.; Burns, Joseph E.; Summers, Ronald M.

    2016-03-01

    Accurate spine segmentation allows for improved identification and quantitative characterization of abnormalities of the vertebra, such as vertebral fractures. However, in existing automated vertebra segmentation methods on computed tomography (CT) images, leakage into nearby bones such as ribs occurs due to the close proximity of these visibly intense structures in a 3D CT volume. To reduce this error, we propose the use of joint vertebra-rib atlases to improve the segmentation of vertebrae via multi-atlas joint label fusion. Segmentation was performed and evaluated on CTs containing 106 thoracic and lumbar vertebrae from 10 pathological and traumatic spine patients on an individual vertebra level basis. Vertebra atlases produced errors where the segmentation leaked into the ribs. The use of joint vertebra-rib atlases produced a statistically significant increase in the Dice coefficient from 92.5 +/- 3.1% to 93.8 +/- 2.1% for the left and right transverse processes and a decrease in the mean and max surface distance from 0.75 +/- 0.60mm and 8.63 +/- 4.44mm to 0.30 +/- 0.27mm and 3.65 +/- 2.87mm, respectively.
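
    For reference, the overlap metric quoted above (the Dice coefficient) can be computed with a few lines; this is the standard definition, not code from the paper.

      import numpy as np

      def dice_coefficient(seg, ref, label=1):
          """Dice overlap between a segmentation and a reference mask."""
          a = (seg == label)
          b = (ref == label)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      # Toy example: two slightly shifted vertebra masks.
      seg = np.zeros((50, 50), dtype=int); seg[10:30, 10:30] = 1
      ref = np.zeros((50, 50), dtype=int); ref[12:32, 10:30] = 1
      print(round(dice_coefficient(seg, ref), 3))   # 0.9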

  17. Persistent ATLAS Data Structures and Reclustering of Event Data

    CERN Document Server

    Schaller, Martin

    1999-01-01

    The ATLAS experiment will start to take data in the year 2005. The amount of experimental data forms a serious challenge for data processing and data storage. About 1 PB (10^15 bytes) per year has to be processed and stored. Currently, a paradigm shift in High-Energy Physics (HEP) computing is taking place. It is planned that software is written in object-oriented languages (mainly C++). For data storage the usage of object-oriented database management systems (ODBMSs) is foreseen. This thesis investigates the usage of an ODBMS in the ATLAS experiment. Work was done in several connected areas. First, we present exhaustive benchmarks of the commercial ODBMS Objectivity/DB that is today the most promising candidate for the storage system. We describe the ATLAS 1 TB milestone that was performed to investigate the reliability and performance of an ODBMS storage solution coupled to a mass storage system. Second, we report about the design and implementation of the persistent ATLAS data structures, both in the detec...

  18. An anatomic gene expression atlas of the adult mouse brain.

    Science.gov (United States)

    Ng, Lydia; Bernard, Amy; Lau, Chris; Overly, Caroline C; Dong, Hong-Wei; Kuan, Chihchau; Pathak, Sayan; Sunkin, Susan M; Dang, Chinh; Bohland, Jason W; Bokil, Hemant; Mitra, Partha P; Puelles, Luis; Hohmann, John; Anderson, David J; Lein, Ed S; Jones, Allan R; Hawrylycz, Michael

    2009-03-01

    Studying gene expression provides a powerful means of understanding structure-function relationships in the nervous system. The availability of genome-scale in situ hybridization datasets enables new possibilities for understanding brain organization based on gene expression patterns. The Anatomic Gene Expression Atlas (AGEA) is a new relational atlas revealing the genetic architecture of the adult C57Bl/6J mouse brain based on spatial correlations across expression data for thousands of genes in the Allen Brain Atlas (ABA). The AGEA includes three discovery tools for examining neuroanatomical relationships and boundaries: (1) three-dimensional expression-based correlation maps, (2) a hierarchical transcriptome-based parcellation of the brain and (3) a facility to retrieve from the ABA specific genes showing enriched expression in local correlated domains. The utility of this atlas is illustrated by analysis of genetic organization in the thalamus, striatum and cerebral cortex. The AGEA is a publicly accessible online computational tool integrated with the ABA (http://mouse.brain-map.org/agea). PMID:19219037
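
    A minimal sketch of the first discovery tool listed above (expression-based correlation maps), assuming the expression data are arranged as a genes-by-voxels matrix; the array sizes and the data here are invented.

      import numpy as np

      def seed_correlation_map(expression, seed_index):
          """Pearson correlation of every voxel's gene-expression profile
          with the profile of a chosen seed voxel.

          expression: array of shape (n_genes, n_voxels)
          seed_index: column index of the seed voxel
          """
          x = expression - expression.mean(axis=0, keepdims=True)
          seed = x[:, seed_index]
          num = x.T @ seed
          denom = np.linalg.norm(x, axis=0) * np.linalg.norm(seed)
          return num / np.maximum(denom, 1e-12)

      # Toy data: 200 "genes" measured over 1000 "voxels".
      rng = np.random.default_rng(1)
      expr = rng.normal(size=(200, 1000))
      corr = seed_correlation_map(expr, seed_index=42)
      print(corr.shape, round(corr[42], 3))   # (1000,) 1.0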

  19. ATLAS: Forecasting Falling Rocks

    Science.gov (United States)

    Heinze, Aren; Tonry, John L.; Denneau, Larry; Stalder, Brian; Sherstyuk, Andrei

    2016-10-01

    The Asteroid Terrestrial-impact Last Alert System (ATLAS) is a new asteroid survey aimed at detecting small (10-100 meter) asteroids inbound for impact with Earth. Relative to the larger objects targeted by most surveys, these small asteroids pose very different threats to our planet. Large asteroids can be seen at great distances and measured over many years, resulting in precise orbits that enable long-term impact predictions. If an impact were predicted, a costly deflection mission would be warranted to avert global catastrophe -- but a large asteroid impact is very unlikely in the next century. By contrast, impacts from small asteroids are inevitable. Such objects can be detected only during close encounters with Earth -- encounters too brief to yield long-term predictions. Only a few days' warning could be expected for an impactor in the 10-100 meter range, but fortunately the impact of such an asteroid would cause only regional damage. As in the case of a hurricane, a quixotic attempt to deflect or destroy it would be more expensive than the damage from its impact. A better response is to save human lives by evacuating the impact zone, and then rebuild. Only a few days warning are needed for this purpose, and ATLAS is unique among asteroid surveys in being optimized to provide it. While the optimization has many facets, the most important is rapidly surveying the entire accessible sky. A small asteroid could come from any direction and go from invisibility to impact in less than a week: ATLAS must look everywhere, all the time. Sky coverage is more important than exquisite sensitivity to faint objects, because asteroids inbound for impact will eventually become quite bright. This makes ATLAS complementary to other surveys, which scan the sky at a more leisurely pace but are able to detect asteroids at greater distances. We report on ATLAS' first year of survey operations, including the maturing of robotic observation and detection strategies, and asteroid and

  20. HiggsHunters - a citizen science project for ATLAS

    CERN Document Server

    Haas, Andrew; The ATLAS collaboration

    2016-01-01

    Since the launch of HiggsHunters.org in November 2014, citizen science volunteers have classified more than a million points of interest in images from the ATLAS experiment at the LHC. Volunteers have been looking for displaced vertices and unusual features in images recorded during LHC Run-1. We discuss the design of the project, its impact on the public, and the surprising results of how the human volunteers performed relative to the computer algorithms in identifying displaced secondary vertices.

  1. Resource Utilization by the ATLAS High Level Trigger during 2010 and 2011 LHC running

    Science.gov (United States)

    Lipeles, Elliot; Ospanov, Rustem; Schaefer, Doug

    2012-12-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully recorded the collision data with high efficiency and excellent data quality. Events were selected using a three-level trigger system, where each level made a more refined selection. The Level 1 (L1) trigger consisted of a custom-designed hardware trigger which seeded two higher software based trigger levels. Over 300 triggers composed a trigger menu which selected physics signatures such as electrons, muons, particle jets, etc. Each trigger consumed computing resources of the ATLAS Trigger system and offline storage. The LHC instantaneous luminosity conditions, desired physics goals of the collaboration, and the limits of the trigger infrastructure determined the composition of the ATLAS Trigger menu. We describe a trigger monitoring framework called the Cost Monitoring Framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework was used to prepare the ATLAS Trigger for data taking during increases of more than six orders of magnitude in the LHC luminosity and has been influential in guiding ATLAS Trigger computing upgrades.
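
    A schematic example of the kind of bookkeeping such a cost framework performs, aggregating per-event algorithm timings and data-request counts into per-algorithm averages; the record format and algorithm names are invented for illustration.

      from collections import defaultdict

      # Hypothetical per-event cost records: (algorithm name, CPU ms, data requests).
      event_records = [
          [("L2_mu_hypo", 1.8, 2), ("L2_calo_cluster", 4.5, 6)],
          [("L2_calo_cluster", 5.1, 7), ("EF_electron_id", 22.0, 3)],
          [("L2_mu_hypo", 2.0, 2), ("EF_electron_id", 19.5, 3)],
      ]

      totals = defaultdict(lambda: {"cpu_ms": 0.0, "requests": 0, "calls": 0})
      for event in event_records:
          for alg, cpu_ms, n_req in event:
              totals[alg]["cpu_ms"] += cpu_ms
              totals[alg]["requests"] += n_req
              totals[alg]["calls"] += 1

      # Rank algorithms by total CPU cost and print per-call averages.
      for alg, t in sorted(totals.items(), key=lambda kv: -kv[1]["cpu_ms"]):
          print(f"{alg:18s} mean CPU {t['cpu_ms']/t['calls']:6.2f} ms  "
                f"mean requests {t['requests']/t['calls']:.1f}")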

  2. Job optimization in ATLAS TAG-based distributed analysis

    International Nuclear Information System (INIS)

    The ATLAS experiment is projected to collect over one billion events/year during the first few years of operation. The efficient selection of events for various physics analyses across all appropriate samples presents a significant technical challenge. ATLAS computing infrastructure leverages the Grid to tackle the analysis across large samples by organizing data into a hierarchical structure and exploiting distributed computing to churn through the computations. This includes events at different stages of processing: RAW, ESD (Event Summary Data), AOD (Analysis Object Data), DPD (Derived Physics Data). Event Level Metadata Tags (TAGs) contain information about each event stored using multiple technologies accessible by POOL and various web services. This allows users to apply selection cuts on quantities of interest across the entire sample to compile a subset of events that are appropriate for their analysis. This paper describes new methods for organizing jobs using the TAGs criteria to analyze ATLAS data. It further compares different access patterns to the event data and explores ways to partition the workload for event selection and analysis. Here analysis is defined as a broader set of event processing tasks including event selection and reduction operations ('skimming', 'slimming' and 'thinning') as well as DPD making. Specifically it compares analysis with direct access to the events (AOD and ESD data) to access mediated by different TAG-based event selections. We then compare different ways of splitting the processing to maximize performance.
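
    An illustrative sketch of TAG-style pre-selection, assuming the event-level metadata have been exported to a flat table (the column names are invented): cheap cuts on the TAG quantities decide which events are later read from the full AOD/ESD data.

      import numpy as np

      # Hypothetical TAG table: one row per event with a few summary quantities.
      tags = np.array(
          [(1, 101, 2, 54.2, 1), (1, 102, 0, 12.8, 0), (2, 517, 1, 87.5, 1),
           (2, 518, 3, 33.1, 1), (3, 222, 1, 140.3, 0)],
          dtype=[("run", "i4"), ("event", "i4"), ("n_muons", "i2"),
                 ("missing_et", "f4"), ("trigger_passed", "i1")])

      # Selection cuts applied to the TAGs only; no event data are touched yet.
      mask = (tags["trigger_passed"] == 1) & \
             (tags["n_muons"] >= 1) & (tags["missing_et"] > 40.0)
      selected = tags[mask]

      # The surviving (run, event) pairs then drive which AOD/ESD events are read.
      print([(int(r), int(e)) for r, e in zip(selected["run"], selected["event"])])
      # [(1, 101), (2, 517)]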

  3. Automated Loads Analysis System (ATLAS)

    Science.gov (United States)

    Gardner, Stephen; Frere, Scot; O’Reilly, Patrick

    2013-01-01

    ATLAS is a generalized solution that can be used for launch vehicles. ATLAS is used to produce modal transient analysis and quasi-static analysis results (i.e., accelerations, displacements, and forces) for the payload math models on a specific Shuttle Transport System (STS) flight using the shuttle math model and associated forcing functions. This innovation solves the problem of coupling of payload math models into a shuttle math model. It performs a transient loads analysis simulating liftoff, landing, and all flight events between liftoff and landing. ATLAS utilizes efficient and numerically stable algorithms available in MSC/NASTRAN.

  4. Electrons and Photons at ATLAS

    CERN Document Server

    Heim, Sarah; The ATLAS collaboration

    2016-01-01

    The performance of the reconstruction, calibration and identification of electrons and photons with the ATLAS detector at the LHC is a key component to realize the ATLAS full physics potential, both in the searches for new physics and in precision measurements. The algorithms used for the reconstruction and identification of electrons and photons with the ATLAS detector during LHC run 2 are presented. Measurements of the identification efficiencies are derived from data. The results from the 2015 pp collision data set at sqrt(s)=13 TeV are reported. The electron and photon energy calibration procedure and its performance are also discussed.

  5. Multi-Atlas Segmentation with Joint Label Fusion and Corrective Learning - An Open Source Implementation

    Directory of Open Access Journals (Sweden)

    Hongzhi eWang

    2013-11-01

    Full Text Available Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion that combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective label fusion technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was one of the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit-based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools through applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset. We report the best results on these two datasets so far.
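
    A simplified sketch of the weighted-voting baseline described above, with per-atlas weights from local intensity similarity; it deliberately omits the joint (correlated-error) modelling and corrective learning that distinguish the authors' method, and all inputs are assumed to be already registered to the target.

      import numpy as np

      def weighted_vote_fusion(target, warped_intensities, warped_labels, beta=0.5):
          """Weighted voting with atlas weights from intensity similarity.

          target: target image array; warped_intensities / warped_labels: lists of
          atlas images and label maps of matching shape, already registered to the
          target. beta controls how sharply weights fall with intensity mismatch.
          """
          weights = [np.exp(-beta * (target - wi) ** 2) for wi in warped_intensities]
          labels = np.unique(np.stack(warped_labels))
          votes = np.zeros((len(labels),) + target.shape)
          for w, wl in zip(weights, warped_labels):
              for k, lab in enumerate(labels):
                  votes[k] += w * (wl == lab)
          return labels[np.argmax(votes, axis=0)]

      # Toy 1D example: at each voxel the atlas closer in intensity dominates.
      t  = np.array([0.1, 0.9])
      ai = [np.array([0.1, 0.1]), np.array([0.9, 0.9])]
      al = [np.array([1, 1]), np.array([2, 2])]
      print(weighted_vote_fusion(t, ai, al, beta=10.0))   # [1 2]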

  6. Seismic Empirical Relations for the Tellian Atlas, North Africa, and their Usefulness for Seismic Risk Assessment

    Science.gov (United States)

    Beghoul, Noureddine; Chatelain, Jean-Luc; Boughacha, Mohamed-Salah; Benhallou, Hadj; Dadou, Rida; Mezioud-Saïch, Amira

    2010-03-01

    Seismic events that occurred during the past half century in the Tellian Atlas, North Africa, are used to establish fundamental seismic empirical relations, tying earthquake magnitude to source parameters (seismic moment, fault plane area, maximal displacement along the fault, and fault plane length). Those empirical relations applied to the overall seismicity from 1716 to present are used to transform the magnitude (or intensity) versus time distribution into (1) cumulative seismic moment versus time, and (2) cumulative displacements versus time. Both of those parameters as well as the computed seismic moment rate, the strain rate along the Tellian Atlas strike, and various other geological observations are consistent with the existence, in the Tellian Atlas, of three distinct active tectonic blocks. These blocks are seismically decoupled from each other, thus allowing consideration of the seismicity as occurring in three different distinct seismotectonic blocks. The cumulative displacement versus time from 1900 to present for each of these tectonic blocks presents a remarkable pattern of recurrence time intervals and precursors associated with major earthquakes. Indeed, most major earthquakes that occurred in these three blocks might have been predicted in time. The Tellian Atlas historical seismicity from the year 881 to the present more substantially confirms these observations, in particular for the western block of the Tellian Atlas. Theoretical determination of recurrence time intervals for the Tellian Atlas large earthquakes using Molnar and Kostrov formalisms is also consistent with these observations. Substantial observations support the fact that the western and central Tellian Atlas are currently at very high seismic risk, in particular the central part. Indeed, most of the accumulated seismic energy in the central Tellian Atlas crust has yet to be released, despite the occurrence of the recent destructive May 2003 Boumerdes earthquake (Mw = 6.8). The
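
    A small sketch of the kind of transformation described above, turning a magnitude catalogue into cumulative seismic moment versus time; it uses the standard Hanks-Kanamori moment-magnitude relation rather than the region-specific empirical relations derived in the paper, and the mini-catalogue values are made up.

      import numpy as np

      def moment_from_magnitude(mw):
          """Seismic moment M0 in N*m from moment magnitude Mw
          (Hanks & Kanamori: Mw = 2/3 * (log10(M0) - 9.05))."""
          return 10.0 ** (1.5 * np.asarray(mw, dtype=float) + 9.05)

      # Made-up (year, Mw) pairs, for illustration only.
      catalogue = [(1910, 6.2), (1955, 6.9), (1985, 5.8), (2004, 6.5)]
      years = [y for y, _ in catalogue]
      m0 = moment_from_magnitude([m for _, m in catalogue])
      cumulative = np.cumsum(m0)
      for y, c in zip(years, cumulative):
          print(y, f"{c:.2e} N*m")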

  7. The ATLAS Trigger Muon "Vertical Slice"

    CERN Document Server

    Sidoti, A; Biglietti, M; Carlino, G; Cataldi, G; Conventi, F; Del Prete, T; Di Mattia, A; Falciano, S; Gorini, S; Kanaya, N; Kohno, T; Krasznahorkay, A; Lagouri, T; Luci, C; Luminari, L; Marzano, F; Nagano, K; Nisati, A; Panikashvili, N; Pasqualucci, E; Primavera, M; Scannicchio, D A; Spagnolo, S; Tarem, S; Tarem, Z; Tokushuku, K; Usai, G; Ventura, A; Vercesi, V; Yamazaki, Y; 10th Pisa Meeting on Advanced Detectors : Frontier Detectors For Frontier Physics

    2007-01-01

    The muon trigger system is a fundamental component of the ATLAS detector at the LHC collider. In this paper we describe the ATLAS multi-level trigger selecting events with muons: the Muon Trigger Slice.

  8. EnviroAtlas - Metrics for Memphis, TN

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  9. EnviroAtlas - Metrics for Portland, ME

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  10. ATLAS Calorimeter Part 2/2

    CERN Multimedia

    2004-01-01

    There are two videos about the lowering, and this one is the second part, showing the final positioning of the object. The first part shows how the ATLAS calorimeter with its solenoid is lowered into the ATLAS cavern.

  11. Forward Physics at the ATLAS experiment

    CERN Document Server

    The ATLAS collaboration

    2009-01-01

    This poster summarizes forward physics at the ATLAS experiment. It focuses on the AFP project, which aims to install forward detectors at 220 m (AFP220) and 420 m (AFP420) around ATLAS for measurements at high luminosity.

  12. EnviroAtlas - Metrics for Paterson, NJ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  13. EnviroAtlas - Metrics for Tampa, FL

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  14. EnviroAtlas - Metrics for Portland, OR

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http:/www.epa.gov/enviroatlas). The layers in these web...

  15. EnviroAtlas - Metrics for Milwaukee, WI

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (http://www.epa.gov/enviroatlas). The layers in these web...

  16. EnviroAtlas - Durham, NC - Demo (Parent)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Durham, NC EnviroAtlas Area. The block groups are from the US Census Bureau and are included/excluded based on...

  17. EnviroAtlas - Memphis, TN - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Memphis, TN EnviroAtlas community. The block groups are from the US Census Bureau and are included/excluded based...

  18. ATLAS : civil engineering at Point 1

    CERN Multimedia

    CERN Audiovisual Unit

    2002-01-01

    The ATLAS experimental area is located at Point 1, just across from the main CERN entrance, in the commune of Meyrin. There, people are busy finishing the different infrastructures for ATLAS. Real underground video.

  19. EnviroAtlas - Metrics for Woodbine, IA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  20. EnviroAtlas - Metrics for Phoenix, AZ

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  1. EnviroAtlas - Metrics for Durham, NC

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  2. EnviroAtlas - Metrics for Pittsburgh, PA

    Data.gov (United States)

    U.S. Environmental Protection Agency — These EnviroAtlas web services support research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in these web...

  3. EnviroAtlas - Austin, TX - Block Groups

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas dataset is the base layer for the Austin, TX EnviroAtlas area. The block groups are from the US Census Bureau and are included/excluded based on...

  4. Women of ATLAS - International Women's Day 2016

    CERN Multimedia

    Biondi, Silvia

    2016-01-01

    Women play key roles in the ATLAS Experiment: from young physicists at the start of their careers to analysis group leaders and spokespersons of the collaboration. Celebrate International Women's Day by meeting a few of these inspiring ATLAS researchers.

  5. EnviroAtlas - Metrics for Fresno, CA

    Data.gov (United States)

    U.S. Environmental Protection Agency — This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). The layers in this web...

  6. An image of an event in which a microscopic-black-hole was produced in the collision of two protons in a computer generated image of the ATLAS detector.

    CERN Multimedia

    Joao Pequenao

    2008-01-01

    In some theories, microscopic black holes may be produced in particle collisions that occur when very-high-energy cosmic rays hit particles in our atmosphere. These microscopic black holes would decay into ordinary particles in a tiny fraction of a second and would be very difficult to observe in our atmosphere. The ATLAS Experiment offers the exciting possibility to study them in the lab (if they exist). The simulated collision event shown is viewed along the beampipe. The event is one in which a microscopic black hole was produced in the collision of two protons (not shown). The microscopic black hole decayed immediately into many particles. The colors of the tracks show different types of particles emerging from the collision (at the center).

  7. CERN Open Days 2013, Point 1 - ATLAS: ATLAS Experiment

    CERN Multimedia

    CERN Photolab

    2013-01-01

    Stand description: The ATLAS Experiment at CERN is one of the largest and most complex scientific endeavours ever assembled. The detector, located at collision point 1 of the LHC, is designed to explore the fundamental components of nature and to study the forces that shape our universe. The past year’s discovery of a Higgs boson is one of the most important scientific achievements of our time, yet this is only one of many key goals of ATLAS. During a brief break in their journey, some of the 3000-member ATLAS collaboration will be taking time to share the excitement of this exploration with you. (On surface; no restricted access.) The exhibit at Point 1 will give visitors a chance to meet these modern-day explorers and to learn from them how answers to the most fundamental questions of mankind are being sought. Activities will include a visit to the ATLAS detector, located 80m below ground; watching the prize-winning ATLAS movie in the ATLAS cinema; seeing real particle tracks in a cloud chamber and discussi...

  8. ATLAS experiment : mapping the secrets of the universe

    CERN Multimedia

    ATLAS Outreach

    2010-01-01

    This 4-page color brochure describes ATLAS and the LHC, the ATLAS inner detector, calorimeters, muon spectrometer and magnet system, and gives short definitions of the terms "particles," "dark matter," "mass," and "antimatter." It also explains the ATLAS collaboration and provides the ATLAS website address, with some images of the detector and the ATLAS collaboration at work.

  9. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration; Medrano Llamas, R; Sciacca, G; Van der Ster, D C

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes more than 80 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate si...

  10. Improving ATLAS grid site reliability with functional tests using HammerCloud

    CERN Document Server

    Legger, F; The ATLAS collaboration

    2012-01-01

    With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate the site capability of successfully executing user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short light-weight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site...
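
    Sketched below is the functional-testing pattern described in the two HammerCloud records above: short, light-weight jobs are submitted to every site, their outcomes are collected, and sites falling below a success threshold become candidates for automatic exclusion. This is a minimal illustration; the submit and fetch_result callables are hypothetical stand-ins for the real grid submission layer, not the HammerCloud API.

        from typing import Callable, Dict, List

        def run_functional_tests(
            sites: List[str],
            submit: Callable[[str], str],         # submits one short test job, returns a job id
            fetch_result: Callable[[str], bool],  # True if the job finished successfully
            jobs_per_site: int = 5,
            min_efficiency: float = 0.8,
        ) -> Dict[str, str]:
            """Submit light-weight test jobs to every site and flag sites whose
            success fraction falls below min_efficiency as exclusion candidates."""
            verdict = {}
            for site in sites:
                job_ids = [submit(site) for _ in range(jobs_per_site)]
                passed = sum(1 for jid in job_ids if fetch_result(jid))
                efficiency = passed / jobs_per_site
                verdict[site] = "online" if efficiency >= min_efficiency else "exclude"
            return verdict

        # Toy stand-ins for the grid layer, just to make the sketch runnable.
        if __name__ == "__main__":
            outcomes = {"SITE_A-job0": True, "SITE_B-job0": False}
            print(run_functional_tests(
                ["SITE_A", "SITE_B"],
                submit=lambda site: f"{site}-job0",
                fetch_result=lambda jid: outcomes.get(jid, True),
                jobs_per_site=1,
            ))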

  11. Resource Utilization by the ATLAS High Level Trigger during 2010 and 2011 LHC running

    CERN Document Server

    Schaefer, D; The ATLAS collaboration; Ospanov, R

    2012-01-01

    Since starting in 2010, the Large Hadron Collider (LHC) has produced collisions at an ever increasing rate. The ATLAS experiment successfully records the collision data with high efficiency and excellent data quality. Events are selected using a three-level trigger system, where each level makes a more refined selection. The level-1 trigger (L1) consists of a custom-designed hardware trigger which seeds two higher software-based trigger levels. Over 300 triggers compose a trigger menu which selects physics signatures such as electrons, muons, particle jets, etc. Each trigger consumes computing resources of the ATLAS trigger system and online storage. The LHC instantaneous luminosity conditions, desired physics goals of the collaboration, and the limits of the trigger infrastructure determine the composition of the ATLAS trigger menu. We describe a trigger monitoring framework for computing the costs of individual trigger algorithms such as data request rates and CPU consumption. This framework has been used to...
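
    The cost-monitoring idea in the abstract above, sketched as a simple aggregation: per-event execution records are grouped by trigger chain and summed, so the most expensive chains can be ranked. The record fields (chain, cpu_ms, data_requests) are illustrative, not the actual schema of the ATLAS framework.

        from collections import defaultdict
        from typing import Dict, Iterable, List, Tuple

        def summarise_trigger_costs(records: Iterable[Dict]) -> List[Tuple[str, Dict]]:
            """Aggregate per-event cost records, e.g.
            {"chain": "HLT_mu20", "cpu_ms": 12.4, "data_requests": 3},
            into per-chain totals, sorted by total CPU time (most expensive first)."""
            totals: Dict[str, Dict] = defaultdict(
                lambda: {"cpu_ms": 0.0, "data_requests": 0, "executions": 0}
            )
            for rec in records:
                chain = totals[rec["chain"]]
                chain["cpu_ms"] += rec["cpu_ms"]
                chain["data_requests"] += rec["data_requests"]
                chain["executions"] += 1
            return sorted(totals.items(), key=lambda kv: kv[1]["cpu_ms"], reverse=True)

        # Example: two chains, one clearly dominating the CPU budget.
        if __name__ == "__main__":
            sample = [
                {"chain": "HLT_j400", "cpu_ms": 90.0, "data_requests": 8},
                {"chain": "HLT_mu20", "cpu_ms": 12.4, "data_requests": 3},
                {"chain": "HLT_j400", "cpu_ms": 85.0, "data_requests": 7},
            ]
            for name, cost in summarise_trigger_costs(sample):
                print(name, cost)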

  12. World Ocean Atlas 2005, Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  13. ATLAS online data quality monitoring

    CERN Document Server

    Cuenca Almenar, C; The ATLAS collaboration; Hadavand, H; Ilchenko, Y; Kolos, S; Slagle, K; Taffard, A

    2010-01-01

    Every minute the ATLAS detector is taking data, the monitoring framework serves several thousand physics events to monitoring data analysis applications, handles millions of histogram updates coming from thousands of applications, executes over forty thousand advanced data quality checks for a subset of those histograms, and displays histograms and the results of these checks on several dozen monitors installed in the main and satellite ATLAS control rooms. The online data quality monitoring system has been of great help in providing quick feedback to the subsystems about the functioning and performance of the different parts of ATLAS, through a configurable, easy and fast visualization of all this information. The Data Quality Monitoring Display (DQMD) is a visualization tool for the automatic data quality assessment of the ATLAS experiment. It is the interface through which the shift crew and experts can validate the quality of the data being recorded or processed, be warned of problems related to data quality, an...
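
    One simple shape such an automatic data-quality check can take, shown here purely for illustration (this is not the DQMD implementation): compare a monitored histogram to a reference and assign a green/yellow/red flag from the worst relative bin deviation.

        from typing import Sequence

        def bin_deviation_check(
            monitored: Sequence[float],
            reference: Sequence[float],
            warn: float = 0.10,
            error: float = 0.25,
        ) -> str:
            """Return 'green', 'yellow' or 'red' based on the maximum relative
            deviation between two histograms with identical binning."""
            if len(monitored) != len(reference):
                return "red"  # incompatible binning is itself a problem
            worst = 0.0
            for m, r in zip(monitored, reference):
                if r == 0:
                    continue  # skip empty reference bins
                worst = max(worst, abs(m - r) / r)
            if worst < warn:
                return "green"
            return "yellow" if worst < error else "red"

        # Example: a 15% excess in one bin yields a 'yellow' flag.
        print(bin_deviation_check([100, 115, 98], [100, 100, 100]))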

  14. Wheels lining up for ATLAS

    CERN Multimedia

    2003-01-01

    On 30 October, the mechanical test assembly of the central barrel of the ATLAS tile hadronic calorimeter was completed in building 185. It is the second Tilecal wheel to be completely assembled this year.

  15. Dartmouth Atlas of Health Care

    Data.gov (United States)

    U.S. Department of Health & Human Services — For more than 20 years, the Dartmouth Atlas Project has documented glaring variations in how medical resources are distributed and used in the United States. The...

  16. Nuclear Receptor Signaling Atlas (NURSA)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Nuclear Receptor Signaling Atlas (NURSA) is designed to foster the development of a comprehensive understanding of the structure, function, and role in disease...

  17. World Ocean Atlas 2005, Salinity

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  18. World Ocean Atlas 2005, Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — World Ocean Atlas 2005 (WOA05) is a set of objectively analyzed (1° grid) climatological fields of in situ temperature, salinity, dissolved oxygen, Apparent Oxygen...

  19. Two new wheels for ATLAS

    CERN Multimedia

    2002-01-01

    Juergen Zimmer (Max Planck Institute), Roy Langstaff (TRIUMF/Victoria) and Sergej Kakurin (JINR), in front of one of the completed wheels of the ATLAS Hadronic End Cap Calorimeter. A decade of careful preparation and construction by groups in three continents is nearing completion with the assembly of two of the four 4 m diameter wheels required for the ATLAS Hadronic End Cap Calorimeter. The first two wheels have successfully passed all their mechanical and electrical tests, and have been rotated on schedule into the vertical position required in the experiment. 'This is an important milestone in the completion of the ATLAS End Cap Calorimetry' explains Chris Oram, who heads the Hadronic End Cap Calorimeter group. Like most experiments at particle colliders, ATLAS consists of several layers of detectors in the form of a 'barrel' and two 'end caps'. The Hadronic Calorimeter layer, which measures the energies of particles such as protons and pions, uses two techniques. The barrel part (Tile Calorimeter) cons...

  20. ATLAS recognises its best suppliers

    CERN Multimedia

    Jenni, P

    The ATLAS Collaboration has recently rewarded two of its suppliers in the construction of very major detector components, fabricated in Japan. The ATLAS Supplier Award in recognition of excellent supplier performance was attributed on 2nd September 2002 during a ceremony in Hall 180 to Kawasaki Heavy Industries, while Toshiba Corporation received the award two months before at their headquarters in Japan. The ATLAS experiment will become a reality thanks to a large international collaboration partnership. The industrial suppliers for the components all over the world play a major role in the construction of this gigantic jigsaw for the LHC. And sometimes they perform so well that their work deserves special recognition. This is the case for Kawasaki Heavy Industries and Toshiba Corporation, producers of the Liquid Argon Barrel Cryostat and of the Superconducting Central Solenoid, respectively. With these awards, the ATLAS Collaboration wants to congratulate Kawasaki and Toshiba for fulfilling the hi...

  1. ATLAS Civil Engineering Point 1

    CERN Multimedia

    2001-01-01

    Different phases of realisation at Point 1, the zone of the ATLAS experiment. 14-02-2001: realising anchorage, insulation and scaffolding at UX15. 18-04-2001: concreting the arch and placing the metal reinforcements at UX15.

  2. BioFuels Atlas (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Moriarty, K.

    2011-02-01

    Presentation for biennial merit review of Biofuels Atlas, a first-pass visualization tool that allows users to explore the potential of biomass-to-biofuels conversions at various locations and scales.

  3. The ATLAS Simulation Software

    CERN Document Server

    Marshall, Z

    2008-01-01

    We present the status of the ATLAS Simulation Project. Recent detector description improvements have focussed on commissioning layouts, implementation of inert material, and comparisons to the as-built detector. Core Simulation is reviewed with a focus on parameter optimizations, physics list choices, visualization, large-scale production, and validation. A fast simulation is also briefly described, and its performance is evaluated with respect to the full Simulation. Digitization, the last step of the Monte Carlo chain, is described, including developments in pile-up and data overlay.

  4. VH WW in ATLAS

    CERN Document Server

    Kinghorn-taenzer, Joseph Peter; The ATLAS collaboration

    2015-01-01

    A search for Higgs boson production in association with a W or Z boson, in the H -> WW decay channel, is performed with a data sample collected with the ATLAS detector at the LHC in proton–proton collisions at centre-of-mass energies sqrt(s) = 7 TeV and 8 TeV, corresponding to integrated luminosities of 4.5 fb-1 and 20.3 fb-1, respectively. The WH production mode is studied in two-lepton and three-lepton final states, while two-lepton and four-lepton final states are used to search for the ZH production mode.

  5. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    CPPM Laboratory, Marseille. Starting with the workshop: adding modules to the strip. 00:09:19 Exterior: entering the lab site by car; Sascha Rosanov and a PR lady walking; lab sign on the building, Physique des Particules de Marseille. 00:20:00 Interviews on the ATLAS pixel work for bio-medical research. 00:34:00 Interview of Roy Aleksov, Head of the CPPM Laboratory: working in an international team, working with CERN and the GRID. The rest of the film includes lab testing and some exterior shots.

  6. Dark Matter in ATLAS

    CERN Document Server

    Resconi, Silvia; The ATLAS collaboration

    2016-01-01

    An overview of Dark Matter searches with the ATLAS experiment at the Large Hadron Collider (LHC) is shown. Results of Mono-X analyses requiring large missing transverse momentum and a recoiling detectable physics object (X) are reported. The data were collected in proton-proton collisions at a centre-of-mass energy of 13 TeV. The observed data are in agreement with the expected Standard Model backgrounds for all analyses described. Exclusion limits are presented for Dark Matter models including pair production of Dark Matter candidates.

  7. Supersymmetry searches in ATLAS

    CERN Document Server

    Meloni, Federico; The ATLAS collaboration

    2015-01-01

    This document summarises recent ATLAS results for searches for supersymmetric particles using LHC proton-proton collision data. Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. We consider both R-Parity conserving and R-Parity violating SUSY scenarios. The searches involve final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures. Sensitivity projections for the data that will be collected in 2015 are also presented.

  8. Supersymmetry searches in ATLAS

    CERN Document Server

    Meloni, Federico; The ATLAS collaboration

    2015-01-01

    Despite the absence of experimental evidence, weak scale supersymmetry remains one of the best motivated and studied Standard Model extensions. This talk summarises recent ATLAS results for searches for supersymmetric (SUSY) particles. Weak and strong production in both R-Parity conserving and R-Parity violating SUSY scenarios are considered. The searches involved final states including jets, missing transverse momentum, light leptons, taus or photons, as well as long-lived particle signatures. Sensitivity projections for the data that will be collected in 2015 are also presented.

  9. QCD Measurements at ATLAS

    CERN Document Server

    Hubacek, Zdenek; The ATLAS collaboration

    2016-01-01

    This paper presents recent QCD-related measurements from the ATLAS Experiment at the LHC at CERN. The results on the total inelastic cross-section, charged particle production, jet production, photon production, and W- and Z-boson production are briefly summarized. The measurements are performed at different center-of-mass energies sqrt(s) = 7, 8, and 13 TeV. The measured cross-sections are generally found to be in agreement with the expectations from the Standard Model within the estimated uncertainties.

  10. Dark Matter in ATLAS

    CERN Document Server

    Resconi, Silvia; The ATLAS collaboration

    2016-01-01

    Results of Dark Matter searches in mono-X analyses with the ATLAS experiment at the Large Hadron Collider are reported. The data were collected in proton–proton collisions at a centre-of-mass energy of 13 TeV and correspond to an integrated luminosity of 3.2 fb-1. A description of the main characteristics of each analysis and how the main backgrounds are estimated is given. The observed data are in agreement with the expected Standard Model backgrounds for all analyses described. Exclusion limits are presented for Dark Matter models including pair production of dark matter candidates.

  11. Surveying the ATLAS cavern

    CERN Multimedia

    Laurent Guiraud

    2000-01-01

    The cathedral-like cavern into which the ATLAS experiment will be lowered and installed forms a vital part of the engineering work at CERN in preparation for the new LHC accelerator. This cavern, being measured by surveyors in these images, will have one of the largest spans of any man-made underground structure. The massive 46 × 25 × 25 metre detector will be the largest of its type in the world when it is completed for the LHC start-up in 2008.

  12. ATLAS TV PROJECT

    CERN Multimedia

    2005-01-01

    Budker Nuclear Physics Institute, Novosibirsk. Sequence 1: reception for Markus Nordberg and Andrew Millington by about 20 physicists from the Budker Nuclear Physics Institute; host: Yuri Tikhonov; various short talks and exchanges, with coffee. Sequence 2: visit to BINP facilities; Tikhonov and Nordberg walking and talking; visit to the electron accelerator and the old solar detector. Sequence 3: visit to the BINP workshops; work on big wheel segments (shots over-exposed); work on ATLAS coils; LHC magnets; men playing chess; exterior shots of Tikhonov and Nordberg arriving. Sequence 4: shots from a car of the journey from the workshop to the main BINP building.

  13. Exotics searches in ATLAS

    CERN Document Server

    Vranjes, N; The ATLAS collaboration

    2016-01-01

    We report on the latest searches for (non-SUSY) Beyond Standard Model phenomena performed with the ATLAS detector. The searches have been performed with the data from proton-proton collisions at a centre-of-mass energy of 7 TeV collected in 2010 and 2011. Various experimental signatures have been studied involving reconstruction and measurement of leptons, photons, jets, missing transverse energy, as well as reconstruction of top quarks. For most of the signatures, the experimental reach is significantly increased with respect to previous results.

  14. Top Physics at ATLAS

    OpenAIRE

    Barisonzi, Marcello

    2005-01-01

    The Large Hadron Collider (LHC) is a top quark factory: due to its high design luminosity, the LHC will produce about 200 million top quarks per year of operation. The large amount of data will allow the properties of the top quark, most notably its cross-section, mass and spin, to be studied with great precision. The Top Physics Working Group has been set up at the ATLAS experiment to evaluate the precision reach of physics measurements in the top sector, and to study the systematic effects of the ATLA...

  15. The Genome Atlas Resource

    DEFF Research Database (Denmark)

    Azam Qureshi, Matloob; Rotenberg, Eva; Stærfeldt, Hans Henrik;

    2010-01-01

    Abstract. The Genome Atlas is a resource for addressing the challenges of synchronising prokaryotic genomic sequence data from multiple public repositories. This resource can integrate bioinformatic analyses in various data formats and of varying quality. Existing open source tools have been used together with scripts and algorithms developed in a variety of programming languages at the Centre for Biological Sequence Analysis in order to create a three-tier software application for genome analysis. The results are made available via a web interface developed in Java, PHP and Perl CGI. User...

  16. Real-time Flavor Tagging in ATLAS:

    CERN Document Server

    Alison, John; The ATLAS collaboration

    2015-01-01

    In high-energy physics experiments, online selection is crucial to select interesting collisions from the large data volume. ATLAS b-jet and tau triggers are designed to identify heavy-flavour content in real time and provide the only option to efficiently record events with fully hadronic final states containing b-jets or hadronic tau decays. In doing so, two different, but related, challenges are faced. The physics goal is to optimise as far as possible the rejection of light jets, while retaining a high efficiency for selecting b-jets or hadronic taus and maintaining affordable trigger rates without raising jet energy thresholds. This maps into a challenging computing task, as tracks and their corresponding vertices must be reconstructed and analysed for each jet above the desired threshold, regardless of the increasingly harsh pile-up conditions. We present an overview of the ATLAS strategy for online b-jet and tau selection for the LHC Run 2, including the use of novel methods and sophisticated algorithms...

  17. Event Reconstruction Algorithms for the ATLAS Trigger

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca-Martin, T.; /CERN; Abolins, M.; /Michigan State U.; Adragna, P.; /Queen Mary, U. of London; Aleksandrov, E.; /Dubna, JINR; Aleksandrov, I.; /Dubna, JINR; Amorim, A.; /Lisbon, LIFEP; Anderson, K.; /Chicago U., EFI; Anduaga, X.; /La Plata U.; Aracena, I.; /SLAC; Asquith, L.; /University Coll. London; Avolio, G.; /CERN; Backlund, S.; /CERN; Badescu, E.; /Bucharest, IFIN-HH; Baines, J.; /Rutherford; Barria, P.; /Rome U. /INFN, Rome; Bartoldus, R.; /SLAC; Batreanu, S.; /Bucharest, IFIN-HH /CERN; Beck, H.P.; /Bern U.; Bee, C.; /Marseille, CPPM; Bell, P.; /Manchester U.; Bell, W.H.; /Glasgow U. /Pavia U. /INFN, Pavia /Regina U. /CERN /Annecy, LAPP /Paris, IN2P3 /Royal Holloway, U. of London /Napoli Seconda U. /INFN, Naples /Argonne /CERN /UC, Irvine /Barcelona, IFAE /Barcelona, Autonoma U. /CERN /Montreal U. /CERN /Glasgow U. /Michigan State U. /Bucharest, IFIN-HH /Napoli Seconda U. /INFN, Naples /New York U. /Barcelona, IFAE /Barcelona, Autonoma U. /Salento U. /INFN, Lecce /Pisa U. /INFN, Pisa /Bucharest, IFIN-HH /UC, Irvine /CERN /Glasgow U. /INFN, Genoa /Genoa U. /Lisbon, LIFEP /Napoli Seconda U. /INFN, Naples /UC, Irvine /Valencia U. /Rio de Janeiro Federal U. /University Coll. London /New York U.; /more authors..

    2011-11-09

    The ATLAS experiment under construction at CERN is due to begin operation at the end of 2007. The detector will record the results of proton-proton collisions at a center-of-mass energy of 14 TeV. The trigger is a three-tier system designed to identify in real time potentially interesting events that are then saved for detailed offline analysis. The trigger system will select approximately 200 Hz of potentially interesting events out of the 40 MHz bunch-crossing rate (with 10^9 interactions per second at the nominal luminosity). Algorithms used in the trigger system to identify different event features of interest will be described, as well as their expected performance in terms of selection efficiency, background rejection and computation time per event. The talk will concentrate on recent improvements and on performance studies, using a very detailed simulation of the ATLAS detector and electronics chain that emulates the raw data as it will appear at the input to the trigger system.
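
    The rates quoted above imply an overall online rejection factor of roughly 2 × 10^5; a quick check of the arithmetic, using only the values taken from the abstract:

        bunch_crossing_rate_hz = 40e6   # 40 MHz input rate seen by the trigger
        recorded_rate_hz = 200.0        # ~200 Hz of events kept for offline analysis

        rejection_factor = bunch_crossing_rate_hz / recorded_rate_hz
        print(f"overall rejection factor ~ {rejection_factor:.1e}")  # ~2.0e+05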

  18. The ATLAS Level-2 Trigger Pilot Project

    CERN Document Server

    Blair, R; Haberichter, W N; Schlereth, J L; Bock, R; Bogaerts, A; Boosten, M; Dobinson, Robert W; Dobson, M; Ellis, Nick; Elsing, M; Giacomini, F; Knezo, E; Martin, B; Shears, T G; Tapprogge, Stefan; Werner, P; Hansen, J R; Wäänänen, A; Korcyl, K; Lokier, J; George, S; Green, B; Strong, J; Clarke, P; Cranfield, R; Crone, G J; Sherwood, P; Wheeler, S; Hughes-Jones, R E; Kolya, S; Mercer, D; Hinkelbein, C; Kornmesser, K; Kugel, A; Männer, R; Müller, M; Sessler, M; Simmler, H; Singpiel, H; Abolins, M; Ermoline, Y; González-Pineiro, B; Hauser, R; Pope, B; Sivoklokov, S Yu; Boterenbrood, H; Jansweijer, P; Kieft, G; Scholte, R; Slopsema, R; Vermeulen, J C; Baines, J T M; Belias, A; Botterill, David R; Middleton, R; Wickens, F J; Falciano, S; Bystrický, J; Calvet, D; Gachelin, O; Huet, M; Le Dû, P; Mandjavidze, I D; Levinson, L; González, S; Wiedenmann, W; Zobernig, H

    2002-01-01

    The Level-2 Trigger Pilot Project of ATLAS, one of the two general purpose LHC experiments, is part of the on-going program to develop the ATLAS high-level triggers (HLT). The Level-2 Trigger will receive events at up to 100 kHz, which has to be reduced to a rate suitable for full event-building of the order of 1 kHz. To reduce the data collection bandwidth and processing power required for the challenging Level-2 task it is planned to use Region of Interest guidance (from Level-1) and sequential processing. The Pilot Project included the construction and use of testbeds of up to 48 processing nodes, development of optimized components and computer simulations of a full system. It has shown how the required performance can be achieved, using largely commodity components and operating systems, and validated an architecture for the Level-2 system. This paper describes the principal achievements and conclusions of this project. (28 refs).
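
    A schematic of the Region-of-Interest guided, sequential processing strategy mentioned above: cheaper steps run first and an event is dropped as soon as any step fails, so most events never incur the cost of the later, more expensive steps. The step names and event layout are hypothetical; this is a sketch of the strategy, not the ATLAS Level-2 code.

        from typing import Callable, Dict, List, Tuple

        Selector = Callable[[Dict], bool]

        def level2_accept(event: Dict, steps: List[Tuple[str, Selector]]) -> bool:
            """steps are ordered cheapest-first; each selector only looks at the
            Region-of-Interest data it needs. The first failing step rejects the
            event, so later (more expensive) steps are never executed for it."""
            for roi_key, selector in steps:
                if not selector(event.get(roi_key, {})):
                    return False  # early rejection
            return True  # event passed all steps: forward to event building

        # Example: confirm the calorimeter cluster before running tracking.
        steps = [
            ("calo_roi", lambda roi: roi.get("et_gev", 0.0) > 20.0),
            ("track_roi", lambda roi: roi.get("n_tracks", 0) >= 1),
        ]
        event = {"calo_roi": {"et_gev": 27.0}, "track_roi": {"n_tracks": 2}}
        print(level2_accept(event, steps))  # True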

  19. The ATLAS data management software engineering process

    International Nuclear Information System (INIS)

    Rucio is the next-generation data management system of the ATLAS experiment. The software engineering process to develop Rucio is fundamentally different to existing software development approaches in the ATLAS distributed computing community. Based on a conceptual design document, development takes place using peer-reviewed code in a test-driven environment. The main objectives are to ensure that every engineer understands the details of the full project, even components usually not touched by them, that the design and architecture are coherent, that temporary contributors can be productive without delay, that programming mistakes are prevented before being committed to the source code, and that the source is always in a fully functioning state. This contribution will illustrate the workflows and products used, and demonstrate the typical development cycle of a component from inception to deployment within this software engineering process. Next to the technological advantages, this contribution will also highlight the social aspects of an environment where every action is subject to detailed scrutiny.
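
    The test-driven cycle described above, in miniature: the test is written first against the intended behaviour, and the component is then implemented only as far as the test requires. The function and the test below are purely illustrative examples of the workflow, not Rucio code.

        # Step 1: the test is written first (runnable with pytest or by hand)
        # and initially fails because parse_did does not exist yet.
        def test_parse_did_splits_scope_and_name():
            assert parse_did("data15:AOD.0001") == ("data15", "AOD.0001")

        # Step 2: the implementation is added until the test passes.
        def parse_did(did: str) -> tuple:
            """Split a 'scope:name' data identifier into its two components."""
            scope, name = did.split(":", 1)
            return scope, name

        if __name__ == "__main__":
            test_parse_did_splits_scope_and_name()
            print("test passed")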

  20. The ATLAS fast tracker processor design

    CERN Document Server

    Volpi, Guido; Albicocco, Pietro; Alison, John; Ancu, Lucian Stefan; Anderson, James; Andari, Nansi; Andreani, Alessandro; Andreazza, Attilio; Annovi, Alberto; Antonelli, Mario; Asbah, Needa; Atkinson, Markus; Baines, J; Barberio, Elisabetta; Beccherle, Roberto; Beretta, Matteo; Biesuz, Nicolo Vladi; Blair, R E; Bogdan, Mircea; Boveia, Antonio; Britzger, Daniel; Bryant, Partick; Burghgrave, Blake; Calderini, Giovanni; Camplani, Alessandra; Cavaliere, Viviana; Cavasinni, Vincenzo; Chakraborty, Dhiman; Chang, Philip; Cheng, Yangyang; Citraro, Saverio; Citterio, Mauro; Crescioli, Francesco; Dawe, Noel; Dell'Orso, Mauro; Donati, Simone; Dondero, Paolo; Drake, G; Gadomski, Szymon; Gatta, Mauro; Gentsos, Christos; Giannetti, Paola; Gkaitatzis, Stamatios; Gramling, Johanna; Howarth, James William; Iizawa, Tomoya; Ilic, Nikolina; Jiang, Zihao; Kaji, Toshiaki; Kasten, Michael; Kawaguchi, Yoshimasa; Kim, Young Kee; Kimura, Naoki; Klimkovich, Tatsiana; Kolb, Mathis; Kordas, K; Krizka, Karol; Kubota, T; Lanza, Agostino; Li, Ho Ling; Liberali, Valentino; Lisovyi, Mykhailo; Liu, Lulu; Love, Jeremy; Luciano, Pierluigi; Luongo, Carmela; Magalotti, Daniel; Maznas, Ioannis; Meroni, Chiara; Mitani, Takashi; Nasimi, Hikmat; Negri, Andrea; Neroutsos, Panos; Neubauer, Mark; Nikolaidis, Spiridon; Okumura, Y; Pandini, Carlo; Petridou, Chariclia; Piendibene, Marco; Proudfoot, James; Rados, Petar Kevin; Roda, Chiara; Rossi, Enrico; Sakurai, Yuki; Sampsonidis, Dimitrios; Saxon, James; Schmitt, Stefan; Schoening, Andre; Shochet, Mel; Shoijaii, Jafar; Soltveit, Hans Kristian; Sotiropoulou, Calliope-Louisa; Stabile, Alberto; Swiatlowski, Maximilian J; Tang, Fukun; Taylor, Pierre Thor Elliot; Testa, Marianna; Tompkins, Lauren; Vercesi, V; Wang, Rui; Watari, Ryutaro; Zhang, Jianhong; Zeng, Jian Cong; Zou, Rui; Bertolucci, Federico

    2015-01-01

    The extended use of tracking information at the trigger level in the LHC is crucial for the trigger and data acquisition (TDAQ) system to fulfill its task. Precise and fast tracking is important to identify specific decay products of the Higgs boson or new phenomena, as well as to distinguish the contributions coming from the many collisions that occur at every bunch crossing. However, track reconstruction is among the most demanding tasks performed by the TDAQ computing farm; in fact, complete reconstruction at full Level-1 trigger accept rate (100 kHz) is not possible. In order to overcome this limitation, the ATLAS experiment is planning the installation of a dedicated processor, the Fast Tracker (FTK), which is aimed at achieving this goal. The FTK is a pipeline of high performance electronics, based on custom and commercial devices, which is expected to reconstruct, with high resolution, the trajectories of charged-particle tracks with a transverse momentum above 1 GeV, using the ATLAS inner tracker info...